Miniature Robotic Spacecraft for Inspecting Other Spacecraft
NASA Technical Reports Server (NTRS)
Fredrickson, Steven; Abbott, Larry; Duran, Steve; Goode, Robert; Howard, Nathan; Jochim, David; Rickman, Steve; Straube, Tim; Studak, Bill; Wagenknecht, Jennifer;
2004-01-01
A report discusses the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) -- a compact robotic spacecraft intended to be released from a larger spacecraft for exterior visual inspection of the larger spacecraft. The Mini AERCam is a successor to the AERCam Sprint -- a prior miniature robotic inspection spacecraft that was demonstrated in a space-shuttle flight experiment in 1997. The prototype of the Mini AERCam is a demonstration unit having approximately the form and function of a flight system. The Mini AERCam is approximately spherical with a diameter of about 7.5 in. (19 cm) and a weight of about 10 lb (4.5 kg), yet it has significant additional capabilities, relative to the 14-in. (36-cm), 35-lb (16-kg) AERCam Sprint. The Mini AERCam includes miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including two digital video cameras and a high-resolution still camera. The Mini AERCam is designed for either remote piloting or supervised autonomous operations, including station keeping and point-to-point maneuvering. The prototype has been tested on an air-bearing table and in a hardware-in-the-loop orbital simulation of the dynamics of maneuvering in proximity to the International Space Station.
Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.
2001-01-01
The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion system, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation to simulate on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.
Mini AERCam Inspection Robot for Human Space Missions
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.; Duran, Steve; Mitchell, Jennifer D.
2004-01-01
The Engineering Directorate of NASA Johnson Space Center has developed a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam free flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35 pound, 14 inch AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, imaging, power, and propulsion subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations including automatic stationkeeping and point-to-point maneuvering. Mini AERCam is designed to fulfill the unique requirements and constraints associated with using a free flyer to perform external inspections and remote viewing of human spacecraft operations. This paper describes the application of Mini AERCam for stand-alone spacecraft inspection, as well as for roles on teams of humans and robots conducting future space exploration missions.
Mini AERCam: A Free-Flying Robot for Space Inspection
NASA Technical Reports Server (NTRS)
Fredrickson, Steven
2001-01-01
The NASA Johnson Space Center Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a free-flying camera system for remote viewing and inspection of human spacecraft. The AERCam project team is currently developing a miniaturized version of AERCam known as Mini AERCam, a spherical nanosatellite 7.5 inches in diameter. Mini AERCam development builds on the success of AERCam Sprint, a 1997 Space Shuttle flight experiment, by integrating new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving these productivity-enhancing capabilities in a smaller package depends on aggressive component miniaturization. Technology innovations being incorporated include micro electromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, rechargeable xenon gas propulsion, rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for laboratory demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation to simulate on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides on-orbit views of the Space Shuttle and International Space Station unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by space-walking crewmembers.
Design and Performance Evaluation of a UWB Communication and Tracking System for Mini-AERCam
NASA Technical Reports Server (NTRS)
Barton, Richard J.
2005-01-01
NASA Johnson Space Center (JSC) is developing a low-volume, low-mass, robotic free-flying camera known as Mini-AERCam (Autonomous Extra-vehicular Robotic Camera) to assist International Space Station (ISS) operations. Mini-AERCam is designed to provide astronauts and ground control real-time video for camera views of ISS. The system will assist ISS crewmembers and ground personnel to monitor ongoing operations and perform visual inspections of exterior ISS components without requiring extravehicular activity (EVA). Mini-AERCam consists of a large number of subsystems. Many institutions and companies have been involved in the R&D for this project. A Mini-AERCam ground control system has been studied at Texas A&M University [3]. The path planning and control algorithms that direct the motions of Mini-AERCam have been developed through the joint effort of Carnegie Mellon University and the Texas Robotics and Automation Center [5]. NASA JSC has designed a layered control architecture that integrates all functions of Mini-AERCam [8]. The research described in this report is part of a larger effort focused on the communication and tracking subsystem that is designed to perform three major tasks: 1. To transmit commands from ISS to Mini-AERCam for control of robotic camera motions (downlink); 2. To transmit real-time video from Mini-AERCam to ISS for inspections (uplink); 3. To track the position of Mini-AERCam for precise motion control. The ISS propagation environment is unique due to the nature of the ISS structure and multiple RF interference sources [9]. The ISS is composed of various truss segments, solar panels, thermal radiator panels, and modules for laboratories and crew accommodations. A tracking system supplemental to GPS is desirable both to improve accuracy and to eliminate the structural blockage due to the close proximity of the ISS, which could at times limit the number of GPS satellites accessible to the Mini-AERCam. Ideally, the tracking system will be a passive component of the communication system, which will need to operate in a time-varying multipath environment created as the robot camera moves over the ISS structure. In addition, due to the many interference sources located on the ISS, on SSO and LEO satellites, and at ground-based transmitters, selecting a frequency for the ISS and Mini-AERCam link which will coexist with all interferers poses a major design challenge. To meet all of these challenges, ultrawideband (UWB) radio technology is being studied for use in the Mini-AERCam communication and tracking subsystem. The research described in this report is focused on design and evaluation of passive tracking system algorithms based on UWB radio transmissions from Mini-AERCam.
NASA Technical Reports Server (NTRS)
Fredrickson, Steven E.; Duran, Steve G.; Braun, Angela N.; Straube, Timothy M.; Mitchell, Jennifer D.
2006-01-01
The NASA Johnson Space Center has developed a nanosatellite-class Free Flyer intended for future external inspection and remote viewing of human spacecraft. The Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) technology demonstration unit has been integrated into the approximate form and function of a flight system. The spherical Mini AERCam Free Flyer is 7.5 inches in diameter and weighs approximately 10 pounds, yet it incorporates significant additional capabilities compared to the 35-pound, 14-inch diameter AERCam Sprint that flew as a Shuttle flight experiment in 1997. Mini AERCam hosts a full suite of miniaturized avionics, instrumentation, communications, navigation, power, propulsion, and imaging subsystems, including digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations, including automatic stationkeeping, point-to-point maneuvering, and waypoint tracking. The Mini AERCam Free Flyer is accompanied by a sophisticated control station for command and control, as well as a docking system for automated deployment, docking, and recharge at a parent spacecraft. Free Flyer functional testing has been conducted successfully on both an airbearing table and in a six-degree-of-freedom closed-loop orbital simulation with avionics hardware in the loop. Mini AERCam aims to provide beneficial on-orbit views that cannot be obtained from fixed cameras, cameras on robotic manipulators, or cameras carried by crewmembers during extravehicular activities (EVAs). On Shuttle or International Space Station (ISS), for example, Mini AERCam could support external robotic operations by supplying orthogonal views to the intravehicular activity (IVA) robotic operator, supply views of EVA operations to IVA and/or ground crews monitoring the EVA, and carry out independent visual inspections of areas of interest around the spacecraft. To enable these future benefits with minimal impact on IVA operators and ground controllers, the Mini AERCam system architecture incorporates intelligent systems attributes that support various autonomous capabilities. 1) A robust command sequencer enables task-level command scripting. Command scripting is employed for operations such as automatic inspection scans over a region of interest, and operator-hands-off automated docking. 2) A system manager built on the same expert-system software as the command sequencer provides detection and smart-response capability for potential system-level anomalies, like loss of communications between the Free Flyer and control station. 3) An AERCam dynamics manager provides nominal and off-nominal management of guidance, navigation, and control (GN&C) functions. It is employed for safe trajectory monitoring, contingency maneuvering, and related roles. This paper will describe these architectural components of Mini AERCam autonomy, as well as the interaction of these elements with a human operator during supervised autonomous control.
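As an editorial illustration of the task-level command scripting described above, the following Python sketch shows how a minimal command sequencer might dispatch scripted commands and abort a sequence on the first failure. It is not the Mini AERCam flight software (which is built on an expert-system framework); all class, command, and handler names here are hypothetical.

```python
# Minimal illustrative sketch of a task-level command sequencer, loosely patterned
# on the architecture described above. Names and commands are hypothetical.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TaskCommand:
    name: str      # e.g. "GOTO_WAYPOINT", "INSPECTION_SCAN", "DOCK"
    params: dict

class CommandSequencer:
    def __init__(self) -> None:
        self.handlers: Dict[str, Callable[[dict], bool]] = {}

    def register(self, name: str, handler: Callable[[dict], bool]) -> None:
        self.handlers[name] = handler

    def run(self, script: List[TaskCommand]) -> bool:
        """Execute task-level commands in order; abort on the first failure."""
        for cmd in script:
            handler = self.handlers.get(cmd.name)
            if handler is None or not handler(cmd.params):
                print(f"Sequence aborted at {cmd.name}")
                return False
        return True

# Hypothetical handlers standing in for GN&C and inspection functions.
def goto_waypoint(params: dict) -> bool:
    print(f"Maneuvering to {params['position']} m (station frame)")
    return True

def inspection_scan(params: dict) -> bool:
    print(f"Scanning region '{params['region']}' at {params['standoff_m']} m standoff")
    return True

if __name__ == "__main__":
    seq = CommandSequencer()
    seq.register("GOTO_WAYPOINT", goto_waypoint)
    seq.register("INSPECTION_SCAN", inspection_scan)
    seq.run([
        TaskCommand("GOTO_WAYPOINT", {"position": (10.0, 0.0, 2.0)}),
        TaskCommand("INSPECTION_SCAN", {"region": "P6 truss", "standoff_m": 5.0}),
    ])
```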
Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home
Sempere, Angel D.; Serna-Leon, Arturo; Gil, Pablo; Puente, Santiago; Torres, Fernando
2015-01-01
This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes obviating the need for him to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance. PMID:26690448
NASA Technical Reports Server (NTRS)
Wagenknecht, J.; Fredrickson, S.; Manning, T.; Jones, B.
2003-01-01
Engineers at NASA Johnson Space Center have designed, developed, and tested a nanosatellite-class free-flyer intended for future external inspection and remote viewing of human spaceflight activities. The technology demonstration system, known as the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam), has been integrated into the approximate form and function of a flight system. The primary focus has been to develop a system capable of providing external views of the International Space Station. The Mini AERCam system is spherical-shaped and less than eight inches in diameter. It has a full suite of guidance, navigation, and control hardware and software, and is equipped with two digital video cameras and a high resolution still image camera. The vehicle is designed for either remotely piloted operations or supervised autonomous operations. Tests have been performed in both a six degree-of-freedom closed-loop orbital simulation and on an air-bearing table. The Mini AERCam system can also be used as a test platform for evaluating algorithms and relative navigation for autonomous proximity operations and docking around the Space Shuttle Orbiter or the ISS.
NASA Johnson Space Center: Mini AERCam Testing with GSS6560
NASA Technical Reports Server (NTRS)
Cryant, Scott P.
2004-01-01
This slide presentation reviews the testing of the Miniature Autonomous Extravehicular Robotic Camera (Mini AERCam) with the GPS/SBAS simulation system, GSS6560. It lists several GPS-based programs at NASA Johnson, including Shuttle GPS system testing and Space Integrated GPS/INS (SIGI) testing. There is also information about the standalone ISS SIGI test and testing of the SIGI for the Crew Return Vehicle. The Mini AERCam is a small, free-flying camera for remote inspections of the ISS; it uses precise relative navigation with differential carrier-phase GPS to provide situational awareness to operators. The closed-loop orbital testing of the Mini AERCam system, with and without the use of the GSS6560 simulator, is reviewed.
Embedded mobile farm robot for identification of diseased plants
NASA Astrophysics Data System (ADS)
Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh
2013-07-01
This paper presents the development of a mobile robot used in farms for identification of diseased plants. It addresses two major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image-processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera, and infrared sensors has been used. A Mini2440 controller board has been used, on which an embedded Linux operating system (OS) is implemented.
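The abstract does not detail the image-processing algorithm, so the following Python sketch shows one common baseline such a system might use: classifying a plant image as diseased when too large a fraction of leaf pixels falls outside a healthy green hue range. The HSV thresholds, decision cutoff, and file name are assumptions, not values from the paper.

```python
# Illustrative baseline only: flag a plant as "diseased" when the share of
# non-green leaf pixels exceeds a threshold. Thresholds are assumed values.
import cv2
import numpy as np

def diseased_fraction(image_path: str) -> float:
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

    # Leaf mask: reasonably saturated pixels (excludes washed-out soil/sky).
    leaf = cv2.inRange(hsv, (0, 60, 40), (179, 255, 255)) > 0
    # Healthy foliage: green hues (OpenCV hue range is 0-179).
    healthy = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255)) > 0

    if not leaf.any():
        return 0.0
    return 1.0 - healthy[leaf].mean()   # fraction of leaf pixels that are not green

if __name__ == "__main__":
    frac = diseased_fraction("plant.jpg")              # hypothetical input image
    print("diseased" if frac > 0.3 else "healthy", f"(non-green fraction {frac:.2f})")
```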
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.
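For readers unfamiliar with the binned reconstruction method the authors compare against list-mode EM, the following Python sketch implements the standard ML-EM multiplicative update for Poisson count data on a toy system matrix. The matrix and data are synthetic stand-ins, not the authors' gamma-camera model.

```python
# Minimal sketch of the (binned) ML-EM update for time-integrated count data.
import numpy as np

def mlem(A: np.ndarray, y: np.ndarray, n_iter: int = 50) -> np.ndarray:
    """ML-EM for Poisson data y ~ Poisson(A @ x), A of shape (n_bins, n_voxels)."""
    x = np.ones(A.shape[1])                 # non-negative initial estimate
    sens = A.sum(axis=0) + 1e-12            # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x + 1e-12                # forward projection
        x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((200, 50))               # toy system matrix
    x_true = rng.random(50)
    y = rng.poisson(A @ x_true)             # simulated counts
    x_hat = mlem(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```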
ROTEX-TRIIFEX: Proposal for a joint FRG-USA telerobotic flight experiment
NASA Technical Reports Server (NTRS)
Hirzinger, G.; Bejczy, A. K.
1989-01-01
The concepts and main elements of a RObot Technology EXperiment (ROTEX) proposed to fly with the next German Spacelab mission, D2, are presented. It provides a 1-meter, six-axis robot inside a Spacelab rack, equipped with a multisensory gripper (force-torque sensors, an array of range finders, and mini stereo cameras). The robot will perform assembly and servicing tasks in a generic way, and will grasp a floating object. The man-machine and supervisory control concepts for teleoperation from the Spacelab and from the ground are discussed. The predictive estimation schemes for extensive use of time-delay-compensating 3D computer graphics are explained.
Designing a Microhydraulically-driven Mini-robotic Squid
2016-05-20
Thesis by Kevin Dehan Meng. Applications for microrobots include remote monitoring, surveillance, search and rescue, nanoassembly, medicine, and in-vivo surgery.
UWB Tracking System Design for Free-Flyers
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Phan, Chan; Ngo, Phong; Gross, Julia; Dusl, John
2004-01-01
This paper discusses an ultra-wideband (UWB) tracking system design effort for Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A tracking algorithm TDOA (Time Difference of Arrival) that operates cooperatively with the UWB system is developed in this research effort. Matlab simulations show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. Lab experiments demonstrate the UWB tracking capability with fine resolution.
Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case
Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique
2017-01-01
This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144
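The following Python sketch illustrates the event-triggering idea described above in its simplest form: propagate the estimate open-loop and request an external measurement only when the predicted error covariance exceeds a bound. For brevity it uses a linear Kalman filter on a 1-D constant-velocity model rather than the Unscented transformation and robot kinematics of the paper; all model matrices and thresholds are illustrative assumptions.

```python
# Event-triggered estimation sketch: request a camera measurement only when
# trace(P) exceeds a bound; otherwise rely on prediction alone.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model (position, velocity)
Q = np.diag([1e-4, 1e-3])               # process noise
H = np.array([[1.0, 0.0]])              # external sensor measures position only
R = np.array([[1e-2]])                  # measurement noise
COV_BOUND = 0.05                        # trigger when trace(P) exceeds this

x = np.zeros(2)
P = np.eye(2) * 0.01
requests = 0
for k in range(200):
    # Prediction step (always runs, no communication needed).
    x = F @ x
    P = F @ P @ F.T + Q
    # Event-triggered measurement request and Kalman update.
    if np.trace(P) > COV_BOUND:
        requests += 1
        z = np.array([k * dt * 0.5 + np.random.randn() * 0.1])  # simulated reading
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
print(f"{requests} measurement requests over 200 steps")
```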
NASA Astrophysics Data System (ADS)
Petrişor, Silviu-Mihai; Bârsan, Ghiţă
2013-12-01
The authors of this paper wish to highlight elements regarding the organology, functioning, and simulation, in a real workspace, of a tracked mini-robot structure destined for special applications in theatres of operation. This technological product is the subject of a national patent granted to our institution (patent no. RO a 2012 01051) and the result of research activities undertaken under a contract won by national competition, a grant for young research teams of the PN-RUTE-2010 type. The issues outlined in this paper relate to the original invention in comparison with other mini-robot structures, with the inventors succinctly presenting the description of the technological product and its applicability in the military and applied areas as well as in the educational one. Additionally, the advantages of using the technological product in a real workspace and the constructive and functional solution are shown before finally presenting, based on the modelling of the mechanical structure of the tilting module attached to the mini-robot, an application on the simulation and programming of the mini-robot under study.
Microscopic pick-and-place teleoperation
NASA Astrophysics Data System (ADS)
Bhatti, Pamela; Hannaford, Blake; Marbot, Pierre-Henry
1993-03-01
A three degree-of-freedom direct drive mini robot has been developed for biomedical applications. The design approach of the mini robot relies heavily upon electromechanical components from the Winchester disk drive industry. In the current design, the first joint is driven by actuators from a 5.25-inch drive, and the following joints are driven by actuators typical of 3.5-inch drives. The system has 5-10 micrometers of position repeatability and resolution in all three axes. A mini gripper attachment has been fabricated for the robot to explore manipulation of objects ranging from 50 micrometers to 500 micrometers. Mounted on the robot, the gripper has successfully performed pick-and-place operations under teleoperated control. The mini robot serves to precisely position the gripper, and a needle-like finger of the gripper deflects so the fingers can grip a target object. The gripper finger capable of motion is fabricated with a piezoelectric bimorph crystal which deflects with an applied DC voltage. The experimental results are promising, and the mini gripper may be modified for future biomedical and micro assembly applications.
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' orbiter boom sensor system, or OBSS, awaits inspection. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' orbiter boom sensor system, or OBSS, is prepared for maintenance. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
Avionics for a Small Robotic Inspection Spacecraft
NASA Technical Reports Server (NTRS)
Abbott, Larry; Shuler, Robert L., Jr.
2005-01-01
A report describes the tentative design of the avionics of the Mini-AERCam -- a proposed 7.5-in. (approximately 19-cm)-diameter spacecraft that would contain three digital video cameras to be used in visual inspection of the exterior of a larger spacecraft (a space shuttle or the International Space Station). The Mini-AERCam would maneuver by use of its own miniature thrusters under radio control by astronauts inside the larger spacecraft. The design of the Mini-AERCam avionics is subject to a number of constraints, most of which can be summarized as severely competing requirements to maximize radiation hardness and maneuvering, image-acquisition, and data-communication capabilities while minimizing cost, size, and power consumption. The report discusses the design constraints, the engineering approach to satisfying the constraints, and the resulting iterations of the design. The report places special emphasis on the design of a flight computer that would (1) acquire position and orientation data from a Global Positioning System receiver and a microelectromechanical gyroscope, respectively; (2) perform all flight-control (including thruster-control) computations in real time; and (3) control video, tracking, power, and illumination systems.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
UWB Tracking System Design with TDOA Algorithm
NASA Technical Reports Server (NTRS)
Ni, Jianjun; Arndt, Dickey; Ngo, Phong; Phan, Chau; Gross, Julia; Dusl, John; Schwing, Alan
2006-01-01
This presentation discusses an ultra-wideband (UWB) tracking system design effort using a tracking algorithm TDOA (Time Difference of Arrival). UWB technology is exploited to implement the tracking system due to its properties, such as high data rate, fine time resolution, and low power spectral density. A system design using commercially available UWB products is proposed. A two-stage weighted least square method is chosen to solve the TDOA non-linear equations. Matlab simulations in both two-dimensional space and three-dimensional space show that the tracking algorithm can achieve fine tracking resolution with low noise TDOA data. The error analysis reveals various ways to improve the tracking resolution. Lab experiments demonstrate the UWB TDOA tracking capability with fine resolution. This research effort is motivated by a prototype development project Mini-AERCam (Autonomous Extra-vehicular Robotic Camera), a free-flying video camera system under development at NASA Johnson Space Center for aid in surveillance around the International Space Station (ISS).
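The report's solver is a two-stage weighted least-squares method; as a simpler illustration of the same TDOA measurement model, the following Python sketch instead fits the tag position with a few Gauss-Newton iterations. The anchor positions, initial guess, and noise level are made-up example values, not those of the Mini-AERCam system.

```python
# TDOA localization sketch: measurements are range differences relative to a
# reference anchor; a Gauss-Newton loop fits the emitter position.
import numpy as np

def tdoa_residuals_jacobian(x, anchors, rd):
    """rd[i] = measured range difference ||x - a_i|| - ||x - a_0||, i = 1..N-1."""
    d = np.linalg.norm(x - anchors, axis=1)        # distances to all anchors
    r = rd - (d[1:] - d[0])                        # residuals
    u = (x - anchors) / d[:, None]                 # unit vectors toward x
    J = u[1:] - u[0]                               # Jacobian of the range differences
    return r, J

def solve_tdoa(anchors, rd, x0, iters=10):
    x = x0.astype(float)
    for _ in range(iters):
        r, J = tdoa_residuals_jacobian(x, anchors, rd)
        x += np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return x

if __name__ == "__main__":
    anchors = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])  # UWB receivers
    truth = np.array([3.0, 4.0, 2.0])
    d = np.linalg.norm(truth - anchors, axis=1)
    rd = (d[1:] - d[0]) + np.random.randn(3) * 0.01    # noisy range differences
    est = solve_tdoa(anchors, rd, x0=np.array([1.0, 1.0, 1.0]))
    print("estimated position:", np.round(est, 3))
```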
NASA Astrophysics Data System (ADS)
Chen, Xuedong; Sun, Yi; Huang, Qingjiu; Jia, Wenchuan; Pu, Huayan
This paper focuses on the design of a modular multi-legged walking robot, MiniQuad-I, which can be reconfigured into a variety of configurations, including quadruped and hexapod configurations, for different tasks by changing the layout of modules. Critical design considerations that take adaptability, maintainability, and extensibility into account simultaneously are discussed, and then detailed designs of each module are presented. The biomimetic control architecture of MiniQuad-I is proposed, which improves the agility and independence of the robot. Simulations and experiments on crawling, object picking, and obstacle avoidance are performed to verify the functions of the MiniQuad-I.
NASA Technical Reports Server (NTRS)
Everett, Louis J.
1994-01-01
The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.
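The following Python sketch illustrates only the rotation part of such a registration: given a few commanded end-effector translation directions and the same translations expressed in the camera frame (e.g., inferred from the TRAC target's image motion), it recovers the end-effector-to-camera rotation with the Kabsch/SVD method. This is a simplified stand-in for the TRAC algorithm, which also recovers the position offset from rotations; the data here are synthetic.

```python
# Rotation-only registration sketch: find R such that v_camera ~ R @ v_robot
# from corresponding translation directions, using the Kabsch/SVD method.
import numpy as np

def rotation_from_vector_pairs(v_robot: np.ndarray, v_camera: np.ndarray) -> np.ndarray:
    """Rows of v_robot/v_camera are corresponding 3-vectors in the two frames."""
    H = v_robot.T @ v_camera
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

if __name__ == "__main__":
    # Ground-truth camera mounting rotation (unknown to the algorithm).
    ang = np.deg2rad(30)
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                       [np.sin(ang),  np.cos(ang), 0.0],
                       [0.0, 0.0, 1.0]])
    moves_robot = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1.0]])  # commanded translations
    moves_cam = moves_robot @ R_true.T                           # same vectors, camera frame
    R_est = rotation_from_vector_pairs(moves_robot, moves_cam)
    print(np.round(R_est - R_true, 6))                           # ~zero matrix
```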
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System
Rampinelli, Mariana.; Covre, Vitor Buback.; de Queiroz, Felippe Mendonça.; Vassallo, Raquel Frizera.; Bastos-Filho, Teodiano Freire.; Mazo, Manuel.
2014-01-01
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Watching elderly and disabled person's physical condition by remotely controlled monorail robot
NASA Astrophysics Data System (ADS)
Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru
2001-10-01
We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. Elderly people at home or in nursing homes need attention at all times, which requires staff to monitor them continuously; the purpose of our system is to help those staff, and this study intends to improve that situation. A host computer controls the monorail robot to move in front of the elderly person using the images taken by cameras on the ceiling. A CCD camera is mounted on the monorail robot to take pictures of their facial expressions or movements. The robot sends the images to a host computer that checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image-processing algorithm.
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures.
A Mini-Curriculum for Robotics Education.
ERIC Educational Resources Information Center
Jones, Preston K.
This practicum report documents the development of a four-lesson multimedia program for robotics instruction for fourth and seventh grade students. The commercial film "Robot Revolution" and the videocassette tape "Robotics" were used, along with two author-developed slide/audiotape presentations and 14 overhead transparency foils. Two robots,…
Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D
2006-01-01
The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
The positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system that consists of a universal industrial robot and a vision sensor. Present compensation methods for positioning error based on a kinematic model of the robot have a significant limitation: they are not effective over the whole measuring space. A new compensation method for robot positioning error based on vision measuring techniques is presented. One approach is setting global control points in the measured field and attaching an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is setting control points on the vision sensor and placing two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS of spatial positioning is 3.422 mm with the single camera and 0.031 mm with the dual cameras. The conclusion is that the algorithm of the single-camera method needs to be improved for higher accuracy, while the accuracy of the dual-camera method is suitable for the application.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using the finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
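The paper's controllers are for non-holonomic robots under visual servoing with unknown camera parameters; as a much simpler illustration of the finite-time "neighbour rule" ingredient, the following Python sketch applies a fractional-power consensus tracking law to single-integrator agents following a static virtual leader over a directed graph. The gains, topology, and agent dynamics are illustrative assumptions, not taken from the paper.

```python
# Finite-time-style consensus tracking sketch for single-integrator agents:
# u_i = -k * sig(e_i)^alpha with e_i built from neighbour and leader errors.
import numpy as np

def sig(e, alpha):
    return np.sign(e) * np.abs(e) ** alpha

# Directed adjacency: agent i listens to agent j when A[i, j] = 1.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
b = np.array([1.0, 0.0, 0.0])       # only agent 0 sees the virtual leader
x_leader = 2.0                      # static reference state
x = np.array([-1.0, 0.5, 3.0])      # follower initial states
k, alpha, dt = 2.0, 0.6, 0.01

for _ in range(3000):
    e = (A.sum(axis=1) + b) * x - A @ x - b * x_leader   # neighbour-rule error
    x += -k * sig(e, alpha) * dt                         # fractional-power feedback
print("final follower states:", np.round(x, 4), " leader:", x_leader)
```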
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1991-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
Composite video and graphics display for camera viewing systems in robotics and teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)
1993-01-01
A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinate for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.
NASA Astrophysics Data System (ADS)
Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.
2015-08-01
The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable to mount on an Unmanned Aerial System (UAS) for acquiring imagery of high spectral, spatial, and temporal resolution used in various remote sensing applications. However, because each band's wavelength range is only 10 nm, the resulting images have low resolution and signal-to-noise ratio, which are not suitable for image matching and digital surface model (DSM) generation. In addition, the spectral correlation among all 12 bands of MiniMCA images is low, so it is difficult to perform tie-point matching and aerial triangulation at the same time. In this study, we therefore propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher-spatial-resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS used, the imagery from these two kinds of sensors could be collected at the same time or separately. In this study, we adopt a fixed-wing UAS to carry a Canon EOS 5D Mark2 DSLR camera and a MiniMCA-12 multi-spectral camera. In order to perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, because all lenses of the MiniMCA-12 have different perspective centers and viewing angles, the original 12 channels have a significant band misregistration effect. Thus, the first issue encountered is to reduce this band misregistration effect. Because all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm, and all images overlap by almost 98%, we propose a modified projective transformation (MPT) method together with two systematic error correction procedures to register all 12 bands of imagery in the same image space. This means that the 12 bands of images acquired at the same exposure time will have the same interior orientation parameters (IOPs) and exterior orientation parameters (EOPs) after band-to-band registration (BBR). In the aerial triangulation stage, the master band of the MiniMCA-12 was then treated as a reference channel to link with the DSLR RGB images: all reference images from the master band of the MiniMCA-12 and all RGB images were triangulated at the same time in the same coordinate system of ground control points (GCPs). Because the spatial resolution of the RGB images is higher than that of the MiniMCA-12, the GCPs can be marked on the RGB images only, even if they cannot be recognized on the MiniMCA images. Furthermore, a one-meter gridded digital surface model (DSM) is created from the RGB images and applied to the MiniMCA imagery for ortho-rectification. Quantitative error analyses show that the proposed BBR scheme can achieve an average misregistration residual length of 0.33 pixels, and that the co-registration errors among the 12 MiniMCA ortho-images and between the MiniMCA and Canon RGB ortho-images are all less than 0.6 pixels. The experimental results demonstrate that the proposed method is robust, reliable, and accurate for future remote sensing applications.
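The paper's band-to-band registration uses a modified projective transformation plus two systematic-error corrections; the following Python sketch shows only the plain projective (homography) warp that such a scheme builds on, registering one band to the chosen master band with ORB features and RANSAC. In practice cross-band feature matching can be unreliable for narrow spectral bands, so treat this as a generic illustration under the assumption that correspondences are available; the file names are placeholders.

```python
# Generic projective band-to-band registration sketch (not the paper's MPT method).
import cv2
import numpy as np

def register_band(master_path: str, slave_path: str) -> np.ndarray:
    master = cv2.imread(master_path, cv2.IMREAD_GRAYSCALE)
    slave = cv2.imread(slave_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(4000)
    kp_m, des_m = orb.detectAndCompute(master, None)
    kp_s, des_s = orb.detectAndCompute(slave, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_s, des_m), key=lambda m: m.distance)[:500]

    src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # projective transform

    # Warp the slave band into the master band's image space.
    h, w = master.shape
    return cv2.warpPerspective(slave, H, (w, h))

if __name__ == "__main__":
    registered = register_band("band_master.tif", "band_02.tif")  # placeholder files
    cv2.imwrite("band_02_registered.tif", registered)
```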
Robot Tracer with Visual Camera
NASA Astrophysics Data System (ADS)
Jabbar Lubis, Abdul; Dwi Lestari, Yuyun; Dafitri, Haida; Azanuddin
2017-12-01
A robot is a versatile tool that can take over human work functions. The robot is a device that can be reprogrammed according to user needs. A wireless network for remote monitoring can be used to build a robot whose movement can be monitored and displayed against a blueprint, so that the path chosen by the robot can be tracked. This data is sent over a wireless network. For visual feedback, the robot uses a high-resolution camera to make it easier for the operator to control the robot and see the surrounding environment.
Robot therapy: a new approach for mental healthcare of the elderly - a mini-review.
Shibata, Takanori; Wada, Kazuyoshi
2011-01-01
Mental healthcare of elderly people is a common problem in advanced countries. Recently, high technology has developed robots for use not only in factories but also for our living environment. In particular, human-interactive robots for psychological enrichment, which provide services by interacting with humans while stimulating their minds, are rapidly spreading. Such robots not only simply entertain but also render assistance, guide, provide therapy, educate, enable communication, and so on. Robot therapy, which uses robots as a substitution for animals in animal-assisted therapy and activity, is a new application of robots and is attracting the attention of many researchers and psychologists. The seal robot named Paro was developed especially for robot therapy and was used at hospitals and facilities for elderly people in several countries. Recent research has revealed that robot therapy has the same effects on people as animal therapy. In addition, it is being recognized as a new method of mental healthcare for elderly people. In this mini review, we introduce the merits and demerits of animal therapy. Then we explain the human-interactive robot for psychological enrichment, the required functions for therapeutic robots, and the seal robot. Finally, we provide examples of robot therapy for elderly people, including dementia patients. Copyright © 2010 S. Karger AG, Basel.
Color Camera for Curiosity Robotic Arm
2010-11-16
The Mars Hand Lens Imager (MAHLI) camera will fly on NASA's Mars Science Laboratory mission, launching in late 2011. This photo of the camera was taken before MAHLI's November 2010 installation onto the robotic arm of the mission's Mars rover, Curiosity.
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel
2013-01-01
To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal. PMID:23271604
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel
2012-12-27
To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robots' perception and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal.
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, this close-up shows the forward transition and X-guide restraint of the inspection boom assembly, or IBA, on space shuttle Atlantis' orbiter boom sensor system, or OBSS. The IBA is removed from the shuttle every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation
NASA Technical Reports Server (NTRS)
Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri
2002-01-01
The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside a spacecraft) has demanded the development of a simple and effective navigation scheme. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference-frame transformations.
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process by which a NAO robot positions itself and moves to a target red ball through its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested. Since the error in the current NAO robot is not governed by a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single camera ranging data. Moreover, two USB cameras were used in our experiments; the Hough circle method was applied to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of error in ball tracking from 0.68 to 0.20.
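As a rough illustration of the detection stage described above (HSV color segmentation plus Hough circle detection), the following OpenCV sketch finds a red ball in a single camera frame; the thresholds and circle parameters are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def find_red_ball(frame_bgr):
    """Return (x, y, radius) of the most prominent red circle, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two ranges (illustrative values).
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.GaussianBlur(mask, (9, 9), 2)

    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=100, param2=30, minRadius=5, maxRadius=200)
    if circles is None:
        return None
    x, y, r = max(circles[0], key=lambda c: c[2])  # keep the largest detected circle
    return float(x), float(y), float(r)
```

With two calibrated cameras observing the same ball, the two detected image positions can then be triangulated to estimate range, which is the dual camera ranging idea the paper evaluates.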
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
JPRS Report, Science & Technology, Japan, 4th Intelligent Robots Symposium, Volume 2
1989-03-16
[Fragmentary excerpt from the symposium proceedings. Recoverable topics include accidents caused by robot strikes, a quantitative model for safety evaluation, and evaluations of actual systems; listed contents include "Mobile Robot Position Referencing Using Map-Based Vision Systems" and "Safety Evaluation of Man-Robot System." The surviving body text notes that camera measurements are made after the robot stops, to prevent damage from obstacle interference, and that the camera's position is indicated on a display.]
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Tsunoda, Koichi; Tsunoda, Atsunobu; Ishimoto, ShinnIchi; Kimura, Satoko
2006-01-01
Exclusive charge-coupled device (CCD) camera systems for endoscopes and electronic fiberscopes are in widespread use. However, both are usually stationary in an office or examination room, and a wheeled cart is needed for mobility. The total costs of the CCD camera system and the electronic fiberscopy system are at least US $10,000 and US $30,000, respectively. Recently, the performance of audio and visual instruments has improved dramatically, with a concomitant reduction in their cost. Commercially available CCD video cameras with small monitors have become common. They provide excellent image quality and are much smaller and less expensive than previous models. The authors have developed adaptors for the popular mini-digital video (mini-DV) camera. The camera also provides video and acoustic output signals; therefore, the endoscopic images can be viewed on a large monitor simultaneously. The new system (a mini-DV video camera and an adaptor) costs only US $1,000. Therefore, the system is both cost-effective and useful for the outpatient clinic or casualty setting, or on house calls for the purpose of patient education. In the future, the authors plan to introduce the clinical application of a high-vision camera and an infrared camera as medical instruments for clinical and research situations.
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS's Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
Fire Extinguisher Robot Using Ultrasonic Camera and Wi-Fi Network Controlled with Android Smartphone
NASA Astrophysics Data System (ADS)
Siregar, B.; Purba, H. A.; Efendi, S.; Fahmi, F.
2017-03-01
Fire disasters can occur at any time and result in high losses. Often, fire fighters cannot access the source of a fire due to building damage and very high temperatures, or even due to the presence of explosive materials. With such constraints and the high risk involved in handling a fire, a technological breakthrough that can help fight the fire is necessary. This paper proposes the use of a robot, controlled from a specified distance, to extinguish the fire and reduce the risk. A fire extinguisher robot was assembled to put out fires using a water pump as the actuator. The robot's movement was controlled with an Android smartphone via a Wi-Fi network, utilizing the Wi-Fi module contained in the robot. User commands were sent to the microcontroller on the robot and then translated into robotic movement. We used an ATmega8 as the main microcontroller. The robot was equipped with a camera and ultrasonic sensors. The camera provided feedback to the user and helped locate the source of the fire, while the ultrasonic sensors were used to avoid collisions during movement. The feedback from the camera on the robot was displayed on the smartphone screen. In the lab testing environment, the robot moved following user commands such as turn right, turn left, forward, and backward. The ultrasonic sensors worked well, stopping the robot at a distance of less than 15 cm. In the fire test, the robot performed the task of extinguishing the fire properly.
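The behaviour described above (smartphone commands over Wi-Fi, with an ultrasonic override that halts the robot below 15 cm) can be pictured as a small command loop. The sketch below is a conceptual Python illustration only; on the real robot this logic runs in the ATmega8 firmware, and the hardware hooks read_ultrasonic_cm() and drive() are hypothetical names.

```python
import socket

STOP_DISTANCE_CM = 15  # the abstract reports the robot halting below 15 cm

def control_loop(read_ultrasonic_cm, drive, host="0.0.0.0", port=5000):
    """Conceptual command loop mirroring the described behaviour.

    read_ultrasonic_cm() and drive(command) are hypothetical hardware
    hooks standing in for the robot's sensor and motor interfaces.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        sock.settimeout(0.1)
        while True:
            try:
                data, _ = sock.recvfrom(64)               # command from the smartphone app
                command = data.decode().strip().lower()   # e.g. "forward", "left"
            except socket.timeout:
                command = None
            if command == "forward" and read_ultrasonic_cm() < STOP_DISTANCE_CM:
                drive("stop")                              # obstacle too close: override
            elif command:
                drive(command)
```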
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-03-25
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
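The "simple analytic geometry" mentioned above can be pictured as back-projecting the clicked pixel through a calibrated pinhole camera and intersecting the resulting ray with a known plane in the scene. The sketch below is a generic illustration of that idea under assumed calibration values; it is not the authors' exact procedure, and the plane-constraint assumption is mine.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, t, plane_n, plane_d):
    """Intersect the viewing ray of pixel (u, v) with the plane n.X = d.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns the 3D point in world coordinates.
    """
    # Ray direction in camera coordinates, then rotated into the world frame.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    cam_center = -R.T @ t                  # camera centre in world coordinates

    # Solve n.(C + s*ray) = d for the scale s along the ray.
    s = (plane_d - plane_n @ cam_center) / (plane_n @ ray_world)
    return cam_center + s * ray_world

# Example with assumed calibration: camera at the origin looking along +z,
# target plane 2 m in front of it (z = 2).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
print(pixel_to_plane_point(400, 300, K, R, t,
                           plane_n=np.array([0.0, 0.0, 1.0]), plane_d=2.0))
```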
NASA Astrophysics Data System (ADS)
Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo
2018-01-01
This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.
Line following using a two camera guidance system for a mobile robot
NASA Astrophysics Data System (ADS)
Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.
1996-10-01
Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The line-following algorithm images two windows and locates their centroids; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
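Once the two window centroids have been mapped from image coordinates to ground coordinates, the two steering quantities named above (line angle and minimum distance from the robot centroid) follow from elementary geometry. The snippet below is a minimal sketch of that final step, assuming the image-to-ground mapping has already been applied; the example coordinates are invented.

```python
import numpy as np

def line_angle_and_offset(p1, p2):
    """Given two ground-plane points on the line (robot frame, metres),
    return the line's heading angle and its perpendicular distance from
    the robot origin: the two quantities fed to steering control."""
    d = p2 - p1
    angle = np.arctan2(d[1], d[0])                      # heading of the line
    # Perpendicular distance from the origin to the infinite line p1 + s*d.
    cross = d[0] * (0.0 - p1[1]) - d[1] * (0.0 - p1[0])
    offset = abs(cross) / np.linalg.norm(d)
    return angle, offset

# Hypothetical centroids already mapped from image to ground coordinates.
p1 = np.array([0.5, 0.12])   # near-window centroid
p2 = np.array([1.5, 0.30])   # far-window centroid
print(line_angle_and_offset(p1, p2))
```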
Visual control of robots using range images.
Pomares, Jorge; Gil, Pablo; Torres, Fernando
2010-01-01
In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.
Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz
2016-07-01
Scene supervision is a major tool for making medical robots safer and more intuitive. This paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration and object detection as well as people tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into the modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results show that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, a crane lowers the orbiter boom sensor system, or OBSS, into space shuttle Atlantis' payload bay where it will be installed. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, installation of the orbiter boom sensor system, or OBSS, into space shuttle Atlantis' payload bay is under way. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, technicians prepare to install the orbiter boom sensor system, or OBSS, into space shuttle Atlantis' payload bay. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, the orbiter boom sensor system, or OBSS, is installed in space shuttle Atlantis' payload bay. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab, or RMS Lab, inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, Rafael Rodriguez, lead RMS advanced systems technician with United Space Alliance, installs the mid-transition thermal blanket onto the inspection boom assembly, or IBA, on space shuttle Atlantis' orbiter boom sensor system, or OBSS. The IBA is removed from the shuttle every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, Patrick Manning, an advanced systems technician with United Space Alliance, installs the mid-transition thermal blanket onto the inspection boom assembly, or IBA, on space shuttle Atlantis' orbiter boom sensor system, or OBSS. The IBA is removed from the shuttle every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
2010-01-12
CAPE CANAVERAL, Fla. - In the Remote Manipulator System Lab inside the Vehicle Assembly Building at NASA's Kennedy Space Center in Florida, this close-up shows the electrical flight grapple fixture which will be installed in the forward transition and X-guide restraint of the inspection boom assembly, or IBA, on space shuttle Atlantis' orbiter boom sensor system, or OBSS. The IBA is removed from the shuttle every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. The second in a series of new pressurized components for Russia, the module will be permanently attached to the Zarya module. Three spacewalks are planned to store spare components outside the station, including six spare batteries, a boom assembly for the Ku-band antenna and spares for the Canadian Dextre robotic arm extension. A radiator, airlock and European robotic arm for the Russian Multi-purpose Laboratory Module also are payloads on the flight. Launch is targeted for May 14, 2010. Photo credit: NASA/Jack Pfaller
PreCam Work at ANL: The Argonne/HEP Dark Energy Survey (DES) group, working on the Dark Energy Camera (DECam), built a mini-DECam camera called PreCam. This camera has provided valuable ...
Building Robota, a Mini-Humanoid Robot for the Rehabilitation of Children with Autism
ERIC Educational Resources Information Center
Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin
2007-01-01
The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot…
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
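The overlay technique described above relies on a calibrated pinhole projection so that points of the 3-D graphics models land on the correct pixels of the TV camera image. The following is a minimal, generic sketch of that projection step under assumed calibration values; it is not the JPL calibration procedure itself.

```python
import numpy as np

def project_points(points_world, K, R, t):
    """Project 3-D model points into pixel coordinates with a pinhole model.

    points_world: (N, 3) array in the work-site frame.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation.
    """
    pts_cam = (R @ points_world.T).T + t      # transform into the camera frame
    uv = (K @ pts_cam.T).T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]             # perspective divide

# Assumed calibration values, for illustration only.
K = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.5])
corners = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
print(project_points(corners, K, R, t))
```

In practice, the quality of the overlay (and hence the reported few-millimetre positioning accuracy) depends on how well K, R, and t are estimated during the calibration step the abstract emphasizes.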
Towards next generation 3D cameras
NASA Astrophysics Data System (ADS)
Gupta, Mohit
2017-03-01
We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
Motion Imagery and Robotics Application Project (MIRA)
NASA Technical Reports Server (NTRS)
Grubbs, Rodney P.
2010-01-01
This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.
Detecting method of subjects' 3D positions and experimental advanced camera control system
NASA Astrophysics Data System (ADS)
Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi
1997-04-01
Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
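Step (1) above, recovering a subject's 3-D coordinates from two sensor cameras, amounts to standard two-view triangulation once each camera's projection matrix is known. A minimal OpenCV sketch of that idea, with made-up projection matrices rather than the broadcast system's actual calibration, is shown below.

```python
import cv2
import numpy as np

def triangulate_subject(pt_cam1, pt_cam2, P1, P2):
    """Triangulate one matched image point from two calibrated cameras.

    pt_cam1, pt_cam2: (u, v) pixel coordinates of the subject in each view.
    P1, P2: 3x4 projection matrices of the two sensor cameras.
    """
    x1 = np.array(pt_cam1, dtype=np.float64).reshape(2, 1)
    x2 = np.array(pt_cam2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()

# Illustrative projection matrices: identical cameras 0.5 m apart.
K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
print(triangulate_subject((320, 240), (250, 240), P1, P2))
```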
Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter
2014-11-01
Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: the camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare the robotic assistance to human assistance. 1033 consecutive laparoscopic procedures were performed assisted by the SoloAssist®. Failures and aborts were documented and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse angular degrees. One case of short circuit was reported. Emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons prefer robotic to human assistance, mostly because of a steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures. Emergency shutdown was necessary in only three cases. Some minor weak spots could have been identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable and further development is desirable, but it cannot fully replace a human assistant.
Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang
2016-01-01
In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest for precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest, whether as service robots serving humans or as industrial robots replacing human labor. To carry out their tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. For this, structured lighting is utilized for the 3D visual sensor system because of its robustness to the nature of the navigation environment and the ease of extracting the feature information of interest. The proposed sensing system is a trinocular vision system composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting 3D information is based on the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints among the cameras, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency and accuracy of this sensor system for 3D environment sensing and recognition.
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, technicians ensure that the installation of the orbiter boom sensor system, or OBSS, into space shuttle Atlantis' payload bay meets the correct specifications. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
2010-01-18
CAPE CANAVERAL, Fla. - In Orbiter Processing Facility 1 at NASA's Kennedy Space Center in Florida, technicians install the orbiter boom sensor system, or OBSS, in space shuttle Atlantis' payload bay across from the remote manipulator system arm. The OBSS' inspection boom assembly, or IBA, is removed from the arm every other processing flow for a detailed inspection. After five consecutive flights, all IBA internal components are submitted to a thorough electrical checkout in the Remote Manipulator System Lab. The 50-foot-long OBSS attaches to the end of the shuttle’s robotic arm and supports the cameras and laser systems used to inspect the shuttle’s thermal protection system while in space. Atlantis is next slated to deliver an Integrated Cargo Carrier and Russian-built Mini Research Module to the International Space Station on the STS-132 mission. Launch is targeted for May 14. Photo credit: NASA/Jim Grossmann
Towards a compact and precise sample holder for macromolecular crystallography.
Papp, Gergely; Rossi, Christopher; Janocha, Robert; Sorez, Clement; Lopez-Marrero, Marcos; Astruc, Anthony; McCarthy, Andrew; Belrhali, Hassan; Bowler, Matthew W; Cipriani, Florent
2017-10-01
Most of the sample holders currently used in macromolecular crystallography offer limited storage density and poor initial crystal-positioning precision upon mounting on a goniometer. This has now become a limiting factor at high-throughput beamlines, where data collection can be performed in a matter of seconds. Furthermore, this lack of precision limits the potential benefits emerging from automated harvesting systems that could provide crystal-position information which would further enhance alignment at beamlines. This situation provided the motivation for the development of a compact and precise sample holder with corresponding pucks, handling tools and robotic transfer protocols. The development process included four main phases: design, prototype manufacture, testing with a robotic sample changer and validation under real conditions on a beamline. Two sample-holder designs are proposed: NewPin and miniSPINE. They share the same robot gripper and allow the storage of 36 sample holders in uni-puck footprint-style pucks, which represents 252 samples in a dry-shipping dewar commonly used in the field. The pucks are identified with human- and machine-readable codes, as well as with radio-frequency identification (RFID) tags. NewPin offers a crystal-repositioning precision of up to 10 µm but requires a specific goniometer socket. The storage density could reach 64 samples using a special puck designed for fully robotic handling. miniSPINE is less precise but uses a goniometer mount compatible with the current SPINE standard. miniSPINE is proposed for the first implementation of the new standard, since it is easier to integrate at beamlines. An upgraded version of the SPINE sample holder with a corresponding puck named SPINEplus is also proposed in order to offer a homogenous and interoperable system. The project involved several European synchrotrons and industrial companies in the fields of consumables and sample-changer robotics. Manual handling of miniSPINE was tested at different institutes using evaluation kits, and pilot beamlines are being equipped with compatible robotics for large-scale evaluation. A companion paper describes a new sample changer FlexED8 (Papp et al., 2017, Acta Cryst., D73, 841-851).
Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui
2017-01-01
Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments. PMID:28216555
Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui
2017-02-14
Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which robots should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology to enable robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several featured points on the part before the robot moved to grasp it. In order to estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. Measuring poses of the camera are optimized based on uncertainty analysis. Also, the principle of the robotic intelligent grasping system was developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the method described in this paper were compared with those produced by a laser tracker, and results show the RMS angle and position error are about 0.0228° and 0.4603 mm. Robotic intelligent grasping tests were also successfully performed in the experiments.
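The monocular 6-DOF estimation described above is closely related to the classical perspective-n-point (PnP) problem: recovering the part's pose by minimizing the collinearity (reprojection) error of known featured points. The authors formulate their own nonlinear multi-pose optimization; as a generic single-view stand-in, OpenCV's solvePnP illustrates the core idea under assumed inputs.

```python
import cv2
import numpy as np

def estimate_part_pose(object_points, image_points, K, dist_coeffs=None):
    """Estimate the part's pose (rotation, translation) from featured points.

    object_points: (N, 3) coordinates of the featured points in the part frame.
    image_points: (N, 2) measured pixel locations of those points.
    K: 3x3 intrinsics of the end-flange camera.
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        object_points.astype(np.float64),
        image_points.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation of the part with respect to the camera
    return R, tvec.ravel()
```

In the paper, measurements from several camera poses (chosen via uncertainty analysis) are combined in a joint nonlinear optimization initialized by a differential transformation; this single-view sketch only illustrates the underlying collinearity idea.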
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation.
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-12-26
Autonomous mobile robots have become a very popular and interesting topic in the last decade. Each of them is equipped with various types of sensors such as GPS, camera, infrared and ultrasonic sensors. These sensors are used to observe the surrounding environment. However, these sensors sometimes fail and have inaccurate readings. Therefore, the integration of sensor fusion will help to solve this dilemma and enhance the overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line or path following approach. The fuzzy system is composed of nine inputs, which are the eight distance sensors and the camera; two outputs, which are the left and right velocities of the mobile robot's wheels; and 24 fuzzy rules for the robot's movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes collision avoidance based on the fuzzy logic fusion model and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios have been presented with static and dynamic obstacles using one robot and two robots while avoiding obstacles of different shapes and sizes.
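A hand-rolled toy version of the kind of fuzzy rule described above (membership functions over obstacle distance driving the two wheel velocities) is sketched below. The membership functions, rule set, and velocity limits are invented for illustration and are far smaller than the paper's 9-input, 24-rule system.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuse(left_dist, right_dist, max_speed=0.5):
    """Toy two-rule fuzzy controller: slow down and turn away from the
    nearer obstacle. Distances in metres, speeds in m/s."""
    near_left = tri(left_dist, 0.0, 0.2, 0.6)    # degree to which the left side is blocked
    near_right = tri(right_dist, 0.0, 0.2, 0.6)

    # Rule 1: obstacle near on the left  -> slow the right wheel (robot turns right).
    # Rule 2: obstacle near on the right -> slow the left wheel  (robot turns left).
    v_left = max_speed * (1.0 - near_right)
    v_right = max_speed * (1.0 - near_left)
    return v_left, v_right

print(fuse(left_dist=0.2, right_dist=1.5))   # obstacle close on the left
```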
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as well as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Sambot II: A self-assembly modular swarm robot
NASA Astrophysics Data System (ADS)
Zhang, Yuchao; Wei, Hongxing; Yang, Bo; Jiang, Cancan
2018-04-01
Sambot II, a new generation of the self-assembly modular swarm robot based on the original Sambot and adopting a laser and camera module for information collection, is introduced in this manuscript. The visual control algorithm of Sambot II is detailed, and the feasibility of the algorithm is verified by laser and camera experiments. At the end of this manuscript, autonomous docking experiments with two Sambot II robots are presented. The experimental results are shown and analyzed to verify the feasibility of the whole Sambot II scheme.
Nonholonomic camera-space manipulation using cameras mounted on a mobile base
NASA Astrophysics Data System (ADS)
Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun
1998-10-01
The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path dependent' nature to nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions. Therefore, the robot needs to have high controllability and the capability to make precise movements. Our robot can recognize a line by using cameras, and can be controlled in the reference directions by means of comparison with original cell map information; furthermore, it moves safely on the basis of an original center-line established permanently in the building. Correspondence between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using cell map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
Innovation in robotic surgery: the Indian scenario.
Deshpande, Suresh V
2015-01-01
Robotics is the science of such devices: a "robot" is an electromechanical arm with a computer interface, a combination of electrical, mechanical, and computer engineering, that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us five long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), which is the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice controlled camera holding robotic arm developed without any support from industry or research institutes.
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies the head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and the image coordinate systems, change of content of the visual field, and partial appearance of the stimuli. All of these events contribute to the reduction in probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in the classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution for the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
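As a rough illustration of the particle-filter idea mentioned above (and not the paper's specific Bayesian model), the sketch below runs a generic predict-weight-resample cycle over a 2D pan/tilt hypothesis of the next focus of attention; the motion and observation models are stand-in assumptions.

```python
import numpy as np

def particle_filter_step(particles, weights, observation, motion_noise=0.05, obs_noise=0.1):
    """One predict-weight-resample cycle for estimating a 2D focus of attention
    (e.g., pan/tilt of a salient stimulus) under camera-head motion."""
    n = len(particles)
    # Predict: diffuse particles to account for stimulus / ego-motion uncertainty.
    particles = particles + np.random.normal(scale=motion_noise, size=particles.shape)
    # Weight: likelihood of the saliency observation given each particle.
    d2 = np.sum((particles - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / obs_noise ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size gets small.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Hypothetical usage: track a stimulus drifting slowly in pan/tilt space.
particles = np.random.uniform(-1, 1, size=(500, 2))
weights = np.full(500, 1.0 / 500)
for t in range(10):
    obs = np.array([0.3 + 0.01 * t, -0.2])          # simulated saliency peak
    particles, weights = particle_filter_step(particles, weights, obs)
print(np.average(particles, axis=0, weights=weights))  # estimated focus of attention
```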
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system.
Dixon, W E; Dawson, D M; Zergeroglu, E; Behal, A
2001-01-01
This paper considers the problem of position/orientation tracking control of wheeled mobile robots via visual servoing in the presence of parametric uncertainty associated with the mechanical dynamics and the camera system. Specifically, we design an adaptive controller that compensates for uncertain camera and mechanical parameters and ensures global asymptotic position/orientation tracking. Simulation and experimental results are included to illustrate the performance of the control law.
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
Robot calibration with a photogrammetric on-line system using reseau scanning cameras
NASA Astrophysics Data System (ADS)
Diewald, Bernd; Godding, Robert; Henrich, Andreas
1994-03-01
The possibility for testing and calibration of industrial robots becomes more and more important for manufacturers and users of such systems. Exacting applications in connection with the off-line programming techniques or the use of robots as measuring machines are impossible without a preceding robot calibration. At the LPA an efficient calibration technique has been developed. Instead of modeling the kinematic behavior of a robot, the new method describes the pose deviations within a user-defined section of the robot's working space. High-precision determination of 3D coordinates of defined path positions is necessary for calibration and can be done by digital photogrammetric systems. For the calibration of a robot at the LPA a digital photogrammetric system with three Rollei Reseau Scanning Cameras was used. This system allows an automatic measurement of a large number of robot poses with high accuracy.
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
Homography-based visual servo regulation of mobile robots.
Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash
2005-10-01
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
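A hedged sketch of the first step described above, estimating the homography between the current and reference camera images from matched target points with the direct linear transform, is given below; the subsequent decomposition into rotation/translation errors and the adaptive Lyapunov controller are not reproduced here, and the point coordinates are fabricated for illustration.

```python
import numpy as np

def homography_dlt(pts_cur, pts_ref):
    """Direct linear transform: H such that pts_ref ~ H @ pts_cur (homogeneous),
    from >= 4 point correspondences given as (n, 2) pixel arrays."""
    A = []
    for (x, y), (u, v) in zip(pts_cur, pts_ref):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical usage: four coplanar target points seen from two robot poses.
pts_cur = np.array([[100.0, 120.0], [300.0, 118.0], [305.0, 260.0], [98.0, 255.0]])
pts_ref = np.array([[110.0, 130.0], [290.0, 125.0], [300.0, 270.0], [105.0, 268.0]])
print(homography_dlt(pts_cur, pts_ref))
```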
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an Automatic Convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than adjustment of the vision system. The autoconvergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
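The geometry behind convergence adjustment can be stated compactly: for a stereo pair with baseline b viewing an object at distance d, the optical axes intersect at the object when the total toe-in angle is 2·atan(b/(2d)). The snippet below is a small illustrative calculation under that assumption, not the FPGA algorithm described in the paper.

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Total toe-in (convergence) angle so the two optical axes intersect at the
    object distance; each camera rotates inward by half of this angle."""
    return math.degrees(2.0 * math.atan2(baseline_m / 2.0, distance_m))

# e.g., a 6 cm baseline converging on an object 1.5 m away
print(round(convergence_angle_deg(0.06, 1.5), 2), "degrees")
```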
NASA Astrophysics Data System (ADS)
Zheng, Li; Yi, Ruan
2009-11-01
Power line inspection and maintenance already benefit from developments in mobile robotics. This paper presents mobile robots capable of crossing obstacles on overhead ground wires. A teleoperated robot performs inspection and maintenance tasks on power transmission line equipment. The inspection robot is driven by 11 motors, has two arms, two wheels, and two claws, and is designed to observe, grasp, walk, roll, turn, rise, and descend. This paper is oriented toward 100% reliable obstacle detection and identification, and sensor fusion to increase the autonomy level. An embedded computer based on the PC/104 bus is chosen as the core of the control system. A visible-light camera and a thermal infrared camera are both installed in a programmable pan-and-tilt camera (PPTC) unit. High-quality visual feedback rapidly becomes crucial for human-in-the-loop control and effective teleoperation. The communication system between the robot and the ground station is based on mesh wireless networks in the 700 MHz band. An expert system programmed in Visual C++ is developed to implement the automatic control. Optoelectronic laser sensors and a laser range scanner were installed on the robot for obstacle-navigation control to grasp the overhead ground wires. A novel prototype with careful consideration of mobility was designed to inspect 500 kV power transmission lines. Experimental results demonstrate that the robot can execute the navigation and inspection tasks.
Robotic surgical skill acquisition: What one needs to know?
Sood, Akshay; Jeong, Wooju; Ahlawat, Rajesh; Campbell, Logan; Aggarwal, Shruti; Menon, Mani; Bhandari, Mahendra
2015-01-01
Robotic surgery has been eagerly adopted by patients and surgeons alike in the field of urology over the last decade. However, there is a lack of standardization in training curricula and accreditation guidelines to ensure surgeon competence and patient safety. Accordingly, in this review, we aim to highlight ‘who’ needs to learn ‘what’ and ‘how’, to become competent in robotic surgery. We demonstrate that both novice and experienced open surgeons require supervision and mentoring during the initial phases of robotic surgery skill acquisition. Experienced open surgeons possess domain knowledge but need to acquire technical knowledge under supervision (in either a simulated or a clinical environment) to successfully transition to robotic surgery, whereas novice surgeons need to acquire both domain and technical knowledge to become competent in robotic surgery. With regard to training curricula, a variety of training programs such as academic fellowships, mini-fellowships, and mentored skill courses exist, and cater to the needs and expectations of postgraduate surgeons adequately. Fellowships provide the most comprehensive training but may not be suitable for all surgeon-learners because of the long-term time commitment. For these surgeon-learners, short-term courses such as mini-fellowships or mentored skill courses might be more apt. Lastly, with regard to credentialing, uniformity in accreditation criteria is lacking, but earnest efforts are underway. Currently, accreditation for competence in robotic surgery is institution specific. PMID:25598593
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
Systems and Algorithms for Automated Collaborative Observation Using Networked Robotic Cameras
ERIC Educational Resources Information Center
Xu, Yiliang
2011-01-01
The development of telerobotic systems has evolved from Single Operator Single Robot (SOSR) systems to Multiple Operator Multiple Robot (MOMR) systems. The relationship between human operators and robots follows the master-slave control architecture and the requests for controlling robot actuation are completely generated by human operators. …
System of launchable mesoscale robots for distributed sensing
NASA Astrophysics Data System (ADS)
Yesin, Kemal B.; Nelson, Bradley J.; Papanikolopoulos, Nikolaos P.; Voyles, Richard M.; Krantz, Donald G.
1999-08-01
A system of launchable miniature mobile robots with various sensors as payload is used for distributed sensing. The robots are projected to areas of interest either by a robot launcher or by a human operator using standard equipment. A wireless communication network is used to exchange information with the robots. Payloads such as a MEMS sensor for vibration detection, a microphone, and an active video module are used mainly to detect humans. The video camera provides live images through a wireless video transmitter, and a pan-tilt mechanism expands the effective field of view. There are strict restrictions on total volume and power consumption of the payloads due to the small size of the robot. Emerging technologies are used to address these restrictions. In this paper, we describe the use of microrobotic technologies to develop active vision modules for the mesoscale robot. A single-chip CMOS video sensor is used along with a miniature lens that is approximately the size of a sugar cube. The device consumes 100 mW, about one-fifth the power consumption of a comparable CCD camera. Miniature gearmotors 3 mm in diameter are used to drive the pan-tilt mechanism. A miniature video transmitter is used to transmit analog video signals from the camera.
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Almasri, Marwah; Elleithy, Khaled; Alajlan, Abrar
2015-01-01
Autonomous mobile robots have become a very popular and interesting topic in the last decade. They are equipped with various types of sensors, such as GPS, cameras, and infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or give inaccurate readings, so sensor fusion helps to solve this problem and enhance overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot’s wheels), and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which includes fuzzy-logic-fusion-based collision avoidance and line following, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. PMID:26712766
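As a toy illustration of fuzzy-logic sensor fusion (far simpler than the paper's nine-input, 24-rule system), the sketch below fuses two distance readings into left/right wheel speeds with three hand-written rules; the membership shapes, rule set, and speed values are assumptions.

```python
def near(dist, full=0.1, none=0.6):
    """Shoulder membership: 1 when closer than `full` (m), 0 beyond `none`, linear between."""
    if dist <= full:
        return 1.0
    if dist >= none:
        return 0.0
    return (none - dist) / (none - full)

def fuzzy_avoid(left_dist, right_dist, cruise=0.2, creep=0.05):
    """Toy fuzzy fusion of two distance readings (m) into wheel speeds (m/s).
    Rules: near-left -> steer right, near-right -> steer left, clear -> go straight."""
    n_l, n_r = near(left_dist), near(right_dist)
    clear = max(0.0, 1.0 - max(n_l, n_r))
    w = n_l + n_r + clear
    v_left = (n_l * cruise + n_r * creep + clear * cruise) / w
    v_right = (n_l * creep + n_r * cruise + clear * cruise) / w
    return v_left, v_right

print(fuzzy_avoid(0.2, 1.0))   # obstacle on the left -> right wheel slower, robot turns right
```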
NASA Technical Reports Server (NTRS)
2000-01-01
The Automated Endoscopic System for Optimal Positioning, or AESOP, was developed by Computer Motion, Inc. under an SBIR contract from the Jet Propulsion Lab. AESOP is a robotic endoscopic positioning system used to control the motion of a camera during endoscopic surgery. The camera, which is mounted at the end of a robotic arm, previously had to be held in place by the surgical staff. With AESOP the robotic arm can make more precise and consistent movements. AESOP is also voice controlled by the surgeon. It is hoped that this technology can be used in space repair missions which require precision beyond human dexterity. A new generation of the same technology, entitled the ZEUS Robotic Surgical System, can make endoscopic procedures even more successful. ZEUS allows the surgeon to control various instruments with its robotic arms, providing the precision the procedure requires.
MonoSLAM: real-time single camera SLAM.
Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier
2007-06-01
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
NASA Technical Reports Server (NTRS)
Howard, Andrew B.; Ansar, Adnan I.; Litwin, Todd E.; Goldberg, Steven B.
2009-01-01
The JPL Robot Vision Library (JPLV) provides real-time robot vision algorithms for developers who are not vision specialists. The package includes algorithms for stereo ranging, visual odometry, and unsurveyed camera calibration, and has unique support for very wide-angle lenses.
NASA Technical Reports Server (NTRS)
Hollars, M. G.; Cannon, R. H., Jr.; Alexander, H. L.; Morse, D. F.
1987-01-01
The Stanford University Aerospace Robotics Laboratory is actively developing and experimentally testing advanced robot control strategies for space robotic applications. Early experiments focused on control of very lightweight one-link manipulators and other flexible structures. The results are being extended to position and force control of mini-manipulators attached to flexible manipulators and multilink manipulators with flexible drive trains. Experimental results show that end-point sensing and careful dynamic modeling or adaptive control are key to the success of these control strategies. Free-flying space robot simulators that operate on an air cushion table have been built to test control strategies in which the dynamics of the base of the robot and the payload are important.
NASA Astrophysics Data System (ADS)
Kang, Sungil; Roh, Annah; Nam, Bodam; Hong, Hyunki
2011-12-01
This paper presents a novel vision system for people detection using an omnidirectional camera mounted on a mobile robot. In order to determine regions of interest (ROI), we compute a dense optical flow map using graphics processing units, which enable us to examine compliance with the ego-motion of the robot in a dynamic environment. Shape-based classification algorithms are employed to sort ROIs into human beings and nonhumans. The experimental results show that the proposed system detects people more precisely than previous methods.
NASA Astrophysics Data System (ADS)
Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.
2001-08-01
The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is all about robots: robots can perform tasks where humans cannot, and they have wide applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering and can perform tasks automatically on its own or under human supervision. The camera is the eye of the robot, called robovision; it helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web interface to control the robot to move left, right, forward, and backward while streaming video. As we move to smart environments, or the IoT (Internet of Things), the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of the robot; the motors and the surveillance camera (Raspberry Pi camera v2) are connected to the Raspberry Pi.
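A minimal sketch of the web-control side of such a robot is shown below, assuming a Flask HTTP endpoint mapped to motor commands through the gpiozero Robot helper; the pin numbers, route names, and port are illustrative assumptions, and the video stream would be served by a separate process.

```python
# Minimal sketch of a web-controlled drive interface (assumed wiring and routes).
from flask import Flask
from gpiozero import Robot   # drives two DC motors via a motor driver board

app = Flask(__name__)
robot = Robot(left=(4, 14), right=(17, 18))   # BCM pin numbers are assumptions

@app.route("/move/<direction>")
def move(direction):
    """Map a URL command from the web page to a motor action."""
    actions = {
        "forward": robot.forward,
        "back": robot.backward,
        "left": robot.left,
        "right": robot.right,
        "stop": robot.stop,
    }
    action = actions.get(direction)
    if action is None:
        return "unknown command", 400
    action()
    return "ok"

if __name__ == "__main__":
    # Video streaming would be handled separately (e.g., by an MJPEG streamer);
    # this process only handles the drive commands.
    app.run(host="0.0.0.0", port=8000)
```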
A Fully Sensorized Cooperative Robotic System for Surgical Interventions
Tovar-Arriaga, Saúl; Vargas, José Emilio; Ramos, Juan M.; Aceves, Marco A.; Gorrostieta, Efren; Kalender, Willi A.
2012-01-01
In this research, a fully sensorized cooperative robot system for manipulation of needles is presented. The setup consists of a DLR/KUKA Light Weight Robot III especially designed for safe human/robot interaction, an FD-CT robot-driven angiographic C-arm system, and a navigation camera. Also, new control strategies for robot manipulation in the clinical environment are introduced. A method for fast calibration of the involved components and preliminary accuracy tests of the whole possible error chain are presented. Calibration of the robot with the navigation system has a residual error of 0.81 mm (rms) with a standard deviation of ±0.41 mm. The accuracy of the robotic system while targeting fixed points at different positions within the workspace is 1.2 mm (rms) with a standard deviation of ±0.4 mm. After calibration, and due to closed-loop control, the absolute positioning accuracy was reduced to the navigation camera accuracy, which is 0.35 mm (rms). The implemented control allows the robot to compensate for small patient movements. PMID:23012551
NASA Technical Reports Server (NTRS)
Rochlis-Zumbado, Jennifer; Sandor, Aniko; Ezer, Neta
2012-01-01
Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI) is a new Human Research Program (HRP) risk. HRI is a research area that seeks to understand the complex relationship among variables that affect the way humans and robots work together to accomplish goals. The DRP addresses three major HRI study areas that will provide appropriate information for navigation guidance to a teleoperator of a robot system, and contribute to the closure of currently identified HRP gaps: (1) Overlays -- Use of overlays for teleoperation to augment the information available on the video feed (2) Camera views -- Type and arrangement of camera views for better task performance and awareness of surroundings (3) Command modalities -- Development of gesture and voice command vocabularies
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The follow behavior can be used with different camera systems and with thermal or visual tracking, as well as with other tracking methods such as radio frequency tags.
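The record gives no implementation detail, so the sketch below only illustrates the general reactive-follow idea under stated assumptions: steer toward the tracked target's bearing while adding a repulsive steering term and a speed reduction for obstacles inside a safety distance. The gains, distances, and sensor format are made up for illustration.

```python
import math

def follow_step(target_bearing, target_range, obstacle_bearings, obstacle_ranges,
                follow_dist=2.0, safe_dist=1.0, k_turn=1.5, k_speed=0.5):
    """One reactive control step: head toward the tracked target, but add a
    repulsive steering term away from any obstacle closer than safe_dist.
    Bearings are in radians relative to the vehicle heading; ranges in meters."""
    turn = k_turn * target_bearing
    speed = k_speed * max(0.0, target_range - follow_dist)    # stop at the follow distance
    for b, r in zip(obstacle_bearings, obstacle_ranges):
        if r < safe_dist:
            turn -= k_turn * (safe_dist - r) / safe_dist * math.copysign(1.0, b)
            speed *= r / safe_dist                            # slow down near obstacles
    return speed, turn

# Target ahead-left at 4 m, one obstacle close on the right.
print(follow_step(0.3, 4.0, [-0.6], [0.7]))
```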
Soft Robotic Manipulator for Improving Dexterity in Minimally Invasive Surgery.
Diodato, Alessandro; Brancadoro, Margherita; De Rossi, Giacomo; Abidi, Haider; Dall'Alba, Diego; Muradore, Riccardo; Ciuti, Gastone; Fiorini, Paolo; Menciassi, Arianna; Cianchetti, Matteo
2018-02-01
Combining the strengths of surgical robotics and minimally invasive surgery (MIS) holds the potential to revolutionize surgical interventions. The MIS advantages for the patients are obvious, but the use of instrumentation suitable for MIS often translates in limiting the surgeon capabilities (eg, reduction of dexterity and maneuverability and demanding navigation around organs). To overcome these shortcomings, the application of soft robotics technologies and approaches can be beneficial. The use of devices based on soft materials is already demonstrating several advantages in all the exploitation areas where dexterity and safe interaction are needed. In this article, the authors demonstrate that soft robotics can be synergistically used with traditional rigid tools to improve the robotic system capabilities and without affecting the usability of the robotic platform. A bioinspired soft manipulator equipped with a miniaturized camera has been integrated with the Endoscopic Camera Manipulator arm of the da Vinci Research Kit both from hardware and software viewpoints. Usability of the integrated system has been evaluated with nonexpert users through a standard protocol to highlight difficulties in controlling the soft manipulator. This is the first time that an endoscopic tool based on soft materials has been integrated into a surgical robot. The soft endoscopic camera can be easily operated through the da Vinci Research Kit master console, thus increasing the workspace and the dexterity, and without limiting intuitive and friendly use.
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics -- spectral response, lag, etc. -- of the super-HARP tube had to be designed for use underwater because the propagation of light in water is very different from that in air, and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep sea robot (DOLPHIN 3K) was fitted with this camera and used for the first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep sea surveying and safety. It was thus confirmed that the Super-HARP camera is very effective for underwater use.
Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dixon, Warren
2004-06-01
There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean-space through a perspective (range dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that targets compensating for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable response by the robotic system operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with the capabilities of increased accuracy, autonomy, and robustness, with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.
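For context, the classical image-based visual servoing law that such modules build on can be written v = -λ L⁺ e, where e stacks the image-feature errors and L is the point-feature interaction matrix. The sketch below is the textbook form with assumed depth estimates, not the project's adaptive, calibration-robust controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y)
    at depth Z, relating the camera twist [vx, vy, vz, wx, wy, wz] to image velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera twist command v = -lambda * L^+ * e for stacked point features."""
    e, L = [], []
    for (x, y), (xd, yd), Z in zip(features, desired, depths):
        e.extend([x - xd, y - yd])
        L.append(interaction_matrix(x, y, Z))
    L = np.vstack(L)
    return -lam * np.linalg.pinv(L) @ np.asarray(e)

# Hypothetical usage: four point features with unit depth estimates.
feats = [(0.10, 0.05), (-0.12, 0.04), (-0.11, -0.06), (0.09, -0.07)]
goal = [(0.05, 0.05), (-0.05, 0.05), (-0.05, -0.05), (0.05, -0.05)]
print(ibvs_velocity(feats, goal, depths=[1.0] * 4))
```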
Building Robota, a mini-humanoid robot for the rehabilitation of children with autism.
Billard, Aude; Robins, Ben; Nadel, Jacqueline; Dautenhahn, Kerstin
2007-01-01
The Robota project constructs a series of multiple-degrees-of-freedom, doll-shaped humanoid robots, whose physical features resemble those of a human baby. The Robota robots have been applied as assistive technologies in behavioral studies with low-functioning children with autism. These studies investigate the potential of using an imitator robot to assess children's imitation ability and to teach children simple coordinated behaviors. In this article, the authors review the recent technological developments that have made the Robota robots suitable for use with children with autism. They critically appraise the main outcomes of two sets of behavioral studies conducted with Robota and discuss how these results inform future development of the Robota robots and robots in general for the rehabilitation of children with complex developmental disabilities.
Solving the robot-world, hand-eye(s) calibration problem with iterative methods
USDA-ARS?s Scientific Manuscript database
Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
[RESEARCH PROGRESS OF PERIPHERAL NERVE SURGERY ASSISTED BY Da Vinci ROBOTIC SYSTEM].
Shen, Jie; Song, Diyu; Wang, Xiaoyu; Wang, Changjiang; Zhang, Shuming
2016-02-01
To summarize the research progress of peripheral nerve surgery assisted by Da Vinci robotic system. The recent domestic and international articles about peripheral nerve surgery assisted by Da Vinci robotic system were reviewed and summarized. Compared with conventional microsurgery, peripheral nerve surgery assisted by Da Vinci robotic system has distinctive advantages, such as elimination of physiological tremors and three-dimensional high-resolution vision. It is possible to perform robot assisted limb nerve surgery using either the traditional brachial plexus approach or the mini-invasive approach. The development of Da Vinci robotic system has revealed new perspectives in peripheral nerve surgery. But it has still been at the initial stage, more basic and clinical researches are still needed.
Thermal tracking in mobile robots for leak inspection activities.
Ibarguren, Aitor; Molina, Jorge; Susperregi, Loreto; Maurtua, Iñaki
2013-10-09
Maintenance tasks are crucial for all kinds of industries, especially in extensive industrial plants like solar thermal power plants. The incorporation of robots is a key issue for automating inspection activities, as it will allow a constant and regular control over the whole plant. This paper presents an autonomous robotic system to perform pipeline inspection for early detection and prevention of leakages in thermal power plants, based on the work developed within the MAINBOT (http://www.mainbot.eu) European project. Based on the information provided by a thermographic camera, the system is able to detect leakages in the collectors and pipelines. Besides the leakage detection algorithms, the system includes a particle filter-based tracking algorithm to keep the target in the field of view of the camera and to avoid the irregularities of the terrain while the robot patrols the plant. The information provided by the particle filter is further used to command a robot arm, which handles the camera and ensures that the target is always within the image. The obtained results show the suitability of the proposed approach, adding a tracking algorithm to improve the performance of the leakage detection system.
Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud
Ortega, Agustín; Silva, Manuel; Teniente, Ernesto H.; Ferreira, Ricardo; Bernardino, Alexandre; Gaspar, José; Andrade-Cetto, Juan
2014-01-01
Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to suffer frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: one is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second case consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC). PMID:25076221
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
A Novel Bioinspired PVDF Micro/Nano Hair Receptor for a Robot Sensing System
Li, Fei; Liu, Weiting; Stefanini, Cesare; Fu, Xin; Dario, Paolo
2010-01-01
This paper describes the concept and design of a novel artificial hair receptor for the sensing system of micro intelligent robots such as a cricket-like jumping mini robot. The concept is inspired by the natural hair receptor of animals, also called a cilium or filiform hair by different research groups, which is usually used as a vibration receptor or a flow detector by insects, mammals, and fishes. The suspended fiber model is first built and the influence of scaling down is analyzed theoretically. The design of this artificial hair receptor is based on aligned suspended PVDF (polyvinylidene fluoride) fibers, manufactured with a novel method called the thermo-direct drawing technique, and aligned suspended submicron-diameter fibers are thus successfully fabricated on a flexible Kapton. In the post-processing step, key problems such as depositing separated electrodes along the fiber drawing direction and poling the micro/nano fibers to impart good piezoelectric activity are addressed. Preliminary validation experiments show that the artificial hair receptor responds reliably, with good sensitivity to external pressure variation and medium flow, and indicate its prospects for application in the sensing systems of mini/micro bio-robots. PMID:22315581
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the image from mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon has been achieved in order to determine the modifications concerning object shapes, thresholding levels, and decision parameters as a function of the robot speed.
Development of a table tennis robot for ball interception using visual feedback
NASA Astrophysics Data System (ADS)
Parnichkun, Manukid; Thalagoda, Janitha A.
2016-07-01
This paper presents a concept of intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), which are simplified in such a way that the system is able to perform the task within a bounded limit. It employs computer vision to localize the ball. For ball identification, Colour-Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. The computation time of image processing from the web cameras is long, so it is not possible to intercept the table tennis ball using image processing alone. Therefore, a projectile motion model is employed to predict the final destination of the ball.
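A minimal sketch of the drag-free projectile prediction step is shown below: given a 3D position and velocity estimate from vision, it solves the quadratic for the time at which the ball reaches a reference height and returns the landing point. Real table tennis balls experience drag and spin, so this is only the idealized model named in the abstract; the function and variable names are assumptions.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def predict_landing(pos, vel, ground_z=0.0):
    """Predict where a ball at position pos = (x, y, z) with velocity
    vel = (vx, vy, vz) crosses z = ground_z, assuming drag-free projectile motion.
    Returns (x, y, time_to_impact) or None if the height is never reached."""
    x, y, z = pos
    vx, vy, vz = vel
    # z(t) = z + vz*t - 0.5*g*t^2  ->  solve z(t) = ground_z for the positive root.
    a, b, c = -0.5 * G, vz, z - ground_z
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2 * a)
    return x + vx * t, y + vy * t, t

# Ball 1.2 m above the table plane moving forward and slightly upward.
print(predict_landing(pos=(0.0, 0.5, 1.2), vel=(3.0, 0.2, 1.0)))
```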
A Gradient Optimization Approach to Adaptive Multi-Robot Control
2009-09-01
A gradient optimization approach to adaptive multi-robot control is developed, theoretically proven, and implemented on multi-robot platforms, including the deployment of a group of three flying robots with downward-facing cameras to monitor an environment on the ground. The controllers are often nonlinear and are coupled through a network that changes over time. Thesis supervisor: Daniela Rus, Professor of Electrical Engineering and Computer Science.
3D vision system for intelligent milking robot automation
NASA Astrophysics Data System (ADS)
Akhloufi, M. A.
2013-12-01
In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate estimation of teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and animal health (e.g., development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.
Enhanced operator perception through 3D vision and haptic feedback
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Light, Kenneth; Bodenhamer, Andrew; Bosscher, Paul; Wilkinson, Loren
2012-06-01
Polaris Sensor Technologies (PST) has developed a stereo vision upgrade kit for TALON® robot systems comprised of a replacement gripper camera and a replacement mast zoom camera on the robot, and a replacement display in the Operator Control Unit (OCU). Harris Corporation has developed a haptic manipulation upgrade for TALON® robot systems comprised of a replacement arm and gripper and an OCU that provides haptic (force) feedback. PST and Harris have recently collaborated to integrate the 3D vision system with the haptic manipulation system. In multiple studies done at Fort Leonard Wood, Missouri it has been shown that 3D vision and haptics provide more intuitive perception of complicated scenery and improved robot arm control, allowing for improved mission performance and the potential for reduced time on target. This paper discusses the potential benefits of these enhancements to robotic systems used for the domestic homeland security mission.
Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer
NASA Astrophysics Data System (ADS)
Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin
2017-12-01
An image processing device for an inspection robot with adaptive polarization adjustment is proposed; the device includes the inspection robot body, the image collecting mechanism, the polarizer, and the automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the inspection robot body for collecting equipment image data in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism; the optical axis of the camera passes perpendicularly through the polarizer, and the polarizer rotates about the optical axis of the visible-light camera. The simulation results show that the system solves image blurring problems caused by glare, reflections, and shadow, so the robot can observe details of the running status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, which ensures the safe operation of the substation equipment.
Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong
In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm³. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs, and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders higher than those generated in previous researches using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that based on a pixelated detector.
Visual Control for Multirobot Organized Rendezvous.
Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C
2012-08-01
This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
Vision robot with rotational camera for searching ID tags
NASA Astrophysics Data System (ADS)
Kimura, Nobutaka; Moriya, Toshio
2008-02-01
We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
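The pan-rate relation can be sketched under a simple assumption: if the robot translates at speed v parallel to the shelf face at perpendicular distance d, holding a fixed surface point centered requires a pan rate of (v/d)·cos²θ, where θ is the current pan angle measured from the perpendicular. The snippet below only evaluates that geometric relation with illustrative numbers; it is not the authors' derivation verbatim.

```python
import math

def pan_rate(v, d, theta):
    """Pan angular velocity (rad/s) that keeps a fixed point on a shelf face
    centered while the robot translates at speed v (m/s) parallel to the face
    at perpendicular distance d (m); theta is the current pan angle (rad)
    measured from the direction perpendicular to the face."""
    return (v / d) * math.cos(theta) ** 2

# Robot moving at 0.5 m/s, 1.2 m from the shelf, camera currently panned 20 degrees back.
print(pan_rate(0.5, 1.2, math.radians(20.0)))
```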
Shuttlecock detection system for fully-autonomous badminton robot with two high-speed video cameras
NASA Astrophysics Data System (ADS)
Masunari, T.; Yamagami, K.; Mizuno, M.; Une, S.; Uotani, M.; Kanematsu, T.; Demachi, K.; Sano, S.; Nakamura, Y.; Suzuki, S.
2017-02-01
Two high-speed video cameras are successfully used to detect the motion of a flying badminton shuttlecock. The shuttlecock detection system is applied to badminton robots that play badminton fully autonomously. The detection system measures the three-dimensional position and velocity of a flying shuttlecock and predicts the position where the shuttlecock will fall to the ground. The badminton robot moves quickly to the predicted landing position and hits the shuttlecock back into the opponent's side of the court. In a badminton game there is a large audience, and some spectators move behind the flying shuttlecock; they act as background noise and make it difficult to detect the motion of the shuttlecock. The present study demonstrates that such noise can be eliminated by stereo imaging with two high-speed cameras.
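A minimal sketch of the stereo triangulation step described above is given here, assuming two calibrated cameras with known 3×4 projection matrices; the linear (DLT) formulation is a standard method and not necessarily the exact one used by the authors.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations.
    P1, P2: 3x4 camera projection matrices; uv1, uv2: (u, v) pixel coordinates."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two homogeneous linear equations A X = 0.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # inhomogeneous 3D shuttlecock position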
NASA Technical Reports Server (NTRS)
Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.
1987-01-01
An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system, capable of supervised autonomous robotic functions, is partitioned into a multiple-processor/parallel-processing configuration. The system currently interfaces to cameras but also has the capability to use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with a discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year, and more realistic demonstrations now in planning are discussed.
Wide Field Camera 3 Accommodations for HST Robotics Servicing Mission
NASA Technical Reports Server (NTRS)
Ginyard, Amani
2005-01-01
This slide presentation discusses the objectives of the Hubble Space Telescope (HST) Robotics Servicing and Deorbit Mission (HRSDM), reviews the Wide Field Camera 3 (WFC3), and reviews the contamination accommodations for the WFC3. The objectives of the HRSDM are (1) to provide a disposal capability at the end of HST's useful life, (2) to upgrade the hardware by installing two new scientific instruments, replacing the Corrective Optics Space Telescope Axial Replacement (COSTAR) with the Cosmic Origins Spectrograph (COS) and replacing the Wide Field/Planetary Camera 2 (WFPC2) with Wide Field Camera 3, and (3) to extend the scientific life of HST for a minimum of 5 years after servicing. Included are slides showing the Hubble Robotic Vehicle (HRV) and slides describing what the HRV contains. There are also slides describing the WFC3; one function associated with the WFC3 delivery is to carry replacement gyroscopes for HST. Additional slides discuss the contamination requirements for the Rate Sensor Units (RSUs), which are part of the Rate Gyroscope Assembly carried with the WFC3.
Putzer, David; Klug, Sebastian; Moctezuma, Jose Luis; Nogler, Michael
2014-12-01
Time-of-flight (TOF) cameras can guide surgical robots or provide soft tissue information for augmented reality in the medical field. In this study, a method to automatically track the soft tissue envelope of a minimally invasive hip approach in a cadaver study is described. An algorithm for the TOF camera was developed and 30 measurements on 8 surgical situs (direct anterior approach) were carried out. The results were compared to a manual measurement of the soft tissue envelope. The TOF camera showed an overall recognition rate of the soft tissue envelope of 75%. On comparing the results from the algorithm with the manual measurements, a significant difference was found (P > .005). In this preliminary study, we have presented a method for automatically recognizing the soft tissue envelope of the surgical field in a real-time application. Further improvements could result in a robotic navigation device for minimally invasive hip surgery. © The Author(s) 2014.
Using Visual Odometry to Estimate Position and Attitude
NASA Technical Reports Server (NTRS)
Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark
2007-01-01
A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
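The core step of turning tracked 3D feature positions into a motion estimate can be illustrated with the standard SVD-based rigid alignment (Kabsch) procedure below; this is a simplified, outlier-free sketch under assumed variable names, whereas the flight software applies robust estimation on top of such a step.

import numpy as np

def rigid_motion(points_prev, points_curr):
    """Least-squares rotation R and translation t such that
    points_curr ~= R @ points_prev + t, from Nx3 arrays of tracked 3D features."""
    cp, cc = points_prev.mean(axis=0), points_curr.mean(axis=0)
    H = (points_prev - cp).T @ (points_curr - cc)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cc - R @ cp
    return R, t  # vehicle motion between the two stereo frames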
Pilot Fullerton points Hasselblad camera out forward flight deck window W6
NASA Technical Reports Server (NTRS)
1982-01-01
Pilot Fullerton, wearing communications kit assembly (ASSY) mini headset (HDST), points Hasselblad camera out forward flight deck pilots station window W6. Forward flight deck control panels F4, F8, and R1, flight mirror assy, Volume R5 Kit, and pilots ejection seat (S2) headrest appear in view.
Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Tang, Hongru; Xi, Ning
2016-01-01
Controlling robots by natural language (NL) is increasingly attracting attention for its versatility and convenience, and because it requires no extensive training for users. Grounding is a crucial challenge of this problem: enabling robots to understand NL instructions from humans. This paper mainly explores the object grounding problem and concretely studies how to detect target objects from NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, the object attributes and the relations between objects are further extracted. The NL instructions that incorporate multiple cues for object specifications are parsed into domain-specific annotations. The annotations from NL and the information extracted from the RGB-D camera are matched in a computational state estimation framework to search all possible object grounding states. The final grounding is accomplished by selecting the states with the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. Experiments on NL-controlled object manipulation and NL-based task programming using a mobile manipulator show the method's effectiveness and practicability in robotic applications. PMID:27983604
Polymorphic robotic system controlled by an observing camera
NASA Astrophysics Data System (ADS)
Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas
2010-02-01
Polymorphic robotic systems, which are composed of many modular robots that act in coordination to achieve a goal defined at the system level, have been drawing the attention of industrial and research communities since they bring additional flexibility to many applications. This paper introduces a new polymorphic robotic system in which the detection and control of the modules are handled by a stationary observing camera. The modules do not have any sensory equipment for positioning or detecting each other. They are self-powered, equipped with wireless communication and locking mechanisms, and are marked to enable the image processing algorithm to detect the position and orientation of each of them in two-dimensional space. Since the system does not depend on the modules for positioning and commanding others, in a circumstance where one or more of the modules malfunction, the system will be able to continue operating with the rest of the modules. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering-parameter selection methodology is included. To the best of the authors' knowledge, this is the first study to introduce a remote camera observer to control the modules of a polymorphic robotic system.
Robotic retroperitoneal partial nephrectomy: a step-by-step guide.
Ghani, Khurshid R; Porter, James; Menon, Mani; Rogers, Craig
2014-08-01
To describe a step-by-step guide for successful implementation of the retroperitoneal approach to robotic partial nephrectomy (RPN). PATIENTS AND METHODS: The patient is placed in the flank position and the table fully flexed to increase the space between the 12th rib and the iliac crest. Access to the retroperitoneal space is obtained using a balloon-dilating device. Ports include a 12-mm camera port, two 8-mm robotic ports and a 12-mm assistant port placed in the anterior axillary line cephalad to the anterior superior iliac spine and 7-8 cm caudal to the ipsilateral robotic port. Positioning and port placement strategies for a successful technique include: (i) docking the robot directly over the patient's head, parallel to the spine; (ii) making the camera-port incision ≈1.9 cm (one fingerbreadth) above the iliac crest, lateral to the triangle of Petit; (iii) inserting the kidney-shaped balloon dilator into the retroperitoneal space with a Seldinger technique; (iv) maximising the distance between all ports; and (v) ensuring the camera arm is placed in the outer part of the 'sweet spot'. The retroperitoneal approach to RPN permits direct access to the renal hilum, requires no bowel mobilisation, and provides excellent visualisation of posteriorly located tumours. © 2014 The Authors. BJU International © 2014 BJU International.
Trans-subxiphoid robotic thymectomy.
Suda, Takashi; Tochii, Daisuke; Tochii, Sachiko; Takagi, Yasushi
2015-05-01
Minimally invasive surgery has replaced median sternotomy for resectable anterior mediastinal masses and is performed by various approaches. We developed a new minimally invasive surgical procedure by combining the subxiphoid approach performed through a midline camera port with the use of a robotic surgery system (Intuitive Surgical, Sunnyvale, CA, USA). A 3-cm transverse incision was made 1 cm below the xiphoid process. Then, a port designed for single-port surgery was inserted. Through this port, CO2 gas was injected at 8 mmHg. The thymus was then detached from the back of the sternum. A 1-cm skin incision was made bilaterally in the sixth intercostal space, followed by insertion of a port for the robotic system. A camera port was inserted into the subxiphoid port, to which the camera scope was mounted, and thymectomy was performed. We have performed the operation in 3 patients. In our experience, this procedure provides a good operative view in the neck region and makes verification of the phrenic nerve easy. Furthermore, with the da Vinci surgical system, which enables surgical manipulation from a correct angle due to the multijoint robotic arms, trans-subxiphoid robotic thymectomy may be a promising new thymectomy procedure. © The Author 2015. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
The Interdependence of Computers, Robots, and People.
ERIC Educational Resources Information Center
Ludden, Laverne; And Others
Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…
Robotic Welding and Inspection System
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. B. Smartt; D. P. Pace; E. D. Larsen
2008-06-01
This paper presents a robotic system for GTA welding of lids on cylindrical vessels. The system consists of an articulated robot arm, a rotating positioner, end effectors for welding, grinding, ultrasonic and eddy current inspection. Features include weld viewing cameras, modular software, and text-based procedural files for process and motion trajectories.
[Optimization of end-tool parameters based on robot hand-eye calibration].
Zhang, Lilong; Cao, Tong; Liu, Da
2017-04-01
A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, with the aim of simplifying the operation process and reducing the preparation time. In addition, a practical method is introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration process, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated based on the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method significantly improves the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method significantly improves the absolute positioning accuracy of the one-time registration method, which can then meet the requirements of clinical surgery.
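The relationship underlying the one-time registration can be sketched as follows: if the marker pose is known both in the camera frame (from the binocular camera) and in the robot base frame (from the joint parameters), composing the two yields the camera-to-base transform. The 4×4 homogeneous-matrix names below are assumptions for illustration only.

import numpy as np

def camera_to_base(T_base_marker, T_cam_marker):
    """One-shot hand-eye registration: T_base_cam such that
    T_base_marker = T_base_cam @ T_cam_marker, all 4x4 homogeneous transforms."""
    return T_base_marker @ np.linalg.inv(T_cam_marker)

# T_base_marker comes from the robot's forward kinematics and the nominal
# end-tool parameters; T_cam_marker comes from the binocular marker measurement.
# Errors in the end-tool parameters propagate directly into T_base_cam, which is
# why the paper optimizes those parameters numerically.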
Object recognition for autonomous robot utilizing distributed knowledge database
NASA Astrophysics Data System (ADS)
Takatori, Jiro; Suzuki, Kenji; Hartono, Pitoyo; Hashimoto, Shuji
2003-10-01
In this paper we present a novel method of object recognition utilizing a remote knowledge database for an autonomous robot. The developed robot has three robot arms with different sensors: two CCD cameras and haptic sensors. It can see, touch and move the target object from different directions. Referring to a remote knowledge database of geometry and materials, the robot observes and handles objects in order to understand them, including their physical characteristics.
Automation of the targeting and reflective alignment concept
NASA Technical Reports Server (NTRS)
Redfield, Robin C.
1992-01-01
The automated alignment system, described herein, employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six degree of freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera and the target's relative position based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm which enables the camera to only see the target. These capabilities are realized with relatively minimal complexity and expense.
Robot-assisted general surgery.
Hazey, Jeffrey W; Melvin, W Scott
2004-06-01
With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.
Single-Command Approach and Instrument Placement by a Robot on a Target
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Cheng, Yang
2005-01-01
AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.
Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap.
Ahmad Yousef, Khalil M; Mohd, Bassam J; Al-Widyan, Khalid; Hayajneh, Thaier
2017-10-14
Extrinsic calibration of camera and 2D laser range finder (lidar) sensors is crucial in sensor data fusion applications, for example in SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar sensors do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, which is proven to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about geometric structure in the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively.
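For readers unfamiliar with the robot-world hand-eye formulation, the sketch below shows how candidate calibration matrices X and Z can be scored against recorded pose pairs by the residual of AX = ZB; it is an evaluation helper under assumed names, not the solver used in the paper.

import numpy as np

def axzb_residual(A_list, B_list, X, Z):
    """Mean Frobenius-norm residual of A_i X - Z B_i over recorded pose pairs.
    A_i, B_i, X, Z are 4x4 homogeneous transforms."""
    residuals = [np.linalg.norm(A @ X - Z @ B) for A, B in zip(A_list, B_list)]
    return float(np.mean(residuals))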
[Robotics in general surgery: personal experience, critical analysis and prospectives].
Fracastoro, Gerolamo; Borzellino, Giuseppe; Castelli, Annalisa; Fiorini, Paolo
2005-01-01
Today, minimally invasive surgery can be enhanced with sophisticated information systems (computer-assisted surgery, CAS) such as robotics, tele-mentoring and tele-presence. ZEUS and da Vinci, present in more than 120 centres in the world, have been used in many fields of surgery and have been tested in some general surgical procedures. Since the end of 2003, we have performed 70 experimental procedures and 24 general surgical operations with the ZEUS robotic system, after having properly trained 3 surgeons and the operating room staff. Apart from the robot set-up, the mean operative time of the robotic operations was similar to that of the laparoscopic ones; no complications due to the robotic technique occurred. The authors report the benefits and disadvantages related to the use of robots, the problems still to be solved, and the possibility of using them for tele-surgery, training and virtual surgery.
NASA Astrophysics Data System (ADS)
Conforti, Vito; Trifoglio, Massimo; Bulgarelli, Andrea; Gianotti, Fulvio; Fioretti, Valentina; Tacchini, Alessandro; Zoli, Andrea; Malaguti, Giuseppe; Capalbi, Milvia; Catalano, Osvaldo
2014-07-01
ASTRI (Astrofisica con Specchi a Tecnologia Replicante Italiana) is a Flagship Project financed by the Italian Ministry of Education, University and Research, and led by INAF, the Italian National Institute of Astrophysics. Within this framework, INAF is currently developing an end-to-end prototype of a Small Size dual-mirror Telescope. In a second phase the ASTRI project foresees the installation of the first elements of the array at the CTA southern site, a mini-array of 7 telescopes. The ASTRI Camera DAQ Software is aimed at the Camera data acquisition, storage and display during Camera development as well as during commissioning and operations on the ASTRI SST-2M telescope prototype that will operate at the INAF observing station located at Serra La Nave on Mount Etna (Sicily). The Camera DAQ configuration and operations will be sequenced either through local operator commands or through remote commands received from the Instrument Controller System that commands and controls the Camera. The Camera DAQ software will acquire data packets through a direct one-way socket connection with the Camera Back End Electronics. In near real time, the data will be stored in both raw and FITS format. The DAQ Quick Look component will allow the operator to display the Camera data packets in near real time. We are developing the DAQ software adopting an iterative and incremental model in order to maximize software reuse and to implement a system that is easily adaptable to changes. This contribution presents the Camera DAQ Software architecture with particular emphasis on its potential reuse for the ASTRI/CTA mini-array.
Miniature in vivo robotics and novel robotic surgical platforms.
Shah, Bhavin C; Buettner, Shelby L; Lehman, Amy C; Farritor, Shane M; Oleynikov, Dmitry
2009-05-01
Robotic surgical systems, such as the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, California), have revolutionized laparoscopic surgery but are limited by large size, increased costs, and limitations in imaging. Miniature in vivo robots are being developed that are inserted entirely into the peritoneal cavity for laparoscopic and natural orifice transluminal endoscopic surgical (NOTES) procedures. In the future, miniature camera robots and microrobots should be able to provide a mobile viewing platform. This article discusses the current state of miniature robotics and novel robotic surgical platforms and the development of future robotic technology for general surgery and urology.
Remote Viewer for Maritime Robotics Software
NASA Technical Reports Server (NTRS)
Kuwata, Yoshiaki; Wolf, Michael; Huntsberger, Terrance L.; Howard, Andrew B.
2013-01-01
This software is a viewer program for maritime robotics software that provides a 3D visualization of the boat pose, its position history, ENC (Electrical Nautical Chart) information, camera images, map overlay, and detected tracks.
Indirect iterative learning control for a discrete visual servo without a camera-robot model.
Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan
2007-08-01
This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
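As background for the Jacobian-based control discussed above, a classical image-based visual servoing update using a damped pseudoinverse of the image Jacobian is sketched below; the paper replaces the analytic Jacobian with a neural-network estimate and adds an invertibility-preserving weight modification, which is not reproduced here, and all names are assumptions.

import numpy as np

def ibvs_step(J_image, features, features_desired, gain=0.5, damping=1e-3):
    """One velocity command from an image-based visual servoing law:
    v = -gain * J^+ (s - s*), using a damped least-squares pseudoinverse."""
    error = features - features_desired                    # image-space error s - s*
    JtJ = J_image.T @ J_image + damping * np.eye(J_image.shape[1])
    v = -gain * np.linalg.solve(JtJ, J_image.T @ error)    # commanded velocity
    return v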
Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration
USDA-ARS?s Scientific Manuscript database
Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem which has been modeled as the linear relationship AX equals ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu
2013-10-08
In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF), in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN. This learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair vector (obtained from the KF cycle) to ensure globally stable robot manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with eye-in-hand configurations.
The phantom robot - Predictive displays for teleoperation with time delay
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.
1990-01-01
An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.
Jarc, Anthony M; Curet, Myriam J
2017-03-01
Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks, as did efficiency metrics. Finally, camera metrics correlated significantly (p < 0.05) with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
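A sketch of how the three camera metrics could be computed from a log of camera-adjustment intervals is given below; the event format and names are assumptions, since the paper does not publish its implementation.

def camera_metrics(move_intervals, task_duration_s):
    """move_intervals: list of (start_s, end_s) camera-adjustment periods.
    Returns movement frequency (moves/min), mean movement duration (s),
    and mean interval between consecutive movements (s)."""
    durations = [end - start for start, end in move_intervals]
    gaps = [b[0] - a[1] for a, b in zip(move_intervals, move_intervals[1:])]
    frequency = 60.0 * len(move_intervals) / task_duration_s
    mean_duration = sum(durations) / len(durations) if durations else 0.0
    mean_interval = sum(gaps) / len(gaps) if gaps else float('nan')
    return frequency, mean_duration, mean_interval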
A control system of a mini survey facility for photometric monitoring
NASA Astrophysics Data System (ADS)
Tsutsui, Hironori; Yanagisawa, Kenshi; Izumiura, Hideyuki; Shimizu, Yasuhiro; Hanaue, Takumi; Ita, Yoshifusa; Ichikawa, Takashi; Komiyama, Takahiro
2016-08-01
We have built a control system for a mini survey facility dedicated to photometric monitoring of nearby bright (K<5) stars in the near-infrared region. The facility comprises a 4-m-diameter rotating dome and a small (30-mm aperture) wide-field (5 × 5 sq. deg. field of view) infrared (1.0-2.5 microns) camera on an equatorial fork mount, as well as power sources and other associated equipment. All the components other than the camera are controlled by microcomputer-based I/O boards that were developed in-house and are used in many of the open-use instruments at our observatory. We present the specifications and configuration of the facility hardware, as well as the structure of its control software.
NASA Astrophysics Data System (ADS)
Kobayashi, Hayato; Osaki, Tsugutoyo; Okuyama, Tetsuro; Gramm, Joshua; Ishino, Akira; Shinohara, Ayumi
This paper describes an interactive experimental environment for autonomous soccer robots: a soccer field augmented by utilizing camera input and projector output. This environment, in a sense, plays an intermediate role between simulated environments and real environments. We can simulate some parts of real environments, e.g., real objects such as robots or a ball, and reflect the simulated data back into the real environment, e.g., by visualizing positions on the field, so as to create a situation that allows easy debugging of robot programs. The significant point compared with analogous work is that, owing to the projectors, virtual objects are touchable in this system. We also show a portable version of our system that does not require ceiling cameras. As an application in the augmented environment, we address the learning of goalie strategies on real quadruped robots in penalty kicks. We make our robots utilize virtual balls so that only quadruped locomotion, which is quite difficult to simulate accurately, is performed in the real environment. Our robots autonomously learn and acquire more beneficial strategies in our augmented environment, without human intervention, than they do in a fully simulated environment.
2011-01-11
The estimate Û_i and its variance σ²_Û_i are determined as Û_i = û_i + P^{u,EN} (P^{EN})^{-1} [(E_jc, N_jc)^T − (ê_i, n̂_i)^T] (15) and σ²_Û_i = P^u_i − P^{u,EN}_i (P^{EN}_i)^{-1} P^{EN,u}_i (16). [...] The operator can click a robot's camera view on the screen to select it as the Focus Robot; the Focus Robot's camera stream is then enlarged and displayed.
Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities
NASA Technical Reports Server (NTRS)
Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.;
2013-01-01
MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret of Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale crater robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel, and at 6.9 cm it is 31 microns/pixel, similar to the Spirit and Opportunity Microscopic Imager (MI) cameras.
NASA Technical Reports Server (NTRS)
Owens, Christopher
2016-01-01
Phobos is a vital precursor and catalyst before our next giant leap to Mars. The principal phase of a Phobos mission could be a series of robotic precursor missions for scientific observation, soil examination, environmental validation, and landing site identification. For my summer intern position at Johnson Space Center I worked on creating a GUNNS (General Use Nodal Network Solver) based power subsystem model for the miniATHLETE hopper, a conceptual robotic lander that would operate on Phobos. To get started, I first needed to understand my task, so I concentrated on C++ to learn how to implement the code that GUNNS generates in a Trick S_define file. Prior to this internship at Johnson Space Center, Dr. Edwin Zack Crues provided a class on modeling and simulation, which introduced me to the Trick simulation environment. The goal of my project was to develop a GUNNS-based power subsystem model for the miniATHLETE hopper. The model needed to incorporate a solar array, battery, hopping legs, and onboard scientific instruments (sensitive measuring/recording devices). A secondary goal, after completing the electrical aspect of the model, was to develop a GUNNS-based thermal subsystem model for the miniATHLETE hopper; tying the two aspects together would require coding a signal aspect to make the system work as one. Accomplishing these goals was not easy, but I successfully completed the electrical aspect model with twenty-four servos, six cameras, and multiple sensors. Along the way I encountered many failures in my design while tuning components such as the battery to the proper voltage and the loads to the proper wattage. During this time I also reviewed advanced topics in calculus, which I applied to the converter in my electrical model. I am currently working with my mentor Zu Qun Li to create a signal aspect to control the temperatures inside my electrical aspect model. During my time at JSC I learned how to create subsystem models using Trick and GUNNS, gained essential knowledge of power and thermal subsystem design for a robotic vehicle, and learned how to work and communicate effectively in a team to accomplish a goal. Before coming to Johnson Space Center my future career and educational goals were uncertain, but now I have a completely new outlook on my path forward. My NASA experience has unquestionably pushed me to accomplish and surpass my own expectations. After my time at Johnson Space Center I plan to apply for a co-op position with NASA. This has been a dream come true, and I treasured every moment at JSC, knowing that I am capable of doing things most people can only dream of.
Klancar, Gregor; Kristan, Matej; Kovacic, Stanislav; Orqueda, Omar
2004-07-01
In this paper a global vision scheme for estimating the positions and orientations of mobile robots is presented. It is applied to the robot soccer application, a fast dynamic game that therefore requires an efficient and robust vision system. The vision system is also generally applicable to other robot applications such as mobile transport robots in production and warehouses, attendant robots, fast vision tracking of targets of interest, and entertainment robotics. Basic operation of the vision system is divided into two steps. In the first, the incoming image is scanned and pixels are classified into a finite number of classes; at the same time, a segmentation algorithm is used to find corresponding regions belonging to one of the classes. In the second step, all the regions are examined, and the ones that are part of the observed object are selected by means of simple logic procedures. The novelty lies in optimizing the processing time needed to estimate possible object positions. Better results of the vision system are achieved by implementing camera calibration and a shading correction algorithm. The former corrects camera lens distortion, while the latter increases robustness to irregular illumination conditions.
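The shading-correction step mentioned above is commonly implemented as a flat-field normalisation; a minimal sketch follows, assuming a reference image of the uniformly coloured empty field is available, and it is illustrative rather than the authors' exact algorithm.

import numpy as np

def shading_correction(image, flat_field, eps=1e-6):
    """Flat-field shading correction: normalise each pixel by the relative
    brightness of an empty-field reference image so that illumination
    gradients across the playing field are flattened."""
    gain = flat_field.mean() / (flat_field.astype(np.float32) + eps)  # per-pixel gain
    corrected = image.astype(np.float32) * gain
    return np.clip(corrected, 0, 255).astype(np.uint8)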
People Detection by a Mobile Robot Using Stereo Vision in Dynamic Indoor Environments
NASA Astrophysics Data System (ADS)
Méndez-Polanco, José Alberto; Muñoz-Meléndez, Angélica; Morales, Eduardo F.
People detection and tracking is a key issue for social robot design and effective human robot interaction. This paper addresses the problem of detecting people with a mobile robot using a stereo camera. People detection using mobile robots is a difficult task because in real world scenarios it is common to find: unpredictable motion of people, dynamic environments, and different degrees of human body occlusion. Additionally, we cannot expect people to cooperate with the robot to perform its task. In our people detection method, first, an object segmentation method that uses the distance information provided by a stereo camera is used to separate people from the background. The segmentation method proposed in this work takes into account human body proportions to segment people and provides a first estimation of people location. After segmentation, an adaptive contour people model based on people distance to the robot is used to calculate a probability of detecting people. Finally, people are detected merging the probabilities of the contour people model and by evaluating evidence over time by applying a Bayesian scheme. We present experiments on detection of standing and sitting people, as well as people in frontal and side view with a mobile robot in real world scenarios.
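The temporal evidence accumulation mentioned at the end of the abstract can be illustrated by a simple recursive Bayes update over the per-frame detection probability; this sketch is an assumption about the general scheme, not the authors' exact formulation.

def update_person_belief(prior, frame_likelihood):
    """Recursive Bayes update of the belief that a candidate region is a person.
    prior: belief after previous frames; frame_likelihood: probability assigned
    by the contour model to the current frame's observation."""
    numerator = frame_likelihood * prior
    return numerator / (numerator + (1.0 - frame_likelihood) * (1.0 - prior))

belief = 0.5                       # uninformative initial belief
for p_frame in [0.7, 0.8, 0.65]:   # contour-model outputs over three frames
    belief = update_person_belief(belief, p_frame)
print(belief)                      # evidence accumulates toward a detection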
Effects of Imperfect Automation on Operator’s Supervisory Control of Multiple Robots
2011-08-01
[...] survey, the Ishihara Color Vision Test, and the Cube Comparison test. Participants then received training and practice on the tasks they were about to perform, [...] completing various tasks, several mini-exercises for practicing the steps, and exercises for performing the robotic control tasks.
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
The Malaysian Robotic Solar Observatory (P29)
NASA Astrophysics Data System (ADS)
Othman, M.; Asillam, M. F.; Ismail, M. K. H.
2006-11-01
Robotic observatories with small telescopes can make significant contributions to astronomical observation. They provide an encouraging environment for astronomers to focus on data analysis and research while at the same time reducing the time and cost of observation. The observatory will house the primary 50 cm robotic telescope in the main dome, which will be used for photometry, spectroscopy and astrometry observation activities. The secondary telescope is a robotic multi-apochromatic refractor (maximum diameter: 15 cm) that will be housed in the smaller dome. This telescope set will be used for solar observation, mainly in three different wavelengths simultaneously: the continuum, H-alpha and the calcium K line. The observatory is also equipped with an automated weather station, a cloud and rain sensor, and an all-sky camera to monitor climatic conditions, sense clouds before rain, and provide a real-time view of the sky above the observatory. In conjunction with the Langkawi All-Sky Camera, the observatory website will also display images from the Malaysia - Antarctica All-Sky Camera used to monitor the sky at Scott Base, Antarctica. Both all-sky images can be displayed simultaneously to show the difference between the equatorial and Antarctic skies. This paper will describe the Malaysian Robotic Observatory, including the available systems and the method of access for other astronomers. We will also suggest possible collaboration with other observatories in this region.
Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela
2016-02-24
The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
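One common way to make crop-row extraction robust to uncontrolled lighting is to segment vegetation with the excess-green index before locating the central row; the short sketch below illustrates that idea under assumed names and is not claimed to be the authors' exact algorithm.

import numpy as np

def vegetation_mask(rgb_image, threshold=20):
    """Excess-green (ExG = 2G - R - B) vegetation segmentation.
    Returns a boolean mask of likely crop/weed pixels in an RGB image."""
    rgb = rgb_image.astype(np.int32)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]
    return exg > threshold

# The central crop row can then be estimated, e.g., from the column-wise
# vegetation density in the lower half of the mask.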
Robotic hip arthroscopy in human anatomy.
Kather, Jens; Hagen, Monika E; Morel, Philippe; Fasel, Jean; Markar, Sheraz; Schueler, Michael
2010-09-01
Robotic technology offers technical advantages that might offer new solutions for hip arthroscopy. Two hip arthroscopies were performed in human cadavers using the da Vinci surgical system. During both surgeries, a robotic camera and 5 or 8 mm da Vinci trocars with instruments were inserted into the hip joint for manipulation. Introduction of cameras and working instruments, docking of the robotic system and instrument manipulation was successful in both cases. The long articulating area of 5 mm instruments limited movements inside the joint; an 8 mm instrument with a shorter area of articulation offered an improved range of motion. Hip arthroscopy using the da Vinci standard system appears a feasible alternative to standard arthroscopy. Instruments and method of application must be modified and improved before routine clinical application but further research in this area seems justified, considering the clinical value of such an approach. Copyright 2010 John Wiley & Sons, Ltd.
A miniature cable-driven robot for crawling on the heart.
Patronik, N A; Zenati, M A; Riviere, C N
2005-01-01
This document describes the design and preliminary testing of a cable-driven robot for the purpose of traveling on the surface of the beating heart to administer therapy. This methodology obviates mechanical stabilization and lung deflation, which are typically required during minimally invasive cardiac surgery. Previous versions of the robot have been remotely actuated through push-pull wires, while visual feedback was provided by fiber optic transmission. Although these early models were able to perform locomotion in vivo on porcine hearts, the stiffness of the wire-driven transmission and fiber optic camera limited the mobility of the robots. The new prototype described in this document is actuated by two antagonistic cable pairs, and contains a color CCD camera located in the front section of the device. These modifications have resulted in superior mobility and visual feedback. The cable-driven prototype has successfully demonstrated prehension, locomotion, and tissue dye injection during in vitro testing with a poultry model.
Unmanned aerial systems for photogrammetry and remote sensing: A review
NASA Astrophysics Data System (ADS)
Colomina, I.; Molina, P.
2014-06-01
We discuss the evolution and state-of-the-art of the use of Unmanned Aerial Systems (UAS) in the field of Photogrammetry and Remote Sensing (PaRS). UAS, Remotely-Piloted Aerial Systems, Unmanned Aerial Vehicles or, simply, drones are a hot topic comprising a diverse array of aspects including technology, privacy rights, safety and regulations, and even war and peace. Modern photogrammetry and remote sensing identified the potential of UAS-sourced imagery more than thirty years ago. In the last five years, these two sister disciplines have developed technology and methods that challenge the current aeronautical regulatory framework and their own traditional acquisition and processing methods. Naivety and ingenuity have combined off-the-shelf, low-cost equipment with sophisticated computer vision, robotics and geomatic engineering. The results are cm-level resolution and accuracy products that can be generated even with cameras costing a few hundred euros. In this review article, following a brief historic background and regulatory status analysis, we review the recent unmanned aircraft, sensing, navigation, orientation and general data processing developments for UAS photogrammetry and remote sensing with emphasis on the nano-micro-mini UAS segment.
Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Runcan, David; Moreno, Javier; Martínez, Dani; Teixidó, Mercè; Palacín, Jordi
2014-01-01
This paper proposes the development of an automatic fruit harvesting system by combining a robotic arm and a low cost stereovision camera placed in the gripper tool. The stereovision camera is used to estimate the size, distance and position of the fruits, whereas the robotic arm is used to mechanically pick up the fruits. The low cost stereovision system has been tested in laboratory conditions with a reference small object, an apple and a pear at 10 different intermediate distances from the camera. The average distance error was from 4% to 5%, and the average diameter error was up to 30% in the case of a small object and in a range from 2% to 6% in the case of a pear and an apple. The stereovision system has been attached to the gripper tool in order to obtain the relative distance, orientation and size of the fruit. The harvesting stage requires the initial fruit location, the computation of the inverse kinematics of the robotic arm in order to place the gripper tool in front of the fruit, and a final pickup approach by iteratively adjusting the vertical and horizontal position of the gripper tool in a closed visual loop. The complete system has been tested in controlled laboratory conditions with uniform illumination applied to the fruits. As future work, this system will be tested and improved in conventional outdoor farming conditions. PMID:24984059
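The size and distance estimates described above follow from standard stereo geometry; a minimal sketch for a rectified camera pair is given below, with the focal length, baseline, and pixel measurements as assumed inputs rather than values from the paper.

def fruit_distance_and_diameter(disparity_px, diameter_px, focal_px, baseline_m):
    """Rectified stereo: depth Z = f*B/d, and the metric diameter of an object
    spanning diameter_px pixels at that depth is D = diameter_px * Z / f."""
    depth_m = focal_px * baseline_m / disparity_px
    diameter_m = diameter_px * depth_m / focal_px
    return depth_m, diameter_m

# Example: 40 px disparity, 25 px fruit width, f = 800 px, 6 cm baseline.
print(fruit_distance_and_diameter(40, 25, 800, 0.06))  # ~ (1.2 m, 0.0375 m)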
EVA Robotic Assistant Project: Platform Attitude Prediction
NASA Technical Reports Server (NTRS)
Nickels, Kevin M.
2003-01-01
The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster than walking speed outside but it has no suspension. Its wheels with inflated rubber tires are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The motion of the stereo camera pair mounted on the robot that results from this motion as the robot drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement. This has been accomplished in two ways: first, a standalone head stabilizer has been implemented and second, the estimates have been used to influence the search algorithm of the stereo tracking algorithm. Studies of the image motion of a tracked object indicate that the image motion of objects is suppressed while the robot crossing rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm gesture commands from the geologist.
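As a simplified stand-in for the Kalman filter described above, the complementary filter below shows the basic idea of fusing angular-rate integration with accelerometer-derived tilt to estimate platform pitch; it is an illustrative assumption with assumed axis conventions, not the project's filter.

import math

def pitch_complementary_filter(pitch_prev, gyro_rate_y, accel, dt, alpha=0.98):
    """Fuse the gyro pitch rate (rad/s) with accelerometer tilt (ax, ay, az in m/s^2).
    High-pass the integrated gyro estimate, low-pass the accelerometer estimate."""
    ax, ay, az = accel
    pitch_accel = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    pitch_gyro = pitch_prev + gyro_rate_y * dt
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

# The resulting pitch/roll estimates can drive a pan-tilt unit to counter
# camera motion induced by the base as the robot crosses rough terrain.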
CHAMP (Camera, Handlens, and Microscope Probe)
NASA Technical Reports Server (NTRS)
Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.
2005-01-01
CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at an increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Astrobee: Space Station Robotic Free Flyer
NASA Technical Reports Server (NTRS)
Provencher, Chris; Bualat, Maria G.; Barlow, Jonathan; Fong, Terrence W.; Smith, Marion F.; Smith, Ernest E.; Sanchez, Hugo S.
2016-01-01
Astrobee is a free flying robot that will fly inside the International Space Station and primarily serve as a research platform for robotics in zero gravity. Astrobee will also provide mobile camera views to ISS flight and payload controllers, and collect various sensor data within the ISS environment for the ISS Program. Astrobee consists of two free flying robots, a dock, and ground data system. This presentation provides an overview, high level design description, and project status.
Employing Omnidirectional Visual Control for Mobile Robotics.
ERIC Educational Resources Information Center
Wright, J. R., Jr.; Jung, S.; Steplight, S.; Wright, J. R., Sr.; Das, A.
2000-01-01
Describes projects using conventional technologies--incorporation of relatively inexpensive visual control with mobile robots using a simple remote control vehicle platform, a camera, a mirror, and a computer. Explains how technology teachers can apply them in the classroom. (JOW)
NASA Technical Reports Server (NTRS)
Almeida, Eduardo DeBrito
2012-01-01
This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.
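The calibration step mentioned above is the usual entry point for such a pipeline. The following OpenCV sketch shows an assumed checkerboard-based intrinsic calibration workflow; the image folder, pattern size, and variable names are illustrative and not taken from the JPL system.

```python
# Minimal sketch (assumed workflow, not JPL's pipeline): intrinsic camera
# calibration from checkerboard images, the usual first step before pose
# refinement and surface reconstruction.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner-corner grid (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib_images/*.png"):       # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        img_size = gray.shape[::-1]

# K is the 3x3 intrinsic matrix, dist the lens-distortion coefficients
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts,
                                                 img_size, None, None)
print("reprojection RMS (px):", rms)
```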
Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout
NASA Technical Reports Server (NTRS)
Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.
1997-01-01
The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve the direct manipulation of the robot by a human operator, for which high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task by using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on the data collected are presented.
Sávio, Luís Felipe; Panizzutti Barboza, Marcelo; Alameddine, Mahmoud; Ahdoot, Michael; Alonzo, David; Ritch, Chad R
2018-03-01
To describe our novel technique for performing a combined partial penectomy and bilateral robotic inguinal lymphadenectomy using intraoperative near-infrared (NIR) fluorescence guidance with indocyanine green (ICG) and the DaVinci Firefly camera system. A 58-year-old man presented status post recent excisional biopsy of a 2-cm lesion on the left coronal aspect of the glans penis. Pathology revealed "invasive squamous cell carcinoma of the penis with multifocal positive margins." His examination was suspicious for a cT2 primary and his inguinal nodes were cN0. He was counseled to undergo partial penectomy with possible combined vs staged bilateral robotic inguinal lymphadenectomy. Preoperative computed tomography scan was negative for pathologic lymphadenopathy. Before incision, 5 mL of ICG was injected subcutaneously beneath the tumor. Bilateral thigh pockets were then developed simultaneously, and a right, then left robotic modified inguinal lymphadenectomy was performed using NIR fluorescence guidance via the DaVinci Firefly camera. A partial penectomy was then performed in the standard fashion. The combined procedure was performed successfully without complication. Total operative time was 379 minutes, and total robotic console time was 95 minutes for the right and 58 minutes for the left. Estimated blood loss on the right and left were 15 and 25 mL, respectively. A total of 24 lymph nodes were retrieved. This video demonstrates a safe and feasible approach for combined partial penectomy and bilateral inguinal lymphadenectomy with NIR guidance using ICG and the DaVinci Firefly camera system. The combined robotic approach has minimal morbidity and avoids the need for a staged procedure. Furthermore, use of NIR guidance with ICG during robotic inguinal lymphadenectomy is feasible and may help identify sentinel lymph nodes and improve the quality of dissection. Further studies are needed to confirm the utility of NIR guidance for robotic sentinel lymph node dissection. Copyright © 2017 Elsevier Inc. All rights reserved.
Real-time multiple human perception with color-depth cameras on a mobile robot.
Zhang, Hao; Reardon, Christopher; Parker, Lynne E
2013-10-01
The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection, and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that by incorporating depth information, together with the use of modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
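The ground- and ceiling-plane removal step lends itself to a short illustration. The numpy sketch below shows a generic RANSAC plane removal of the kind described; the thresholds, iteration count, and random point cloud are assumptions, and the depth-of-interest and cascade stages are not reproduced.

```python
# Minimal numpy sketch: remove the dominant plane from a depth-camera point
# cloud with RANSAC so candidate clusters separate (illustrative parameters).
import numpy as np

def remove_dominant_plane(points, n_iter=200, dist_thresh=0.03, rng=None):
    """points: (N, 3) array in metres; returns the points NOT on the
    best-fit plane. Threshold and iteration count are assumptions."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - p0) @ normal)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[~best_inliers]

cloud = np.random.rand(5000, 3)             # placeholder point cloud
off_plane = remove_dominant_plane(cloud)
```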
Klibansky, David; Rothstein, Richard I
2012-09-01
The increasing complexity of intralumenal and emerging translumenal endoscopic procedures has created an opportunity to apply robotics in endoscopy. Computer-assisted or direct-drive robotic technology allows the triangulation of flexible tools through telemanipulation. The creation of new flexible operative platforms, along with other emerging technology such as nanobots and steerable capsules, can be transformational for endoscopic procedures. In this review, we cover some background information on the use of robotics in surgery and endoscopy, and review the emerging literature on platforms, capsules, and mini-robotic units. The development of techniques in advanced intralumenal endoscopy (endoscopic mucosal resection and endoscopic submucosal dissection) and translumenal endoscopic procedures (NOTES) has generated a number of novel platforms, flexible tools, and devices that can apply robotic principles to endoscopy. The development of a fully flexible endoscopic surgical toolkit will enable increasingly advanced procedures to be performed through natural orifices. The application of platforms and new flexible tools to the areas of advanced endoscopy and NOTES heralds the opportunity to employ useful robotic technology. Following the examples of the utility of robotics from the field of laparoscopic surgery, we can anticipate the emerging role of robotic technology in endoscopy.
A lightweight, inexpensive robotic system for insect vision.
Sabo, Chelsea; Chisholm, Robert; Petterson, Adam; Cope, Alex
2017-09-01
Designing hardware for miniaturized robotics which mimics the capabilities of flying insects is of interest, because they share similar constraints (i.e. small size, low weight, and low energy consumption). Research in this area aims to enable robots with similarly efficient flight and cognitive abilities. Visual processing is important to flying insects' impressive flight capabilities, but currently, embodiment of insect-like visual systems is limited by the hardware systems available. Suitable hardware is prohibitively expensive, difficult to reproduce, unable to accurately simulate insect vision characteristics, and/or too heavy for small robotic platforms. These limitations hamper the development of platforms for embodiment, which in turn hampers progress in understanding how biological systems fundamentally work. To address this gap, this paper proposes an inexpensive, lightweight robotic system for modelling insect vision. The system is mounted and tested on a robotic platform for mobile applications, and then the camera and insect vision models are evaluated. We analyse the potential of the system for use in embodiment of higher-level visual processes (i.e. motion detection) and also for development of navigation based on vision for robotics in general. Optic flow from sample camera data is calculated and compared to a perfect, simulated bee world, showing an excellent resemblance. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-15
... Robotics International, Inc., Cell Wireless Corp., Cellcom Corporation (n/k/a Cellcom I Corp.), and Central... securities of Cell Robotics International, Inc. because it has not filed any periodic reports since the...
Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment
2017-06-01
This Master's thesis implements a path-planning coverage algorithm for a multi-robot system in a two-dimensional, grid-based environment. The robots in the system described are equipped with two planar laser range finders with a 180-degree field of view, a color camera, vision beacons, and a wireless communicator.
New methods of measuring and calibrating robots
NASA Astrophysics Data System (ADS)
Janocha, Hartmut; Diewald, Bernd
1995-10-01
ISO 9283 and RIA R15.05 define industrial robot parameters which are applied to compare the efficiency of different robots. Hitherto, however, no suitable measurement systems have been available. ICAROS is a system which combines photogrammetric procedures with an inertial navigation system. For the first time, this combination allows the high-precision static and dynamic measurement of the position as well as of the orientation of the robot end-effector. Thus, not only can the measurement data for determining all industrial robot parameters be acquired; by integration of a new over-all calibration procedure, ICAROS also allows the reduction of the absolute robot pose errors to the range of the robot's repeatability. The integration of both system components as well as measurement and calibration results are presented in this paper, using a six-axis robot as an example. A further approach also presented here takes into consideration not only the individual robot errors but also the tolerances of workpieces. This allows the adjustment of off-line robot programs based on inexact or idealized CAD data in any pose, so that the robot position defined relative to the workpiece to be processed is achieved as required. This includes the possibility of transferring taught robot programs to other devices without additional expenditure. The adjustment is based on measuring the robot position using two miniaturized CCD cameras mounted near the end-effector, which are carried along by the robot during the correction phase. In the area viewed by both cameras, the robot position is determined in relation to prominent geometry elements, e.g. lines or holes. The nominal data to be compared with these measurements can either be calculated in modern off-line programming systems during robot programming, or they can be determined at the so-called master robot if a transfer of the robot program is desired.
An automatic markerless registration method for neurosurgical robotics based on an optical camera.
Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi
2018-02-01
Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
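The fine-registration stage ultimately reduces to estimating a rigid transform between corresponding 3D points. As background, here is a minimal Kabsch/SVD alignment sketch under the assumption that correspondences are already known; it is illustrative only and does not reproduce the paper's coarse-to-fine pipeline.

```python
# Minimal sketch of rigid registration (Kabsch/SVD) between paired points.
import numpy as np

def rigid_transform(src, dst):
    """Find R (3x3) and t (3,) such that R @ src_i + t ~ dst_i for paired
    point sets src, dst of shape (N, 3)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy check: recover a known rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = np.random.rand(100, 3)
dst = src @ R_true.T + np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform(src, dst)
```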
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1994-01-01
Real-time video presentations are provided in the field of operator-supervised automation and teleoperation, particularly in control stations having movable cameras for optimal viewing of a region of interest in robotics and teleoperations for performing different types of tasks. Movable monitors to match the corresponding camera orientations (pan, tilt, and roll) are provided in order to match the coordinate systems of all the monitors to the operator internal coordinate system. Automated control of the arrangement of cameras and monitors, and of the configuration of system parameters, is provided for optimal viewing and performance of each type of task for each operator since operators have different individual characteristics. The optimal viewing arrangement and system parameter configuration is determined and stored for each operator in performing each of many types of tasks in order to aid the automation of setting up optimal arrangements and configurations for successive tasks in real time. Factors in determining what is optimal include the operator's ability to use hand-controllers for each type of task. Robot joint locations, forces and torques are used, as well as the operator's identity, to identify the current type of task being performed in order to call up a stored optimal viewing arrangement and system parameter configuration.
Development of robots and application to industrial processes
NASA Technical Reports Server (NTRS)
Palm, W. J.; Liscano, R.
1984-01-01
An algorithm is presented for using a robot system with a single camera to position in three-dimensional space a slender object for insertion into a hole; for example, an electrical pin-type termination into a connector hole. The algorithm relies on a control-configured end effector to achieve the required horizontal translations and rotational motion, and it does not require camera calibration. A force sensor in each fingertip is integrated with the vision system to allow the robot to teach itself new reference points when different connectors and pins are used. Variability in the grasped orientation and position of the pin can be accommodated with the sensor system. Performance tests show that the system is feasible. More work is needed to determine more precisely the effects of lighting levels and lighting direction.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Design issues for stereo vision systems used on tele-operated robotic platforms
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, Jim; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-02-01
The use of tele-operated Unmanned Ground Vehicles (UGVs) for military missions has grown significantly in recent years with operations in both Iraq and Afghanistan. In both cases the safety of the Soldier or technician performing the mission is improved by the large standoff distances afforded by the use of the UGV, but the full performance capability of the robotic system is not utilized due to insufficient depth perception provided by the standard two-dimensional video system; the operator must slow the mission to ensure the safety of the UGV given the uncertainty of the perceived scene in 2D. To address this, Polaris Sensor Technologies has developed, in a series of developments funded by the Leonard Wood Institute at Ft. Leonard Wood, MO, a prototype Stereo Vision Upgrade (SVU) Kit for the Foster-Miller TALON IV robot which provides the operator with improved depth perception and situational awareness, allowing for shorter mission times and higher success rates. Because multiple 2D cameras are being replaced by stereo camera systems in the SVU Kit, and because the needs of the camera systems for each phase of a mission vary, there are a number of tradeoffs and design choices that must be made in developing such a system for robotic tele-operation. Additionally, human factors design criteria drive optical parameters of the camera systems, which must be matched to the display system being used. The problem space for such an upgrade kit will be defined, and the choices made in the development of this particular SVU Kit will be discussed.
Levels of Autonomy and Autonomous System Performance Assessment for Intelligent Unmanned Systems
2014-04-01
As an example of the autonomy-level (AL) scale, a robot with LIDAR and camera sensors that is driven entirely by teleoperation would be AL 0, whereas the same robot using its LIDAR and camera data to generate obstacle detection, mapping, and path planning would rate higher on the scale. Cited systems include the CMMAD semi-autonomous counter-mine system (Few 2010), a Talon UGV equipped with a camera, LIDAR, and a metal detector. Assessments under the NCAP framework are performed on individual UMS components and do not require mission-level evaluations; an example is bench testing of camera and LIDAR sensors.
Recent results in visual servoing
NASA Astrophysics Data System (ADS)
Chaumette, François
2008-06-01
Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, or aerial robots, but they can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking, can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, ...) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or to calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at Inria/IRISA Rennes. Several application results will also be described.
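For readers unfamiliar with the formalism, the classical image-based control law that this abstract alludes to can be stated compactly; the equations below are standard background from the visual servoing literature rather than material quoted from the talk.

```latex
% Classical image-based visual servoing (background sketch):
% feature error, its dynamics, and the standard control law.
\begin{align*}
  e(t) &= s(t) - s^{*}, \\
  \dot{e} &= L_{e}\, v_{c}, \\
  v_{c} &= -\lambda\, \widehat{L_{e}}^{+}\, e ,
\end{align*}
% where s is the current feature vector, s* its desired value, v_c the
% camera velocity, and \widehat{L_e}^{+} the pseudo-inverse of an estimate
% of the interaction matrix. For an image point (x, y) with depth Z,
\begin{equation*}
  L_{x} =
  \begin{pmatrix}
    -1/Z & 0 & x/Z & xy & -(1+x^{2}) & y \\
    0 & -1/Z & y/Z & 1+y^{2} & -xy & -x
  \end{pmatrix}.
\end{equation*}
```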
Hardware platform for multiple mobile robots
NASA Astrophysics Data System (ADS)
Parzhuber, Otto; Dolinsky, D.
2004-12-01
This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link will carry control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link will carry commands and data between the vehicles. For this purpose we have developed a hardware platform which consists of a powerful microprocessor, different sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access layer protocols allows a straightforward integration with the internet protocols TCP/IP. For the inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable the detection of obstacles, measurement of distance, and creation of a map of the room.
Human-Robot Interaction: Status and Challenges.
Sheridan, Thomas B
2016-06-01
The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the corresponding challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at initial stages. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations. © 2016, Human Factors and Ergonomics Society.
Tsu, James Hok-Leung; Ng, Ada Tsui-Lin; Wong, Jason Ka-Wing; Wong, Edmond Ming-Ho; Ho, Kwan-Lun; Yiu, Ming-Kwong
2014-03-01
Trocar-site hernia is an uncommon but serious complication after laparoscopic surgery as it frequently requires surgical intervention. We describe a 75-year-old man with Gleason score 4 + 3, clinical stage T1c prostate adenocarcinoma who underwent an uneventful robot-assisted transperitoneal laparoscopic radical prostatectomy. On post-operative day four, he developed symptoms of small bowel obstruction due to herniation and incarceration of the small bowels in a Spigelian-type hernia at the left lower quadrant 8-mm trocar site. Surgical exploration was performed via a mini-laparotomy to reduce the bowel and repair the fascial layers. A literature search was performed to review other cases of trocar-site hernia through the 8-mm robotic port after robot-assisted surgery and the suggested methods of prevention.
Unilateral robotic hybrid mini-maze: a novel experimental approach.
Moslemi, Mohammad; Rawashdeh, Badi; Meyer, Mark; Nguyen, Duy; Poston, Robert; Gharagozloo, Farid
2016-03-01
A complete Cox maze IV procedure is difficult to accomplish using current endoscopic and minimally invasive techniques. These techniques are hampered by an inability to adequately dissect the posterior structures of the heart and place all necessary lesions. We present a novel approach, using robotic technology, that achieves placement of all the lesions of the complete maze procedure. In three cadaveric human models, the technical feasibility of using robotic instruments through the right chest to dissect the posterior structures of the heart and place all Cox maze lesions was evaluated. The entire posterior aspect of the heart was dissected in the cadaveric model, facilitating successful placement of all Cox maze IV lesions with robotic assistance through minimally invasive incisions. The robotic Cox maze IV procedure through the novel right thoracic approach is feasible. This obviates the need for sternotomy and avoids the associated morbidity of the conventional Cox maze procedure. Copyright © 2015 John Wiley & Sons, Ltd.
Development of a teaching system for an industrial robot using stereo vision
NASA Astrophysics Data System (ADS)
Ikezawa, Kazuya; Konishi, Yasuo; Ishigaki, Hiroyuki
1997-12-01
The teaching-and-playback method is the main technique used to program industrial robots. However, this technique requires considerable time and effort. In this study, a new teaching algorithm using stereo vision, based on human demonstrations in front of two cameras, is proposed. In the proposed teaching algorithm, a robot is controlled repetitively according to angles determined by fuzzy set theory until it reaches an instructed teaching point, which is relayed through the cameras by an operator. The angles are recorded and used later in playback. The major advantage of this algorithm is that no calibrations are needed, because fuzzy set theory, which is able to express the control commands to the robot qualitatively, is used instead of conventional kinematic equations. Thus, a simple and easy teaching operation is realized with this teaching algorithm. Simulations and experiments have been performed on the proposed teaching system, and data from testing have confirmed the usefulness of our design.
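To illustrate the calibration-free idea, the sketch below shows a toy fuzzy controller that converts a pixel error into a joint-angle increment. The membership functions, rule consequents, and limits are assumptions made for the example; the paper's actual rule base is not reproduced.

```python
# Minimal illustrative sketch (assumed rule base, not the paper's): map the
# image-plane error of the instructed teaching point to a small joint-angle
# increment, so no camera or kinematic calibration is required.
def tri(x, a, b, c):
    """Triangular/shoulder membership function on [a, c] peaking at b."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(min(left, right), 0.0)

def fuzzy_step(error_px):
    """Map a signed pixel error (clipped to +/-100 px) to a joint increment
    in degrees using centre-of-gravity defuzzification."""
    e = max(min(error_px, 100.0), -100.0)
    memberships = [
        tri(e, -100, -100, 0),   # Negative error
        tri(e, -100, 0, 100),    # roughly Zero error
        tri(e, 0, 100, 100),     # Positive error
    ]
    consequents = [-1.0, 0.0, 1.0]   # assumed output angles in degrees
    num = sum(m * c for m, c in zip(memberships, consequents))
    return num / sum(memberships)

# the joint is stepped repeatedly until the pixel error vanishes
print(fuzzy_step(30.0))   # small positive correction toward the target
```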
Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus
2014-01-01
To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.
Qi, Liming; Xia, Yong; Qi, Wenjing; Gao, Wenyue; Wu, Fengxia; Xu, Guobao
2016-01-19
Both a wireless electrochemiluminescence (ECL) electrode microarray chip and a dramatic increase in ECL achieved by embedding a diode in an electromagnetic receiver coil are reported for the first time. The newly designed device consists of a chip and a transmitter. The chip has an electromagnetic receiver coil, a mini-diode, and a gold electrode array. The mini-diode can rectify alternating current into direct current and thus enhance ECL intensities by a factor of 18,000, enabling sensitive visual detection using common cameras or smartphones as low-cost detectors. The detection limit of hydrogen peroxide using a digital camera is comparable to that using photomultiplier tube (PMT)-based detectors. Coupled with a PMT-based detector, the device can detect luminol with higher sensitivity, with a linear range from 10 nM to 1 mM. Because of its advantages, including high sensitivity, high throughput, low cost, high portability, and simplicity, the device is promising for point-of-care testing, drug screening, and high-throughput analysis.
A mobile robots experimental environment with event-based wireless communication.
Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián
2013-07-22
An experimental platform to communicate between a set of mobile robots through a wireless network has been developed. The mobile robots obtain their positions through a camera which acts as a sensor. The video images are processed in a PC, and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented.
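A common way to realize the "communicate only at event times" rule is a threshold on the deviation from the last broadcast state. The sketch below illustrates that generic rule; the threshold value, class name, and sample positions are assumptions, not the paper's triggering function.

```python
# Minimal sketch of an event-triggered broadcast rule (illustrative only):
# a robot transmits its camera-measured position to neighbours only when it
# deviates from the last broadcast value by more than a threshold.
import numpy as np

class EventTrigger:
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_sent = None

    def should_send(self, state):
        """state: current (x, y) position from the overhead camera."""
        state = np.asarray(state, dtype=float)
        if self.last_sent is None or np.linalg.norm(state - self.last_sent) > self.threshold:
            self.last_sent = state.copy()
            return True          # event: broadcast over the wireless link
        return False             # otherwise stay silent, saving bandwidth

trigger = EventTrigger(threshold=0.05)   # 5 cm, illustrative value
for pos in [(0.00, 0.00), (0.02, 0.01), (0.06, 0.02)]:
    if trigger.should_send(pos):
        print("send", pos)
```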
Low power consumption mini rotary actuator with SMA wires
NASA Astrophysics Data System (ADS)
Manfredi, Luigi; Huan, Yu; Cuschieri, Alfred
2017-11-01
Shape memory alloys (SMAs) are smart materials widely used as actuators for their high power-to-weight ratio, despite their well-known low energy efficiency and limited mechanical bandwidth. For robotic applications, SMAs exhibit limitations due to high power consumption and limited stroke, varying from 4% to 7% of the total length. Hysteresis during the contraction and extension cycle requires a complex control algorithm. On the positive side, the small size and low weight are eminently suited to the design of mini actuators for robotic platforms. This paper describes the design and construction of a lightweight and low-power-consuming mini rotary actuator with on-board contact-less position and force sensors. The design is specifically intended to reduce (i) energy consumption, (ii) dimensions of the sensory system, and (iii) provide a simple control without any need for SMA characterisation. The torque produced is controlled by on-board force sensors. Experiments were performed to investigate the energy consumption and performance (step and sinusoidal angle profiles with a frequency varying from 0.5 to 10 Hz and maximal amplitude of 15°). We describe a transient capacitor effect related to the SMA wires during the sinusoidal profile when the active SMA wire is powered and the antagonist one switched off, resulting in a transient current time varying from 300 to 400 ms.
Evolving technologies in robotic surgery for minimally invasive treatment of gynecologic cancers.
Levinson, Kimberly L; Auer, Melinda; Escobar, Pedro F
2013-09-01
Since the introduction of robotic technology, there have been significant changes to the field of gynecologic oncology. The number of minimally invasive procedures has drastically increased, with robotic procedures rising remarkably. With recent evidence suggesting that minimally invasive techniques should be the standard of care for early endometrial and cervical cancers, the push for new technology and advancements has continued. Several emerging robotic technologies have significant potential in the field of gynecologic oncology. The single-site robotic platform enables robotic surgery through a single incision; the Firefly camera detects the fluorescent dye indocyanine green, which may improve sensitivity in sentinel lymph node biopsy; and a robotic vessel-sealing device and stapler will continue to improve efficiency of the robotic surgeon.
Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung
2017-02-01
A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view, as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, up to 12 separated regions with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
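A typical way to track features between successive frames of such a mini-camera is AKAZE detection plus binary-descriptor matching. The OpenCV sketch below illustrates that generic step under assumed file names; it is not the authors' implementation.

```python
# Minimal OpenCV sketch (assumed usage, not the authors' code): AKAZE
# keypoints matched between two frames from the instrument-mounted camera,
# giving a coarse estimate of the instrument's in-plane image motion.
import cv2
import numpy as np

prev = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(prev, None)
kp2, des2 = akaze.detectAndCompute(curr, None)

# AKAZE's default descriptor is binary, so Hamming distance is appropriate
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# average displacement of the best matches approximates the image motion
shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches[:50]]
dx, dy = np.mean(shifts, axis=0)
print("estimated image motion (px):", dx, dy)
```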
On-line dimensional measurement of small components on the eyeglasses assembly line
NASA Astrophysics Data System (ADS)
Rosati, G.; Boschetti, G.; Biondi, A.; Rossi, A.
2009-03-01
Dimensional measurement of the subassemblies at the beginning of the assembly line is a very crucial process for the eyeglasses industry, since even small manufacturing errors of the components can lead to very visible defects on the final product. For this reason, all subcomponents of the eyeglass are verified before beginning the assembly process, either with a 100% inspection or on a statistical basis. Inspection is usually performed by human operators, with high costs and a degree of repeatability which is not always satisfactory. This paper presents a novel on-line measuring system for dimensional verification of small metallic subassemblies for the eyeglasses industry. The machine vision system proposed, which was designed to be used at the beginning of the assembly line, could also be employed in Statistical Process Control (SPC) by the manufacturer of the subassemblies. The automated system proposed is based on artificial vision and exploits two CCD cameras and an anthropomorphic robot to inspect and manipulate the subcomponents of the eyeglass. Each component is recognized by the first camera in a quite large workspace, picked up by the robot, and placed in the small vision field of the second camera, which performs the measurement process. Finally, the part is palletized by the robot. The system can be easily taught by the operator by simply placing the template object in the vision field of the measurement camera (for dimensional data acquisition) and then instructing the robot via the Teaching Control Pendant within the vision field of the first camera (for pick-up transformation acquisition). The major problem we dealt with is that the shape and dimensions of the subassemblies can vary in a quite wide range, but different positionings of the same component can look very similar to one another. For this reason, a specific shape recognition procedure was developed. In the paper, the whole system is presented together with initial experimental lab results.
USDA-ARS?s Scientific Manuscript database
An Unmanned Agricultural Robotics System (UARS) is acquired, rebuilt with desired hardware, and operated in both classrooms and field. The UARS includes crop height sensor, crop canopy analyzer, normalized difference vegetative index (NDVI) sensor, multispectral camera, and hyperspectral radiometer...
Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies
2006-07-01
The report covers the use of lightweight portable robotic sensor platforms and observes that robotics has reached a point where some generalities of HRI transcend specific systems. Typical operator control stations combine displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003), including panels displaying sensor data. The discussion distinguishes tasks that do not involve mobility, which usually involve camera control or data fusion from sensors, from active search tasks that involve mobility.
A positional estimation technique for an autonomous land vehicle in an unstructured environment
NASA Technical Reports Server (NTRS)
Talluri, Raj; Aggarwal, J. K.
1990-01-01
This paper presents a solution to the positional estimation problem of an autonomous land vehicle navigating in an unstructured mountainous terrain. A Digital Elevation Map (DEM) of the area in which the robot is to navigate is assumed to be given. It is also assumed that the robot is equipped with a camera that can be panned and tilted, and a device to measure the elevation of the robot above the ground surface. No recognizable landmarks are assumed to be present in the environment in which the robot is to navigate. The solution presented makes use of the DEM information and structures the problem as a heuristic search in the DEM for the possible robot location. The shape and position of the horizon line in the image plane and the known camera geometry of the perspective projection are used as parameters to search the DEM. Various heuristics drawn from the geometric constraints are used to prune the search space significantly. The algorithm is made robust to errors in the imaging process by accounting for the worst-case errors. The approach is tested using DEM data of areas in Colorado and Texas. The method is suitable for use in outdoor mobile robots and planetary rovers.
Positional estimation techniques for an autonomous mobile robot
NASA Technical Reports Server (NTRS)
Nandhakumar, N.; Aggarwal, J. K.
1990-01-01
Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
Optical designs for the Mars '03 rover cameras
NASA Astrophysics Data System (ADS)
Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.
2001-12-01
In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five different types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide-field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.
Study on Fins' Effect of Boiling Flow in Millimeter Channel Heat Exchanger
NASA Astrophysics Data System (ADS)
Watanabe, Satoshi
2005-11-01
Recently, many studies of compact heat exchangers with mini-channels have been carried out in the hope of obtaining high-efficiency heat transfer, due to their higher ratio of surface area compared with existing heat exchangers. However, there are many uncertain phenomena in fields such as boiling flow in mini-channels. Thus, in order to understand the boiling flow in mini-channels and design high-efficiency heat exchangers, this work focused on visualization measurement of boiling flow in a millimeter channel. A transparent acrylic channel (heat exchanger form), a high-speed camera (2000 fps at 1024 x 1024 pixels), and a halogen lamp (backlight) were used as the visualization system. The channel's depth is 2 mm, its width is 30 mm, and its length is 400 mm. In preparation for commercial use, two types of channels were tested: a finned type and a normal slit type (without fins). The fins are circular cylindrical obstacles (5 mm in diameter) that promote heat transfer, set in a triangular array (the distance between center points is 10 mm). In particular, boiling flow and heat transfer promotion in the millimeter channel heat exchanger with fins were evaluated using the high-speed camera.
Visual homing with a pan-tilt based stereo camera
NASA Astrophysics Data System (ADS)
Nirmal, Paramesh; Lyons, Damian M.
2013-01-01
Visual homing is a navigation method based on comparing a stored image of the goal location with the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
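Once each matched keypoint carries a stereo depth, a coarse homing translation can be obtained by back-projecting the matches in both views and averaging their displacement. The sketch below assumes matched, depth-augmented keypoints and ignores rotation between the views; the function name and numbers are illustrative, not the authors' algorithm.

```python
# Minimal sketch (assumed formulation): with depth z attached to matched
# keypoints, the displacement between the goal snapshot and the current view
# yields a coarse homing vector toward the goal, ignoring view rotation.
import numpy as np

def homing_vector(goal_kps, curr_kps, focal_px):
    """goal_kps, curr_kps: matched keypoints as rows (u, v, z), where (u, v)
    are pixel offsets from the principal point and z is the stereo depth in
    metres. Returns an approximate translation toward the goal, expressed in
    the camera frame (valid when rotation between the views is small)."""
    def back_project(kps):
        u, v, z = kps[:, 0], kps[:, 1], kps[:, 2]
        return np.stack([u * z / focal_px, v * z / focal_px, z], axis=1)
    # for a static landmark, curr - goal ~ camera translation toward the goal
    return np.median(back_project(curr_kps) - back_project(goal_kps), axis=0)

goal = np.array([[10.0, 4.0, 2.0], [-20.0, 6.0, 2.5]])   # illustrative matches
curr = np.array([[14.0, 4.0, 1.8], [-25.0, 6.0, 2.3]])
print(homing_vector(goal, curr, focal_px=700.0))
```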
A Robotic Platform for Corn Seedling Morphological Traits Characterization
Lu, Hang; Tang, Lie; Whitham, Steven A.; Mei, Yu
2017-01-01
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight of light (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping. PMID:28895892
A Robotic Platform for Corn Seedling Morphological Traits Characterization.
Lu, Hang; Tang, Lie; Whitham, Steven A; Mei, Yu
2017-09-12
Crop breeding plays an important role in modern agriculture, improving plant performance, and increasing yield. Identifying the genes that are responsible for beneficial traits greatly facilitates plant breeding efforts for increasing crop production. However, associating genes and their functions with agronomic traits requires researchers to observe, measure, record, and analyze phenotypes of large numbers of plants, a repetitive and error-prone job if performed manually. An automated seedling phenotyping system aimed at replacing manual measurement, reducing sampling time, and increasing the allowable work time is thus highly valuable. Toward this goal, we developed an automated corn seedling phenotyping platform based on a time-of-flight of light (ToF) camera and an industrial robot arm. A ToF camera is mounted on the end effector of the robot arm. The arm positions the ToF camera at different viewpoints for acquiring 3D point cloud data. A camera-to-arm transformation matrix was calculated using a hand-eye calibration procedure and applied to transfer different viewpoints into an arm-based coordinate frame. Point cloud data filters were developed to remove the noise in the background and in the merged seedling point clouds. A 3D-to-2D projection and an x-axis pixel density distribution method were used to segment the stem and leaves. Finally, separated leaves were fitted with 3D curves for morphological traits characterization. This platform was tested on a sample of 60 corn plants at their early growth stages with two to five leaves. The error ratios of the stem height and leaf length measurements are 13.7% and 13.1%, respectively, demonstrating the feasibility of this robotic system for automated corn seedling phenotyping.
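The merging step hinges on the camera-to-arm transformation obtained from the hand-eye calibration. The following sketch shows the generic chain of 4x4 homogeneous transforms used to express camera-frame points in the arm base frame; the function and matrix names are assumptions for illustration, not the platform's code.

```python
# Minimal sketch: apply a hand-eye calibrated camera-to-arm transform so
# point clouds taken from different viewpoints share one arm-based frame.
import numpy as np

def to_arm_frame(points_cam, T_arm_ee, T_ee_cam):
    """points_cam: (N, 3) cloud in the camera frame; T_arm_ee: 4x4 pose of
    the end effector in the arm base frame (from the robot controller);
    T_ee_cam: 4x4 camera pose in the end-effector frame (hand-eye result)."""
    T_arm_cam = T_arm_ee @ T_ee_cam                     # chain the transforms
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (homog @ T_arm_cam.T)[:, :3]

# merging two viewpoints would then reduce to transforming and stacking:
# merged = np.vstack([to_arm_frame(c, T, T_ee_cam) for c, T in zip(clouds, poses)])
```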
Chiang, Mao-Hsiung; Lin, Hao-Ting; Hou, Chien-Lun
2011-01-01
In this paper, a stereo vision 3D position measurement system for a three-axial pneumatic parallel mechanism robot arm is presented. The stereo vision 3D position measurement system aims to measure the 3D trajectories of the end-effector of the robot arm. To track the end-effector of the robot arm, the circle detection algorithm is used to detect the desired target and the SAD algorithm is used to track the moving target and to search the corresponding target location along the conjugate epipolar line in the stereo pair. After camera calibration, both intrinsic and extrinsic parameters of the stereo rig can be obtained, so images can be rectified according to the camera parameters. Thus, through the epipolar rectification, the stereo matching process is reduced to a horizontal search along the conjugate epipolar line. Finally, 3D trajectories of the end-effector are computed by stereo triangulation. The experimental results show that the stereo vision 3D position measurement system proposed in this paper can successfully track and measure the fifth-order polynomial trajectory and sinusoidal trajectory of the end-effector of the three-axial pneumatic parallel mechanism robot arm. PMID:22319408
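The SAD search along the conjugate epipolar line that the abstract describes can be illustrated in a few lines of numpy. The window size, disparity range, and synthetic images below are assumptions, not the paper's parameters.

```python
# Minimal sketch of SAD matching along a rectified epipolar line: slide a
# template from the left image along the same row of the right image and
# keep the offset with the lowest sum of absolute differences.
import numpy as np

def sad_match(left, right, row, col, half=8, max_disp=64):
    """left, right: rectified grayscale images as 2D arrays; (row, col):
    target centre in the left image; returns the disparity in pixels."""
    tpl = left[row - half:row + half + 1, col - half:col + half + 1].astype(np.int32)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - half < 0:
            break
        win = right[row - half:row + half + 1, c - half:c + half + 1].astype(np.int32)
        cost = np.abs(tpl - win).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

left = np.random.randint(0, 255, (240, 320))
right = np.roll(left, -12, axis=1)                 # synthetic 12-pixel disparity
print(sad_match(left, right, row=120, col=160))    # expect 12
```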
NASA Astrophysics Data System (ADS)
Celik, Koray
This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity sensing, object avoidance, mapping, and path-planning mechanism to fly and navigate small to medium scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable, and it has been biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), designed for operations in previously unknown, GPS-denied environments. It proposes novel electronics, aircraft, systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.
A Mobile Robots Experimental Environment with Event-Based Wireless Communication
Guinaldo, María; Fábregas, Ernesto; Farias, Gonzalo; Dormido-Canto, Sebastián; Chaos, Dictino; Sánchez, José; Dormido, Sebastián
2013-01-01
An experimental platform for communication between a set of mobile robots through a wireless network has been developed. The mobile robots obtain their positions through a camera that acts as a sensor. The video images are processed in a PC and a Waspmote card sends the corresponding position to each robot using the ZigBee standard. A distributed control algorithm based on event-triggered communications has been designed and implemented to bring the robots into the desired formation. Each robot communicates with its neighbors only at event times. Furthermore, a simulation tool has been developed to design and perform experiments with the system. An example of usage is presented. PMID:23881139
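Event-triggered communication of the kind described here typically replaces periodic broadcasts with a threshold test on how far the state has drifted since the last transmission. The sketch below shows one common form of such a trigger; the threshold, state vector, and ZigBee send step are placeholders, not the paper's actual rule.

```python
import numpy as np

def should_broadcast(current_state, last_broadcast_state, threshold):
    """Trigger a transmission only when the state has drifted from the last broadcast value."""
    return np.linalg.norm(current_state - last_broadcast_state) > threshold

# Each control cycle the robot checks the trigger; neighbours are updated only at event times.
state = np.array([1.02, 0.48])       # current position estimate from the camera sensor
last_sent = np.array([1.00, 0.50])   # value the neighbours currently hold
if should_broadcast(state, last_sent, threshold=0.05):
    last_sent = state.copy()         # ...and the packet would be sent over ZigBee here
```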
Commander Truly on aft flight deck holding communication kit assembly (ASSY)
NASA Technical Reports Server (NTRS)
1983-01-01
On aft flight deck, Commander Truly holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the on-orbit station. HASSELBLAD camera is positioned on overhead window W8.
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
...algorithms and methods will be found in robotics and industrial quality control. 1. Project data: The project has been defined as "On-line point ..." ... development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation ... Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile ...
Surveying the Lunar Surface for New Craters with Mini-RF/Goldstone X-Band Bistatic Observations
NASA Astrophysics Data System (ADS)
Cahill, J. T.; Patterson, G.; Turner, F. S.; Morgan, G.; Stickle, A. M.; Speyerer, E. J.; Espiritu, R. C.; Thomson, B. J.
2017-12-01
A multi-look temporal imaging survey by Speyerer et al. (2016) using the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) has highlighted detectable and frequent impact bombardment processes actively modifying the lunar surface. Over 220 new resolvable impacts have been detected since NASA's Lunar Reconnaissance Orbiter (LRO) entered orbit around the Moon, at a flux that is substantially higher than anticipated from previous studies (Neukum et al., 2001). The Miniature Radio Frequency (Mini-RF) instrument aboard LRO is a hybrid dual-polarized synthetic aperture radar (SAR) that now operates in concert with the Arecibo Observatory (AO) and the Goldstone deep space communications complex 34-meter antenna DSS-13 to collect S- and X-band (12.6 and 4.2 cm) bistatic radar data of the Moon, respectively. Here we targeted some of the larger (>30 m) craters identified by Speyerer et al. (2016) and executed bistatic X-band radar observations both to evaluate our ability to detect and resolve these impact features and further characterize the spatial extent and material size of their ejecta outside optical wavelengths. Data acquired during Mini-RF monostatic operations, when the transmitter was active, show no coverage of the regions in question before or after two of the new impacts occurred. This makes Mini-RF and Earth-based bistatic observations all the more valuable for examination of these fresh new geologic features. Preliminary analyses of Arecibo/Greenbank and Mini-RF/Goldstone observations are unable to resolve the new crater cavities (due to our current resolving capability of 100 m/px), but they further confirm that lunar surface roughness changes occurred between 2008 and 2017. Mini-RF X-band observations show newly ejected material was dispersed on the order of 100-300 meters from the point of impact. Scattering observed in the X-band data suggests the presence of rocky ejecta 4-45 cm in diameter on the surface and buried to depths of at least 0.5 m.
University of Pennsylvania MAGIC 2010 Final Report
2011-01-10
... and mapping (SLAM) techniques are employed to build a local map of the environment surrounding the robot. Readings from the two complementary LIDAR sensors ... [system block-diagram residue: Sensor UGV and Disrupter UGV, each with GPS, IMU, LIDAR and camera sensors, plus localization, local navigation, laser control, task planner, and strategy/plan components] ... the various components shown in Figure 2. This is comprised of the following subsystems: Sensor UGV: mobile UGVs with LIDAR and camera sensors, GPS, and ...
Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization
NASA Technical Reports Server (NTRS)
Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.
2012-01-01
The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
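The "optimal ray projection" step, in which each landmark's 3D position is computed from the current camera pose estimates, can be illustrated by the standard least-squares intersection of viewing rays sketched below. This is an assumed generic form for illustration only; the actual method used by the software is not documented in this abstract.

```python
import numpy as np

def triangulate_from_rays(centers, directions):
    """Least-squares point closest to a bundle of viewing rays (one per frame).

    centers    : (N, 3) camera centres from the current pose estimates
    directions : (N, 3) rays from each centre toward the matched feature
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)         # landmark position minimizing distance to all rays
```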
Novel Robotic Tools for Piping Inspection and Repair, Phase 1
2014-02-13
[List-of-figures fragment: Accowle ODVS cross section and reflective path; Leopard Imaging HD camera mounted to iPhone; Kogeto mounted to Leopard Imaging HD; Leopard Imaging HD camera pipe test (letters); Leopard Imaging HD camera ...]
In Brief: NASA's Phoenix spacecraft lands on Mars
NASA Astrophysics Data System (ADS)
Showstack, Randy; Kumar, Mohi
2008-06-01
After a 9.5-month, 679-million-kilometer flight from Florida, NASA's Phoenix spacecraft made a soft landing in Vastitas Borealis in Mars's northern polar region on 25 May. The lander, whose camera already has returned some spectacular images, is on a 3-month mission to examine the area and dig into the soil of this site, chosen for its likelihood of having frozen water near the surface, and analyze samples. In addition to a robotic arm and robotic arm camera, the lander's instruments include a surface stereo imager; thermal and evolved-gas analyzer; microscopy, electrochemistry, and conductivity analyzer; and a meteorological station that is tracking daily weather and seasonal changes.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.
Chen, Jian; Jia, Bingxi; Zhang, Kaixiang
2017-11-01
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. Trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of trifocal tensor. In the previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can be easily violated for perspective cameras with limited field of view. In this paper, key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installing position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations are made based on the virtual experimentation platform (V-REP) to evaluate the effectiveness of the proposed approach.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of a pair of video images of the target, and then captures the second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target. Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
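A minimal sketch of the described processing chain, which differences the laser-on and laser-off frames, takes the centroid of the remaining spot, and converts its offset from the calibrated infinity point into range, is given below. The threshold, baseline, and focal-length parameters are illustrative assumptions, not values from the patent.

```python
import numpy as np

def laser_spot_range(frame_off, frame_on, ref_point, f, baseline, threshold=30):
    """Estimate range from the offset between the laser-spot centroid and a calibration point.

    frame_off / frame_on : grayscale frames without and with the laser illuminating the target
    ref_point            : pixel where the spot would fall at infinite range (from calibration)
    f                    : focal length in pixels
    baseline             : laser-to-camera offset (same unit as the returned range)
    """
    diff = np.abs(frame_on.astype(np.int16) - frame_off.astype(np.int16))
    mask = diff > threshold                       # keep only the illuminated spot
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                               # spot not detected
    centroid = np.array([xs.mean(), ys.mean()])
    disparity = np.linalg.norm(centroid - np.asarray(ref_point, dtype=float))
    if disparity == 0:
        return np.inf
    return f * baseline / disparity               # triangulation along the laser/camera baseline
```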
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted their development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability of generating 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.
Automated Recognition of Geologically Significant Shapes in MER PANCAM and MI Images
NASA Technical Reports Server (NTRS)
Morris, Robert; Shipman, Mark; Roush, Ted L.
2004-01-01
Autonomous recognition of scientifically important information provides the capability of: 1) Prioritizing data return; 2) Intelligent data compression; 3) Reactive behavior onboard robotic vehicles. Such capabilities are desirable as mission scenarios include longer durations with decreasing interaction from mission control. To address such issues, we have implemented several computer algorithms, intended to autonomously recognize morphological shapes of scientific interest within a software architecture envisioned for future rover missions. Mars Exploration Rovers (MER) instrument payloads include a Panoramic Camera (PANCAM) and Microscopic Imager (MI). These provide a unique opportunity to evaluate our algorithms when applied to data obtained from the surface of Mars. Early in the mission we applied our algorithms to images available at the mission web site (http://marsrovers.jpl.nasa.gov/gallery/images.html), even though these are not at full resolution. Some algorithms would normally use ancillary information, e.g. camera pointing and position of the sun, but these data were not readily available. The initial results of applying our algorithms to the PANCAM and MI images are encouraging. The horizon is recognized in all images containing it; such information could be used to eliminate unwanted areas from the image prior to data transmission to Earth. Additionally, several rocks were identified that represent targets for the mini-thermal emission spectrometer. Our algorithms also recognize the layers, identified by mission scientists. Such information could be used to prioritize data return or in a decision-making process regarding future rover activities. The spherules seen in MI images were also autonomously recognized. Our results indicate that reliable recognition of scientifically relevant morphologies in images is feasible.
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include panning, tilting, sliding, raising, or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-08-30
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
Li, Zhenyu; Wang, Bin; Liu, Hong
2016-01-01
Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
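The least-squares identification of the unknown mass properties described in these two entries is commonly run on-line in recursive form. The following Python sketch shows a generic recursive least-squares estimator for a linear-in-parameters model y = phi^T theta; the regressor phi and parameter vector theta stand in for the paper's mass-property formulation and are assumptions for illustration.

```python
import numpy as np

class RecursiveLeastSquares:
    """On-line least-squares estimator for unknown (e.g., mass-property) parameters theta,
    given measurements modelled as y = phi^T theta + noise."""

    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e3      # large initial covariance: parameters unknown
        self.lam = forgetting

    def update(self, phi, y):
        phi = phi.reshape(-1, 1)
        K = self.P @ phi / (self.lam + phi.T @ self.P @ phi)   # estimator gain
        self.theta = self.theta + K.flatten() * (y - float(phi.T @ self.theta))
        self.P = (self.P - K @ phi.T @ self.P) / self.lam
        return self.theta
```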
Development of tools and techniques for monitoring underwater artifacts
NASA Astrophysics Data System (ADS)
Lazar, Iulian; Ghilezan, Alin; Hnatiuc, Mihaela
2016-12-01
The different assessments provide information on the best methods to approach an artifact. The presence and extent of potential threats to archaeology must also be determined. In this paper we present an underwater robot, built in the laboratory, able to identify the artifact and to bring it to the surface. It is an underwater remotely operated vehicle (ROV) which can be controlled remotely from the shore, a boat or a control station, and communication is possible through an Ethernet cable with a maximum length of 100 m. The robot is equipped with an IP camera which sends real-time images that can be accessed anywhere from within the network. The camera also has a microSD card to store the video. The methods developed for data communication between the robot and the user are presented. A communication protocol between the client and the server is developed to control the ROV.
NASA Technical Reports Server (NTRS)
Rice, J. W., Jr.; Smith, P. H.; Marshall, J. R.
1999-01-01
The first microscopic sedimentological studies of the Martian surface will commence with the landing of the Mars Polar Lander (MPL) on December 3, 1999. The Robotic Arm Camera (RAC) has a resolution of 25 µm/pixel, which will permit detailed micromorphological analysis of surface and subsurface materials. The Robotic Arm will be able to dig up to 50 cm below the surface. The walls of the trench will also be inspected by RAC to look for evidence of stratigraphic and/or sedimentological relationships. The 2001 Mars Lander will build upon and expand the sedimentological research begun by the RAC on MPL. This will be accomplished by: (1) Macroscopic (dm to cm): Descent Imager, Pancam, RAC; (2) Microscopic (mm to µm): RAC, MECA Optical Microscope, AFM. This paper will focus on investigations that can be conducted by the RAC and MECA Optical Microscope.
Magnet-Based System for Docking of Miniature Spacecraft
NASA Technical Reports Server (NTRS)
Howard, Nathan; Nguyen, Hai D.
2007-01-01
A prototype system for docking a miniature spacecraft with a larger spacecraft has been developed by engineers at the Johnson Space Center. Engineers working on Mini AERCam, a free-flying robotic camera, needed to find a way to successfully dock and undock their miniature spacecraft to refuel the propulsion and recharge the batteries. The subsystems developed include (1) a docking port, designed for the larger spacecraft, which contains an electromagnet, a ball-lock mechanism, and a service probe; and (2) a docking cluster, designed for the smaller spacecraft, which contains either a permanent magnet or an electromagnet. A typical docking operation begins with the docking spacecraft maneuvering into position near the docking port on the parent vehicle. The electromagnet(s) are then turned on, and, if necessary, the docking spacecraft is then maneuvered within the capture envelope of the docking port. The capture envelope for this system is approximated by a 5-in. (12.7-cm) cube centered on the front of the docking-port electromagnet and within an angular misalignment of <30°. Thereafter, the magnetic forces draw the smaller spacecraft toward the larger one and this brings the spacecraft into approximate alignment prior to contact. Mechanical alignment guides provide the final rotational alignment into one of 12 positions. Once the docking vehicle has been captured magnetically in the docking port, the ball-lock mechanism is activated, which locks the two spacecraft together. At this point the electromagnet(s) are turned off, and the service probe is extended if recharge and refueling are to be performed. Additionally, during undocking, the polarity of one electromagnet can be reversed to provide a gentle push to separate the two spacecraft. This system is currently being incorporated into the design of the Mini AERCam vehicle.
More About Hazard-Response Robot For Combustible Atmospheres
NASA Technical Reports Server (NTRS)
Stone, Henry W.; Ohm, Timothy R.
1995-01-01
Report presents additional information about design and capabilities of mobile hazard-response robot called "Hazbot III." Designed to operate safely in combustible and/or toxic atmospheres. Includes cameras and chemical sensors that help human technicians determine the location and nature of a hazard, so a human emergency team can decide how to eliminate the hazard without having to approach it.
Human-Robot Emergency Response - Experimental Platform and Preliminary Dataset
2014-07-28
Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium, May 16-21, 1998, pp. 3715-3720. [13] Itseez, "OpenCV," http ... function and CamShift function in OpenCV [13]. In each image obtained from the cameras, we first calculate the back projection of a histogram model of a human. ...
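For reference, the histogram back-projection plus CamShift tracking loop referred to in this excerpt generally looks like the OpenCV (Python) sketch below; the camera index and the initial bounding box around the person are placeholder values, not details from the report.

```python
import cv2

capture = cv2.VideoCapture(0)                  # hypothetical camera index
ok, frame = capture.read()
assert ok, "camera frame not available"
x, y, w, h = 200, 150, 80, 160                 # hypothetical initial box around the person

# Hue histogram of the person, built once from the selected region of interest.
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

track_window = (x, y, w, h)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], scale=1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, term)
    # rot_rect holds the tracked person's rotated bounding box in this frame
```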
Autonomous Navigation by a Mobile Robot
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand
2005-01-01
ROAMAN is a computer program for autonomous navigation of a mobile robot on a long (as much as hundreds of meters) traversal of terrain. Developed for use aboard a robotic vehicle (rover) exploring the surface of a remote planet, ROAMAN could also be adapted to similar use on terrestrial mobile robots. ROAMAN implements a combination of algorithms for (1) long-range path planning based on images acquired by mast-mounted, wide-baseline stereoscopic cameras, and (2) local path planning based on images acquired by body-mounted, narrow-baseline stereoscopic cameras. The long-range path-planning algorithm autonomously generates a series of waypoints that are passed to the local path-planning algorithm, which plans obstacle-avoiding legs between the waypoints. Both the long- and short-range algorithms use an occupancy-grid representation in computations to detect obstacles and plan paths. Maps that are maintained by the long- and short-range portions of the software are not shared because substantial localization errors can accumulate during any long traverse. ROAMAN is not guaranteed to generate an optimal shortest path, but does maintain the safety of the rover.
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling Switched Capacitor Array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, high voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
Improving Robotic Operator Performance Using Augmented Reality
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles K.; Pace, John W.
2007-01-01
The Special Purpose Dexterous Manipulator (SPDM) is a two-armed robot that functions as an extension to the end effector of the Space Station Robotics Manipulator System (SSRMS), currently in use on the International Space Station (ISS). Crew training for the SPDM is accomplished using a robotic hardware simulator, which performs most of SPDM functions under normal static Earth gravitational forces. Both the simulator and SPDM are controlled from a standard robotic workstation using a laptop for the user interface and three monitors for camera views. Most operations anticipated for the SPDM involve the manipulation, insertion, and removal of any of several types of Orbital Replaceable Unit (ORU), modules which control various ISS functions. Alignment tolerances for insertion of the ORU into its receptacle are 0.25 inch and 0.5 degree from nominal values. The pre-insertion alignment task must be performed within these tolerances by using available video camera views of the intrinsic features of the ORU and receptacle, without special registration markings. Since optimum camera views may not be available, and dynamic orbital lighting conditions may limit periods of viewing, a successful ORU insertion operation may require an extended period of time. This study explored the feasibility of using augmented reality (AR) to assist SPDM operations. Geometric graphical symbols were overlaid on one of the workstation monitors to afford cues to assist the operator in attaining adequate pre-insertion ORU alignment. Twelve skilled subjects performed eight ORU insertion tasks using the simulator with and without the AR symbols in a repeated measures experimental design. Results indicated that using the AR symbols reduced pre-insertion alignment error for all subjects and reduced the time to complete pre-insertion alignment for most subjects.
NASA Technical Reports Server (NTRS)
Blusiu, Julian O.
1997-01-01
Many future NASA missions will be designed to robotically explore planets, moons, and asteroids by collecting soil samples and conducting in-situ analyses to establish ground composition and look for the presence of specific components.
Autonomous Mobile Platform for Research in Cooperative Robotics
NASA Technical Reports Server (NTRS)
Daemi, Ali; Pena, Edward; Ferguson, Paul
1998-01-01
This paper describes the design and development of a platform for research in cooperative mobile robotics. The structure and mechanics of the vehicles are based on R/C cars. The vehicle is rendered mobile by a DC motor and servo motor. The perception of the robot's environment is achieved using IR sensors and a central vision system. A laptop computer processes images from a CCD camera located above the testing area to determine the position of objects in sight. This information is sent to each robot via RF modem. Each robot is operated by a Motorola 68HC11E micro-controller, and all actions of the robots are realized through the connections of IR sensors, modem, and motors. The intelligent behavior of each robot is based on a hierarchical fuzzy-rule based approach.
[Robotic fundoplication for gastro-oesophageal reflux disease].
Costi, Renato; Himpens, Jacques; Iusco, Domenico; Sarli, Leopoldo; Violi, Vincenzo; Roncoroni, Luigi; Cadière, Guy Bernard
2004-01-01
Presented as a possible "second" revolution in general surgery after the introduction of laparoscopy during the last few years, the robotic approach to mini-invasive surgery has not yet witnessed wide, large-scale diffusion among general surgeons and is still considered an "experimental approach". In general surgery, the laparoscopic treatment of gastro-oesophageal reflux is the second most frequently performed robot-assisted procedure after cholecystectomy. A review of the literature and an analysis of the costs may allow a preliminary evaluation of the pros and cons of robotic fundoplication, which may then be applicable to other general surgery procedures. Eleven articles report 91 cases of robotic fundoplication (75 Nissen, 9 Thal, 7 Toupet). To date, there is no evidence of benefit in terms of duration of surgery, rate of complications and hospital stay. Moreover, robotic fundoplication is more expensive than the traditional laparoscopic approach (the additional cost per procedure due to robotics is 1,882.97 euros). Only further technological upgrades and advances will make the use of robotics competitive in general surgery. The development of multi-functional instruments and of tactile feedback at the console, enlargement of the three-dimensional laparoscopic view and specific "team" training will enable the use of robotic surgery to be extended to increasingly difficult procedures and to non-specialised environments.
Atmospheric Seeing and Transparency Robotic Observatory
NASA Astrophysics Data System (ADS)
Cline, J. D.; Castelaz, M. W.
2002-12-01
A robotic 12.7 cm telescope and camera (together called OVIEW) have been designed to do photometry of 50 of the brightest stars in the local sky 24 hours a day. Each star is imaged through a broadband 500 nm filter. Software automatically analyzes the brightness of the star and the stellar seeing disk. The results are published in real-time on a web page. Comparison of stellar brightness with known apparent magnitude is a measure of transparency with instrument resolution of one arcsecond. We will describe the observatory, software, and website. We will also describe other telescopes on the Optical Ridge at the Pisgah Astronomical Research Institute (PARI). On the same pier as OVIEW is a second robotic 12.7 cm telescope and camera that image the sun and moon. The solar and lunar images are published live on the Internet. Also on the Optical Ridge is a robotic 20 cm telescope. This telescope is operated by UNC-Chapel Hill and has been operating on the Optical Ridge for more than 2 years surveying the plane of the Milky Way for binary low mass stars. UNC-Chapel Hill also operates a 25 cm telescope with an IR camera for photometry of gamma ray burst optical afterglows. An additional 25 cm telescope with a new 3.2 megapixel CCD is used for undergraduate research and W UMa binary star photometry. We acknowledge the AAS Small Grant Program for partial support of the solar/lunar telescope.
NASA Astrophysics Data System (ADS)
Nair, Binu M.; Diskin, Yakov; Asari, Vijayan K.
2012-10-01
We present an autonomous system capable of performing security check routines. The surveillance machine, the Clearpath Husky robotic platform, is equipped with three IP cameras with different orientations for the surveillance tasks of face recognition, human activity recognition, autonomous navigation and 3D reconstruction of its environment. Combining the computer vision algorithms onto a robotic machine has given birth to the Robust Artificial Intelligence-based Defense Electro-Robot (RAIDER). The end purpose of the RAIDER is to conduct a patrolling routine on a single floor of a building several times a day. As the RAIDER travels down the corridors, off-line algorithms use two of the RAIDER's side-mounted cameras to perform 3D reconstruction from monocular vision, updating a 3D model to the most current state of the indoor environment. Using frames from the front-mounted camera, positioned at human eye level, the system performs face recognition with real-time training of unknown subjects. A human activity recognition algorithm will also be implemented, in which each detected person is assigned to a set of action classes picked to classify ordinary and harmful student activities in a hallway setting. The system is designed to detect changes and irregularities within an environment as well as familiarize itself with regular faces and actions to distinguish potentially dangerous behavior. In this paper, we present the various algorithms and their modifications which, when implemented on the RAIDER, serve the purpose of indoor surveillance.
Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model
NASA Astrophysics Data System (ADS)
Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.
2015-03-01
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an average orientation error of 2.5 degrees. The sources of these errors are: 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
Microwave Scanning System Correlations
2010-08-11
The following equipment is needed for each of the individual scanning systems. Handheld scanner equipment list: (1) Dell netbook (with the proper software installed by Evisive); (2) Bluetooth USB port transmitter; (3) handheld probe; (4) USB to mini-USB converter (links camera to netbook).
NASA Technical Reports Server (NTRS)
2002-01-01
Goddard Space Flight Center and Triangle Research & Development Corporation collaborated to create "Smart Eyes," a charge-coupled-device camera that, for the first time, could read and measure bar codes without the use of lasers. The camera operated in conjunction with software and algorithms created by Goddard and Triangle R&D that could track bar code position and direction with speed and precision, as well as with software that could control robotic actions based on vision system input. This accomplishment was intended for robotic assembly of the International Space Station, helping NASA to increase production while using less manpower. After successfully completing the two-phase SBIR project with Goddard, Triangle R&D was awarded a separate contract from the U.S. Department of Transportation (DOT), which was interested in using the newly developed NASA camera technology to heighten automotive safety standards. In 1990, Triangle R&D and the DOT developed a mask made from a synthetic, plastic skin covering to measure facial lacerations resulting from automobile accidents. By pairing NASA's camera technology with Triangle R&D's and the DOT's newly developed mask, a system that could provide repeatable, computerized evaluations of laceration injury was born.
Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras
Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong
2014-01-01
Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Subsequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679
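The first two stages described above, discarding regions that cannot affect the robot and grouping the remaining 3D points into candidate obstacles, can be approximated by the simple height filter and grid clustering sketched below. The thresholds and the floor-aligned coordinate frame are assumptions for illustration, and the RVM classification stage is not shown.

```python
import numpy as np

def candidate_obstacles(points, floor_z=0.05, max_z=2.0, cell=0.25, min_points=30):
    """Drop irrelevant points (floor, overhead structure), then group the remainder
    into coarse grid cells as candidate obstacle clusters.

    points : (N, 3) array from the ToF camera, expressed in a floor-aligned frame
    """
    mask = (points[:, 2] > floor_z) & (points[:, 2] < max_z)
    kept = points[mask]
    # group by (x, y) grid cell; each sufficiently dense cell is one obstacle candidate
    keys = np.floor(kept[:, :2] / cell).astype(int)
    clusters = {}
    for key, p in zip(map(tuple, keys), kept):
        clusters.setdefault(key, []).append(p)
    return [np.array(v) for v in clusters.values() if len(v) >= min_points]
```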
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1992-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
Efficient visual grasping alignment for cylinders
NASA Technical Reports Server (NTRS)
Nicewarner, Keith E.; Kelley, Robert B.
1991-01-01
Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
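The radius-estimation step mentioned in both entries, moving the camera a known distance toward the cylinder and observing how its image width grows, follows directly from the pinhole model. The sketch below shows that relation with hypothetical parameter names; it is a simplified reading of the procedure, not the authors' code.

```python
def cylinder_range_and_diameter(w_before, w_after, move, focal_px):
    """Estimate range and diameter of a cylinder from the growth of its image width
    after the gripper camera advances a known distance toward it.

    w_before, w_after : apparent width of the cylinder in pixels, before/after the move
    move              : distance the camera moved toward the cylinder (metres)
    focal_px          : focal length in pixels
    """
    if w_after <= w_before:
        raise ValueError("image width should grow as the camera approaches")
    # pinhole model: w = f * D / Z, so  w_before * Z = w_after * (Z - move)
    Z = move * w_after / (w_after - w_before)   # range before the move
    D = w_before * Z / focal_px                 # physical diameter
    return Z, D
```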
Kim, Da Hee; Kim, Hwan; Kwak, Sanghyun; Baek, Kwangha; Na, Gina; Kim, Ji Hoon; Kim, Se Heon
2016-10-01
The da Vinci system (da Vinci Surgical System; Intuitive Surgical Inc.) has rapidly developed over several years from the S system to the Si system and now the Xi system. To investigate the surgical feasibility and to provide workflow guidance for the newly released system, we used the new da Vinci Xi system for transoral robotic surgery (TORS) on a cadaveric specimen. Bilateral supraglottic partial laryngectomy, hypopharyngectomy, lateral oropharyngectomy, and base of the tongue resection were serially performed in search of the optimal procedures with the new system. The new surgical robotic system has been upgraded in all respects. The telescope and camera were incorporated into one system, with a digital end-mounted camera. Overhead boom rotation allows multiquadrant access without axis limitation, and the arms are now thinner and longer with grabbing movements for easy adjustments. The patient clearance button dramatically reduces external collisions. The new surgical robotic system has been optimized for improved anatomic access, with better-equipped appurtenances. This cadaveric study of TORS offers guidance on the best protocol for surgical workflow with the new Xi system, leading to improvements in the functional results of TORS.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
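The least-mean-squares motion estimation described above can be illustrated with the standard closed-form rigid alignment of matched 3D background points between consecutive frames (the Kabsch/SVD form). The sketch below is a generic version under that assumption; it omits the outlier handling and moving-object segmentation a real visual-odometry pipeline would need.

```python
import numpy as np

def estimate_rigid_motion(pts_prev, pts_curr):
    """Least-squares rotation R and translation t mapping pts_prev onto pts_curr
    (3D points of the static background, matched between frames via optical flow + stereo)."""
    c_prev = pts_prev.mean(axis=0)
    c_curr = pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t                          # camera egomotion is the inverse of this transform
```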
The research on visual industrial robot which adopts fuzzy PID control algorithm
NASA Astrophysics Data System (ADS)
Feng, Yifei; Lu, Guoping; Yue, Lulin; Jiang, Weifeng; Zhang, Ye
2017-03-01
The control system of a six-degree-of-freedom visual industrial robot, based on multi-axis motion control cards and a PC, was researched. To handle the variable, non-linear characteristics of the industrial robot's servo system, an adaptive fuzzy PID controller was adopted, achieving better control performance. In the vision system, a CCD camera acquires images and sends them to a video processing card; after processing, the PC controls the motion of the six joints through the motion control cards. Experiments show that the manipulator can work together with the machine tool and the vision system to grasp, process, and verify parts, which is relevant to industrial robot manufacturing applications.
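As an illustration of the adaptive fuzzy PID idea, the sketch below scales the proportional and derivative gains on-line from triangular memberships of the tracking error. The membership ranges, rule weights, and normalisation are invented for the example and would have to be tuned for the actual servo axes; this is not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

class FuzzyPID:
    """PID controller whose gains are nudged on-line by simple fuzzy rules on the error."""

    def __init__(self, kp, ki, kd, dt):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        de = (e - self.prev_error) / self.dt
        self.prev_error = e

        # memberships of |error| in {small, large} over a hypothetical normalised range
        small = tri(abs(e), -0.5, 0.0, 0.5)
        large = tri(abs(e), 0.3, 1.0, 1.7)
        # rule base: large error -> raise Kp, cut Kd; small error -> the opposite
        dkp = 0.5 * large - 0.2 * small
        dkd = -0.2 * large + 0.3 * small

        kp = self.kp0 * (1.0 + dkp)
        kd = self.kd0 * (1.0 + dkd)
        self.integral += e * self.dt
        return kp * e + self.ki0 * self.integral + kd * de
```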
Engineering of a miniaturized, robotic clinical laboratory
Nourse, Marilyn B.; Engel, Kate; Anekal, Samartha G.; Bailey, Jocelyn A.; Bhatta, Pradeep; Bhave, Devayani P.; Chandrasekaran, Shekar; Chen, Yutao; Chow, Steven; Das, Ushati; Galil, Erez; Gong, Xinwei; Gessert, Steven F.; Ha, Kevin D.; Hu, Ran; Hyland, Laura; Jammalamadaka, Arvind; Jayasurya, Karthik; Kemp, Timothy M.; Kim, Andrew N.; Lee, Lucie S.; Liu, Yang Lily; Nguyen, Alphonso; O'Leary, Jared; Pangarkar, Chinmay H.; Patel, Paul J.; Quon, Ken; Ramachandran, Pradeep L.; Rappaport, Amy R.; Roy, Joy; Sapida, Jerald F.; Sergeev, Nikolay V.; Shee, Chandan; Shenoy, Renuka; Sivaraman, Sharada; Sosa‐Padilla, Bernardo; Tran, Lorraine; Trent, Amanda; Waggoner, Thomas C.; Wodziak, Dariusz; Yuan, Amy; Zhao, Peter; Holmes, Elizabeth A.
2018-01-01
Abstract The ability to perform laboratory testing near the patient and with smaller blood volumes would benefit patients and physicians alike. We describe our design of a miniaturized clinical laboratory system with three components: a hardware platform (ie, the miniLab) that performs preanalytical and analytical processing steps using miniaturized sample manipulation and detection modules, an assay‐configurable cartridge that provides consumable materials and assay reagents, and a server that communicates bidirectionally with the miniLab to manage assay‐specific protocols and analyze, store, and report results (i.e., the virtual analyzer). The miniLab can detect analytes in blood using multiple methods, including molecular diagnostics, immunoassays, clinical chemistry, and hematology. Analytical performance results show that our qualitative Zika virus assay has a limit of detection of 55 genomic copies/ml. For our anti‐herpes simplex virus type 2 immunoglobulin G, lipid panel, and lymphocyte subset panel assays, the miniLab has low imprecision, and method comparison results agree well with those from the United States Food and Drug Administration‐cleared devices. With its small footprint and versatility, the miniLab has the potential to provide testing of a range of analytes in decentralized locations. PMID:29376134
Engineering of a miniaturized, robotic clinical laboratory.
Nourse, Marilyn B; Engel, Kate; Anekal, Samartha G; Bailey, Jocelyn A; Bhatta, Pradeep; Bhave, Devayani P; Chandrasekaran, Shekar; Chen, Yutao; Chow, Steven; Das, Ushati; Galil, Erez; Gong, Xinwei; Gessert, Steven F; Ha, Kevin D; Hu, Ran; Hyland, Laura; Jammalamadaka, Arvind; Jayasurya, Karthik; Kemp, Timothy M; Kim, Andrew N; Lee, Lucie S; Liu, Yang Lily; Nguyen, Alphonso; O'Leary, Jared; Pangarkar, Chinmay H; Patel, Paul J; Quon, Ken; Ramachandran, Pradeep L; Rappaport, Amy R; Roy, Joy; Sapida, Jerald F; Sergeev, Nikolay V; Shee, Chandan; Shenoy, Renuka; Sivaraman, Sharada; Sosa-Padilla, Bernardo; Tran, Lorraine; Trent, Amanda; Waggoner, Thomas C; Wodziak, Dariusz; Yuan, Amy; Zhao, Peter; Young, Daniel L; Robertson, Channing R; Holmes, Elizabeth A
2018-01-01
The ability to perform laboratory testing near the patient and with smaller blood volumes would benefit patients and physicians alike. We describe our design of a miniaturized clinical laboratory system with three components: a hardware platform (ie, the miniLab) that performs preanalytical and analytical processing steps using miniaturized sample manipulation and detection modules, an assay-configurable cartridge that provides consumable materials and assay reagents, and a server that communicates bidirectionally with the miniLab to manage assay-specific protocols and analyze, store, and report results (i.e., the virtual analyzer). The miniLab can detect analytes in blood using multiple methods, including molecular diagnostics, immunoassays, clinical chemistry, and hematology. Analytical performance results show that our qualitative Zika virus assay has a limit of detection of 55 genomic copies/ml. For our anti-herpes simplex virus type 2 immunoglobulin G, lipid panel, and lymphocyte subset panel assays, the miniLab has low imprecision, and method comparison results agree well with those from the United States Food and Drug Administration-cleared devices. With its small footprint and versatility, the miniLab has the potential to provide testing of a range of analytes in decentralized locations.
NASA Astrophysics Data System (ADS)
Niemeyer, F.; Schima, R.; Grenzdörffer, G.
2013-08-01
Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently, a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from Microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and will give an overview of the results and experiences of the test flights.
2012-03-13
ISS030-E-135163 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135148 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135140 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135185 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135187 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135135 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
2012-03-13
ISS030-E-135157 (13 March 2012) --- A fisheye lens attached to an electronic still camera was used to capture this image of Robonaut 2 humanoid robot during another system checkout in the Destiny laboratory of the International Space Station. Teams on the ground commanded Robonaut through a series of dexterity tests as it spelled out "Hello world" in sign language.
NASA Astrophysics Data System (ADS)
Matras, A.
2017-08-01
The paper discusses the impact of feed screw heating on machining accuracy. The test stand was built around a Haas Mini Mill 2 CNC milling machine and a FLIR SC620 infrared camera. Measurements of the workpiece were performed on a Taylor Hobson Talysurf Intra 50 profilometer. The research showed that 60 minutes of intensive milling machine operation caused thermal expansion of the feed screw, which affected the dimensional error of the workpiece.
A small cable tunnel inspection robot design
NASA Astrophysics Data System (ADS)
Zhou, Xiaolong; Guo, Xiaoxue; Huang, Jiangcheng; Xiao, Jie
2017-04-01
Modern cities rely mainly on internal electricity cable tunnels, which reduce the impact of high-voltage overhead lines on the city's appearance and function. In order to reduce the dangers and high labor intensity of manual cable tunnel inspection, we designed a small tracked (caterpillar) chassis combined with a two-degree-of-freedom manipulator and a two-degree-of-freedom camera pan-tilt unit for cable tunnel inspection work. The caterpillar chassis adopts a simple return roller and damping structure. The mechanical arm, with three parallel shafts, performs the up-and-down and rotational motions. The two-degree-of-freedom camera pan-tilt is used to monitor the cable tunnel through 360 degrees with no dead angle. The design is simple, practical, and efficient.
Real-time image mosaicing for medical applications.
Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth
2007-01-01
In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
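To make the pose-seeded alignment concrete, here is a minimal Python sketch of the idea described above: a robot-sensed offset places each new frame in the mosaic, and an image-based registration step refines the placement. The pixel scale, function names, and the use of OpenCV phase correlation are illustrative assumptions, not details from the paper (which uses 5-d.o.f. sensing and a hand-eye calibration rather than a pure translation).

```python
# Pose-seeded mosaicing sketch (assumes grayscale images, a planar scene and
# a known pixels-per-millimeter scale; all names are illustrative).
import cv2
import numpy as np

def add_to_mosaic(mosaic, new_img, sensed_offset_mm, px_per_mm):
    """Place new_img into mosaic using a robot-sensed XY offset, then refine
    the placement with phase correlation over the overlapping region."""
    # Fast initial alignment from position sensing (no image search needed).
    dx0, dy0 = (np.asarray(sensed_offset_mm, float) * px_per_mm).astype(int)

    h, w = new_img.shape[:2]
    overlap = mosaic[dy0:dy0 + h, dx0:dx0 + w]
    if overlap.shape == new_img.shape and overlap.any():
        # Residual shift between the predicted patch and the new frame;
        # the sign convention should be checked against the OpenCV docs.
        (ddx, ddy), _ = cv2.phaseCorrelate(np.float32(overlap),
                                           np.float32(new_img))
        dx0 -= int(round(ddx))
        dy0 -= int(round(ddy))

    mosaic[dy0:dy0 + h, dx0:dx0 + w] = new_img   # simple paste, no blending
    return mosaic
```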
Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites
NASA Astrophysics Data System (ADS)
Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc
2017-11-01
As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high-resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation; its capability and experience in high-resolution instrumentation are recognised by most customers. Following the SPOT program, it was decided to go ahead with the PLEIADES HR program. PLEIADES HR is the optical high-resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high-resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008; the second will follow in 2009. To minimize development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high-resolution, large-field-of-view optical instrument with emphasis on its technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems. The camera design delivers superior performance using an innovative low-power, low-mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows a wide range of accommodation possibilities with a mini-satellite-class platform.
Sprint: The first flight demonstration of the external work system robots
NASA Technical Reports Server (NTRS)
Price, Charles R.; Grimm, Keith
1995-01-01
The External Work Systems (EWS) 'X Program' is a new NASA initiative that will, in the next ten years, develop a new generation of space robots for active and participative support of zero-g external operations. The robotic development will center on three areas: the assistant robot, the associate robot, and the surrogate robot, which will support extravehicular activities (EVA) prior to and after, during, and instead of space-suited human external activities, respectively. The EWS robotics program will be a combination of technology developments and flight demonstrations for operational proof of concept. The first EWS flight will be a flying camera called 'Sprint' that will seek to demonstrate operationally flexible, remote viewing capability for EVA operations, inspections, and contingencies for the space shuttle and space station. This paper describes the need for Sprint and its characteristics.
Acquiring neural signals for developing a perception and cognition model
NASA Astrophysics Data System (ADS)
Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert
2012-06-01
The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located in the hips, knees, and ankles for humanoid robot walking, 6 DOFs in the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.
Close-Up After Preparatory Test of Drilling on Mars
2013-02-07
After an activity called the 'mini drill test' by NASA's Mars rover Curiosity, the rover's Mars Hand Lens Imager (MAHLI) camera recorded this view of the results. The test generated a ring of powdered rock for inspection in advance of the rover's first full drilling.
Spirit Mini-TES Observations: From Bonneville Crater to the Columbia Hills.
NASA Astrophysics Data System (ADS)
Blaney, D. L.; Athena Science
2004-11-01
During the Mars Exploration Rover Extended Mission, the Spirit rover traveled from the rim of the crater informally known as "Bonneville Crater" into the hills informally known as the "Columbia Hills" in Gusev Crater. During this >3 km drive, Mini-TES (Miniature Thermal Emission Spectrometer) collected systematic observations to characterize spectral diversity, as well as targeted observations of rocks, soils, rover tracks, and trenches. Surface temperatures steadily decreased during the drive and arrival into the Columbia Hills with the approach of winter. Mini-TES covers the 5-29 micron spectral region with a 20 mrad aperture that is co-registered with the panoramic and navigation cameras. As at the landing site (Christensen et al., Science, 2004), many dark rocks in the plains between "Bonneville Crater" and the Columbia Hills show long-wavelength (15-25 μm) absorptions due to olivine, consistent with the detection of olivine-bearing basalt at this site from orbital TES infrared spectroscopy. Rocks with the spectral signature of olivine are rarer in the Columbia Hills. Measurements of outcrops of presumably intact bedrock lack any olivine signature and are consistent with other results indicating that these rocks are highly altered. Rock coatings and fine dust on rocks are common. Soils have thin dust coatings, and disturbed soil (e.g., rover tracks and trenches) is consistent with basalt. Mini-TES observations were coordinated with Panoramic Camera (Pancam) observations to allow us to search for correlations of visible spectral properties with the infrared. This work was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract to NASA.
Intelligent navigation and accurate positioning of an assist robot in indoor environments
NASA Astrophysics Data System (ADS)
Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke
2017-12-01
A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in robot applications assisting disabled and/or elderly people in museum or art gallery environments. In this paper, we present a human-like navigation method, in which neural networks control the wheelchair robot to reach the goal location safely by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions and uses a low-cost camera to track the target picture and a laser range finder to navigate safely. Results show that the neural controller with the Conjugate Gradient Backpropagation training algorithm gives a robust response to guide the mobile robot accurately to the goal position.
Robot Manipulator Technologies for Planetary Exploration
NASA Technical Reports Server (NTRS)
Das, H.; Bao, X.; Bar-Cohen, Y.; Bonitz, R.; Lindemann, R.; Maimone, M.; Nesnas, I.; Voorhees, C.
1999-01-01
NASA exploration missions to Mars, initiated by the Mars Pathfinder mission in July 1997, will continue over the next decade. The missions require challenging innovations in robot design and improvements in autonomy to meet ambitious objectives under tight budget and time constraints. The authors are developing design tools, component technologies and capabilities to address these needs for manipulation with robots for planetary exploration. The specific developments are: 1) a software analysis tool to reduce robot design iteration cycles and optimize on design solutions, 2) new piezoelectric ultrasonic motors (USM) for light-weight and high torque actuation in planetary environments, 3) use of advanced materials and structures for strong and light-weight robot arms and 4) intelligent camera-image coordinated autonomous control of robot arms for instrument placement and sample acquisition from a rover vehicle.
Maneuverability and mobility in palm-sized legged robots
NASA Astrophysics Data System (ADS)
Kohut, Nicholas J.; Birkmeyer, Paul M.; Peterson, Kevin C.; Fearing, Ronald S.
2012-06-01
Palm-sized legged robots show promise for military and civilian applications, including exploration of hazardous or difficult-to-reach places, search and rescue, espionage, and battlefield reconnaissance. However, they also face many technical obstacles, including, but not limited to, actuator performance, weight constraints, processing power, and power density. This paper presents an overview of several robots from the Biomimetic Millisystems Laboratory at UC Berkeley, including the OctoRoACH, a steerable, running legged robot capable of basic navigation and equipped with a camera and active tail; CLASH, a dynamic climbing robot; and BOLT, a hybrid crawling and flying robot. The paper also discusses, and presents some preliminary solutions to, the technical obstacles listed above plus issues such as robustness to unstructured environments, limited sensing and communication bandwidths, and system integration.
Autonomous Aerial Refueling Ground Test Demonstration—A Sensor-in-the-Loop, Non-Tracking Method
Chen, Chao-I; Koseluk, Robert; Buchanan, Chase; Duerner, Andrew; Jeppesen, Brian; Laux, Hunter
2015-01-01
An essential capability for an unmanned aerial vehicle (UAV) to extend its airborne duration without increasing the size of the aircraft is autonomous aerial refueling (AAR). This paper proposes a sensor-in-the-loop, non-tracking method for probe-and-drogue style autonomous aerial refueling tasks by combining sensitivity adjustments of a 3D Flash LIDAR camera with computer vision based image-processing techniques. The method overcomes the inherent ambiguity issues of reconstructing 3D information from traditional 2D images by taking advantage of ready-to-use 3D point cloud data from the camera, followed by well-established computer vision techniques. These techniques include curve fitting algorithms and outlier removal with the random sample consensus (RANSAC) algorithm to reliably estimate the drogue center in 3D space, as well as to establish the relative position between the probe and the drogue. To demonstrate the feasibility of the proposed method on a real system, a ground navigation robot was designed and fabricated. Results presented in the paper show that, using images acquired from a 3D Flash LIDAR camera as real-time visual feedback, the ground robot is able to track a moving simulated drogue and continuously narrow the gap between the robot and the target autonomously. PMID:25970254
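The drogue-center estimation step can be illustrated with a small RANSAC sketch in Python. It assumes the candidate rim points from the 3D Flash LIDAR are roughly fronto-parallel to the sensor, so the circle is fit in the sensor XY plane and the depth is averaged over inliers; function names, tolerances and iteration counts are placeholders, not values from the paper.

```python
# Minimal RANSAC circle-fit sketch (assumption: the drogue rim is roughly
# fronto-parallel to the 3D Flash LIDAR). Names are illustrative.
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Center and radius of the circle through three 2D points."""
    a = np.array([p2 - p1, p3 - p1], dtype=float)
    b = 0.5 * np.array([p2.dot(p2) - p1.dot(p1), p3.dot(p3) - p1.dot(p1)])
    center = np.linalg.solve(a, b)          # raises LinAlgError if collinear
    return center, np.linalg.norm(center - p1)

def ransac_drogue_center(points_xyz, iters=500, tol=0.01, rng=None):
    """points_xyz: (N, 3) candidate rim points in meters."""
    rng = rng or np.random.default_rng(0)
    xy, best = points_xyz[:, :2], (None, 0)
    for _ in range(iters):
        idx = rng.choice(len(xy), 3, replace=False)
        try:
            c, r = circle_from_3pts(*xy[idx])
        except np.linalg.LinAlgError:        # degenerate (collinear) sample
            continue
        inliers = np.abs(np.linalg.norm(xy - c, axis=1) - r) < tol
        if inliers.sum() > best[1]:
            z = points_xyz[inliers, 2].mean()          # average rim depth
            best = (np.array([c[0], c[1], z]), inliers.sum())
    return best[0]   # estimated 3D drogue center, or None if no fit found
```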
Extrinsic Calibration of Camera and 2D Laser Sensors without Overlap
Al-Widyan, Khalid
2017-01-01
Extrinsic calibration of a camera and a 2D laser range finder (lidar) is crucial in sensor data fusion applications, for example in SLAM algorithms used on mobile robot platforms. The fundamental challenge of extrinsic calibration arises when the camera and lidar do not overlap or share the same field of view. In this paper we propose a novel and flexible approach for the extrinsic calibration of a camera-lidar system without overlap, which can be used for robotic platform self-calibration. The approach is based on the robot-world hand-eye calibration (RWHE) problem, which is known to have efficient and accurate solutions. First, the system was mapped to the RWHE calibration problem modeled as the linear relationship AX = ZB, where X and Z are unknown calibration matrices. Then, we computed the transformation matrix B, which was the main challenge in the above mapping. The computation is based on reasonable assumptions about the geometric structure of the calibration environment. The reliability and accuracy of the proposed approach are compared to a state-of-the-art method in extrinsic 2D lidar-to-camera calibration. Experimental results from real datasets indicate that the proposed approach provides better results, with L2-norm translational and rotational deviations of 314 mm and 0.12°, respectively. PMID:29036905
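The AX = ZB relationship can be solved linearly once several motion pairs (A_i, B_i) are available. The following numpy sketch shows one standard linear formulation (rotation via a homogeneous least-squares problem, then translation via ordinary least squares); it is a generic reconstruction under that assumption, not the authors' implementation.

```python
# Hedged numpy sketch of a linear AX = ZB solution; A_i, B_i, X, Z are 4x4
# homogeneous transforms, X and Z the unknown calibration matrices.
import numpy as np

def project_to_SO3(M):
    U, _, Vt = np.linalg.svd(M)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

def solve_axzb(As, Bs):
    """As, Bs: lists of 4x4 transforms satisfying A_i X = Z B_i."""
    I3, rows = np.eye(3), []
    for A, B in zip(As, Bs):
        RA, RB = A[:3, :3], B[:3, :3]
        # vec(RA RX) = (I3 kron RA) vec(RX);  vec(RZ RB) = (RB.T kron I3) vec(RZ)
        rows.append(np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)]))
    _, _, Vt = np.linalg.svd(np.vstack(rows))
    v = Vt[-1]                                   # null-space vector (up to scale)
    RX = v[:9].reshape(3, 3, order="F")
    RZ = v[9:].reshape(3, 3, order="F")
    s = np.sign(np.linalg.det(RX)) / abs(np.linalg.det(RX)) ** (1.0 / 3.0)
    RX, RZ = project_to_SO3(s * RX), project_to_SO3(s * RZ)

    # Translation part: RA tX - tZ = RZ tB - tA for every pair.
    M, rhs = [], []
    for A, B in zip(As, Bs):
        M.append(np.hstack([A[:3, :3], -I3]))
        rhs.append(RZ @ B[:3, 3] - A[:3, 3])
    t = np.linalg.lstsq(np.vstack(M), np.hstack(rhs), rcond=None)[0]

    X, Z = np.eye(4), np.eye(4)
    X[:3, :3], X[:3, 3] = RX, t[:3]
    Z[:3, :3], Z[:3, 3] = RZ, t[3:]
    return X, Z
```

Non-degenerate robot motion (rotations about at least two axes) is needed for the least-squares system to be well conditioned.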
The Topological Panorama Camera: A New Tool for Teaching Concepts Related to Space and Time.
ERIC Educational Resources Information Center
Gelphman, Janet L.; And Others
1992-01-01
Included are the description, operating characteristics, uses, and future plans for the Topological Panorama Camera, which is an experimental, robotic photographic device capable of producing visual renderings of the mathematical characteristics of an equation in terms of position changes of an object or in terms of the shape of the space…
Intelligent Surveillance Robot with Obstacle Avoidance Capabilities Using Neural Network
2015-01-01
For specific purposes, a vision-based surveillance robot that can run autonomously and acquire images from its dynamic environment is very important, for example in rescuing disaster victims in Indonesia. In this paper, we propose an architecture for an intelligent surveillance robot that is able to avoid obstacles using three ultrasonic distance sensors based on a backpropagation neural network, and a camera for face recognition. A 2.4 GHz transmitter for transmitting video is used by the operator/user to direct the robot to the desired area. Results show the effectiveness of our method, and we evaluate the performance of the system. PMID:26089863
Assistance System for Disabled People: A Robot Controlled by Blinking and Wireless Link
NASA Astrophysics Data System (ADS)
Del Val, Lara; Jiménez, María I.; Alonso, Alonso; de La Rosa, Ramón; Izquierdo, Alberto; Carrera, Albano
Disabled people already benefit from a great deal of technical assistance that improves their quality of life. This article presents a system which will allow interaction between a physically disabled person and his or her environment. The system is controlled by voluntary muscular movements, particularly those of the face muscles. These movements will be translated into machine-understandable instructions, and they will be sent by means of a wireless link to a mobile robot that will execute them. The robot includes a video camera in order to show the user the environment along the route that the robot follows. This system gives greater personal autonomy to people with reduced mobility.
Parallel robot for micro assembly with integrated innovative optical 3D-sensor
NASA Astrophysics Data System (ADS)
Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer
2002-10-01
Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using one single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.
Fringe projection profilometry with portable consumer devices
NASA Astrophysics Data System (ADS)
Liu, Danji; Pan, Zhipeng; Wu, Yuxiang; Yue, Huimin
2018-01-01
Fringe projection profilometry (FPP) using portable consumer devices is attractive because it can bring optical three-dimensional (3D) measurement to ordinary consumers in their daily lives. We demonstrate an FPP system using the camera in a smart mobile phone and a digital consumer mini projector. In our experiment testing the smartphone (iPhone 7) camera performance, the rear-facing camera of the iPhone 7 gives the FPP a fringe contrast ratio of 0.546, a nonlinear carrier phase aberration value of 0.6 rad, a nonlinear phase error of 0.08 rad, and an RMS random phase error of 0.033 rad. In contrast, the FPP using an industrial camera has a fringe contrast ratio of 0.715, a nonlinear carrier phase aberration value of 0.5 rad, a nonlinear phase error of 0.05 rad, and an RMS random phase error of 0.011 rad. Good performance is achieved by using the FPP composed of an iPhone 7 and a mini projector. 3D information of an adult-sized facemask is also measured using the FPP built from portable consumer devices. After system calibration, the 3D absolute information of the facemask is obtained. The measured results are in good agreement with those obtained in the traditional way. Our results show that it is possible to use portable consumer devices to construct a good FPP, which is useful for ordinary people to get 3D information in their daily lives.
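For readers unfamiliar with fringe analysis, the sketch below shows the textbook four-step phase-shifting computation that an FPP system of this kind typically performs; the abstract does not state which algorithm is actually used, so this is only an assumed illustration.

```python
# Textbook four-step phase shifting: fringe images i1..i4 are captured with
# projected phase shifts of 0, pi/2, pi and 3*pi/2.
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Returns the wrapped phase map in (-pi, pi] from four fringe images."""
    i1, i2, i3, i4 = (np.asarray(x, float) for x in (i1, i2, i3, i4))
    return np.arctan2(i4 - i2, i1 - i3)

def fringe_contrast(i1, i2, i3, i4):
    """Fringe modulation (contrast) map, usable as a quality mask."""
    i1, i2, i3, i4 = (np.asarray(x, float) for x in (i1, i2, i3, i4))
    num = np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
    den = (i1 + i2 + i3 + i4) / 2.0
    return num / np.maximum(den, 1e-9)
```

The wrapped phase would then be unwrapped and converted to height through the system calibration, which is the step the paper reports separately.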
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At nearly one tenth the price of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.
Color and Contour Based Identification of Stem of Coconut Bunch
NASA Astrophysics Data System (ADS)
Kannan Megalingam, Rajesh; Manoharan, Sakthiprasad K.; Reddy, Rajesh G.; Sriteja, Gone; Kashyap, Ashwin
2017-08-01
Vision is a key component of artificial intelligence and automated robotics. Sensors or cameras are the sight organs of a robot; only through them can it locate itself or identify the shape of a regular or an irregular object. This paper presents a method for identification of an object based on color and contour recognition using a camera, through digital image processing techniques, for robotic applications. In order to identify the contour, a shape matching technique is used, which takes input data from the provided database and uses it to identify the contour by checking for a shape match. The shape match is based on iterating through each contour of the thresholded image. The color is identified on the HSV scale by approximating the desired range of values from the database. The HSV data, together with the contour iteration, is used to identify a quadrilateral, which is our required contour. This algorithm could also be used in a non-deterministic plane, using HSV values exclusively.
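A minimal OpenCV sketch of the described pipeline is given below: HSV thresholding from a stored range, contour extraction, and shape matching against a database contour. The thresholds, area cut-off and matching score are illustrative assumptions rather than values from the paper.

```python
# Sketch of HSV threshold + contour shape matching (assumes OpenCV 4.x).
import cv2
import numpy as np

def find_stem_candidates(bgr_img, hsv_lo, hsv_hi, ref_contour, max_score=0.2):
    hsv = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    matches = []
    for c in contours:
        if cv2.contourArea(c) < 100:          # ignore small blobs (placeholder)
            continue
        score = cv2.matchShapes(c, ref_contour, cv2.CONTOURS_MATCH_I1, 0.0)
        if score < max_score:                  # lower score = better shape match
            matches.append((score, cv2.boundingRect(c)))
    return sorted(matches)                     # best candidates first
```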
Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting
NASA Astrophysics Data System (ADS)
Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.
2017-12-01
Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation, and efficiency enhancement in the agricultural process is not yet extensive. This research proposes a low-cost agricultural robot architecture. The robot can help farmers survey their farm area, treat the tomato plants, and harvest the ripe tomatoes. Communication between farmer and robot is provided by a wireless radio link covering a wide area (120 m radius). The radio link is combined with Bluetooth to simplify communication between the robot and the farmer's Android smartphone. The robot is equipped with a camera, so the farmers can survey the farm situation on a 7-inch monitor display in real time. The farmers control the robot and arm movement through a user interface on the Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right and turn left) and to cut the spotty leaves or harvest the ripe tomatoes.
Characteristics and requirements of robotic manipulators for space operations
NASA Technical Reports Server (NTRS)
Andary, James F.; Hewitt, Dennis R.; Spidaliere, Peter D.; Lambeck, Robert W.
1992-01-01
A robotic manipulator, DTF-1, developed as part of the Flight Telerobotic Servicer (FTS) project at Goddard Space Flight Center is discussed focusing on the technical, operational, and safety requirements. The DTF-1 system design, which is based on the manipulator, gripper, cameras, computer, and an operator control station incorporates the fundamental building blocks of the original FTS, the end product of which was to have been a light-weight, dexterous telerobotic device. For the first time in the history of NASA, space technology and robotics were combined to find new and unique solutions to the demanding requirements of flying a sophisticated robotic manipulator in space. DTF-1 is considered to be the prototype for all future development in space robotics.
Effect of a human-type communication robot on cognitive function in elderly women living alone.
Tanaka, Masaaki; Ishii, Akira; Yamano, Emi; Ogikubo, Hiroki; Okazaki, Masatsugu; Kamimura, Kazuro; Konishi, Yasuharu; Emoto, Shigeru; Watanabe, Yasuyoshi
2012-09-01
Considering the high prevalence of dementia, it would be of great value to develop effective tools to improve cognitive function. We examined the effects of a human-type communication robot on cognitive function in elderly women living alone. In this study, 34 healthy elderly female volunteers living alone were randomized to living with either a communication robot or a control robot at home for 8 weeks. The shape, voice, and motion features of the communication robot resemble those of a 3-year-old boy, while the control robot was not designed to talk or nod. Before living with the robot and 4 and 8 weeks after living with the robot, experiments were conducted to evaluate a variety of cognitive functions as well as saliva cortisol, sleep, and subjective fatigue, motivation, and healing. The Mini-Mental State Examination score, judgement, and verbal memory function were improved after living with the communication robot; those functions were not altered with the control robot. In addition, the saliva cortisol level was decreased, nocturnal sleeping hours tended to increase, and difficulty in maintaining sleep tended to decrease with the communication robot, although alterations were not shown with the control. The proportions of the participants in whom effects on attenuation of fatigue, enhancement of motivation, and healing could be recognized were higher in the communication robot group relative to the control group. This study demonstrates that living with a human-type communication robot may be effective for improving cognitive functions in elderly women living alone.
Proposal of Self-Learning and Recognition System of Facial Expression
NASA Astrophysics Data System (ADS)
Ogawa, Yukihiro; Kato, Kunihito; Yamamoto, Kazuhiko
We describe the realization of a more complicated function by using information acquired from several equipped primitive functions. A self-learning and recognition system for human facial expressions, achieved under a natural relation between human and robot, is proposed. A robot with this system can understand human facial expressions and behave according to those facial expressions after the completion of the learning process. The system is modelled after the process by which a baby learns his or her parents' facial expressions. By equipping the robot with a camera, the system can get face images, and by equipping CdS sensors on the robot's head, the robot can get information about human actions. Using the information from these sensors, the robot can obtain features of each facial expression. After self-learning is completed, when a person changes his or her facial expression in front of the robot, the robot performs actions corresponding to that facial expression.
Preclinical Evaluation of Robotic-Assisted Sentinel Lymph Node Fluorescence Imaging
Liss, Michael A.; Farshchi-Heydari, Salman; Qin, Zhengtao; Hickey, Sean A.; Hall, David J.; Kane, Christopher J.; Vera, David R.
2015-01-01
An ideal substance to provide convenient and accurate targeting for sentinel lymph node (SLN) mapping during robotic-assisted surgery has yet to be found. We used an animal model to determine the ability of the FireFly camera system to detect fluorescent SLNs after administration of a dual-labeled molecular imaging agent. Methods: We injected the footpads of New Zealand White rabbits with 1.7 or 8.4 nmol of tilmanocept labeled with 99mTc and a near-infrared fluorophore, IRDye800CW. One and 36 h after injection, popliteal lymph nodes, representing the SLNs, were dissected with the assistance of the FireFly camera system, a fluorescence-capable endoscopic imaging system. After excision of the paraaortic lymph nodes, which represented non-SLNs, we assayed all lymph nodes for radioactivity and fluorescence intensity. Results: Fluorescence within all popliteal lymph nodes was easily detected by the FireFly camera system. Fluorescence within the lymph channel could be imaged during the 1-h studies. When compared with the paraaortic lymph nodes, the popliteal lymph nodes retain greater than 95% of the radioactivity at both 1 and 36 h after injection. At both doses (1.7 and 8.4 nmol), the popliteal nodes had higher (P < 0.050) optical fluorescence intensity than the paraaortic nodes at the 1- and 36-h time points. Conclusion: The FireFly camera system can easily detect tilmanocept labeled with a near-infrared fluorophore at least 36 h after administration. This ability will permit image acquisition and subsequent verification of fluorescence-labeled SLNs during robotic-assisted surgery. PMID:25024425
Preclinical evaluation of robotic-assisted sentinel lymph node fluorescence imaging.
Liss, Michael A; Farshchi-Heydari, Salman; Qin, Zhengtao; Hickey, Sean A; Hall, David J; Kane, Christopher J; Vera, David R
2014-09-01
An ideal substance to provide convenient and accurate targeting for sentinel lymph node (SLN) mapping during robotic-assisted surgery has yet to be found. We used an animal model to determine the ability of the FireFly camera system to detect fluorescent SLNs after administration of a dual-labeled molecular imaging agent. We injected the footpads of New Zealand White rabbits with 1.7 or 8.4 nmol of tilmanocept labeled with (99m)Tc and a near-infrared fluorophore, IRDye800CW. One and 36 h after injection, popliteal lymph nodes, representing the SLNs, were dissected with the assistance of the FireFly camera system, a fluorescence-capable endoscopic imaging system. After excision of the paraaortic lymph nodes, which represented non-SLNs, we assayed all lymph nodes for radioactivity and fluorescence intensity. Fluorescence within all popliteal lymph nodes was easily detected by the FireFly camera system. Fluorescence within the lymph channel could be imaged during the 1-h studies. When compared with the paraaortic lymph nodes, the popliteal lymph nodes retain greater than 95% of the radioactivity at both 1 and 36 h after injection. At both doses (1.7 and 8.4 nmol), the popliteal nodes had higher (P < 0.050) optical fluorescence intensity than the paraaortic nodes at the 1- and 36-h time points. The FireFly camera system can easily detect tilmanocept labeled with a near-infrared fluorophore at least 36 h after administration. This ability will permit image acquisition and subsequent verification of fluorescence-labeled SLNs during robotic-assisted surgery. © 2014 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Mini gamma camera, camera system and method of use
Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.
2001-01-01
A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. Also described is a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
A projective surgical navigation system for cancer resection
NASA Astrophysics Data System (ADS)
Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald
2016-03-01
Near-infrared (NIR) fluorescence imaging can provide precise and real-time information about tumor location during a cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of an LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera upon excitation illumination by the LED source. The images are projected back onto the surgical field by the mini projector. The imaging performance of this projective navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the mini projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
Robotic Arm Camera Image of the South Side of the Thermal and Evolved-Gas Analyzer (Door TA4)
NASA Technical Reports Server (NTRS)
2008-01-01
The Thermal and Evolved-Gas Analyzer (TEGA) instrument aboard NASA's Phoenix Mars Lander is shown with one set of oven doors open and dirt from a sample delivery. After the 'seventh shake' of TEGA, a portion of the dirt sample entered the oven via a screen for analysis. This image was taken by the Robotic Arm Camera on Sol 18 (June 13, 2008), or the 18th Martian day of the mission. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Technical Reports Server (NTRS)
2008-01-01
The Robotic Arm Camera on NASA's Phoenix Mars Lander took this image on Oct. 18, 2008, during the 142nd Martian day, or sol, since landing. The flat patch in the center of the image has the informal name 'Holy Cow,' based on researchers' reaction when they saw the initial image of it only a few days after the May 25, 2008 landing. Researchers first saw this flat patch in an image taken by the Robotic Arm Camera on May 30, the fifth Martian day of the mission. The Phoenix mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
STS-98 U.S. Lab Destiny rests in Atlantis' payload bay
NASA Technical Reports Server (NTRS)
2001-01-01
KENNEDY SPACE CENTER, Fla. -- This closeup reveals the tight clearance between an elbow camera on the robotic arm (left) and the U.S. Lab Destiny when the payload bay doors are closed. Measurements of the elbow camera revealed only a one-inch clearance from the U.S. Lab payload, which is under review. A key element in the construction of the International Space Station, Destiny is 28 feet long and weighs 16 tons. Destiny will be attached to the Unity node on the ISS using the Shuttle's robot arm, with the help of the camera. This research and command-and-control center is the most sophisticated and versatile space laboratory ever built. It will ultimately house a total of 23 experiment racks for crew support and scientific research. Destiny will fly on STS-98, the seventh construction flight to the ISS. Launch of STS-98 is scheduled for Jan. 19 at 2:11 a.m. EST.
Distributed memory approaches for robotic neural controllers
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
1990-01-01
The suitability of two varieties of distributed memory neural networks as trainable controllers for a simulated robotics task is explored. The task requires that two cameras observe an arbitrary target point in space. Coordinates of the target on the camera image planes are passed to a neural controller, which must learn to solve the inverse kinematics of a manipulator with one revolute and two prismatic joints. Two new network designs are evaluated. The first, radial basis sparse distributed memory (RBSDM), approximates functional mappings as sums of multivariate Gaussians centered around previously learned patterns. The second network type involves variations of Adaptive Vector Quantizers or Self-Organizing Maps. In these networks, random N-dimensional points are given local connectivities. They are then exposed to training patterns and readjust their locations based on a nearest-neighbor rule. Both approaches are tested on their ability to interpolate manipulator joint coordinates for simulated arm movement while simultaneously performing stereo fusion of the camera data. Comparisons are made with classical k-nearest-neighbor pattern recognition techniques.
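The RBSDM idea of approximating a mapping as a sum of Gaussians centered on stored patterns can be sketched in a few lines of Python; the class below is a simplified illustration (normalized Gaussian-weighted recall), not the network described in the report, and the parameter names are placeholders.

```python
# Simplified "radial basis memory": store (input, output) pattern pairs and
# recall by a normalized Gaussian-weighted sum of the stored outputs.
import numpy as np

class RadialBasisMemory:
    def __init__(self, sigma=0.1):
        self.sigma = sigma
        self.inputs, self.outputs = [], []        # stored pattern pairs

    def store(self, x, y):
        """x: e.g. stereo camera-plane target coordinates; y: joint coordinates."""
        self.inputs.append(np.asarray(x, float))
        self.outputs.append(np.asarray(y, float))

    def recall(self, x):
        X = np.vstack(self.inputs)
        Y = np.vstack(self.outputs)
        d2 = np.sum((X - np.asarray(x, float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))  # Gaussian activations
        w /= np.maximum(w.sum(), 1e-12)            # normalize the weights
        return w @ Y                               # interpolated joint estimate
```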
Automated exterior inspection of an aircraft with a pan-tilt-zoom camera mounted on a mobile robot
NASA Astrophysics Data System (ADS)
Jovančević, Igor; Larnier, Stanislas; Orteu, Jean-José; Sentenac, Thierry
2015-11-01
This paper deals with an automated preflight aircraft inspection using a pan-tilt-zoom camera mounted on a mobile robot moving autonomously around the aircraft. The general topic is an image-processing framework for detection and exterior inspection of different types of items, such as a closed or unlatched door, a mechanical defect on the engine, the integrity of the empennage, or damage caused by impacts or cracks. The detection step allows the system to focus on the regions of interest and point the camera toward the item to be checked. It is based on the detection of regular shapes, such as rounded-corner rectangles, circles, and ellipses. The inspection task relies on clues such as uniformity of isolated image regions, convexity of segmented shapes, and periodicity of the image intensity signal. The approach is applied to the inspection of four items of the Airbus A320: the oxygen bay handle, air-inlet vent, static ports, and fan blades. The results are promising and demonstrate the feasibility of an automated exterior inspection.
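The circle-detection step used to point the camera at round items (for example, static ports) can be illustrated with a short OpenCV sketch; the Hough parameters below are placeholders that would need tuning on real imagery and are not taken from the paper.

```python
# Sketch of Hough-based circle detection for pointing the PTZ camera at
# round inspection items (assumes OpenCV, grayscale input).
import cv2
import numpy as np

def detect_round_items(gray_img):
    blurred = cv2.GaussianBlur(gray_img, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=60, param1=120, param2=40,
                               minRadius=10, maxRadius=200)
    if circles is None:
        return []
    # Each detection is (x_center, y_center, radius) in pixels.
    return [tuple(map(int, c)) for c in np.round(circles[0])]
```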
Space Technology Game Changing Development Astrobee: ISS Robotic Free Flyer
NASA Technical Reports Server (NTRS)
Bualat, Maria Gabriele
2015-01-01
Astrobee will be a free-flying robot that can be remotely operated by astronauts in space or by mission controllers on the ground. NASA is developing Astrobee to perform a variety of intravehicular activities (IVA), such as operations inside the International Space Station. These IVA tasks include interior environmental surveys (e.g., sound level measurement), inventory and mobile camera work. Astrobee will also serve as a platform for robotics research in microgravity. Here we describe the Astrobee project objectives, concept of operations, development approach, key challenges, and initial design.
Motion and Emotional Behavior Design for Pet Robot Dog
NASA Astrophysics Data System (ADS)
Cheng, Chi-Tai; Yang, Yu-Ting; Miao, Shih-Heng; Wong, Ching-Chang
A pet robot dog with two ears, one mouth, one facial expression plane, and one vision system is designed and implemented so that it can perform some emotional behaviors. Three processors (an Intel® Pentium® M 1.0 GHz, an 8-bit 8051 processor, and an embedded soft-core NIOS processor) are used to control the robot. One camera, one power detector, four touch sensors, and one temperature detector are used to obtain information about the environment. The designed robot, with 20 DOF (degrees of freedom), is able to accomplish the walking motion. A behavior system is built on the implemented pet robot so that it is able to choose a suitable behavior for different environmental situations. From the practical test, we can see that the implemented pet robot dog can engage in some emotional interaction with humans.
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
Evaluation of the ROSA™ Spine robot for minimally invasive surgical procedures.
Lefranc, M; Peltier, J
2016-10-01
The ROSA® robot (Medtech, Montpellier, France) is a new medical device designed to assist the surgeon during minimally invasive spine procedures. The device comprises a patient-side cart (bearing the robotic arm and a workstation) and an optical navigation camera. The ROSA® Spine robot enables accurate pedicle screw placement. Thanks to its robotic arm and navigation abilities, the robot monitors movements of the spine throughout the entire surgical procedure and thus enables accurate, safe arthrodesis for the treatment of degenerative lumbar disc diseases, exactly as planned by the surgeon. Development perspectives include (i) assistance at all levels of the spine, (ii) improved planning abilities (virtualization of the entire surgical procedure) and (iii) use for almost any percutaneous spinal procedure not limited to screw positioning, such as percutaneous endoscopic lumbar discectomy, intracorporeal implant positioning, over-the-top laminectomy or radiofrequency ablation.
Development of the Research Platform of Small Autonomous Blimp Robot
NASA Astrophysics Data System (ADS)
Takaya, Toshihiko; Kawamura, Hidenori; Yamamoto, Masahito; Ohuchi, Azuma
A blimp robot is attractive as a small flight robot: it floats in the air by buoyancy, is safe against crashes, requires little energy, and can operate for a long time compared with other flight robots. However, control of a blimp robot is difficult because of the nonlinear characteristics caused by its inertia and the influence of air flow. Applied research that makes maximum use of these features of blimp robots has therefore become active in recent years. In this paper, we describe the development of a general-purpose research blimp robot, built by dividing the robot body into units that can be configured and combined for blimp-robot research and application development. By developing a general-purpose blimp robot research platform, the research efficiency of many researchers can be improved; furthermore, starting research on blimp robots becomes easier, which contributes to the development of the field. We performed the following experiments as proof of the above. 1. We checked the basic position-keeping performance and confirmed that various orbital maneuvers are possible, and we verified the ease of exchanging software units in an experiment that swapped the control layer from PID control to learning control and compared the resulting behavior. 2. To check the ease of exchanging hardware units, the sensor was exchanged from a camera to a microphone, and control of operation was checked. 3. For the ease of adding units, a microphone for sound detection was added to the camera-based image detection, and control of operation was verified. 4. As a check of added functionality, a unit exchange was carried out and a topological-map generation experiment using an added ultrasonic sensor was conducted. The developed research blimp robot achieves ease of exchanging and adding units in hardware through analog and digital interfaces, and in software through a layered structure of combinable software modules. Consequently, functions can easily be added and exchanged, realizing a research platform for blimp robots.
Easy robot programming for beginners and kids using augmented reality environments
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Nishiguchi, Masahiro
2010-11-01
The authors have developed a mobile robot that can be programmed with command and instruction cards. All you have to do is arrange the cards on a table and shoot the programming stage with a camera. Our card programming system recognizes the instruction cards and translates the icon commands into the motor driver program. This card programming environment also provides low-level structured programming.
Variable Star Observing with the Bradford Robotic Telescope
NASA Astrophysics Data System (ADS)
Kinne, Richard C. S.
2011-05-01
With the recent addition of Johnson BVRI filters on the Bradford Robotic Telescope's 24 sq. arc minute camera, this scope has become a possibility to be considered when monitoring certain stars such as LPVs. This presentation will examine the mechanics of observing with the BRT and show examples of work that has been done by the author and how that data has been reduced using VPhot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and start of motion due to capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
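The position-based visual servoing loop described above can be sketched as a simple proportional controller that maps the Kinect-measured head displacement to an inflate/deflate valve command. The gains, loop rate, deadband and I/O functions below are assumptions for illustration, not values or interfaces from the study.

```python
# Illustrative one-axis servo loop for the air-bladder positioner.
# read_head_position_cm() and set_valve_command() are hypothetical hooks to
# the depth camera and the pneumatic valve drivers.
import time

KP = 0.8           # proportional gain (valve command per cm of error), assumed
DEADBAND_CM = 0.2  # ignore errors smaller than the sensing noise, assumed

def servo_head_position(read_head_position_cm, set_valve_command,
                        target_cm, period_s=0.1):
    """Positive commands inflate the air bladder, negative commands deflate."""
    while True:
        error = target_cm - read_head_position_cm()
        if abs(error) < DEADBAND_CM:
            command = 0.0
        else:
            command = max(-1.0, min(1.0, KP * error))   # saturate to [-1, 1]
        set_valve_command(command)
        time.sleep(period_s)
```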
Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT
Bowsher, James; Yan, Susu; Roper, Justin; Giles, William; Yin, Fang-Fang
2014-01-01
Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. The system could provide onboard functional and molecular imaging with 4-min scan times. PMID:24387490
Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowsher, James, E-mail: james.bowsher@duke.edu; Giles, William; Yin, Fang-Fang
2014-01-15
Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. The system could provide onboard functional and molecular imaging with 4-min scan times.
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
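For reference, a minimal version of the OpenCV calibration flow described above might look as follows. It assumes the 48-corner checkerboard is an 8 x 6 inner-corner pattern and that the calibration images live under a placeholder path; the abstract only gives the corner count, so these details are assumptions.

```python
# Minimal per-camera calibration sketch with OpenCV (run once per camera of
# the stereo pair; stereo extrinsics would follow with cv2.stereoCalibrate).
import glob
import cv2
import numpy as np

PATTERN = (8, 6)                         # assumed inner corners per row, column
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib/*.png"):    # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Returns the RMS reprojection error, intrinsic matrix K, distortion
# coefficients (radial and tangential/decentering terms), and per-view poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
print("RMS reprojection error:", rms)
```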
Parallel Robot for Lower Limb Rehabilitation Exercises.
Rastegarpanah, Alireza; Saadat, Mozafar; Borboni, Alberto
2016-01-01
The aim of this study is to investigate the capability of a 6-DoF parallel robot to perform various rehabilitation exercises. The foot trajectories of twenty healthy participants have been measured by a Vicon system during the performance of four different exercises. Based on the kinematics and dynamics of a parallel robot, a MATLAB program was developed in order to calculate the length of the actuators, the actuators' forces, the workspace, and the singularity locus of the robot during the exercises. The calculated actuator lengths and forces were used by motion analysis in SolidWorks in order to simulate the different foot trajectories with the CAD model of the robot. A physical parallel robot prototype was built in order to simulate and execute the foot trajectories of the participants. A Kinect camera was used to track the motion of the leg model placed on the robot. The results demonstrate the robot's capability to perform a full range of various rehabilitation exercises.
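The actuator-length calculation mentioned above reduces to simple vector geometry for a Stewart-type platform; the sketch below shows the inverse kinematics with placeholder anchor coordinates, since the prototype's geometry is not given in the abstract.

```python
# Inverse kinematics sketch for a 6-DoF parallel (Stewart-type) platform:
# each actuator length is the distance between a base anchor and the
# corresponding platform anchor after the platform pose is applied.
import numpy as np

def rot_zyx(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def actuator_lengths(base_pts, plat_pts, xyz, rpy):
    """base_pts, plat_pts: (6, 3) anchor points in their own frames;
    xyz: platform translation; rpy: platform roll/pitch/yaw (radians)."""
    R = rot_zyx(*rpy)
    legs = (plat_pts @ R.T) + np.asarray(xyz) - base_pts
    return np.linalg.norm(legs, axis=1)   # six leg lengths
```

Evaluating this along a measured foot trajectory gives the actuator length profile that the study feeds into the dynamics and workspace analysis.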
Parallel Robot for Lower Limb Rehabilitation Exercises
Saadat, Mozafar; Borboni, Alberto
2016-01-01
The aim of this study is to investigate the capability of a 6-DoF parallel robot to perform various rehabilitation exercises. The foot trajectories of twenty healthy participants have been measured by a Vicon system during the performance of four different exercises. Based on the kinematics and dynamics of a parallel robot, a MATLAB program was developed in order to calculate the length of the actuators, the actuators' forces, the workspace, and the singularity locus of the robot during the exercises. The calculated actuator lengths and forces were used by motion analysis in SolidWorks in order to simulate the different foot trajectories with the CAD model of the robot. A physical parallel robot prototype was built in order to simulate and execute the foot trajectories of the participants. A Kinect camera was used to track the motion of the leg model placed on the robot. The results demonstrate the robot's capability to perform a full range of various rehabilitation exercises. PMID:27799727
Monitoring of volcanic emissions for risk assessment at Popocatépetl volcano (Mexico)
NASA Astrophysics Data System (ADS)
Delgado, Hugo; Campion, Robin; Fickel, Matthias; Cortés Ramos, Jorge; Alvarez Nieves, José Manuel; Taquet, Noemi; Grutter, Michel; Osiris García Gómez, Israel; Darío Sierra Mondragón, Rubén; Meza Hernández, Israel
2015-04-01
In January 2014, the Mexican Agency FOPREDEN (Natural Disaster Prevention Fund) agreed to fund a project to renew, upgrade and complement the gas monitoring facilities. The UNAM-CENAPRED (National Center for Disaster Prevention) gas monitoring system currently consists of: • A COSPEC instrument and two mini-DOAS used for mobile traverse measurements • An SO2 camera used for occasional campaign measurements • A network of three permanent scanning mini-DOAS (NOVAC type 1 instruments) and one permanent mini-DOAS (NOVAC type II, currently under repair). The activities planned in the framework of the new project, several of which have already been successfully implemented, include: • A completely refurbished permanent scanning mini-DOAS network consisting of four stations and the occasional deployment of three RADES (Rapid Deployment System) for assessing plume geometry and chemistry or for responding to emergency situations. • Continuation of the mobile traverse measurements in order to continuously update the 20-year-long SO2 flux database obtained with the COSPEC, now coupled with a mobile DOAS for redundancy. • The development and installation of a permanent SO2 camera, for monitoring in real time the short-timescale variations of the SO2 emissions. • The installation of two permanent FTIR spectrometers, one measuring the plume thermal emissions and the other measuring with the solar occultation geometry, for frequent measurements of the molecular ratios between SO2, HCl, HF and SiF4. • The exploitation in near-real time of the satellite imagery (OMI, MODIS and ASTER) available for the volcano. Special attention will be paid to increasing the reliability and graphical representation of these data streams in order to facilitate their use for decision-making by the civil protection authority in charge of the volcano.
Double Star Measurements Using a Webcam and CCD Camera, Annual Report of 2016
NASA Astrophysics Data System (ADS)
Schlimmer, Jeorg
2018-01-01
This report presents the results of 223 double star measurements from 2016; the minimum separation is 1.23 a.s. (STF1024AB), and the maximum separation is 371 a.s. (STF1424AD). The mean value of all measurements is 18.7 a.s.
Commander Brand shaves in front of forward middeck lockers
NASA Technical Reports Server (NTRS)
1982-01-01
Commander Brand, wearing shorts, shaves in front of forward middeck lockers using personal hygiene mirror assembly (assy). Open modular locker single tray assy, Field Sequential (FS) crew cabin camera, communications kit assy mini headset (HDST) and HDST interface unit (HIU), personal hygiene kit, and meal tray assemblies appear in view.
Commander Truly on aft flight deck holding communication kit assembly (ASSY)
1983-09-05
STS008-04-106 (30 Aug-5 Sept 1983) --- On aft flight deck, Richard M. Truly, STS-8 commander, holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the on orbit station. Hasselblad camera is positioned on overhead window W8.
Mars Global Surveyor MOC Images
NASA Technical Reports Server (NTRS)
1999-01-01
Images of several dust devils were captured by the Mars Orbiter Camera (MOC) during its global geodesy campaign. The images shown were taken two days apart, May 13, 1999 and May 15, 1999. Dust devils are columnar vortices of wind that move across the landscape and pick up dust. They look like mini tornadoes.
Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
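The planner in this paper derives its potential functions with a pseudo-bacterial evolutionary algorithm; the fragment below sketches only the underlying attractive/repulsive potential-field step with hand-picked gains and hypothetical obstacle positions, to show where camera-detected obstacles would enter.

```python
# One gradient-descent step of a classical artificial potential field planner:
# the goal attracts the robot, detected obstacles repel it within a radius d0.
# Gains and positions are illustrative; the paper tunes such parameters with a
# pseudo-bacterial evolutionary algorithm rather than fixing them by hand.
import numpy as np

def potential_step(p, goal, obstacles, ka=1.0, kr=0.5, d0=1.0, step=0.05):
    p, goal = np.asarray(p, float), np.asarray(goal, float)
    force = ka * (goal - p)                     # attractive term
    for obs in obstacles:                       # repulsive terms
        diff = p - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:
            force += kr * (1.0 / d - 1.0 / d0) / d**3 * diff
    n = np.linalg.norm(force)
    return p if n < 1e-9 else p + step * force / n

pos, goal = np.array([0.0, 0.0]), np.array([4.0, 3.0])
obstacles = [(2.0, 1.5), (3.0, 2.8)]            # e.g. from camera detections
for _ in range(200):
    pos = potential_step(pos, goal, obstacles)
print("final position:", pos)
```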
An automated miniature robotic vehicle inspection system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobie, Gordon; Summan, Rahul; MacLeod, Charles
2014-02-18
A novel, autonomous reconfigurable robotic inspection system for quantitative NDE mapping is presented. The system consists of a fleet of wireless (802.11g) miniature robotic vehicles, each approximately 175 × 125 × 85 mm with magnetic wheels that enable them to inspect industrial structures such as storage tanks, chimneys and large diameter pipe work. The robots carry one of a number of payloads including a two channel MFL sensor, a 5 MHz dry coupled UT thickness wheel probe and a machine vision camera that images the surface. The system creates an NDE map of the structure overlaying results onto a 3D model in real time. The authors provide an overview of the robot design, data fusion algorithms (positioning and NDE) and visualization software.
Park, Jae Byung; Lee, Seung Hun; Lee, Il Jae
2009-01-01
In this study, we propose a precise 3D lug pose detection sensor for automatic robot welding of a lug to a huge steel plate used in shipbuilding, where the lug is a handle to carry the huge steel plate. The proposed sensor consists of a camera and four laser line diodes, and its design parameters are determined by analyzing its detectable range and resolution. For the lug pose acquisition, four laser lines are projected on both lug and plate, and the projected lines are detected by the camera. For robust detection of the projected lines against illumination changes, the vertical threshold, thinning, Hough transform and separated Hough transform algorithms are successively applied to the camera image. The lug pose acquisition is carried out in two stages: the top view alignment and the side view alignment. The top view alignment is to detect the coarse lug pose relatively far from the lug, and the side view alignment is to detect the fine lug pose close to the lug. After the top view alignment, the robot is controlled to move close to the side of the lug for the side view alignment. In this way, the precise 3D lug pose can be obtained. Finally, experiments with the sensor prototype are carried out to verify the feasibility and effectiveness of the proposed sensor. PMID:22400007
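As a rough illustration of the line-extraction stage (threshold, thinning, Hough transform), the following sketch applies a simple intensity threshold and the probabilistic Hough transform to a synthetic stripe image; the paper's separated Hough step and exact threshold values are not reproduced.

```python
# Extracting projected laser lines from a camera image with a threshold plus
# the probabilistic Hough transform -- a compact stand-in for the abstract's
# threshold / thinning / Hough / separated-Hough chain. Synthetic test image.
import cv2
import numpy as np

img = np.zeros((240, 320), np.uint8)
cv2.line(img, (20, 200), (300, 60), 255, 2)          # fake laser stripe 1
cv2.line(img, (40, 30), (280, 220), 255, 2)          # fake laser stripe 2
img = cv2.GaussianBlur(img, (5, 5), 0)               # mimic sensor blur

# 1) Intensity threshold keeps only the bright laser pixels.
_, mask = cv2.threshold(img, 80, 255, cv2.THRESH_BINARY)
# 2) Probabilistic Hough transform groups those pixels into line segments.
segments = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=60,
                           minLineLength=50, maxLineGap=10)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        print(f"segment ({x1},{y1})-({x2},{y2}), angle {angle:.1f} deg")
```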
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night. DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
Enhanced Lighting Techniques and Augmented Reality to Improve Human Task Performance
NASA Technical Reports Server (NTRS)
Maida, James C.; Bowen, Charles K.; Pace, John W.
2005-01-01
One of the most versatile tools designed for use on the International Space Station (ISS) is the Special Purpose Dexterous Manipulator (SPDM) robot. Operators for this system are trained at NASA Johnson Space Center (JSC) using a robotic simulator, the Dexterous Manipulator Trainer (DMT), which performs most SPDM functions under normal static Earth gravitational forces. The SPDM is controlled from a standard Robotic Workstation. A key feature of the SPDM and DMT is the Force/Moment Accommodation (FMA) system, which limits the contact forces and moments acting on the robot components, on its payload, an Orbital Replaceable Unit (ORU), and on the receptacle for the ORU. The FMA system helps to automatically alleviate any binding of the ORU as it is inserted or withdrawn from a receptacle, but it is limited in its correction capability. A successful ORU insertion generally requires that the reference axes of the ORU and receptacle be aligned to within approximately 0.25 inch and 0.5 degree of nominal values. The only guides available for the operator to achieve these alignment tolerances are views from any available video cameras. No special registration markings are provided on the ORU or receptacle, so the operator must use their intrinsic features in the video display to perform the pre-insertion alignment task. Since optimum camera views may not be available, and dynamic orbital lighting conditions may limit viewing periods, long times are anticipated for performing some ORU insertion or extraction operations. This study explored the feasibility of using augmented reality (AR) to assist with SPDM operations. Geometric graphical symbols were overlaid on the end effector (EE) camera view to afford cues to assist the operator in attaining adequate pre-insertion ORU alignment.
McDougall, Elspeth M; Corica, Federico A; Chou, David S; Abdelshehid, Corollos S; Uribe, Carlos A; Stoliar, Gabriella; Sala, Leandro G; Khonsari, Sepi S; Eichel, Louis; Boker, John R; Ahlering, Thomas E; Clayman, Ralph V
2006-03-01
To assist practising urologists acquire and incorporate robot-assisted laparoscopic prostatectomy (RALP) into their practice, a 5 day mini-residency (M-R) programme with a mentor, preceptor and potential proctor experience was established at the University of California, Irvine, Yamanouchi Center for Urological Education. The follow-up results from the initial 21 RALP M-R participants are presented. Between September 2003 and September 2004, 21 urologists from six states and four countries underwent a RALP M-R. Each participant underwent 1:2 teacher:attendee instruction over a 5 day period, which included inanimate model skills training, animal/cadaver laboratory skills training and operating room observation experience. Participants were also offered a proctoring experience at their hospital if they so desired. A questionnaire survey was mailed 1-14 months (mean 7.2 months) following completion of the mini-residency and these results were tabulated and reviewed. A 100% response rate was achieved from the mailed questionnaires. The mean M-R participant age was 43 years (range 33-55 years). One-third of the M-R participants were practising in an academic environment. Most of the participants (55%) had no fellowship training. Of those with fellowship training (45%), three (15%) were in laparoscopy and three (15%) were in oncology; 25% of the participants were in large (>6 physicians), 25% in small (2-6 physicians) and 15% in solo practices; 70% of the participants were located in an urban setting. The majority of the participants (80%) had laparoscopic experience during residency training and had performed 20-60 laparoscopic cases prior to attending the M-R programme. Within 7.2 months after M-R (range 1-14 months), 95% of the participants were practising robot-assisted laparoscopic prostatectomy and 25% of the RALP M-R participants had also performed robotic-assisted laparoscopic pyeloplasty. Of the M-R participants, 38% availed themselves of the preceptor/proctor component of the programme; among these, 100% reported that they were performing RALP vs. only 92% of the MR participants who did not have a proctor experience. The 5 day length of the M-R was considered to be of satisfactory duration by 90% of the participants, while 1 participant considered it too brief and 1 considered it too long. All but one of the participants rated the M-R as a very or extremely valuable experience. All the M-R participants indicated that they would recommend this training programme to a colleague. A 5 day intensive RALP M-R course seems to encourage postgraduate urologists, already familiar with laparoscopy, to successfully incorporate robotic surgery into their practice. The take rate, or the percentage of participants performing robotic-assisted surgery within 14 months after M-R, was 95%. Continued follow-up will ultimately determine the long-term effectiveness of this 1 week intensive training programme for postgraduate urologists. Copyright 2006 John Wiley & Sons, Ltd.
Use of spring-roll EAP actuator applied as end-effector of a hyper-redundant robot
NASA Astrophysics Data System (ADS)
Errico, Gianmarco; Fava, Victor; Resta, Ferruccio; Ripamonti, Francesco
2015-04-01
This paper presents a hyper-redundant continuous robot used to perform work in places that humans cannot reach. This type of robot is generally a bio-inspired solution; it is composed of many flexible segments driven by multiple actuators, and its dynamics is described by many degrees of freedom. In this paper a model composed of rigid links connected to each other by revolute joints is presented. In each link a torsional spring is added in order to simulate the resistant torque between the links and the interactions between the cables and the robot during the relative rotation. Moreover, a type of EAP actuator, called a spring roll, is used as the end-effector of the robot. Through a suitable sensor, such as a camera, the spring roll is able to track a target and close the control loop that allows the robot to follow it.
ISS Expedition 53 U.S. Spacewalk 46
2017-10-20
Outside the International Space Station, Expedition 53 Commander Randy Bresnik and Flight Engineer Joe Acaba of NASA conducted a spacewalk Oct. 20 to continue upgrades to and maintenance of station hardware. It was the third spacewalk in two weeks for Expedition 53 crewmembers outside the Quest airlock. During the excursion, Bresnik and Acaba replaced a failed camera light on the new Latching End Effector “hand” on the Canadarm2 robotic arm, installed a new high definition camera on the starboard truss of the complex, replaced a fuse on the Dextre Special Dexterous Manipulator attachment for the arm and removed thermal blankets from two spare electrical routing units for future robotic replacement work, if required. It was the fifth spacewalk in Bresnik’s career and the third for Acaba.
Design and control of active vision based mechanisms for intelligent robots
NASA Technical Reports Server (NTRS)
Wu, Liwei; Marefat, Michael M.
1994-01-01
In this paper, we propose a design for an active vision system for intelligent robot applications. The system has degrees of freedom of pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function that represents human visual behavior in response to outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method based on binarized images to extract vergence disparity for vergence control is presented. A control algorithm for vergence control is discussed.
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
Performance of a scanning laser line striper in outdoor lighting
NASA Astrophysics Data System (ADS)
Mertz, Christoph
2013-05-01
For search and rescue robots and reconnaissance robots it is important to detect objects in their vicinity. We have developed a scanning laser line striper that can produce dense 3D images using active illumination. The scanner consists of a camera and a MEMS-micro mirror based projector. It can also detect the presence of optically difficult material like glass and metal. The sensor can be used for autonomous operation or it can help a human operator to better remotely control the robot. In this paper we will evaluate the performance of the scanner under outdoor illumination, i.e. from operating in the shade to operating in full sunlight. We report the range, resolution and accuracy of the sensor and its ability to reconstruct objects like grass, wooden blocks, wires, metal objects, electronic devices like cell phones, blank RPG, and other inert explosive devices. Furthermore we evaluate its ability to detect the presence of glass and polished metal objects. Lastly we report on a user study that shows a significant improvement in a grasping task. The user is tasked with grasping a wire with the remotely controlled hand of a robot. We compare the time it takes to complete the task using the 3D scanner with using a traditional video camera.
Wu, Defeng; Chen, Tianfei; Li, Aiguo
2016-08-30
A robot-based three-dimensional (3D) measurement system is presented. In the presented system, a structured light vision sensor is mounted on the arm of an industrial robot. Measurement accuracy is one of the most important aspects of any 3D measurement system. To improve the measuring accuracy of the structured light vision sensor, a novel sensor calibration approach is proposed to improve the calibration accuracy. The approach is based on a number of fixed concentric circles manufactured in a calibration target. The concentric circle is employed to determine the real projected centres of the circles. Then, a calibration point generation procedure is used with the help of the calibrated robot. When enough calibration points are ready, the radial alignment constraint (RAC) method is adopted to calibrate the camera model. A multilayer perceptron neural network (MLPNN) is then employed to identify the calibration residuals after the application of the RAC method. Therefore, the hybrid pinhole model and the MLPNN are used to represent the real camera model. Using a standard ball to validate the effectiveness of the presented technique, the experimental results demonstrate that the proposed novel calibration approach can achieve a highly accurate model of the structured light vision sensor.
Combating Terrorism Technical Support Office. 2008 Review
2009-01-15
threat object displayed at the operator control unit of the robotic platform. Remote Utility Conversion Kit The Remote Utility Conversion Kit (RUCK) is a...three-dimensional and isometric simulations and games. Develop crowd models, adversarial behavior models, network-based simulations, mini-simulations...Craft-Littoral The modular unmanned surface craft-littoral (MUSCL) is a spin-off of EOD/LIC's Unmanned Reconnaissance Observation Craft, developed
Comparative evaluation of three commercial systems for nucleic acid extraction from urine specimens.
Tang, Yi-Wei; Sefers, Susan E; Li, Haijing; Kohn, Debra J; Procop, Gary W
2005-09-01
A nucleic acid extraction system that can handle small numbers of specimens with a short test turnaround time and short hands-on time is desirable for emergent testing. We performed a comparative validation on three systems: the MagNA Pure compact system (Compact), the NucliSens miniMAG extraction instrument (miniMAG), and the BioRobot EZ1 system (EZ1). A total of 75 urine specimens submitted for polyomavirus BK virus detection were used. The human beta-actin gene was detected on 75 (100%), 75 (100%), and 72 (96%) nucleic acid extracts prepared by the miniMAG, EZ1, and Compact, respectively. The miniMAG produced the highest quantity of nucleic acids and the best precision among the three systems. The agreement rate was 100% for BKV detection on nucleic acid extracts prepared by the three extraction systems. When a full panel of specimens was run, the hands-on time and test turnaround time were 105.7 and 121.1 min for miniMAG, 6.1 and 22.6 min for EZ1, and 7.4 and 33.7 min for Compact, respectively. The EZ1 and Compact systems processed automatic nucleic acid extraction properly, providing a good solution to the need for sporadic but emergent specimen detection. The miniMAG yielded the highest quantity of nucleic acids, suggesting that this system would be the best for specimens containing a low number of microorganisms of interest.
NASA Astrophysics Data System (ADS)
Song, Zhen; Moore, Kevin L.; Chen, YangQuan; Bahl, Vikas
2003-09-01
As an outgrowth of a series of projects focused on mobility of unmanned ground vehicles (UGV), an omni-directional (ODV), multi-robot, autonomous mobile parking security system has been developed. The system has two types of robots: the low-profile Omni-Directional Inspection System (ODIS), which can be used for under-vehicle inspections, and the mid-sized T4 robot, which serves as a "marsupial mothership" for the ODIS vehicles and performs coarse resolution inspection. A key task for the T4 robot is license plate recognition (LPR). For a successful LPR task without compromising the recognition rate, the robot must be able to identify the bumper locations of vehicles in the parking area and then precisely position the LPR camera relative to the bumper. This paper describes a 2D-laser scanner based approach to bumper identification and laser servoing for the T4 robot. The system uses a gimbal-mounted scanning laser. As the T4 robot travels down a row of parking stalls, data is collected from the laser every 100 ms. For each parking stall in the range of the laser during the scan, the data is matched to a "bumper box" corresponding to where a car bumper is expected, resulting in a point cloud of data corresponding to a vehicle bumper for each stall. Next, recursive line-fitting algorithms are used to determine a line for the data in each stall's "bumper box." The fitting technique uses Hough based transforms, which are robust against segmentation problems and fast enough for real-time line fitting. Once a bumper line is fitted with an acceptable confidence, the bumper location is passed to the T4 motion controller, which moves to position the LPR camera properly relative to the bumper. The paper includes examples and results that show the effectiveness of the technique, including its ability to work in real-time.
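A minimal sketch of the bumper-fitting idea is shown below: a total-least-squares (PCA) line fit to synthetic "bumper box" points with a crude residual-based confidence check. The actual T4 implementation uses Hough-based fitting, which is more robust to segmentation problems.

```python
# Fitting a bumper line to the laser points that fall inside a stall's
# "bumper box": a total-least-squares (PCA) fit with a crude confidence
# measure. The T4 system uses Hough-based fitting; this only shows the idea.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic scan points: a bumper roughly along y = 0.3*x + 2, plus noise.
x = rng.uniform(-1.0, 1.0, 80)
pts = np.column_stack([x, 0.3 * x + 2.0 + rng.normal(0, 0.02, 80)])

centroid = pts.mean(axis=0)
# The principal direction of the point cloud gives the line orientation.
_, _, vt = np.linalg.svd(pts - centroid)
direction = vt[0]                       # unit vector along the bumper
normal = vt[1]                          # unit normal to the bumper
rms_off_line = np.sqrt(np.mean(((pts - centroid) @ normal) ** 2))

heading = np.degrees(np.arctan2(direction[1], direction[0]))
print(f"bumper heading {heading:.1f} deg, centroid {centroid}, "
      f"fit RMS {rms_off_line * 1000:.1f} mm")
# A fit would be accepted (and passed to the motion controller) only if the
# RMS residual and the point count clear preset confidence thresholds.
```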
2007-07-01
engineering of a process or system that mimics biology, to investigate behaviours in robots that emulate animals such as self-healing and swarming [2...7.3.5 References 7-25 7.4 Adaptive Automation for Robotic Military Systems 7-29 7.4.1 Introduction 7-29 7.4.2 Human Performance Issues for...Figure 6-7 Integrated Display of Video, Range Readings, and Robot Representation 6-31 Figure 6-8 Representing the Pose of a Panning Camera 6-32 Figure
2017-03-24
iss050e059529 (03/24/2017) --- Flight Engineer Thomas Pesquet of ESA (European Space Agency) is seen performing maintenance on the Dextre robot during a spacewalk. Pesquet and Expedition 50 Commander Shane Kimbrough of NASA conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.
2017-03-24
iss050e059608 (03/24/2017) --- NASA astronaut Peggy Whitson controls the robotic arm aboard the International Space Station during a spacewalk. Expedition 50 Commander Shane Kimbrough of NASA and Flight Engineer Thomas Pesquet of ESA (European Space Agency) conducted a six hour and 34 minute spacewalk on March 24, 2017. The two astronauts successfully disconnected cables and electrical connections on the Pressurized Mating Adapter-3 to prepare for its robotic move, lubricated the latching end effector on the Special Purpose Dexterous Manipulator “extension” for the Canadarm2 robotic arm, inspected a radiator valve and replaced cameras on the Japanese segment of the outpost.
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reflectance distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90mm, increasing to 1.37mm for tissue across the calibrated FOV. The ideal accuracy was 1.14mm. The camera showed submillimeter error during a simulated surgical task.
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drinking cans. A further objective is to design a mobile robot that can estimate the position of rubbish relative to the robot. In addition, the mobile robot is able to approach the rubbish based on the estimated position. This paper explains the types of image processing, detection and recognition methods, and image filters considered. The project implements an RGB subtraction method as the primary detection system. In addition, an algorithm for distance measurement based on the image plane is implemented. The project is limited to using a computer webcam as the sensor, and the robot is only able to approach the nearest rubbish within the camera's field of view and any rubbish that contains the expected RGB colour components on its body.
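A minimal sketch of the two ingredients named above (RGB subtraction and image-plane distance estimation) is given below; the colour threshold, camera height, and focal length are invented values, not the paper's webcam parameters.

```python
# RGB-subtraction detection of a coloured can plus a pinhole ground-plane
# range estimate from the object's lowest image row. The colour threshold,
# camera height and focal length are invented, not the paper's webcam values.
import cv2
import numpy as np

frame = np.zeros((240, 320, 3), np.uint8)
cv2.circle(frame, (160, 180), 12, (0, 0, 200), -1)    # fake red can (BGR)

f = frame.astype(np.int16)
b, g, r = f[:, :, 0], f[:, :, 1], f[:, :, 2]
# "Red minus the other channels" suppresses grey/white background pixels.
redness = np.clip(r - (g + b) // 2, 0, 255).astype(np.uint8)
_, mask = cv2.threshold(redness, 60, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

FY, CY = 300.0, 120.0           # assumed focal length / principal point (px)
CAM_HEIGHT = 0.25               # assumed camera height above the floor (m)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    v_bottom = y + h            # image row where the object meets the floor
    angle_below_axis = np.arctan2(v_bottom - CY, FY)
    if angle_below_axis > 0:    # only rows below the horizon lie on the floor
        dist = CAM_HEIGHT / np.tan(angle_below_axis)
        print(f"object at column {x + w // 2}, about {dist:.2f} m ahead")
```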
A Demonstrator Intelligent Scheduler For Sensor-Based Robots
NASA Astrophysics Data System (ADS)
Perrotta, Gabriella; Allen, Charles R.; Shepherd, Andrew J.
1987-10-01
The development of an execution module capable of functioning as an on-line supervisor for a robot equipped with a vision sensor and a tactile sensing gripper system is described. The on-line module is supported by two off-line software modules which provide a procedural assembly-constraints language that allows the assembly task to be defined. This input is then converted into a normalised and minimised form. The host robot programming language permits high-level motions to be issued at the top level, allowing a low programming overhead for the designer, who must describe the assembly sequence. Components are selected for pick-and-place robot movement based on information derived from two cameras, one static and the other mounted on the end effector of the robot. The approach taken is multi-path scheduling as described by Fox. The system is seen to permit robot assembly in a less constrained parts presentation environment, making full use of the sensory detail available on the robot.
2008-01-01
Robotic colorectal surgery has gradually been performed more with the help of the technological advantages of the da Vinci® system. Advanced technological advantages of the da Vinci® system compared with standard laparoscopic colorectal surgery have been reported. These are a stable camera platform, three-dimensional imaging, excellent ergonomics, tremor elimination, ambidextrous capability, motion scaling, and instruments with multiple degrees of freedom. However, despite these technological advantages, most studies did not report the clinical advantages of robotic colorectal surgery compared to standard laparoscopic colorectal surgery. Only one study recently implies the real benefits of robotic rectal cancer surgery. The purpose of this review article is to outline the early concerns of robotic colorectal surgery using the da Vinci® system, to present early clinical outcomes from the most current series, and to discuss not only the safety and the feasibility but also the real benefits of robotic colorectal surgery. Moreover, this article will comment on the possible future clinical advantages and limitations of the da Vinci® system in robotic colorectal surgery. PMID:19108010
Telescope Array Control System Based on Wireless Touch Screen Platform
NASA Astrophysics Data System (ADS)
Fu, X. N.; Huang, L.; Wei, J. Y.
2016-07-01
GWAC (Ground-based Wide Angle Cameras) are the ground-based observational instruments of the Sino-French cooperation SVOM (Space Variable Objects Monitor) astronomical satellite, and Mini-GWAC is a pathfinder and supplement of GWAC. In the context of the Mini-GWAC telescope array, this paper introduces the design and implementation of a kind of telescope array control system, which is based on wireless serial interface module to communicate. We describe the development and implementation of the system in detail in terms of control system principle, system hardware structure, software design, experiment, and test. The system uses the touch-control PC which is based on the Windows CE system as the upper-computer, the wireless transceiver module and PLC (Programmable Logic Controller) as the core. It has the advantages of low cost, reliable data transmission, and simple operation. So far, the control system has been applied to Mini-GWAC successfully.
Telescope Array Control System Based on Wireless Touch Screen Platform
NASA Astrophysics Data System (ADS)
Fu, Xia-nan; Huang, Lei; Wei, Jian-yan
2017-10-01
Ground-based Wide Angle Cameras (GWAC) are the ground-based observational facility for the SVOM (Space Variable Objects Monitor) astronomical satellite of Sino-French cooperation, and Mini-GWAC is the pathfinder and supplement of GWAC. In the context of the Mini-GWAC telescope array, this paper introduces the design and implementation of a telescope array control system based on a wireless touch screen platform. We describe the development and implementation of the system in detail in terms of the control system principle, system hardware structure, software design, experiments, and tests. The system uses a touch-control PC based on the Windows CE system as the upper computer, while a wireless transceiver module and a PLC (Programmable Logic Controller) are taken as the system kernel. It has the advantages of low cost, reliable data transmission, and simple operation. The control system has been applied to Mini-GWAC successfully.
Martian Soil Ready for Robotic Laboratory Analysis
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's Phoenix Mars Lander scooped up this Martian soil on the mission's 11th Martian day, or sol, after landing (June 5, 2008) as the first soil sample for delivery to the laboratory on the lander deck. The material includes a light-toned clod possibly from the crusted surface of the ground, similar in appearance to clods observed near a foot of the lander. This approximately true-color view of the contents of the scoop on the Robotic Arm comes from combining separate images taken by the Robotic Arm Camera on Sol 11, using illumination by red, green and blue light-emitting diodes on the camera. The scoop loaded with this sample was poised over an open sample-delivery door of the Thermal and Evolved-Gas Analyzer at the end of Sol 11, ready to be dumped into the instrument on the next sol. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Head-coupled remote stereoscopic camera system for telepresence applications
NASA Astrophysics Data System (ADS)
Bolas, Mark T.; Fisher, Scott S.
1990-09-01
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
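The following toy example illustrates why an angular-only sensor needs such initialization: a single bearing fixes only a ray, but two bearings taken from known camera positions let the feature be triangulated. It is a 2D synthetic illustration, not the paper's two-step scheme itself.

```python
# Why monocular SLAM needs special feature initialization: one bearing fixes a
# ray, not a depth. With a second bearing after known camera motion, the depth
# can be triangulated. Synthetic 2D example, not the paper's actual scheme.
import numpy as np

landmark = np.array([2.0, 5.0])            # unknown to the estimator
cam0 = np.array([0.0, 0.0])                # first camera position
cam1 = np.array([1.0, 0.0])                # second position (known baseline)

def bearing(cam, point):
    d = point - cam
    return np.arctan2(d[1], d[0])          # the camera measures only this angle

a0, a1 = bearing(cam0, landmark), bearing(cam1, landmark)

# Intersect the two rays cam0 + t0*u0 and cam1 + t1*u1 (2x2 linear solve).
u0 = np.array([np.cos(a0), np.sin(a0)])
u1 = np.array([np.cos(a1), np.sin(a1)])
A = np.column_stack([u0, -u1])
t0, t1 = np.linalg.solve(A, cam1 - cam0)
estimate = cam0 + t0 * u0
print("triangulated landmark:", estimate)  # recovers (2, 5) in this noise-free case
```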
Not Your Mother's View: The Dynamics of Toddler Visual Experience
ERIC Educational Resources Information Center
Smith, Linda B.; Yu, Chen; Pereira, Alfredo F.
2011-01-01
Human toddlers learn about objects through second-by-second, minute-by-minute sensory-motor interactions. In an effort to understand how toddlers' bodily actions structure the visual learning environment, mini-video cameras were placed low on the foreheads of toddlers, and for comparison also on the foreheads of their parents, as they jointly…
Operating Room of the Future: Advanced Technologies in Safe and Efficient Operating Rooms
2008-10-01
fit” or compatibility with different tasks. Ideally, the optimal match between tasks and well-designed display alternatives will be self-apparent...hierarchical display environment. The FARO robot arm is used as an accurate and reliable tracker to control a virtual camera. The virtual camera pose is...in learning outcomes due to self-feedback, improvements in learning outcomes due to instructor feedback and synchronous versus asynchronous
Robotic Vehicle Communications Interoperability
1988-08-01
[Tabulated excerpt: vehicle control functions compared across platforms (engine starter/cold start, fire suppression, fording control, fuel control, fuel tank selector, garage toggle, gear selector, hazard warning, ...) and electro-optic sensor selections (sensor switch, video, radar, IR thermal imaging system, image intensifier, laser ranger, video camera selector: forward, stereo, rear, sensor control).]
FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications
NASA Technical Reports Server (NTRS)
Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven
2011-01-01
High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, Bilinear rectification, Bilateral subtraction pre-filtering and the Sum of Absolute Difference (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded upon the Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
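As a software illustration of the disparity stage alone (no rectification or bilateral-subtraction prefilter, and nothing FPGA-specific), a tiny numpy sum-of-absolute-differences search might look like this:

```python
# CPU illustration of the sum-of-absolute-differences (SAD) disparity search
# that the FPGA pipeline implements in hardware. Small synthetic image pair;
# the rectification and prefiltering stages are omitted.
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            best, best_d = None, 0
            for d in range(max_disp):            # candidate shifts to the left
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(int)
                cost = np.abs(patch - cand).sum()
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

rng = np.random.default_rng(1)
right = rng.integers(0, 255, (40, 64), dtype=np.uint8)
left = np.roll(right, 6, axis=1)        # left image shifted by 6 px disparity
# Most common disparity over the image; expect 6 for this synthetic pair.
print(np.bincount(sad_disparity(left, right).ravel()).argmax())
```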
View of the "handshake" of the SLP between the SSRMS and RMS during STS-100
2001-04-28
S100-E-5898 (28 April 2001) --- A STS-100 crew member with a digital still camera recorded this image of an historical event through an overhead window on the aft flight deck of the Space Shuttle Endeavour. A Canadian “handshake in space” occurred at 4:02 p.m (CDT), April 28, 2001, as the Canadian-built space station robotic arm – operated by Expedition Two flight engineer Susan J. Helms –transferred its launch cradle over to Endeavour’s robotic arm, with Canadian Space Agency astronaut Chris A. Hadfield at the controls. The exchange of the pallet from station arm to shuttle arm marked the first ever robotic-to-robotic transfer in space.
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot’s wrist camera is used as the only sensor, which is rotated to face stairs. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve the generalization ability, a multi-scale Gabor filter is used to process the input image previously. Final deviation result is acquired by applying the majority vote strategy on all the classifiers’ results. The experimental results show that our detector is able to enable the mobile robot to correct its heading direction adaptively while it is climbing the stairs.
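A rough sketch of the preprocessing and voting structure is shown below, with invented Gabor kernel parameters and placeholder classifiers standing in for the paper's trained deviation detectors.

```python
# Multi-scale Gabor preprocessing of the wrist-camera image followed by a
# majority vote over several placeholder "deviation classifiers". The kernel
# parameters and the dummy classifiers are illustrative, not the paper's.
import cv2
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (120, 160), dtype=np.uint8).astype(np.float32) / 255.0

responses = []                                   # Gabor filter bank outputs
for ksize in (9, 17, 31):                        # three spatial scales
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 6.0,
                                  theta=theta, lambd=ksize / 2.0,
                                  gamma=0.5, psi=0, ktype=cv2.CV_32F)
        responses.append(cv2.filter2D(img, cv2.CV_32F, kern))

def dummy_classifier(feats):
    """Stand-in for one trained classifier: -1 = drift left, 0 = ok, +1 = right."""
    left = float(np.mean([f[:, :80] for f in feats]))
    right = float(np.mean([f[:, 80:] for f in feats]))
    if abs(right - left) < 1e-3:
        return 0
    return 1 if right > left else -1

# Each "classifier" looks at a different slice of the filter bank; in the paper
# they are separately trained models whose outputs are combined by voting.
votes = [dummy_classifier(responses[i::3]) for i in range(3)]
deviation = Counter(votes).most_common(1)[0][0]
print("votes:", votes, "-> heading deviation class:", deviation)
```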
NASA Technical Reports Server (NTRS)
1994-01-01
A commercially available ANDROS Mark V-A robot was used by Jet Propulsion Laboratory (JPL) as the departure point in the development of the HAZBOT III, a prototype teleoperated mobile robot designed for response to emergencies. Teleoperated robots contribute significantly to reducing human injury levels by performing tasks too hazardous for humans. ANDROS' manufacturer, REMOTEC, Inc., in turn, adopted some of the JPL concepts, particularly the control panel. HAZBOT III has exceptional mobility, employs solid state electronics and brushless DC motors for safer operation, and is designed so combustible gases cannot penetrate areas containing electronics and motors. Other features include the six-degree-of-freedom manipulator, the 30-pound squeeze force parallel jaw gripper and two video cameras, one for general viewing and navigation and the other for manipulation/grasping.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user is wearing a pair of glasses with a forward looking camera, and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve for two purposes. First, it serves as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have needed hand functionality for interaction and control with other modalities (e.g., joystick). In our co-robot system, when the robot does not fulfill the object finding task in a pre-specified time window, it would actively solicit user controls for guidance. Then the users can use the egocentric vision based gesture interface to orient the robot towards the direction of the object. After that the robot will automatically navigate towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
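One simple per-station detection step, sketched below with a synthetic frame pair, is to difference consecutive images and keep long thin blobs as streak candidates; the operational pipeline adds calibration, masking, and cross-station matching.

```python
# Toy per-station detection step: difference two consecutive all-sky frames so
# static stars cancel, then keep long thin blobs as streak candidates. The
# operational pipeline adds calibration, masking and cross-station matching.
import cv2
import numpy as np

rng = np.random.default_rng(1)
prev = rng.integers(0, 30, (256, 256), dtype=np.uint8)      # starry background
curr = prev.copy()
cv2.line(curr, (40, 60), (200, 150), 255, 2)                 # simulated meteor

diff = cv2.absdiff(curr, prev)                                # stars cancel out
_, bright = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    (cx, cy), (w, h), ang = cv2.minAreaRect(c)                # rotated bounding box
    long_side, short_side = max(w, h), max(min(w, h), 1.0)
    if long_side > 40 and long_side / short_side > 5:         # long and thin = streak
        print(f"meteor candidate near ({cx:.0f},{cy:.0f}), "
              f"length ~{long_side:.0f} px, orientation {ang:.0f} deg")
```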
Electronic Still Camera image of Astronaut Claude Nicollier working with RMS
1993-12-05
S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Localization Using Visual Odometry and a Single Downward-Pointing Camera
NASA Technical Reports Server (NTRS)
Swank, Aaron J.
2012-01-01
Stereo imaging is a technique commonly employed for vision-based navigation. For such applications, two images are acquired from different vantage points and then compared using transformations to extract depth information. The technique is commonly used in robotics for obstacle avoidance or for Simultaneous Localization And Mapping, (SLAM). Yet, the process requires a number of image processing steps and therefore tends to be CPU-intensive, which limits the real-time data rate and use in power-limited applications. Evaluated here is a technique where a monocular camera is used for vision-based odometry. In this work, an optical flow technique with feature recognition is performed to generate odometry measurements. The visual odometry sensor measurements are intended to be used as control inputs or measurements in a sensor fusion algorithm using low-cost MEMS based inertial sensors to provide improved localization information. Presented here are visual odometry results which demonstrate the challenges associated with using ground-pointing cameras for visual odometry. The focus is for rover-based robotic applications for localization within GPS-denied environments.
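A minimal sketch of the monocular, downward-looking odometry step is given below: Shi-Tomasi corners tracked with pyramidal Lucas-Kanade flow, with the median pixel shift scaled to metres using an assumed camera height and focal length (values invented here).

```python
# Frame-to-frame visual odometry for a downward-looking camera: track corner
# features with pyramidal Lucas-Kanade flow and take the median pixel shift as
# the robot's motion. The camera height and focal length used for scaling are
# made-up values, not those of the evaluated hardware.
import cv2
import numpy as np

rng = np.random.default_rng(2)
prev = rng.integers(0, 255, (240, 320), dtype=np.uint8)   # textured ground
curr = np.roll(prev, (4, 7), axis=(0, 1))                  # shifted 7 px, 4 px

p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None,
                                         winSize=(21, 21), maxLevel=3)
good = status.ravel() == 1
flow = (p1[good] - p0[good]).reshape(-1, 2)
dx, dy = np.median(flow, axis=0)               # robust frame-to-frame shift

FOCAL_PX, HEIGHT_M = 400.0, 0.30               # assumed intrinsics and mounting
metres_per_px = HEIGHT_M / FOCAL_PX            # ground sampling distance
print(f"pixel shift ({dx:.1f},{dy:.1f}) -> "
      f"displacement ({dx * metres_per_px:.4f} m, {dy * metres_per_px:.4f} m)")
```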
Telesurgical laparoscopic cholecystectomy between two countries.
Cheah, W K; Lee, B; Lenzi, J E; Goh, P M
2000-11-01
Telesurgery is a form of operative videoconferencing in which a remotely located surgeon observes a procedure through a camera and provides visual and auditory feedback to the operative site. With the use of more robotic devices in laparoscopic surgery, various forms of telesurgery have been tried. We describe the first two international telesurgical, telementored, robot-assisted laparoscopic cholecystectomies performed in the world, between the Johns Hopkins Institute, Baltimore, Maryland, USA, and the National University Hospital, Singapore.
Mitral repair and the robot: a revolutionary tool or marketing ploy?
Ghoneim, Aly; Bouhout, Ismail; Makhdom, Fahd; Chu, Michael W A
2018-03-01
In this review, we discuss the current evidence supporting each minimally invasive mitral repair approach and their associated controversies. Current evidence demonstrates that minimally invasive mitral repair techniques yield similar mitral repair results to conventional sternotomy with the benefits of shorter hospital stay, quicker recovery, better cosmesis and improved patient satisfaction. Despite this, broad adoption of minimally invasive mitral repair is still not achieved. Two main approaches of minimally invasive mitral repair exist: endoscopic mini-thoracotomy and robotic-assisted approaches. Both minimally invasive approaches share many commonalities; however, most centres are strongly polarized to one approach over another creating controversy and debate about the most effective minimally invasive approach.
Vision Based Localization in Urban Environments
NASA Technical Reports Server (NTRS)
McHenry, Michael; Cheng, Yang; Matthies, Larry
2005-01-01
As part of DARPA's MARS2020 program, the Jet Propulsion Laboratory developed a vision-based system for localization in urban environments that requires neither GPS nor active sensors. System hardware consists of a pair of small FireWire cameras and a standard Pentium-based computer. The inputs to the software system consist of: 1) a crude grid-based map describing the positions of buildings, 2) an initial estimate of robot location and 3) the video streams produced by each camera. At each step during the traverse the system: captures new image data, finds image features hypothesized to lie on the outside of a building, computes the range to those features, determines an estimate of the robot's motion since the previous step and combines that data with the map to update a probabilistic representation of the robot's location. This probabilistic representation allows the system to simultaneously represent multiple possible locations. For our testing, we have derived the a priori map manually using non-orthorectified overhead imagery, although this process could be automated. The software system consists of two primary components. The first is the vision system which uses binocular stereo ranging together with a set of heuristics to identify features likely to be part of building exteriors and to compute an estimate of the robot's motion since the previous step. The resulting visual features and the associated range measurements are then fed to the second primary software component, a particle-filter based localization system. This system uses the map and the most recent results from the vision system to update the estimate of the robot's location. This report summarizes the design of both the hardware and software and will include the results of applying the system to the global localization of a robot over an approximately half-kilometer traverse across JPL's Pasadena campus.
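The particle-filter update can be illustrated with a one-dimensional toy: predict the particles with noisy odometry, weight them by a range measurement against the map, and resample. The sketch below uses a single mapped wall in place of the system's building-outline features and grid map.

```python
# Minimal particle-filter localization step: propagate particles with noisy
# odometry, weight them by how well a predicted range-to-wall measurement
# matches the observed one, then resample. A one-dimensional toy world stands
# in for the building-outline features and 2D grid map used by the real system.
import numpy as np

rng = np.random.default_rng(3)
WALL_X = 10.0                                   # mapped wall position (m)
true_x = 2.0

particles = rng.uniform(0.0, 10.0, 500)         # multi-hypothesis belief
for step in range(5):
    odom = 1.0                                  # commanded forward motion (m)
    true_x += odom
    z = (WALL_X - true_x) + rng.normal(0, 0.1)  # measured range to the wall

    particles += odom + rng.normal(0, 0.05, particles.size)     # predict
    expected = WALL_X - particles                                # per-particle z
    weights = np.exp(-0.5 * ((z - expected) / 0.1) ** 2)         # likelihood
    weights /= weights.sum()

    idx = rng.choice(particles.size, particles.size, p=weights)  # resample
    particles = particles[idx]
    print(f"step {step}: true {true_x:.2f}, estimate {particles.mean():.2f}")
```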
NASA Astrophysics Data System (ADS)
van Hecke, Kevin; de Croon, Guido C. H. E.; Hennes, Daniel; Setterfield, Timothy P.; Saenz-Otero, Alvar; Izzo, Dario
2017-11-01
Although machine learning holds enormous promise for autonomous space robots, it is currently not employed because of the inherently uncertain outcome of learning processes. In this article we investigate a learning mechanism, Self-Supervised Learning (SSL), which is very reliable and hence an important candidate for real-world deployment even on safety-critical systems such as space robots. To demonstrate this reliability, we introduce a novel SSL setup that allows a stereo vision equipped robot to cope with the failure of one of its cameras. The setup learns to estimate average depth using a monocular image, by using the stereo vision depths from the past as trusted ground truth. We present preliminary results from an experiment on the International Space Station (ISS) performed with the MIT/NASA SPHERES VERTIGO satellite. The presented experiments were performed on October 8th, 2015 on board the ISS. The main goals were (1) data gathering, and (2) navigation based on stereo vision. First, the astronaut Kimiya Yui moved the satellite around the Japanese Experiment Module to gather stereo vision data for learning. Subsequently, the satellite freely explored the space in the module based on its (trusted) stereo vision system and a pre-programmed exploration behavior, while simultaneously performing the self-supervised learning of monocular depth estimation on board. The two main goals were successfully achieved, representing the first online learning robotic experiments in space. These results lay the groundwork for a follow-up experiment in which the satellite will use the learned single-camera depth estimation for autonomous exploration in the ISS, and are an advancement towards future space robots that continuously improve their navigation capabilities over time, even in harsh and completely unknown space environments.
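A minimal sketch of the self-supervised pairing described above (monocular image in, trusted average stereo depth as label) is shown below. The coarse patch-mean features and linear regressor are illustrative assumptions only, not the estimator flown on the SPHERES VERTIGO satellite.

```python
import numpy as np

def patch_features(gray, grid=4):
    """Coarse grayscale patch means as a stand-in monocular feature vector."""
    h, w = gray.shape
    return np.array([gray[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

class MonoDepthSSL:
    def __init__(self, n_features=16):
        self.w = np.zeros(n_features + 1)
        self.X, self.y = [], []

    def add_sample(self, gray_image, stereo_avg_depth):
        # The stereo depth acts as self-supervised ground truth for the
        # monocular estimator while both cameras are still working.
        self.X.append(np.append(patch_features(gray_image), 1.0))
        self.y.append(stereo_avg_depth)

    def fit(self):
        X, y = np.array(self.X), np.array(self.y)
        self.w, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predict(self, gray_image):
        # Usable even after one camera has failed.
        return float(np.append(patch_features(gray_image), 1.0) @ self.w)
```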
Design, development, and evaluation of an MRI-guided SMA spring-actuated neurosurgical robot
Ho, Mingyen; Kim, Yeongjin; Cheng, Shing Shin; Gullapalli, Rao; Desai, Jaydev P.
2015-01-01
In this paper, we present our work on the development of a magnetic resonance imaging (MRI)-compatible Minimally Invasive Neurosurgical Intracranial Robot (MINIR) comprising shape memory alloy (SMA) spring actuators and a tendon-sheath mechanism. We present the detailed modeling and analysis along with experimental results of the characterization of the SMA spring actuators. Furthermore, to demonstrate image-feedback control, we used the images obtained from a camera to control the motion of the robot, so that eventually continuous MR images could be used to control the robot motion. Since the image tracking algorithm may fail in some situations, we also developed a temperature feedback control scheme which serves as a backup controller for the robot. Experimental results demonstrated that both image feedback and temperature feedback can be used to control the motion of MINIR. A series of MRI compatibility tests were performed on the robot, and the experimental results demonstrated that the robot is MRI compatible and that no significant visual image distortion was observed in the MR images during robot operation. PMID:26622075
Functionalization of Tactile Sensation for Robot Based on Haptograph and Modal Decomposition
NASA Astrophysics Data System (ADS)
Yokokura, Yuki; Katsura, Seiichiro; Ohishi, Kiyoshi
In the real world, robots should be able to recognize the environment in order to be of help to humans. A video camera and a laser range finder are devices that can help robots recognize the environment. However, these devices cannot obtain tactile information from the environment. Future human-assisting robots should have the ability to recognize haptic signals, and a disturbance observer can possibly be used to provide the robot with this ability. In this study, a disturbance observer is employed in a mobile robot to functionalize the tactile sensation. This paper proposes a method that uses the haptograph and modal decomposition for the haptic recognition of road environments. The haptograph presents a graphic view of the tactile information, making it possible to classify road conditions intuitively. The robot controller is designed by considering the decoupled modal coordinate system, which consists of translational and rotational modes. Modal decomposition is performed by using a quarry matrix. Once the robot is provided with the ability to recognize tactile sensations, its usefulness to humans will increase.
Multisensor-based human detection and tracking for mobile service robots.
Bellotto, Nicola; Hu, Huosheng
2009-02-01
One of the fundamental issues for service robots is human-robot interaction. In order to perform such a task and provide the desired services, these robots need to detect and track people in the surroundings. In this paper, we propose a solution for human tracking with a mobile robot that implements multisensor data fusion techniques. The system utilizes a new algorithm for laser-based leg detection using the onboard laser range finder (LRF). The approach is based on the recognition of typical leg patterns extracted from laser scans, which are shown to also be very discriminative in cluttered environments. These patterns can be used to localize both static and walking persons, even when the robot moves. Furthermore, faces are detected using the robot's camera, and the information is fused with the leg positions using a sequential implementation of the unscented Kalman filter. The proposed solution is feasible for service robots with a similar device configuration and has been successfully implemented on two different mobile platforms. Several experiments illustrate the effectiveness of our approach, showing that robust human tracking can be performed within complex indoor environments.
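The sequential fusion step can be illustrated with a toy filter. The paper uses a sequential unscented Kalman filter; the sketch below substitutes a plain linear Kalman filter over a constant-velocity person model, purely to show how a leg detection and a face detection are folded in one after the other. All noise values are illustrative assumptions.

```python
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # both sensors measure (x, y)
Q = np.eye(4) * 0.01
R_legs, R_face = np.eye(2) * 0.05, np.eye(2) * 0.20  # assumed sensor noise levels

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One tracking cycle: predict, then apply each available detection in sequence.
x, P = np.zeros(4), np.eye(4)
x, P = predict(x, P)
x, P = update(x, P, np.array([1.2, 0.4]), R_legs)   # laser leg detection
x, P = update(x, P, np.array([1.1, 0.5]), R_face)   # camera face detection
```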
A Search-and-Rescue Robot System for Remotely Sensing the Underground Coal Mine Environment
Gao, Junyao; Zhao, Fangzhou; Liu, Yi
2017-01-01
This paper introduces a search-and-rescue robot system used for remote sensing of the underground coal mine environment, which is composed of an operating control unit and two mobile robots with explosion-proof and waterproof functions. This robot system is designed to observe and collect information about the coal mine environment through remote control. Thus, this system can be regarded as a multifunction sensor that realizes remote sensing. When the robot system detects danger, it will send out signals to warn rescuers to keep away. The robot is equipped with two gas sensors, two cameras, a two-way audio system, a 1 km-long fiber-optic cable for communication, and a mechanical explosion-proof manipulator. In particular, the manipulator is a novel explosion-proof design for clearing obstacles; it has 3 degrees of freedom but is driven by only two motors. Furthermore, the two robots can communicate in series over 2 km with the operating control unit. The development of the robot system may provide a reference for developing future search-and-rescue systems. PMID:29065560
Intelligent viewing control for robotic and automation systems
NASA Astrophysics Data System (ADS)
Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.
1994-10-01
We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, `hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as `Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such a merged graphics-with-video design allows the system user to preview and modify the planned (`choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence and supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated, single-screen video-graphic user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.
NASA Astrophysics Data System (ADS)
Åström, Anders; Forchheimer, Robert
2012-03-01
Based on the Near-Sensor Image Processing (NSIP) concept and recent results concerning optical flow and Time-to-Impact (TTI) computation with this architecture, we show how these results can be used and extended for robot vision applications. The first case involves estimation of the tilt of an approaching planar surface. The second case concerns the use of two NSIP cameras to estimate absolute distance and speed, similar to a stereo-matching system but without the need to perform image correlations. Returning to a one-camera system, the third case deals with the problem of estimating the shape of the approaching surface. It is shown that the previously developed TTI method not only gives a very compact solution with respect to hardware complexity but also achieves surprisingly high performance.
Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path
Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki
2017-01-01
Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
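Steps (i) and (ii) can be sketched with standard OpenCV calls. In the sketch below, goodFeaturesToTrack and Lucas-Kanade flow stand in for the paper's particle keypoints, and a simple moving average stands in for the l1-optimized camera path; only the overall pipeline shape is illustrated, not the published algorithm.

```python
import cv2
import numpy as np

def estimate_path(frames):
    """Accumulate per-frame (dx, dy, dtheta) into a raw camera trajectory."""
    transforms = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                      qualityLevel=0.01, minDistance=20)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        good_prev = pts[status.flatten() == 1]
        good_next = nxt[status.flatten() == 1]
        m, _ = cv2.estimateAffinePartial2D(good_prev, good_next)
        transforms.append((m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])))
        prev = gray
    return np.cumsum(np.array(transforms), axis=0)

def smooth_path(path, radius=15):
    """Moving-average smoothing as a stand-in for the l1-optimal path."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(path, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                     for i in range(path.shape[1])], axis=1)
```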
Calibration of the motor-assisted robotic stereotaxy system: MARS.
Heinig, Maximilian; Hofmann, Ulrich G; Schlaefer, Alexander
2012-11-01
The motor-assisted robotic stereotaxy system presents a compact and light-weight robotic system for stereotactic neurosurgery. Our system is designed to position probes in the human brain for various applications, for example, deep brain stimulation. It features five fully automated axes. High positioning accuracy is of utmost importance in robotic neurosurgery. First, the key parameters of the robot's kinematics are determined using an optical tracking system. Next, the positioning errors at the center of the arc--which is equivalent to the target position in stereotactic interventions--are investigated using a set of perpendicular cameras. A modeless robot calibration method is introduced and evaluated. To conclude, the application accuracy of the robot is studied in a phantom trial. We identified the bending of the arc under load as the robot's main error source. A calibration algorithm was implemented to compensate for the deflection of the robot's arc. The mean error after the calibration was 0.26 mm, the 68.27th percentile was 0.32 mm, and the 95.45th was 0.50 mm. The kinematic properties of the robot were measured, and based on the results an appropriate calibration method was derived. With mean errors smaller than currently used mechanical systems, our results show that the robot's accuracy is appropriate for stereotactic interventions.
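A deflection-compensation step of the kind described can be sketched as a small least-squares fit. The polynomial error model and the numbers below are illustrative assumptions, not the MARS calibration model or its measured data.

```python
import numpy as np

# Hypothetical calibration data: arc deflection error observed by the camera
# setup at several commanded arc angles (values are illustrative only).
arc_angles = np.radians(np.array([-60, -30, 0, 30, 60], float))
measured_error_mm = np.array([0.62, 0.35, 0.10, 0.33, 0.58])

# Fit a low-order polynomial mapping arc angle to deflection error.
coeffs = np.polyfit(arc_angles, measured_error_mm, deg=2)

def corrected_target(angle_rad, nominal_target_mm):
    """Compensate the commanded target by the predicted arc deflection."""
    deflection = np.polyval(coeffs, angle_rad)
    return nominal_target_mm - deflection
```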
2008-06-04
This image was taken by NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The center of the image shows a trench informally called "Dodo" after the second dig. "Dodo" is located within the previously determined digging area, informally called "Knave of Hearts." The light square to the right of the trench is the Robotic Arm's Thermal and Electrical Conductivity Probe (TECP). The Robotic Arm has scraped to a bright surface which indicated the Arm has reached a solid structure underneath the surface, which has been seen in other images as well. http://photojournal.jpl.nasa.gov/catalog/PIA10763
DOE Robotic and Remote Systems Assistance to the Government of Japan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derek Wadsworth; Victor Walker
At the request of the Government of Japan, DOE conducted a complex-wide survey of available remotely operated and robotic systems to assist in the initial assessment of the damage to the Fukushima Daiichi reactors following the earthquake and subsequent tsunami. As a result, several radiation-hardened cameras and a Talon robot were identified as systems that could immediately assist in the effort and were subsequently sent to Japan. These systems were transferred to the Government of Japan and used to map radiation levels surrounding the damaged facilities. This report describes the equipment, its use, the data collected, and lessons learned from the experience.
Robots Save Soldiers' Lives Overseas (MarcBot)
NASA Technical Reports Server (NTRS)
2009-01-01
Marshall Space Flight Center mobile communications platform designs for future lunar missions led to improvements to fleets of tactical robots now being deployed by U.S. Army. The Multi-function Agile Remote Control Robot (MARCbot) helps soldiers search out and identify improvised explosive devices. NASA used the MARCbots to test its mobile communications platform, and in working with it, made the robot faster while adding capabilities -- upgrading to a digital camera, encrypting the controllers and video transmission, as well as increasing the range and adding communications abilities. They also simplified the design, providing more plug-and-play sensors and replacing some of the complex electronics with more trouble-free, low-cost components. Applied Geo Technology, a tribally-owned corporation in Choctaw, Mississippi, was given the task of manufacturing the modified robots. The company is now producing 40 units per month, 300 of which have already been deployed overseas.
NASA Astrophysics Data System (ADS)
Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter
This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data combined with live video images from an onboard camera to register local video images against a priori registered orthophotos. This yields a precise, driftless absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.
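The registration of a live aerial image against an a priori registered orthophoto can be sketched with normalized cross-correlation. The snippet below is an illustrative sketch only; the linear pixel-to-metre mapping and the function names are assumptions, not the AMOR/PSYCHE implementation.

```python
import cv2
import numpy as np

def localize_in_orthophoto(aerial_gray, ortho_gray, origin_xy, metres_per_pixel):
    """Find the aerial frame inside the orthophoto and return world coordinates."""
    result = cv2.matchTemplate(ortho_gray, aerial_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    # Centre of the matched patch in orthophoto pixel coordinates.
    cx = top_left[0] + aerial_gray.shape[1] / 2.0
    cy = top_left[1] + aerial_gray.shape[0] / 2.0
    # Assumed linear georeferencing of the orthophoto.
    world_x = origin_xy[0] + cx * metres_per_pixel
    world_y = origin_xy[1] - cy * metres_per_pixel   # image rows grow downward
    return (world_x, world_y), score
```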
In vivo demonstration of surgical task assistance using miniature robots.
Hawks, Jeff A; Kunowski, Jacob; Platt, Stephen R
2012-10-01
Laparoscopy is beneficial to patients as measured by less painful recovery and an earlier return to functional health compared to conventional open surgery. However, laparoscopy requires the manipulation of long, slender tools from outside the patient's body. As a result, laparoscopy generally benefits only patients undergoing relatively simple procedures. An innovative approach to laparoscopy uses miniature in vivo robots that fit entirely inside the abdominal cavity. Our previous work demonstrated that a mobile, wireless robot platform can be successfully operated inside the abdominal cavity with different payloads (biopsy, camera, and physiological sensors). We hope that these robots are a step toward reducing the invasiveness of laparoscopy. The current study presents design details and results of laboratory and in vivo demonstrations of several new payload designs (clamping, cautery, and liquid delivery). Laboratory and in vivo cooperation demonstrations between multiple robots are also presented.
Navigation system for a mobile robot with a visual sensor using a fish-eye lens
NASA Astrophysics Data System (ADS)
Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu
1998-02-01
Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.
Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.
Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel
2016-05-25
We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.
Bioprocess automation on a Mini Pilot Plant enables fast quantitative microbial phenotyping.
Unthan, Simon; Radek, Andreas; Wiechert, Wolfgang; Oldiges, Marco; Noack, Stephan
2015-03-11
The throughput of cultivation experiments in bioprocess development has drastically increased in recent years due to the availability of sophisticated microliter scale cultivation devices. However, as these devices still require time-consuming manual work, the bottleneck was merely shifted to media preparation, inoculation and finally the analyses of cultivation samples. A first step towards solving these issues was undertaken in our former study by embedding a BioLector in a robotic workstation. This workstation already allowed for the optimization of heterologous protein production processes, but remained limited when aiming for the characterization of small molecule producer strains. In this work, we extended our workstation to a versatile Mini Pilot Plant (MPP) by integrating further robotic workflows and microtiter plate assays that now enable a fast and accurate phenotyping of a broad range of microbial production hosts. A fully automated harvest procedure was established, which repeatedly samples up to 48 wells from BioLector cultivations in response to individually defined trigger conditions. The samples are automatically clarified by centrifugation and finally frozen for subsequent analyses. Sensitive metabolite assays in 384-well plate scale were integrated on the MPP for the direct determination of substrate uptake (specifically D-glucose and D-xylose) and product formation (specifically amino acids). In a first application, we characterized a set of Corynebacterium glutamicum L-lysine producer strains and could rapidly identify a unique strain showing increased L-lysine titers, which was subsequently confirmed in lab-scale bioreactor experiments. In a second study, we analyzed the substrate uptake kinetics of a previously constructed D-xylose-converting C. glutamicum strain during cultivation on mixed carbon sources in a fully automated experiment. The presented MPP is designed to face the challenges typically encountered during early-stage bioprocess development. Especially the bottleneck of sample analyses from fast and parallelized microtiter plate cultivations can be solved using cutting-edge robotic automation. As robotic workstations become increasingly attractive for biotechnological research, we expect our setup to become a template for future bioprocess development.
Speech-Based Robotic Control for Dismounted Soldiers: Evaluation of Visual Display Options
2014-05-01
... teleoperation to improve processes such as robot responsiveness and camera video bandwidth (Chen et al., 2007) ...
2012-08-21
Final demonstration of a wireless data task supported by SLS Advanced Development, used to demonstrate real-time video over wireless connections along with data and commands, as demonstrated via the robotic arms. The arms and video cameras were mounted on free-floating air-bearing vehicles to simulate conditions in space. They were used to show how a chase vehicle could move up to and capture a satellite, such as the FASTSAT mockup, demonstrating how robotic technology and small spacecraft could assist with orbital debris mitigation.
Feasibility of telementoring between Baltimore (USA) and Rome (Italy): the first five cases.
Micali, S; Virgili, G; Vannozzi, E; Grassi, N; Jarrett, T W; Bauer, J J; Vespasiani, G; Kavoussi, L R
2000-08-01
Telemedicine is the use of telecommunication technology to deliver healthcare. Telementoring has been developed to allow a surgeon at a remote site to offer guidance and assistance to a less-experienced surgeon. We report on our experience during laparoscopic urologic procedures with mentoring between Rome, Italy, and Baltimore, USA. Over a period of 3 months, two laparoscopic left spermatic vein ligations, one retroperitoneal renal biopsy, one laparoscopic nephrectomy, and one percutaneous access to the kidney were telementored. Transperitoneal laparoscopic cases were performed with the use of AESOP, a robot for remote manipulation of the endoscopic camera. A second robot, PAKY, was used to perform radiologically guided needle orientation and insertion for percutaneous renal access. In addition to controlling the robotic devices, the system provided real-time video display from either the laparoscope or an externally mounted camera located in the operating room, full-duplex audio, telestration over live video, and access to electrocautery for tissue cutting or hemostasis. All procedures were accomplished with an uneventful postoperative course. One technical failure occurred because the robotic device was not properly positioned on the operating table. The round-trip delay of image transmission was less than 1 second. International telementoring is a feasible technique that can enhance surgeon education and decrease the likelihood of complications attributable to inexperience with new operative techniques.
2003-05-09
KENNEDY SPACE CENTER, FLA. - The Mars Exploration Rover 2 (MER-2) undergoes a weight and center of gravity determination in the Payload Hazardous Servicing Facility. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.
2003-05-09
KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare the Mars Exploration Rover 2 (MER-2) for a weight and center of gravity determination. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.
2003-05-09
KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility are preparing to determine weight and center of gravity for the Mars Exploration Rover 2 (MER-2). NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.
2003-05-23
KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers prepare to mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.
2003-05-19
KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, the Mars Exploration Rover 2 (MER-2) is moved to a spin table. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. The MER-2 is scheduled to launch June 5 from Launch Pad 17-A, Cape Canaveral Air Force Station.
2003-05-23
KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.
A cognitive approach to vision for a mobile robot
NASA Astrophysics Data System (ADS)
Benjamin, D. Paul; Funk, Christopher; Lyons, Damian
2013-05-01
We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a large wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The difference between the input from the real camera and from the virtual camera is compared using local Gaussians, creating an error mask that indicates the main differences between them. This is then used to select the next points to focus on. This approach permits us to use very expensive algorithms on small localities, thus generating very accurate models. It also is task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
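The real-versus-virtual comparison using local Gaussians can be sketched as follows. The kernel size and threshold are illustrative assumptions; in the actual system the virtual view is rendered from the PhysX world model before this comparison.

```python
import cv2
import numpy as np

def error_mask(real_gray, virtual_gray, ksize=15, threshold=25):
    """Compare locally smoothed real and virtual views and flag disagreements."""
    real_local = cv2.GaussianBlur(real_gray, (ksize, ksize), 0)
    virt_local = cv2.GaussianBlur(virtual_gray, (ksize, ksize), 0)
    diff = cv2.absdiff(real_local, virt_local)
    mask = (diff > threshold).astype(np.uint8)
    return mask, diff

def next_fixation(diff):
    """Return the pixel with the largest local disagreement as the next focus point."""
    _, _, _, max_loc = cv2.minMaxLoc(diff)
    return max_loc   # (x, y) in image coordinates
```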
NASA Technical Reports Server (NTRS)
Chen, Alexander Y.
1990-01-01
The Scientific Research Associates Advanced Robotic System (SRAARS) is an intelligent robotic system with autonomous learning capability in geometric reasoning. The system is equipped with one global intelligence center (GIC) and eight local intelligence centers (LICs). It mainly controls sixteen links with fourteen active joints, which constitute two articulated arms, an extensible lower body, a vision system with two CCD cameras, and a mobile base. The on-board knowledge-based system supports the learning controller with model representations of both the robot and the working environment. Through consecutive verification and planning procedures, hypothesis-and-test routines, and a learning-by-analogy paradigm, the system autonomously builds up its own understanding of the relationship between itself (i.e., the robot) and the focused environment for the purposes of collision avoidance, motion analysis, and object manipulation. The intelligence of SRAARS presents a valuable technical advantage for implementing robotic systems for space exploration and space station operations.
Self-Portrait of Curiosity Stunt Double
2012-12-11
Camera and robotic-arm maneuvers for taking a self-portrait of the NASA Curiosity rover on Mars were checked first at NASA's Jet Propulsion Laboratory in Pasadena, Calif., using the main test rover for Curiosity.
Caltrans bridge inspection aerial robot.
DOT National Transportation Integrated Search
2008-10-01
The California Department of Transportation (Caltrans) project resulted in the development of a twin-motor, single-duct, electric-powered Aerobot designed to carry video cameras up to 200 feet in elevation to enable close inspection of bridges...
Robotic Mining Competition - Media Day
2017-05-25
Lilliana Villareal, Spacecraft and Offline Operations manager in the Ground Systems Development and Operations Program, is interviewed on-camera by Al Feinberg, with the Communications and Public Engagement Directorate, during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. used their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participated in other competition requirements, May 22-26. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Robonaut: a robot designed to work with humans in space
NASA Technical Reports Server (NTRS)
Bluethmann, William; Ambrose, Robert; Diftler, Myron; Askew, Scott; Huber, Eric; Goza, Michael; Rehnmark, Fredrik; Lovchik, Chris; Magruder, Darby
2003-01-01
The Robotics Technology Branch at the NASA Johnson Space Center is developing robotic systems to assist astronauts in space. One such system, Robonaut, is a humanoid robot with dexterity approaching that of a suited astronaut. Robonaut currently has two dexterous arms and hands, a three degree-of-freedom articulating waist, and a two degree-of-freedom neck used as a camera and sensor platform. In contrast to other space manipulator systems, Robonaut is designed to work within existing corridors and use the same tools as space walking astronauts. Robonaut is envisioned as working with astronauts, both autonomously and by teleoperation, performing a variety of tasks including routine maintenance, setting up and breaking down worksites, assisting crew members while outside of spacecraft, and serving in a rapid response capacity.
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot for real-life applications. This system serves as an important building block of a complete vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system, where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands issued from the PC end.
Li, Luyang; Liu, Yun-Hui; Jiang, Tianjiao; Wang, Kai; Fang, Mu
2018-02-01
Despite tremendous efforts made for years, trajectory tracking control (TC) of a nonholonomic mobile robot (NMR) without a global positioning system remains an open problem. The major reason is the difficulty of localizing the robot using only its onboard sensors. In this paper, a newly designed adaptive trajectory TC method is proposed for the NMR without its position, orientation, and velocity measurements. The controller is designed on the basis of a novel algorithm to estimate the position and velocity of the robot online from visual feedback of an omnidirectional camera. It is theoretically proved that the proposed algorithm causes the TC errors to converge asymptotically to zero. Real-world experiments are conducted on a wheeled NMR to validate the feasibility of the control system.
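For readers unfamiliar with the problem, a classical kinematic tracking law for a unicycle-type robot is sketched below as background (a Kanayama-style controller). It is not the paper's adaptive, observer-based design; in particular the pose is assumed known here, whereas the paper estimates it online from the omnidirectional camera. Gains are arbitrary illustrative values.

```python
import numpy as np

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Return (v, w) commands driving the robot pose toward a reference pose."""
    x, y, th = pose
    xr, yr, thr = ref_pose
    # Tracking error expressed in the robot frame.
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = thr - th
    v = v_ref * np.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))
    return v, w   # linear and angular velocity commands
```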
Robotic Mining Competition - Media Day
2017-05-25
Stan Starr, branch chief for Applied Physics in the Exploration Research and Technology Programs, is interviewed on-camera by Sarah McNulty, with the Communication and Public Engagement Directorate, during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. used their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participated in other competition requirements, May 22-26. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
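The difficulty with angular-only measurements can be illustrated with a toy 2D bearing triangulation: a landmark's depth only becomes observable once two sufficiently separated bearings are available. The sketch below is a stand-in illustration of that idea, not the paper's two-step initialization scheme.

```python
import numpy as np

def triangulate_bearing(pose_a, bearing_a, pose_b, bearing_b):
    """Each pose is (x, y); each bearing is a world-frame angle in radians."""
    da = np.array([np.cos(bearing_a), np.sin(bearing_a)])
    db = np.array([np.cos(bearing_b), np.sin(bearing_b)])
    pa, pb = np.array(pose_a, float), np.array(pose_b, float)
    # Solve pa + t*da = pb + s*db for the ray parameters t and s.
    A = np.column_stack([da, -db])
    t, s = np.linalg.solve(A, pb - pa)
    return pa + t * da   # first depth hypothesis for the landmark

# Example: bearings of 45 deg and 90 deg from poses one metre apart
# intersect at (1, 1), giving the landmark its initial depth.
landmark = triangulate_bearing((0.0, 0.0), np.radians(45), (1.0, 0.0), np.radians(90))
```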
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multi-plexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394 fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 um, with the dispersed light being imaged in each channel by a f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window, and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
NASA Astrophysics Data System (ADS)
Robert, K.; Matabos, M.; Sarrazin, J.; Sarradin, P.; Lee, R. W.; Juniper, K.
2010-12-01
Hydrothermal vent environments are among the most dynamic benthic habitats in the ocean. The relative roles of physical and biological factors in shaping vent community structure remain unclear. Undersea cabled observatories offer the power and bandwidth required for high-resolution, time-series study of the dynamics of vent communities and the physico-chemical forces that influence them. The NEPTUNE Canada cabled instrument array at the Endeavour hydrothermal vents provides a unique laboratory for researchers to conduct long-term, integrated studies of hydrothermal vent ecosystem dynamics in relation to environmental variability. Beginning in September-October 2010, NEPTUNE Canada (NC) will be deploying a multi-disciplinary suite of instruments on the Endeavour Segment of the Juan de Fuca Ridge. Two camera and sensor systems will be used to study ecosystem dynamics in relation to hydrothermal discharge. These studies will make use of new experimental protocols for time-series observations that we have been developing since 2008 at other observatory sites connected to the VENUS and NC networks. These protocols include sampling design, camera calibration (i.e. structure, position, light, settings) and image analysis methodologies (see communication by Aron et al.). The camera systems to be deployed in the Main Endeavour vent field include a Sidus high definition video camera (2010) and the TEMPO-mini system (2011), designed by IFREMER (France). Real-time data from three sensors (O2, dissolved Fe, temperature) integrated with the TEMPO-mini system will enhance interpretation of imagery. For the first year of observations, a suite of internally recording temperature probes will be strategically placed in the field of view of the Sidus camera. These installations aim at monitoring variations in vent community structure and dynamics (species composition and abundances, interactions within and among species) in response to changes in environmental conditions at different temporal scales. High-resolution time-series studies also provide a means of studying population dynamics, biological rhythms, organism growth and faunal succession. In addition to programmed time-series monitoring, the NC infrastructure will also permit manual and automated modification of observational protocols in response to natural events. This will enhance our ability to document potentially critical but short-lived environmental forces affecting vent communities.
IMU-based online kinematic calibration of robot manipulator.
Du, Guanglong; Zhang, Ping
2013-01-01
Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with the existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
Robotic assisted andrological surgery
Parekattil, Sijo J; Gudeloglu, Ahmet
2013-01-01
The introduction of the operative microscope for andrological surgery in the 1970s provided enhanced magnification and accuracy, unparalleled by any previous visual loupe or magnification techniques. This technology revolutionized techniques for microsurgery in andrology. Today, we may be on the verge of a second such revolution by the incorporation of robotic assisted platforms for microsurgery in andrology. Robotic assisted microsurgery is being utilized to a greater degree in andrology and a number of other microsurgical fields, such as ophthalmology, hand surgery, and plastic and reconstructive surgery. The potential advantages of robotic assisted platforms include elimination of tremor, improved stability, surgeon ergonomics, scalability of motion, multi-input visual interfaces with up to three simultaneous visual views, enhanced magnification, and the ability to manipulate three surgical instruments and cameras simultaneously. This review paper begins with the historical development of robotic microsurgery. It then provides an in-depth presentation of the technique and outcomes of common robotic microsurgical andrological procedures, such as vasectomy reversal, subinguinal varicocelectomy, targeted spermatic cord denervation (for chronic orchialgia) and robotic assisted microsurgical testicular sperm extraction (microTESE). PMID:23241637
NASA Astrophysics Data System (ADS)
Nokata, Makoto; Hirai, Wataru; Itatani, Ryosuke
This paper presents a robotic training system that can exercise the user without bodily restraint; neither markers nor sensors are attached to the trainee. We developed a robot system with four mounted components: a laser sensor, a camera, a cushion, and an electric motor. This paper shows the method used for determining whether the trainee is bending forward or backward while walking, and the extent of the tilt, using the recorded image of the back of the trainee's head. A characteristic of our software algorithm is that the image is divided into 9 quadrants, each of which undergoes a Hough transformation. We verified experimentally that, by using our algorithms for the four patterns of forward, backward, diagonal, and crouching movement, the tilt of the trainee's body is accurately determined. We created a flowchart for determining the direction of movement according to the experimental results. By adjusting the values used to make the distinction according to the position and angle of the camera and the width of the back of the trainee's head, we were able to accurately determine the walking condition of the trainee and achieve early detection of the start of a fall.
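The quadrant-wise Hough analysis can be sketched with OpenCV. The edge and accumulator parameters below are illustrative assumptions, and the decision flowchart that maps detected line angles to a lean direction is not reproduced.

```python
import cv2
import numpy as np

def quadrant_hough(gray, rows=3, cols=3, hough_threshold=40):
    """Split the image into a 3x3 grid and run a Hough line transform per cell."""
    h, w = gray.shape
    results = {}
    for r in range(rows):
        for c in range(cols):
            cell = gray[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            edges = cv2.Canny(cell, 50, 150)
            lines = cv2.HoughLines(edges, 1, np.pi / 180, hough_threshold)
            results[(r, c)] = [] if lines is None else \
                [(rho, theta) for rho, theta in lines[:, 0]]
    return results   # per-quadrant (rho, theta) line parameters
```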
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
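A minimal version of the ray-casting idea, cast over a 2D occupancy grid fused from the camera and laser range finder, might look like the sketch below. The grid resolution, step size, and heading set are illustrative assumptions, not the competition vehicle's parameters.

```python
import numpy as np

def cast_ray(grid, start_rc, heading_rad, cell_size=0.1, max_range=10.0):
    """Return the clear distance (metres) from start_rc along one heading."""
    r, c = start_rc
    step = 0.5  # cells advanced per iteration
    dist = 0.0
    while dist < max_range:
        rr = int(round(r - np.sin(heading_rad) * dist / cell_size))
        cc = int(round(c + np.cos(heading_rad) * dist / cell_size))
        out_of_bounds = not (0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1])
        if out_of_bounds or grid[rr, cc]:
            return dist
        dist += step * cell_size
    return max_range

def best_heading(grid, start_rc, headings=np.radians(np.arange(-90, 91, 5))):
    """Pick the candidate heading with the longest obstacle-free ray."""
    clearances = [cast_ray(grid, start_rc, h) for h in headings]
    return headings[int(np.argmax(clearances))], max(clearances)
```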
Parallel-Processing Software for Correlating Stereo Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; Mcauley, Michael; DeJong, Eric
2007-01-01
A computer program implements parallel-processing algorithms for correlating images of terrain acquired by stereoscopic pairs of digital stereo cameras on an exploratory robotic vehicle (e.g., a Mars rover). Such correlations are used to create three-dimensional computational models of the terrain for navigation. In this program, the scene viewed by the cameras is segmented into subimages. Each subimage is assigned to one of a number of central processing units (CPUs) operating simultaneously.
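The subimage-per-CPU strategy can be sketched with Python's multiprocessing module. A simple sum-of-absolute-differences block matcher stands in for the flight correlator, and the strip count, block size, and disparity range are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def strip_disparity(args):
    """Block-matching disparity for one horizontal strip of the stereo pair."""
    left, right, block, max_disp = args
    h, w = left.shape
    disp = np.zeros((h, w), np.float32)
    for y in range(block, h - block):
        for x in range(block + max_disp, w - block):
            patch = left[y-block:y+block+1, x-block:x+block+1].astype(np.float32)
            costs = [np.abs(patch - right[y-block:y+block+1,
                                          x-d-block:x-d+block+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def parallel_stereo(left, right, n_strips=4, block=3, max_disp=32):
    """Split the scene into strips and correlate each in its own process."""
    strips = np.array_split(np.arange(left.shape[0]), n_strips)
    jobs = [(left[s[0]:s[-1]+1], right[s[0]:s[-1]+1], block, max_disp)
            for s in strips]
    with Pool(n_strips) as pool:          # call under "if __name__ == '__main__':"
        return np.vstack(pool.map(strip_disparity, jobs))
```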
NASA Technical Reports Server (NTRS)
1994-01-01
Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.
STS-109 Crew Interviews - Currie
NASA Technical Reports Server (NTRS)
2002-01-01
STS-109 Mission Specialist 2 Nancy Jane Currie is seen during a prelaunch interview. She answers questions about her inspiration to become an astronaut and her career path. She gives details on the Columbia Orbiter mission which has as its main tasks the maintenance and augmentation of the Hubble Space Telescope (HST). While she will do many things during the mission, the most important will be her role as the primary operator of the robotic arm, which is responsible for grappling the HST, bringing it to the Orbiter bay, and providing support for the astronauts during their EVAs (Extravehicular Activities). Additionally, the robotic arm will be responsible for transferring new and replacement equipment from the Orbiter to the HST. This equipment includes: two solar arrays, a Power Control Unit (PCU), the Advanced Camera for Surveys, and a replacement cooling system for NICMOS (Near Infrared Camera Multi-Object Spectrometer).
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
Phoenix Robotic Arm's Workspace After 90 Sols
NASA Technical Reports Server (NTRS)
2008-01-01
During the first 90 Martian days, or sols, after its May 25, 2008, landing on an arctic plain of Mars, NASA's Phoenix Mars Lander dug several trenches in the workspace reachable with the lander's robotic arm. The lander's Surface Stereo Imager camera recorded this view of the workspace on Sol 90, early afternoon local Mars time (overnight Aug. 25 to Aug. 26, 2008). The shadow of the camera itself, atop its mast, is just left of the center of the image and roughly a third of a meter (one foot) wide. The workspace is on the north side of the lander. The trench just to the right of center is called 'Neverland.' The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Robotic Arm Camera on Mars, with Lights Off
NASA Technical Reports Server (NTRS)
2008-01-01
This approximate color image is a view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) as seen by the lander's Surface Stereo Imager (SSI). This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. This yields proper coloring when imaging Phoenix's surrounding Martian environment. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction that uses an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform was created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. A silhouette-based approach is used for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of the number of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
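To make the silhouette-based idea concrete, here is a minimal shape-from-silhouette (voxel carving) sketch under simplifying assumptions: an orthographic camera rotating about the vertical axis and a synthetic circular silhouette standing in for a segmented image. The grid size, view angles, and silhouette are illustrative, not the study's setup.

```python
# Minimal shape-from-silhouette (voxel carving) sketch with an orthographic
# camera rotating about the vertical axis; grid size and angles are illustrative.
import numpy as np

N = 64                                   # voxel grid resolution
grid = np.ones((N, N, N), dtype=bool)    # start from a full solid block
coords = (np.indices((N, N, N)).reshape(3, -1).T - N / 2) / (N / 2)  # centers in [-1, 1)

def silhouette_of_sphere(u, v, radius=0.6):
    """Synthetic silhouette: a disc, standing in for a segmented camera image."""
    return (u**2 + v**2) <= radius**2

for angle in np.deg2rad(np.arange(0, 360, 30)):       # 12 sequential views
    c, s = np.cos(angle), np.sin(angle)
    u = c * coords[:, 0] + s * coords[:, 1]            # horizontal image axis
    v = coords[:, 2]                                   # vertical image axis
    keep = silhouette_of_sphere(u, v)
    grid &= keep.reshape(N, N, N)                      # carve voxels outside the silhouette

print("occupied voxels:", grid.sum())
```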
Three-dimensional face pose detection and tracking using monocular videos: tool and application.
Dornaika, Fadi; Raducanu, Bogdan
2009-08-01
Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
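The following sketch shows one common way such head-pose estimation is done with OpenCV (cv2.solvePnP on a handful of facial landmarks), with the estimated yaw mapped to a pan command for the robot's camera. The 3-D face model points, landmark pixel coordinates, and camera intrinsics are illustrative placeholders rather than values from the paper.

```python
# Hypothetical sketch: head pose from 2-D facial landmarks with cv2.solvePnP,
# then a pan command derived from the yaw angle. All numbers are placeholders.
import numpy as np
import cv2

model_points = np.array([            # rough 3-D face model (mm): nose, chin, eyes, mouth
    (0.0, 0.0, 0.0), (0.0, -330.0, -65.0),
    (-225.0, 170.0, -135.0), (225.0, 170.0, -135.0),
    (-150.0, -150.0, -125.0), (150.0, -150.0, -125.0)])
image_points = np.array([            # matching 2-D landmarks in pixels (placeholder values)
    (359, 391), (399, 561), (337, 297), (513, 301), (345, 465), (453, 469)],
    dtype=np.float64)

f, cx, cy = 800.0, 320.0, 240.0      # assumed pinhole camera intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
P = K @ np.hstack((R, tvec))                       # 3x4 projection matrix
euler = cv2.decomposeProjectionMatrix(P)[-1]       # (pitch, yaw, roll) in degrees
yaw = float(euler[1, 0])
pan_command = float(np.clip(-yaw, -30, 30))        # mirror the head turn, limited to +/-30 deg
print(f"estimated yaw {yaw:.1f} deg -> pan command {pan_command:.1f} deg")
```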
2010-05-18
ISS023-E-047431 (18 May 2010) --- Intersecting the thin line of Earth's atmosphere, the docked space shuttle Atlantis is featured in this image photographed by an Expedition 23 crew member on the International Space Station. The Russian-built Mini-Research Module 1 (MRM-1) is visible in the payload bay as the shuttle robotic arm prepares to unberth the module from Atlantis and position it for handoff to the station robotic arm. Named Rassvet, Russian for "dawn," the module is the second in a series of new pressurized components for Russia and will be permanently attached to the Earth-facing port of the Zarya Functional Cargo Block (FGB). Rassvet will be used for cargo storage and will provide an additional docking port to the station.
Tapper, Anna-Maija; Hannola, Mikko; Zeitlin, Rainer; Isojärvi, Jaana; Sintonen, Harri; Ikonen, Tuija S
2014-06-01
In order to assess the effectiveness and costs of robot-assisted hysterectomy compared with conventional techniques we reviewed the literature separately for benign and malignant conditions, and conducted a cost analysis for different techniques of hysterectomy from a hospital economic database. Unlimited systematic literature search of Medline, Cochrane and CRD databases produced only two randomized trials, both for benign conditions. For the outcome assessment, data from two HTA reports, one systematic review, and 16 original articles were extracted and analyzed. Furthermore, one cost modelling and 13 original cost studies were analyzed. In malignant conditions, less blood loss, fewer complications and a shorter hospital stay were considered as the main advantages of robot-assisted surgery, like any mini-invasive technique when compared to open surgery. There were no significant differences between the techniques regarding oncological outcomes. When compared to laparoscopic hysterectomy, the main benefit of robot-assistance was a shorter learning curve associated with fewer conversions but the length of robotic operation was often longer. In benign conditions, no clinically significant differences were reported and vaginal hysterectomy was considered the optimal choice when feasible. According to Finnish data, the costs of robot-assisted hysterectomies were 1.5-3 times higher than the costs of conventional techniques. In benign conditions the difference in cost was highest. Because of expensive disposable supplies, unit costs were high regardless of the annual number of robotic operations. Hence, in the current distribution of cost pattern, economical effectiveness cannot be markedly improved by increasing the volume of robotic surgery. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Telemedicine and robotics: paving the way to the globalization of surgery.
Senapati, S; Advincula, A P
2005-12-01
The concept of delivering health services at a distance, or telemedicine is becoming an emerging tool for the field of surgery. For the surgical services, telepresence surgery through robotics is gradually being incorporated into health care practices. This article will provide a brief overview of the principles surrounding telemedicine and telepresence surgery as they specifically relate to robotics. Where limitations have been reached in laparoscopy, robotics has allowed further steps forward. The development of robotics in medicine has been a progression from passive to immersive technology. In gynecology, the utilization of robotics has evolved from the use of Aesop, a robotic arm for camera manipulation, to full robotic systems such as Zeus, and the daVinci surgical system. These systems have not only been used directly for a variety of procedures but have also become a useful tool for conferencing and the mentoring of surgeons from afar. As this mode of technology becomes assimilated into the culture of surgery and medicine globally, caution must be taken to carefully navigate the economic, legal and ethical implications of telemedicine. Despite the challenges faced, telepresence surgery holds promise for more widespread applications.
NASA Technical Reports Server (NTRS)
Larimer, Stanley J.; Lisec, Thomas R.; Spiessbach, Andrew J.
1990-01-01
Proposed walking-beam robot simpler and more rugged than articulated-leg walkers. Requires less data processing, and uses power more efficiently. Includes pair of tripods, one nested in other. Inner tripod holds power supplies, communication equipment, computers, instrumentation, sampling arms, and articulated sensor turrets. Outer tripod holds mast on which antennas for communication with remote control site and video cameras for viewing local and distant terrain are mounted. Propels itself by raising, translating, and lowering tripods in alternation. Steers itself by rotating raised tripod on turntable.
Shah, Rachit D; Cao, Alex; Golenberg, Lavie; Ellis, R Darin; Auner, Gregory W; Pandya, Abhilash K; Klein, Michael D
2009-04-01
Technical advances in the application of laparoscopic and robotic surgical systems have improved platform usability. The authors hypothesized that using two monitors instead of one would lead to faster performance with fewer errors. All tasks were performed using a surgical robot in a training box. One of the monitors was a standard camera with two preset zoom levels (zoomed in and zoomed out, single-monitor condition). The second monitor provided a static panoramic view of the whole surgical field. The standard camera was static at the zoomed-in level for the dual-monitor condition of the study. The study had two groups of participants: 4 surgeons proficient in both robotic and advanced laparoscopic skills and 10 lay persons (nonsurgeons) who were given adequate time to train and familiarize themselves with the equipment. Running a 50-cm rope was the basic task. Advanced tasks included running a suture through predetermined points and intracorporeal knot tying with 3-0 silk. Trial completion times and errors, categorized into three groups (orientation, precision, and task), were recorded. The trial completion times for all the tasks, basic and advanced, in the two groups were not significantly different. Fewer orientation errors occurred in the nonsurgeon group during knot tying (p=0.03) and in both groups during suturing (p=0.0002) in the dual-monitor arm of the study. Differences in precision and task error were not significant. Using two camera views helps both surgeons and lay persons perform complex tasks with fewer errors. These results may be due to better awareness of the surgical field with regard to the location of the instruments, leading to better field orientation. This display setup has potential for use in complex minimally invasive surgeries such as esophagectomy and gastric bypass. This technique also would be applicable to open microsurgery.
Under-vehicle autonomous inspection through undercarriage signatures
NASA Astrophysics Data System (ADS)
Schoenherr, Edward; Smuda, Bill
2005-05-01
Increased threats to gate security have created a recent need for improved vehicle inspection methods at security checkpoints in various fields of defense and security. A fast, reliable system of under-vehicle inspection that detects possibly harmful or unwanted materials hidden on vehicle undercarriages and notifies the user of the presence of these materials while allowing the user a safe standoff distance from the inspection site is desirable. An autonomous under-vehicle inspection system would provide for this. The proposed system would function as follows: A low-clearance tele-operated robotic platform would be equipped with sonar/laser range finding sensors as well as a video camera. As a vehicle to be inspected enters a checkpoint, the robot would autonomously navigate under the vehicle, using algorithms to detect tire locations for waypoints. During this navigation, data would be collected from the sonar/laser range finding hardware. This range data would be used to compile an impression of the vehicle undercarriage. Once this impression is complete, the system would compare it to a database of pre-scanned undercarriage impressions. Based on vehicle makes and models, any variance between the undercarriage being inspected and the impression compared against in the database would be marked as potentially threatening. If such variances exist, the robot would navigate to these locations and place the video camera in such a manner that the location in question can be viewed from a standoff position through a TV monitor. At this time, manual control of the robot navigation and camera control can be taken to allow further, more detailed inspection of the area/materials in question. After-market vehicle modifications would provide some difficulty, yet with enough pre-screening of such modifications, the system should still prove accurate. Also, impression scans that are taken in the field can be stored and tagged with a vehicle's license plate number, and future inspections of that vehicle can be compared to already screened and cleared impressions of the same vehicle in order to search for variance.
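A toy version of the impression-comparison step might look like the following, where a scanned clearance map is differenced against a stored reference and cells exceeding a threshold are flagged. The grid dimensions, threshold, and variance rule are assumptions, not the fielded algorithm.

```python
# Illustrative comparison of a scanned undercarriage range "impression" against
# a stored reference for the same vehicle model; all values are synthetic.
import numpy as np

rng = np.random.default_rng(4)
reference = rng.uniform(0.15, 0.40, (40, 120))      # stored clearance map (m), 40 x 120 cells
scan = reference + rng.normal(0, 0.005, reference.shape)
scan[20:24, 60:70] -= 0.08                          # simulated foreign object lowers clearance

threshold = 0.03                                     # metres of allowed deviation (assumed)
variance = np.abs(scan - reference) > threshold
ys, xs = np.nonzero(variance)
if len(xs):
    print(f"{variance.sum()} suspect cells; inspect region rows {ys.min()}-{ys.max()}, "
          f"cols {xs.min()}-{xs.max()}")
else:
    print("undercarriage matches stored impression")
```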
Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation
NASA Technical Reports Server (NTRS)
Lee, George
1992-01-01
A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also looked at. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanner and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.
The prototype cameras for trans-Neptunian automatic occultation survey
NASA Astrophysics Data System (ADS)
Wang, Shiang-Yu; Ling, Hung-Hsu; Hu, Yen-Sang; Geary, John C.; Chang, Yin-Chang; Chen, Hsin-Yo; Amato, Stephen M.; Huang, Pin-Jie; Pratlong, Jerome; Szentgyorgyi, Andrew; Lehner, Matthew; Norton, Timothy; Jorden, Paul
2016-08-01
The Transneptunian Automated Occultation Survey (TAOS II) is a three-robotic-telescope project to detect the stellar occultation events generated by TransNeptunian Objects (TNOs). The TAOS II project aims to monitor about 10,000 stars simultaneously at 20 Hz to enable a statistically significant event rate. The TAOS II camera is designed to cover the 1.7-degree-diameter field of view of the 1.3 m telescope with 10 mosaic 4.5k×2k CMOS sensors. The new CMOS sensor (CIS 113) has a back-illumination thinned structure and high sensitivity to provide performance similar to that of back-illumination thinned CCDs. Due to the requirements of high performance and high speed, the development of the new CMOS sensor is still in progress. Before the science arrays are delivered, a prototype camera was developed to help with the commissioning of the robotic telescope system. The prototype camera uses the small-format e2v CIS 107 device but with the same dewar and similar control electronics as the TAOS II science camera. The sensors, mounted on a single Invar plate, are cooled by a cryogenic cooler to an operating temperature of about 200 K, as the science array will be. The Invar plate is connected to the dewar body through a supporting ring with three G10 bipods. The control electronics consist of an analog part and a Xilinx FPGA-based digital circuit. One FPGA is needed to control and process the signal from a CMOS sensor for 20 Hz region-of-interest (ROI) readout.
NASA Technical Reports Server (NTRS)
Talley, Tom
2003-01-01
Johnson Space Center (JSC) is designing a small, remotely controlled vehicle that will carry two color and one black and white video cameras in space. The device will launch and retrieve from the Space Vehicle and be used for remote viewing. Off the shelf cellular technology is being used as the basis for communication system design. Existing plans include using multiple antennas to make simultaneous estimates of the azimuth of the MiniAERCam from several sites on the Space Station and use triangulation to find the location of the device. Adding range detection capability to each of the nodes on the Space Vehicle would allow an estimate of the location of the MiniAERCam to be made at each Communication And Telemetry Box (CATBox) independent of all the other communication nodes. This project will investigate the techniques used by the Global Positioning System (GPS) to achieve accurate positioning information and adapt those strategies that are appropriate to the design of the CATBox range determination system.
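The range-based localization idea resembles a standard least-squares multilateration problem, in the spirit of GPS pseudorange solutions. The sketch below is an illustration only, not the CATBox design: it estimates a position from noisy ranges to nodes at known locations using a few Gauss-Newton iterations, with made-up node coordinates and noise.

```python
# Illustrative least-squares multilateration from range measurements to nodes
# at known positions; node locations and noise levels are synthetic.
import numpy as np

nodes = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 1.0],
                  [0.0, 8.0, 2.0], [6.0, 7.0, -1.0]])   # known node positions (m)
true_pos = np.array([3.0, 4.0, 0.5])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(nodes - true_pos, axis=1) + rng.normal(0, 0.02, len(nodes))

x = nodes.mean(axis=0)                            # initial guess
for _ in range(10):                               # Gauss-Newton iterations
    diff = x - nodes
    pred = np.linalg.norm(diff, axis=1)
    J = diff / pred[:, None]                      # Jacobian of range w.r.t. position
    dx, *_ = np.linalg.lstsq(J, ranges - pred, rcond=None)
    x += dx

print("estimated position:", np.round(x, 3))
```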
Low Noise Camera for Suborbital Science Applications
NASA Technical Reports Server (NTRS)
Hyde, David; Robertson, Bryan; Holloway, Todd
2015-01-01
Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment as they are difficult to ruggedize and repackage into instruments. Also, COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operation temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to mid range payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
NASA Astrophysics Data System (ADS)
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; Ballester, Otger; Baltay, Charles; Besuner, Robert; Buckley-Geer, Elizabeth; Butler, Karen; Cardiel, Laia; Dey, Arjun; Duan, Yutong; Elliott, Ann; Emmet, William; Gershkovich, Irena; Honscheid, Klaus; Illa, Jose M.; Jimenez, Jorge; Joyce, Richard; Karcher, Armin; Kent, Stephen; Lambert, Andrew; Lampton, Michael; Levi, Michael; Manser, Christopher; Marshall, Robert; Martini, Paul; Paat, Anthony; Probst, Ronald; Rabinowitz, David; Reil, Kevin; Robertson, Amy; Rockosi, Connie; Schlegel, David; Schubnell, Michael; Serrano, Santiago; Silber, Joseph; Soto, Christian; Sprayberry, David; Summers, David; Tarlé, Greg; Weaver, Benjamin A.
2018-02-01
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal of reducing technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
2003 Mars Exploration Rover Mission: Robotic Field Geologists for a Mars Sample Return Mission
NASA Technical Reports Server (NTRS)
Ming, Douglas W.
2008-01-01
The Mars Exploration Rover (MER) Spirit landed in Gusev crater on Jan. 4, 2004 and the rover Opportunity arrived on the plains of Meridiani Planum on Jan. 25, 2004. The rovers continue to return new discoveries after 4 continuous Earth years of operations on the surface of the red planet. Spirit has successfully traversed 7.5 km over the Gusev crater plains, ascended to the top of Husband Hill, and entered into the Inner Basin of the Columbia Hills. Opportunity has traveled nearly 12 km over flat plains of Meridiani and descended into several impact craters. Spirit and Opportunity carry an integrated suite of scientific instruments and tools called the Athena science payload. The Athena science payload consists of the 1) Panoramic Camera (Pancam) that provides high-resolution, color stereo imaging, 2) Miniature Thermal Emission Spectrometer (Mini-TES) that provides spectral cubes at mid-infrared wavelengths, 3) Microscopic Imager (MI) for close-up imaging, 4) Alpha Particle X-Ray Spectrometer (APXS) for elemental chemistry, 5) Moessbauer Spectrometer (MB) for the mineralogy of Fe-bearing materials, 6) Rock Abrasion Tool (RAT) for removing dusty and weathered surfaces and exposing fresh rock underneath, and 7) Magnetic Properties Experiment that allow the instruments to study the composition of magnetic martian materials [1]. The primary objective of the Athena science investigation is to explore two sites on the martian surface where water may once have been present, and to assess past environmental conditions at those sites and their suitability for life. The Athena science instruments have made numerous scientific discoveries over the 4 plus years of operations. The objectives of this paper are to 1) describe the major scientific discoveries of the MER robotic field geologists and 2) briefly summarize what major outstanding questions were not answered by MER that might be addressed by returning samples to our laboratories on Earth.
Hazardous materials emergency response mobile robot
NASA Technical Reports Server (NTRS)
Stone, Henry W. (Inventor); Lloyd, James (Inventor); Alahuzos, George (Inventor)
1992-01-01
A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated for by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy where the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
Comparison of Monte Carlo simulated and measured performance parameters of miniPET scanner
NASA Astrophysics Data System (ADS)
Kis, S. A.; Emri, M.; Opposits, G.; Bükki, T.; Valastyán, I.; Hegyesi, Gy.; Imrek, J.; Kalinka, G.; Molnár, J.; Novák, D.; Végh, J.; Kerek, A.; Trón, L.; Balkay, L.
2007-02-01
In vivo imaging of small laboratory animals is a valuable tool in the development of new drugs. For this purpose, miniPET, an easy to scale modular small animal PET camera has been developed at our institutes. The system has four modules, which makes it possible to rotate the whole detector system around the axis of the field of view. Data collection and image reconstruction are performed using a data acquisition (DAQ) module with Ethernet communication facility and a computer cluster of commercial PCs. Performance tests were carried out to determine system parameters, such as energy resolution, sensitivity and noise equivalent count rate. A modified GEANT4-based GATE Monte Carlo software package was used to simulate PET data analogous to those of the performance measurements. GATE was run on a Linux cluster of 10 processors (64 bit, Xeon with 3.0 GHz) and controlled by a SUN grid engine. The application of this special computer cluster reduced the time necessary for the simulations by an order of magnitude. The simulated energy spectra, maximum rate of true coincidences and sensitivity of the camera were in good agreement with the measured parameters.
Amelioration de la precision d'un bras robotise pour une application d'ebavurage (Improving the Precision of a Robotic Arm for a Deburring Application)
NASA Astrophysics Data System (ADS)
Mailhot, David
Process automation is an increasingly popular solution for tasks that are complex, tedious, or even dangerous for humans. Flexibility, low cost, and compactness make industrial robots very attractive for automation. Even though many developments have been made to enhance robot performance, robots still cannot meet some industries' requirements. For instance, the aerospace industry requires very tight tolerances on a large variety of parts, which is not what robots were originally designed for. When it comes to robotic deburring, robot imprecision is a major problem that needs to be addressed before the process can be implemented in production. This master's thesis explores different calibration techniques for the robot's dimensions that could overcome the problem and make the robotic deburring application possible. Calibration techniques that are easy to implement in a production environment are simulated and compared. A calibration technique for the tool's dimensions is simulated and implemented to evaluate its potential. The most efficient technique is used within the application. Finally, the production environment and requirements are explained. The remaining imprecision is compensated for by a force/torque sensor integrated with the robot's controller and by a camera. Many tests are performed to define the best parameters for deburring a specific feature on a chosen part. Concluding tests are shown and demonstrate the potential of robotic deburring. Keywords: robotic calibration, robotic arm, robotic precision, robotic deburring
Plugin-docking system for autonomous charging using particle filter
NASA Astrophysics Data System (ADS)
Koyasu, Hiroshi; Wada, Masayoshi
2017-03-01
Autonomous charging of the robot battery is one of the key functions for expanding the working areas of robots. To realize it, most existing systems use custom docking stations or artificial markers. In other words, they can only charge at a few specific outlets. If this limit can be removed, the working areas of robots expand significantly. In this paper, we describe a plugin-docking system for autonomous charging that does not require any custom docking stations or artificial markers. A single camera is used to recognize the 3D position of an outlet socket. A particle filter-based image tracking algorithm that is robust to illumination change is applied. The algorithm is implemented on a robot with an omnidirectional moving system. The experimental results show the effectiveness of our system.
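The particle-filter tracking referred to above follows the usual predict/weight/resample loop. The sketch below is a minimal 2-D version in which a placeholder Gaussian likelihood stands in for the appearance-based measurement model applied to the socket images; motion and noise parameters are made up.

```python
# Minimal particle-filter tracking loop (predict / weight / resample); the
# measurement model is a stand-in for a real appearance-based likelihood.
import numpy as np

rng = np.random.default_rng(2)
n = 500
particles = rng.uniform(0, 640, size=(n, 2))        # (x, y) pixel hypotheses
weights = np.full(n, 1.0 / n)

def likelihood(particles, measurement, sigma=15.0):
    """Placeholder likelihood: Gaussian in distance to a detected socket center."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    return np.exp(-0.5 * d2 / sigma**2) + 1e-12

for t in range(50):
    measurement = np.array([320 + 2 * t, 240]) + rng.normal(0, 3, 2)  # synthetic detection
    particles += rng.normal(0, 5, particles.shape)   # predict: random-walk motion model
    weights *= likelihood(particles, measurement)    # weight by measurement likelihood
    weights /= weights.sum()
    if 1.0 / np.sum(weights**2) < n / 2:             # resample when effective N drops
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

print("estimated socket position:", np.round(weights @ particles, 1))
```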
Vision Based Autonomous Robotic Control for Advanced Inspection and Repair
NASA Technical Reports Server (NTRS)
Wehner, Walter S.
2014-01-01
The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.
Autonomous exploration and mapping of unknown environments
NASA Astrophysics Data System (ADS)
Owens, Jason; Osteen, Phil; Fields, MaryAnne
2012-06-01
Autonomous exploration and mapping is a vital capability for future robotic systems expected to function in arbitrary complex environments. In this paper, we describe an end-to-end robotic solution for remotely mapping buildings. For a typical mapping system, an unmanned system is directed to enter an unknown building at a distance, sense the internal structure, and, barring additional tasks, while in situ, create a 2-D map of the building. This map provides a useful and intuitive representation of the environment for the remote operator. We have integrated a robust mapping and exploration system utilizing laser range scanners and RGB-D cameras, and we demonstrate an exploration and metacognition algorithm on a robotic platform. The algorithm allows the robot to safely navigate the building, explore the interior, report significant features to the operator, and generate a consistent map - all while maintaining localization.
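One common building block for such exploration systems is frontier detection on a 2-D occupancy grid: free cells bordering unknown space become candidate goals. The sketch below illustrates that idea only; the cell encoding and the nearest-frontier goal rule are assumptions, not the authors' exploration and metacognition algorithm.

```python
# Illustrative frontier detection on a 2-D occupancy grid.
# Assumed cell encoding: -1 = unknown, 0 = free, 1 = occupied.
import numpy as np

grid = -np.ones((20, 20), dtype=int)   # everything unknown
grid[5:15, 5:15] = 0                   # a sensed free region
grid[5:15, 14] = 1                     # a wall on its right edge

def frontier_cells(grid):
    """Free cells with at least one 4-connected unknown neighbour."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

front = frontier_cells(grid)
robot = (10, 10)
goal = min(front, key=lambda cell: (cell[0] - robot[0])**2 + (cell[1] - robot[1])**2)
print(f"{len(front)} frontier cells, nearest goal {goal}")
```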
Maurice, Matthew J; Kaouk, Jihad H
2017-12-01
To assess the feasibility of radical perineal cystoprostatectomy using the latest generation purpose-built single-port robotic surgical system. In two male cadavers the da Vinci ® SP1098 Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) was used to perform radical perineal cystoprostatectomy and bilateral extended pelvic lymph node dissection (ePLND). New features in this model include enhanced high-definition three-dimensional optics, improved instrument manoeuvrability, and a real-time instrument tracking and guidance system. The surgery was accomplished through a 3-cm perineal incision via a novel robotic single-port system, which accommodates three double-jointed articulating robotic instruments, an articulating camera, and an accessory laparoscopic instrument. The primary outcomes were technical feasibility, intraoperative complications, and total robotic operative time. The cases were completed successfully without conversion. There were no accidental punctures or lacerations. The robotic operative times were 197 and 202 min. In this preclinical model, robotic radical perineal cystoprostatectomy and ePLND was feasible using the SP1098 robotic platform. Further investigation is needed to assess the feasibility of urinary diversion using this novel approach and new technology. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
Miniature Telerobots in Space Applications
NASA Technical Reports Server (NTRS)
Venema, S. C.; Hannaford, B.
1995-01-01
Ground controlled telerobots can be used to reduce astronaut workload while retaining much of the human capabilities of planning, execution, and error recovery for specific tasks. Miniature robots can be used for delicate and time consuming tasks such as biological experiment servicing without incurring the significant mass and power penalties associated with larger robot systems. However, questions remain regarding the technical and economic effectiveness of such mini-telerobotic systems. This paper addresses some of these open issues and details two projects that will provide some of the needed answers. The Microtrex project is a joint University of Washington/NASA project which plans on flying a miniature robot as a Space Shuttle experiment to evaluate the effects of microgravity on ground-controlled manipulation while subject to variable time-delay communications. A related project involving the University of Washington and Boeing Defense and Space will evaluate the effectiveness of using a minirobot to service biological experiments in a space station experiment 'glove-box' rack mock-up, again while subject to realistic communications constraints.
Infrared stereo calibration for unmanned ground vehicle navigation
NASA Astrophysics Data System (ADS)
Harguess, Josh; Strange, Shawn
2014-06-01
The problem of calibrating two color cameras as a stereo pair has been heavily researched and many off-the-shelf software packages, such as Robot Operating System and OpenCV, include calibration routines that work in most cases. However, the problem of calibrating two infrared (IR) cameras for the purposes of sensor fusion and point cloud generation is relatively new and many challenges exist. We present a comparison of color camera and IR camera stereo calibration using data from an unmanned ground vehicle. There are two main challenges in IR stereo calibration: the calibration board (material, design, etc.) and the accuracy of calibration pattern detection. We present our analysis of these challenges along with our IR stereo calibration methodology. Finally, we present our results both visually and analytically with computed reprojection errors.
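For reference, the standard OpenCV stereo-calibration workflow the abstract alludes to looks roughly like the following. File names, board geometry, and square size are assumptions, and the IR-specific problem of making the pattern thermally visible (for example a heated or high-emissivity-contrast board) must be solved before this code applies.

```python
# Sketch of a chessboard-based stereo calibration pass with OpenCV; image file
# names, board size, and square size are assumed placeholders.
import glob
import numpy as np
import cv2

board = (9, 6)                                  # inner corners per row/column (assumed)
square = 0.025                                  # square size in meters (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

objpoints, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
    left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    ok_l, cl = cv2.findChessboardCorners(left, board)
    ok_r, cr = cv2.findChessboardCorners(right, board)
    if ok_l and ok_r:                           # keep frames where both cameras see the board
        objpoints.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

if objpoints:
    size = left.shape[::-1]                     # (width, height)
    _, K1, d1, _, _ = cv2.calibrateCamera(objpoints, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(objpoints, right_pts, size, None, None)
    result = cv2.stereoCalibrate(objpoints, left_pts, right_pts, K1, d1, K2, d2, size,
                                 flags=cv2.CALIB_FIX_INTRINSIC)
    rms, R, T = result[0], result[5], result[6]
    print(f"stereo reprojection RMS: {rms:.3f} px")
```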
A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.
Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing
2015-08-14
Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the ORiented Brief (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high-precision General Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy.
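The visual front-end steps named above (ORB features, k-NN matching with a ratio test, RANSAC verification) can be sketched with OpenCV as follows. A synthetic image and a shifted copy stand in for consecutive frames; a real RGB-D pipeline would additionally use depth to recover full 3-D motion.

```python
# Sketch of an ORB + ratio-test + RANSAC front-end on synthetic frames.
import numpy as np
import cv2

rng = np.random.default_rng(3)
frame1 = np.zeros((240, 320), np.uint8)
for _ in range(40):                                # draw random rectangles as texture
    x, y = int(rng.integers(0, 280)), int(rng.integers(0, 200))
    cv2.rectangle(frame1, (x, y), (x + 20, y + 15), int(rng.integers(60, 255)), -1)
frame2 = np.roll(frame1, (5, 12), axis=(0, 1))     # "camera motion": 12 px right, 5 px down

orb = cv2.ORB_create(500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # RANSAC rejects bad matches
    print(f"{len(good)} ratio-test matches, {int(mask.sum())} RANSAC inliers")
    print("estimated translation (px):", np.round(H[:2, 2], 1))
else:
    print("not enough matches for geometric verification")
```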
Maui Space Surveillance System Satellite Categorization Laboratory
NASA Astrophysics Data System (ADS)
Deiotte, R.; Guyote, M.; Kelecy, T.; Hall, D.; Africano, J.; Kervin, P.
The MSSS satellite categorization laboratory is a fusion of robotics and digital imaging processes that aims to decompose satellite photometric characteristics and behavior in a controlled setting. By combining a robot, light source, and camera to acquire non-resolved images of a model satellite, detailed photometric analyses can be performed to extract relevant information about shape features, elemental makeup, and ultimately attitude and function. Using the laboratory setting, a detailed analysis can be done on any type of material or design, and the results cataloged in a database that will facilitate object identification by "curve-fitting" individual elements in the basis set to observational data that might otherwise be unidentifiable. Currently, an ST Robotics five-degree-of-freedom robotic arm, a collimated light source, and a non-focused Apogee camera have all been integrated into a MATLAB-based software package that facilitates automatic data acquisition and analysis. Efforts to date have been aimed at construction of the lab as well as validation and verification of simple geometric objects. Simple tests on spheres, cubes, and simple satellites show promising results that could lead to a much better understanding of non-resolvable space object characteristics. This paper presents a description of the laboratory configuration and validation test results with emphasis on the non-resolved photometric characteristics for a variety of object shapes, spin dynamics, and orientations. The future vision, utility, and benefits of the laboratory to the SSA community as a whole are also discussed.
2003-05-10
The backshell for the Mars Exploration Rover 1 (MER-1) is moved toward the rover (foreground, left). The backshell is a protective cover for the rover. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.
2003-05-15
KENNEDY SPACE CENTER, FLA. - In the foreground, three solid rocket boosters (SRBs) suspended in the launch tower flank the Delta II rocket (in the background) that will launch Mars Exploration Rover 2 (MER-2). NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.
2003-05-10
KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare to lift and move the backshell that will cover the Mars Exploration Rover 1 (MER-1) and its lander. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.
Hazardous materials emergency response mobile robot
NASA Technical Reports Server (NTRS)
Stone, Henry W. (Inventor); Lloyd, James W. (Inventor); Alahuzos, George A. (Inventor)
1995-01-01
A simple or unsophisticated robot incapable of effecting straight-line motion at the end of its arm is presented. This robot inserts a key held in its end effector or hand into a door lock with nearly straight-line motion by gently thrusting its back heels downwardly so that it pivots forwardly on its front toes while holding its arm stationary. The relatively slight arc traveled by the robot's hand is compensated for by a compliant tool with which the robot hand grips the door key. A visible beam is projected through the axis of the hand or gripper on the robot arm end at an angle to the general direction in which the robot thrusts the gripper forward. As the robot hand approaches a target surface, a video camera on the robot wrist watches the beam spot on the target surface fall from a height proportional to the distance between the robot hand and the target surface until the beam spot is nearly aligned with the top of the robot hand. Holes in the front face of the hand are connected through internal passages inside the arm to an on-board chemical sensor. Full rotation of the hand or gripper about the robot arm's wrist is made possible by slip rings in the wrist which permit passage of the gases taken in through the nose holes in the front of the hand through the wrist regardless of the rotational orientation of the wrist.
Patel, Manish N; Aboumohamed, Ahmed; Hemal, Ashok
2015-12-01
To describe our robot-assisted nephroureterectomy (RNU) technique for benign indications and RNU with en bloc excision of bladder cuff (BCE) and lymphadenectomy (LND) for malignant indications using the da Vinci Si and da Vinci Xi robotic platform, with its pros and cons. The port placement described for Si can be used for standard and S robotic systems. This is the first report in the literature on the use of the da Vinci Xi robotic platform for RNU. After a substantial experience of RNU using different da Vinci robots from the standard to the Si platform in a single-docking fashion for benign and malignant conditions, we started using the newly released da Vinci Xi robot since 2014. The most important differences are in port placement and effective use of the features of da Vinci Xi robot while performing simultaneous upper and lower tract surgery. Patient positioning, port placement, step-by-step technique of single docking RNU-LND-BCE using the da Vinci Si and da Vinci Xi robot are shown in an accompanying video with the goal that centres using either robotic system benefit from the hints and tips. The first segment of video describes RNU-LND-BCE using the da Vinci Si followed by the da Vinci Xi to highlight differences. There was no need for patient repositioning or robot re-docking with the new da Vinci Xi robotic platform. We have experience of using different robotic systems for single docking RNU in 70 cases for benign (15) and malignant (55) conditions. The da Vinci Xi robotic platform helps operating room personnel in its easy movement, allows easier patient side-docking with the help of its boom feature, in addition to easy and swift movements of the robotic arms. The patient clearance feature can be used to avoid collision with the robotic arms or the patient's body. In patients with challenging body habitus and in situations where bladder cuff management is difficult, modifications can be made through reassigning the camera to a different port with utilisation of the retargeting feature of the da Vinci Xi when working on the bladder cuff or in the pelvis. The vision of the camera used for da Vinci Xi was initially felt to be inferior to that of the da Vinci Si; however, with a subsequent software upgrade this was much improved. The base of the da Vinci Xi is bigger, which does not slide and occasionally requires a change in table placement/operating room setup, and requires side-docking especially when dealing with very tall and obese patients for pelvic surgery. RNU alone or with LND-BCE is a challenging surgical procedure that addresses the upper and lower urinary tract simultaneously. Single docking and single robotic port placement for RNU-LND-BCE has evolved with the development of different generations of the robotic system. These procedures can be performed safely and effectively using the da Vinci S, Si or Xi robotic platform. The new da Vinci Xi robotic platform is more user-friendly, has easy installation, and is intuitive for surgeons using its features. © 2015 The Authors BJU International © 2015 BJU International Published by John Wiley & Sons Ltd.
Bekele, Esubalew T; Lahiri, Uttama; Swanson, Amy R.; Crittendon, Julie A.; Warren, Zachary E.; Sarkar, Nilanjan
2013-01-01
Emerging technology, especially robotic technology, has been shown to be appealing to children with autism spectrum disorders (ASD). Such interest may be leveraged to provide repeatable, accurate and individualized intervention services to young children with ASD based on quantitative metrics. However, existing robot-mediated systems tend to have limited adaptive capability that may impact individualization. Our current work seeks to bridge this gap by developing an adaptive and individualized robot-mediated technology for children with ASD. The system is composed of a humanoid robot with its vision augmented by a network of cameras for real-time head tracking using a distributed architecture. Based on the cues from the child’s head movement, the robot intelligently adapts itself in an individualized manner to generate prompts and reinforcements with potential to promote skills in the ASD core deficit area of early social orienting. The system was validated for feasibility, accuracy, and performance. Results from a pilot usability study involving six children with ASD and a control group of six typically developing (TD) children are presented. PMID:23221831
The Mars Surveyor '01 Rover and Robotic Arm
NASA Technical Reports Server (NTRS)
Bonitz, Robert G.; Nguyen, Tam T.; Kim, Won S.
1999-01-01
The Mars Surveyor 2001 Lander will carry with it both a Robotic Arm and Rover to support various science and technology experiments. The Marie Curie Rover, the twin sister to Sojourner Truth, is expected to explore the surface of Mars in early 2002. Scientific investigations to determine the elemental composition of surface rocks and soil using the Alpha Proton X-Ray Spectrometer (APXS) will be conducted along with several technology experiments including the Mars Experiment on Electrostatic Charging (MEEC) and the Wheel Abrasion Experiment (WAE). The Rover will follow uplinked operational sequences each day, but will be capable of autonomous reactions to the unpredictable features of the Martian environment. The Mars Surveyor 2001 Robotic Arm will perform rover deployment, and support various positioning, digging, and sample acquiring functions for MECA (Mars Environmental Compatibility Assessment) and Mossbauer Spectrometer experiments. The Robotic Arm will also collect its own sensor data for engineering data analysis. The Robotic Arm Camera (RAC) mounted on the forearm of the Robotic Arm will capture various images with a wide range of focal length adjustment during scientific experiments and rover deployment
Curiosity Drill in Place for Load Testing Before Drilling
2013-01-28
The percussion drill in the turret of tools at the end of the robotic arm of NASA's Mars rover Curiosity has been positioned in contact with the rock surface in this image from the rover's front Hazard-Avoidance Camera (Hazcam).
Mars Science Laboratory Engineering Cameras
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.
2012-01-01
NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
NASA Technical Reports Server (NTRS)
Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail;
2010-01-01
This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
Video. Natural Orifice Translumenal Endoscopic Surgery with a miniature in vivo surgical robot.
Lehman, Amy C; Dumpert, Jason; Wood, Nathan A; Visty, Abigail Q; Farritor, Shane M; Varnell, Brandon; Oleynikov, Dmitry
2009-07-01
The application of flexible endoscopy tools for Natural Orifice Translumenal Endoscopic Surgery (NOTES) is constrained due to limitations in dexterity, instrument insertion, navigation, visualization, and retraction. Miniature endolumenal robots can mitigate these constraints by providing a stable platform for visualization and dexterous manipulation. This video demonstrates the feasibility of using an endolumenal miniature robot to improve vision and to apply off-axis forces for task assistance in NOTES procedures. A two-armed miniature in vivo robot has been developed for NOTES. The robot is remotely controlled, has on-board cameras for guidance, and grasper and cautery end effectors for manipulation. Two basic configurations of the robot allow for flexibility during insertion and rigidity for visualization and tissue manipulation. Embedded magnets in the body of the robot and in an exterior surgical console are used for attaching the robot to the interior abdominal wall. This enables the surgeon to arbitrarily position the robot throughout a procedure. The visualization and task assistance capabilities of the miniature robot were demonstrated in a nonsurvivable NOTES procedure in a porcine model. An endoscope was used to create a transgastric incision and advance an overtube into the peritoneal cavity. The robot was then inserted through the overtube and into the peritoneal cavity using an endoscope. The surgeon successfully used the robot to explore the peritoneum and perform small-bowel dissection. This study has demonstrated the feasibility of inserting an endolumenal robot per os. Once deployed, the robot provided visualization and dexterous capabilities from multiple orientations. Further miniaturization and increased dexterity will enhance future capabilities.
Bilateral assessment of functional tasks for robot-assisted therapy applications
Wang, Sarah; Bai, Ping; Strachota, Elaine; Tchekanov, Guennady; Melbye, Jeff; McGuire, John
2011-01-01
This article presents a novel evaluation system along with methods to evaluate bilateral coordination of arm function on activities of daily living tasks before and after robot-assisted therapy. An affordable bilateral assessment system (BiAS) consisting of two mini-passive measuring units modeled as three degree of freedom robots is described. The process for evaluating functional tasks using the BiAS is presented and we demonstrate its ability to measure wrist kinematic trajectories. Three metrics, phase difference, movement overlap, and task completion time, are used to evaluate the BiAS system on a bilateral symmetric (bi-drink) and a bilateral asymmetric (bi-pour) functional task. Wrist position and velocity trajectories are evaluated using these metrics to provide insight into temporal and spatial bilateral deficits after stroke. The BiAS system quantified movements of the wrists during functional tasks and detected differences in impaired and unimpaired arm movements. Case studies showed that stroke patients compared to healthy subjects move slower and are less likely to use their arm simultaneously even when the functional task requires simultaneous movement. After robot-assisted therapy, interlimb coordination spatial deficits moved toward normal coordination on functional tasks. PMID:21881901
IMU-Based Online Kinematic Calibration of Robot Manipulator
2013-01-01
Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU be rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this proposed orientation estimation method will result in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods. PMID:24302854
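The parameter-error estimation step is, at its core, a standard EKF measurement update. The sketch below shows that update with the kinematic parameter errors as the state and an orientation residual as the measurement; the measurement Jacobian here is a random placeholder, whereas in the paper it would come from differentiating the forward kinematics with respect to the parameters.

```python
# Generic EKF-style measurement update for a parameter-error state; the
# Jacobian H and the "true" errors are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_params, n_meas = 6, 3
x = np.zeros(n_params)                  # estimated kinematic parameter errors
P = np.eye(n_params) * 1e-2             # state covariance
R = np.eye(n_meas) * 1e-4               # orientation-measurement noise
true_err = np.array([1e-3, -2e-3, 0.5e-3, 0.0, 1e-3, -0.5e-3])

for _ in range(20):                     # one update per robot pose
    H = rng.normal(size=(n_meas, n_params))          # placeholder measurement Jacobian
    z = H @ true_err + rng.normal(0, 1e-4, n_meas)   # synthetic orientation residual
    y = z - H @ x                                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ y
    P = (np.eye(n_params) - K @ H) @ P

print("recovered parameter errors:", np.round(x, 4))
```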
Dynamic multisensor fusion for mobile robot navigation in an indoor environment
NASA Astrophysics Data System (ADS)
Jin, Taeseok; Lee, Jang-Myung; Luk, Bing L.; Tso, Shiu K.
2001-10-01
This study is a preliminary step toward developing a multi-purpose autonomous carrier mobile robot to transport trolleys or heavy goods and to serve as a robotic nursing assistant in hospital wards. The aim of this paper is to present the use of multi-sensor data fusion, combining sonar, a CCD camera, and an IR sensor, for a map-building mobile robot to navigate, and to present an experimental mobile robot designed to operate autonomously within both indoor and outdoor environments. Smart sensory systems are crucial for successful autonomous systems. We explain the robot system architecture designed and implemented in this study and give only a short review of existing techniques, since several recent thorough books and review papers already cover this topic; instead we focus on the main results with relevance to the intelligent service robot project at the Centre of Intelligent Design, Automation & Manufacturing (CIDAM). We conclude by discussing some possible future extensions of the project. The paper first deals with the general principles of the navigation and guidance architecture, then with the detailed functions for updating the recognized environment, obstacle detection, and motion assessment, together with the first results from the simulation runs.
Visual Detection and Tracking System for a Spherical Amphibious Robot
Guo, Shuxiang; Pan, Shaowu; Shi, Liwei; Guo, Ping; He, Yanlin; Tang, Kun
2017-01-01
With the goal of supporting close-range observation tasks of a spherical amphibious robot, such as ecological observations and intelligent surveillance, a moving target detection and tracking system was designed and implemented in this study. Given the restrictions presented by the amphibious environment and the small-sized spherical amphibious robot, an industrial camera and vision algorithms using adaptive appearance models were adopted to construct the proposed system. To handle the problem of light scattering and absorption in the underwater environment, the multi-scale retinex with color restoration algorithm was used for image enhancement. Given the environmental disturbances in practical amphibious scenarios, the Gaussian mixture model was used to detect moving targets entering the field of view of the robot. A fast compressive tracker with a Kalman prediction mechanism was used to track the specified target. Considering the limited load space and the unique mechanical structure of the robot, the proposed vision system was fabricated with a low power system-on-chip using an asymmetric and heterogeneous computing architecture. Experimental results confirmed the validity and high efficiency of the proposed system. The design presented in this paper is able to meet future demands of spherical amphibious robots in biological monitoring and multi-robot cooperation. PMID:28420134
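The detect-then-track structure described above can be prototyped quickly with off-the-shelf components. The sketch below is not the authors' system: it substitutes OpenCV's MOG2 Gaussian-mixture background subtractor and a constant-velocity Kalman filter for the paper's retinex-enhanced pipeline and fast compressive tracker, OpenCV 4 is assumed, and the video file name and noise settings are illustrative assumptions.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25, detectShadows=False)

kf = cv2.KalmanFilter(4, 2)                       # state: [x, y, vx, vy], measurement: [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

cap = cv2.VideoCapture("underwater.mp4")          # hypothetical input clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                        # GMM foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    prediction = kf.predict()                     # predicted target position
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        centre = np.array([[x + w / 2.0], [y + h / 2.0]], np.float32)
        kf.correct(centre)                        # fuse the detection with the prediction
    cv2.circle(frame, (int(prediction[0, 0]), int(prediction[1, 0])), 5, (0, 0, 255), -1)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break
cap.release()
```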
The Leipzig experience with robotic valve surgery.
Autschbach, R; Onnasch, J F; Falk, V; Walther, T; Krüger, M; Schilling, L O; Mohr, F W
2000-01-01
The study describes the single-center experience using robot-assisted videoscopic mitral valve surgery and the early results with a remote telemanipulator-assisted approach for mitral valve repair. Out of a series of 230 patients who underwent minimally invasive mitral valve surgery, in 167 patients surgery was performed with the use of robotic assistance. A voice-controlled robotic arm was used for videoscopic guidance in 152 cases. Most recently, a computer-enhanced telemanipulator was used in 15 patients to perform the operation remotely. The mitral valve was repaired in 117 and replaced in all other patients. The voice-controlled robotic arm (AESOP 3000) facilitated videoscopic-assisted mitral valve surgery. The procedure was completed without the need for an additional assistant as "solo surgery." Additional procedures like radiofrequency ablation and tricuspid valve repair were performed in 21 and 4 patients, respectively. Duration of bypass and clamp time was comparable to conventional procedures (107 ± 34 and 50 ± 16 min, respectively). Hospital mortality was 1.2%. Using the da Vinci telemanipulation system, remote mitral valve repair was successfully performed in 13 of 15 patients. Robotic-assisted less invasive mitral valve surgery has evolved to a reliable technique with reproducible results for primary operations and for reoperations. Robotic assistance has enabled a solo surgery approach. The combination with radiofrequency ablation (Mini Maze) in patients with chronic atrial fibrillation has proven to be beneficial. The use of telemanipulation systems for remote mitral valve surgery is promising, but a number of problems have to be solved before the introduction of a closed chest mitral valve procedure.
2010-05-18
ISS023-E-046806 (18 May 2010) --- Backdropped by Earth's horizon and the blackness of space, the docked space shuttle Atlantis is featured in this image photographed by an Expedition 23 crew member on the International Space Station. The Russian-built Mini-Research Module 1 (MRM-1) is visible in the payload bay as the shuttle robotic arm prepares to unberth the module from Atlantis and position it for handoff to the station robotic arm (visible at right). Named Rassvet, Russian for "dawn," the module is the second in a series of new pressurized components for Russia and will be permanently attached to the Earth-facing port of the Zarya Functional Cargo Block (FGB). Rassvet will be used for cargo storage and will provide an additional docking port to the station.
Considerations for human-machine interfaces in tele-operations
NASA Technical Reports Server (NTRS)
Newport, Curt
1991-01-01
Numerous factors impact the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how the operator interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.
2008-10-22
SRIHARIKOTA, India – The Indian Space Research Organization, or ISRO, launches its robotic Chandrayaan-1 rocket with two NASA instruments aboard on India's maiden moon voyage to map the lunar surface. The Moon Mineralogy Mapper will assess mineral resources, and the Miniature Synthetic Aperture Radar, or Mini-SAR, will map the polar regions and look for ice deposits. Data from the two instruments will contribute to NASA's increased understanding of the lunar environment as it implements the nation's space exploration policy, which calls for robotic and human missions to the moon. In addition to the two science instruments, NASA will provide space communications support to Chandrayaan-1. The primary location for the NASA ground tracking station will be at the Johns Hopkins University Applied Physics Laboratory in Laurel, Md. Photo credit: NASA
NASA Technical Reports Server (NTRS)
1994-01-01
In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.
Karolinska prostatectomy: a robot-assisted laparoscopic radical prostatectomy technique.
Nilsson, Andreas E; Carlsson, Stefan; Laven, Brett A; Wiklund, N Peter
2006-01-01
The last decade has witnessed an increasing trend towards minimally invasive management of prostate cancer, including laparoscopic and, more recently, robot-assisted laparoscopic prostatectomy. Several different laparoscopic approaches have been continuously developed during the last 5 years and it is still unclear which technique yields the best outcome. We present our current technique of robot-assisted laparoscopic radical prostatectomy. The technique described has evolved during the course of >400 robotic prostatectomies performed by the robotic team since the robot-assisted laparoscopic radical prostatectomy program was introduced at Karolinska University Hospital in January 2002. Our procedure comprises several modifications of previously reported ones, and we utilize fewer robotic instruments to reduce costs. An extended posterior dissection is performed to aid in the bladder neck-sparing dissection. In nerve-sparing procedures the vesicles are divided to avoid damage to the erectile nerves. In order to preserve the apical anatomy the dorsal venous complex is incised sharply and is first over-sewn after the apical dissection is completed. Our technique enables a more fluent dissection than previously described robotic techniques. Minimizing changes of instruments and the camera not only cuts costs but also reduces inefficient operating maneuvers, such as switching between 30 degrees and 0 degrees lenses during the procedure. We present a technique which in our hands has achieved excellent functional and oncological results.
Estimation of Visual Maps with a Robot Network Equipped with Vision Sensors
Gil, Arturo; Reinoso, Óscar; Ballesta, Mónica; Juliá, Miguel; Payá, Luis
2010-01-01
In this paper we present an approach to the Simultaneous Localization and Mapping (SLAM) problem using a team of autonomous vehicles equipped with vision sensors. The SLAM problem considers the case in which a mobile robot is equipped with a particular sensor, moves along the environment, obtains measurements with its sensors and uses them to construct a model of the space where it evolves. In this paper we focus on the case where several robots, each equipped with its own sensor, are distributed in a network and view the space from different vantage points. In particular, each robot is equipped with a stereo camera that allows the robots to extract visual landmarks and obtain relative measurements to them. We propose an algorithm that uses the measurements obtained by the robots to build a single accurate map of the environment. The map is represented by the three-dimensional position of the visual landmarks. In addition, we consider that each landmark is accompanied by a visual descriptor that encodes its visual appearance. The solution is based on a Rao-Blackwellized particle filter that estimates the paths of the robots and the position of the visual landmarks. The validity of our proposal is demonstrated by means of experiments with a team of real robots in an office-like indoor environment. PMID:22399930
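The Rao-Blackwellized particle filter mentioned above maintains, for each particle, a hypothesis of the robot path plus per-landmark estimates. The sketch below is a heavily stripped-down illustration under stated assumptions, not the authors' algorithm: it keeps only the per-particle weighting and resampling step, uses a known landmark map and range-only measurements instead of stereo observations with descriptors, and omits the per-particle landmark EKFs entirely.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200                                                # number of particles
particles = rng.normal([0.0, 0.0], 0.5, size=(N, 2))   # hypothesized robot positions
weights = np.full(N, 1.0 / N)

landmarks = np.array([[2.0, 1.0], [0.5, 3.0]])         # landmark map (known here for brevity)
sigma_r = 0.1                                          # range measurement noise (m)

def update(particles, weights, ranges):
    """Weight particles by the likelihood of the observed landmark ranges, then resample."""
    for lm, r in zip(landmarks, ranges):
        predicted = np.linalg.norm(particles - lm, axis=1)
        weights = weights * np.exp(-0.5 * ((r - predicted) / sigma_r) ** 2)
    weights = weights + 1e-300                         # avoid exact zeros
    weights = weights / weights.sum()
    positions = (np.arange(N) + rng.random()) / N      # systematic resampling
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(N, 1.0 / N)

true_pose = np.array([0.3, 0.2])
ranges = np.linalg.norm(landmarks - true_pose, axis=1) + rng.normal(0, sigma_r, 2)
particles, weights = update(particles, weights, ranges)
print("pose estimate:", particles.mean(axis=0))
```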
A haptic sensing upgrade for the current EOD robotic fleet
NASA Astrophysics Data System (ADS)
Rowe, Patrick
2014-06-01
The past decade and a half has seen a tremendous rise in the use of mobile manipulator robotic platforms for bomb inspection and disposal, explosive ordnance disposal, and other extremely hazardous tasks in both military and civilian settings. Skilled operators are able to control these robotic vehicles in amazing ways given the very limited situational awareness obtained from a few on-board camera views. Future generations of robotic platforms will, no doubt, provide some sort of additional force or haptic sensor feedback to further enhance the operator's interaction with the robot, especially when dealing with fragile, unstable, and explosive objects. Unfortunately, the robot operators need this capability today. This paper discusses an approach to provide existing (and future) robotic mobile manipulator platforms, with which trained operators are already familiar and highly proficient, this desired haptic and force feedback capability. The goals of this technology are to be rugged, reliable, and affordable. It should also be able to be applied to a wide range of existing robots with a wide variety of manipulator/gripper sizes and styles. Finally, the presentation of the haptic information to the operator is discussed, given the fact that control devices that physically interact with the operators are not widely available and still in the research stages.
Nakib, Ghassan; Calcaterra, Valeria; Scorletti, Federico; Romano, Piero; Goruppi, Ilaria; Mencherini, Simonetta; Avolio, Luigi; Pelizzo, Gloria
2013-02-01
Robotic assisted surgery is not yet widely applied in the pediatric field. We report our initial experience regarding the feasibility, safety, benefits, and limitations of robot-assisted surgery in pediatric gynecological patients. Descriptive, retrospective report of experience with pediatric gynecological patients over a period of 12 months. Department of Pediatric Surgery, IRCCS Policlinico San Matteo Foundation. Children and adolescents, with a surgical diagnosis of ovarian and/or tubal lesions. Robot assembly time and operative time, days of hospitalization, time to cessation of pain medication, complication rate, conversion rate to laparoscopic procedure and trocar insertion strategy. Six children and adolescents (2.4-15 yrs), weighing 12-55 kg, underwent robotic assisted surgery for adnexal pathologies: 2 for ovarian cystectomy, 2 for oophorectomy, 1 for right oophorectomy and left salpingo-oophorectomy for gonadal dysgenesis, 1 for exploration for suspected pelvic malformation. Mean operative time was 117.5 ± 34.9 minutes. Conversion to laparotomy was not necessary in any of the cases. No intra- or postoperative complications occurred. Initial results indicate that robotic assisted surgery is safely applicable in the pediatric gynecological population, although it is still premature to conclude that it provides better clinical outcomes than traditional laparoscopic surgery. Randomized, prospective, comparative studies will help characterize the advantages and disadvantages of this new technology in pediatric patients. Copyright © 2013 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
Quantifying Traversability of Terrain for a Mobile Robot
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Seraji, Homayoun; Werger, Barry
2005-01-01
A document presents an updated discussion on a method of autonomous navigation for a robotic vehicle navigating across rough terrain. The method involves, among other things, the use of a measure of traversability, denoted the fuzzy traversability index, which embodies the information about the slope and roughness of terrain obtained from analysis of images acquired by cameras mounted on the robot. The improvements presented in the report focus on the use of the fuzzy traversability index to generate a traversability map and a grid map for planning the safest path for the robot. Once grid traversability values have been computed, they are utilized for rejecting unsafe path segments and for computing a traversal-cost function for ranking candidate paths, selected by a search algorithm, from a specified initial position to a specified final position. The output of the algorithm is a set of waypoints designating a path having a minimal traversal cost.
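A fuzzy traversability index of this kind can be sketched as a conjunction of membership functions over slope and roughness. The example below is a minimal illustration under assumed thresholds, not the values or rule base used in the report.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b and reaching zero at a and c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def traversability(slope_deg, roughness_m):
    """Fuzzy traversability index in [0, 1]; 1 means easily traversable."""
    flat = tri(slope_deg, -1.0, 0.0, 15.0)          # membership in "flat terrain"
    smooth = tri(roughness_m, -0.01, 0.0, 0.10)     # membership in "smooth terrain"
    return min(flat, smooth)                        # conservative conjunction (fuzzy AND)

# traversability grid derived from per-cell slope and roughness estimates
slopes = np.array([[2.0, 8.0], [20.0, 5.0]])        # degrees
rough = np.array([[0.01, 0.04], [0.02, 0.12]])      # metres RMS
grid = np.vectorize(traversability)(slopes, rough)
print(grid)   # cells near 0 would be rejected as unsafe path segments
```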
Astrobee: A New Platform for Free-Flying Robotics on the International Space Station
NASA Technical Reports Server (NTRS)
Smith, Trey; Barlow, Jonathan; Bualat, Maria; Fong, Terrence; Provencher, Christopher; Sanchez, Hugo; Smith, Ernest
2016-01-01
The Astrobees are next-generation free-flying robots that will operate in the interior of the International Space Station (ISS). Their primary purpose is to provide a flexible platform for research on zero-g freeflying robotics, with the ability to carry a wide variety of future research payloads and guest science software. They will also serve utility functions: as free-flying cameras to record video of astronaut activities, and as mobile sensor platforms to conduct surveys of the ISS. The Astrobee system includes two robots, a docking station, and a ground data system (GDS). It is developed by the Human Exploration Telerobotics 2 (HET-2) Project, which began in Oct. 2014, and will deliver the Astrobees for launch to ISS in 2017. This paper covers selected aspects of the Astrobee design, focusing on capabilities relevant to potential users of the platform.
Vision based object pose estimation for mobile robots
NASA Technical Reports Server (NTRS)
Wu, Annie; Bidlack, Clint; Katkere, Arun; Feague, Roy; Weymouth, Terry
1994-01-01
Mobile robot navigation using visual sensors requires that a robot be able to detect landmarks and obtain pose information from a camera image. This paper presents a vision system for finding man-made markers of known size and calculating the pose of these markers. The algorithm detects and identifies the markers using a weighted pattern matching template. Geometric constraints are then used to calculate the position of the markers relative to the robot. The geometric constraints derive from the typical pose of man-made signs, such as the sign standing vertically and having known dimensions. This system has been tested successfully on a wide range of real images. Marker detection is reliable, even in cluttered environments; under certain marker orientations, estimation of the orientation has proven accurate to within 2 degrees, and distance estimation to within 0.3 meters.
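The geometric constraint described above, a marker of known physical size viewed by a camera of known focal length, is enough to recover range and an approximate orientation. The sketch below is a simple pinhole-model illustration of that idea, not the paper's algorithm; the focal length and pixel measurements are assumed values.

```python
import numpy as np

def marker_range(known_height_m, pixel_height, focal_length_px):
    """Distance to a vertical marker from its apparent image height (pinhole model)."""
    return known_height_m * focal_length_px / pixel_height

def marker_yaw(known_width_m, pixel_width, range_m, focal_length_px):
    """Approximate yaw of the marker: foreshortening shrinks its apparent width."""
    expected_width = known_width_m * focal_length_px / range_m
    ratio = np.clip(pixel_width / expected_width, -1.0, 1.0)
    return np.degrees(np.arccos(ratio))

f = 800.0                                     # focal length in pixels (assumed calibration)
rng_m = marker_range(0.60, 96.0, f)           # 0.60 m tall sign appears 96 px tall
yaw = marker_yaw(0.45, 55.0, rng_m, f)        # 0.45 m wide sign appears 55 px wide
print(f"range = {rng_m:.2f} m, yaw = {yaw:.1f} deg")
```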
On the reproducibility of expert-operated and robotic ultrasound acquisitions.
Kojcev, Risto; Khakzar, Ashkan; Fuerst, Bernhard; Zettinig, Oliver; Fahkry, Carole; DeJong, Robert; Richmon, Jeremy; Taylor, Russell; Sinibaldi, Edoardo; Navab, Nassir
2017-06-01
We present the evaluation of the reproducibility of measurements performed using robotic ultrasound imaging in comparison with expert-operated sonography. Robotic imaging for interventional procedures may be a valuable contribution, but requires reproducibility for its acceptance in clinical routine. We study this by comparing repeated measurements based on robotic and expert-operated ultrasound imaging. Robotic ultrasound acquisition is performed in three steps under user guidance: First, the patient is observed using a 3D camera on the robot end effector, and the user selects the region of interest. This allows for automatic planning of the robot trajectory. Next, the robot executes a sweeping motion following the planned trajectory, during which the ultrasound images and tracking data are recorded. As the robot is compliant, deviations from the path are possible, for instance due to patient motion. Finally, the ultrasound slices are compounded to create a volume. Repeated acquisitions can be performed automatically by comparing the previous and current patient surface. After repeated image acquisitions, the measurements based on acquisitions performed by the robotic system and expert are compared. Within our case series, the expert measured the anterior-posterior, longitudinal, transversal lengths of both of the left and right thyroid lobes on each of the 4 healthy volunteers 3 times, providing 72 measurements. Subsequently, the same procedure was performed using the robotic system resulting in a cumulative total of 144 clinically relevant measurements. Our results clearly indicated that robotic ultrasound enables more repeatable measurements. A robotic ultrasound platform leads to more reproducible data, which is of crucial importance for planning and executing interventions.
A Robotic arm for optical and gamma radwaste inspection
NASA Astrophysics Data System (ADS)
Russo, L.; Cosentino, L.; Pappalardo, A.; Piscopo, M.; Scirè, C.; Scirè, S.; Vecchio, G.; Muscato, G.; Finocchiaro, P.
2014-12-01
We propose Radibot, a simple and cheap robotic arm for remote inspection, which interacts with the radwaste environment by means of a scintillation gamma detector and a video camera representing its light (< 1 kg) payload. It moves vertically thanks to a crane, while the other three degrees of freedom are obtained by means of revolute joints. A dedicated algorithm automatically chooses the best kinematics to reach a graphically selected position, while still allowing the arm to be fully driven by means of a standard videogame joypad.
NASA Technical Reports Server (NTRS)
1983-01-01
Voyager, Infrared Astronomical Satellite, Galileo, Viking, Solar Mesosphere Explorer, Wide-field/Planetary Camera, Venus Mapper, International Solar Polar Mission - Solar Interplanetary Satellite, Extreme Ultraviolet Explorer, Starprobe, International Halley Watch, Mariner Mark II, Samex, Shuttle Imaging Radar-A, Deep Space Network, Biomedical Technology, Ocean Studies, and Robotics are summarized.
View of the Cupola RWS taken with Fish-Eye Lens
2010-05-08
ISS023-E-039983 (8 May 2010) --- A fish-eye lens attached to an electronic still camera was used by an Expedition 23 crew member to capture this image of the robotic workstation in the Cupola of the International Space Station.
NASA Astrophysics Data System (ADS)
Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.
2017-08-01
This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time; thus, an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.
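The adaptive-repetitive law itself is beyond a short example, but the classical image-based visual servoing (IBVS) step it builds on is easy to sketch: a proportional law through the pseudo-inverse of the point-feature interaction matrix. The code below is that standard textbook step under assumed feature depths and gain, not the paper's adaptive or repetitive components.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([[-1 / Z,      0, x / Z,      x * y, -(1 + x ** 2),  y],
                     [     0, -1 / Z, y / Z, 1 + y ** 2,       -x * y,  -x]])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw [vx, vy, vz, wx, wy, wz] driving the features to their targets."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    return -gain * np.linalg.pinv(L) @ error

# four normalized image points tracked on the low-flying vehicle (values are illustrative)
feat = [(0.12, 0.05), (-0.08, 0.06), (-0.09, -0.07), (0.11, -0.06)]
desired = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
v = ibvs_velocity(feat, desired, depths=[5.0] * 4)
print(v)   # commanded camera twist; an outer loop would map this to LFV motion commands
```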
Robonaut: A Robotic Astronaut Assistant
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.; Diftler, Myron A.
2001-01-01
NASA's latest anthropomorphic robot, Robonaut, has reached a milestone in its capability. This highly dexterous robot, designed to assist astronauts in space, is now performing complex tasks at the Johnson Space Center that could previously only be carried out by humans. With 43 degrees of freedom, Robonaut is the first humanoid built for space and incorporates technology advances in dexterous hands, modular manipulators, lightweight materials, and telepresence control systems. Robonaut is human size, has a three degree of freedom (DOF) articulated waist, and two, seven DOF arms, giving it an impressive work space for interacting with its environment. Its two, five fingered hands allow manipulation of a wide range of tools. A pan/tilt head with multiple stereo camera systems provides data for both teleoperators and computer vision systems.
NASA Astrophysics Data System (ADS)
Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.
2017-11-01
In order to solve the automation of the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The multi-sensor robot laser scanning system includes a panorama camera, a laser scanner, an inertial measurement unit, and other sensors, which are calibrated and synchronized together to achieve simultaneous collection of 3D indoor data. Experiments are undertaken in a typical indoor scene, and the data generated by the proposed system are compared with ground-truth data collected by a TLS scanner, showing an accuracy of 99.2% below 0.25 meter, which demonstrates the applicability and precision of the system in indoor mapping applications.
High-resolution hyperspectral ground mapping for robotic vision
NASA Astrophysics Data System (ADS)
Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich
2018-04-01
Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.
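The fusion idea described above, accumulating raw, sparse spectral samples directly into a ground-referenced grid instead of demosaicing each frame, can be illustrated with a simple per-cell, per-band running mean. The sketch below is an assumption-laden simplification: the class name, grid size, flat-ground projection, and averaging rule are all placeholders, and the multilayer heightmap geometry of the actual system is omitted.

```python
import numpy as np

class SpectralGroundMap:
    """Ego-centric grid that accumulates raw spectral samples per cell and band."""
    def __init__(self, size_m=20.0, res_m=0.1, n_bands=25):
        n = int(size_m / res_m)
        self.res = res_m
        self.sums = np.zeros((n, n, n_bands))
        self.counts = np.zeros((n, n, n_bands))

    def _cell(self, ground_xy):
        i = int(ground_xy[0] / self.res) + self.sums.shape[0] // 2
        j = int(ground_xy[1] / self.res) + self.sums.shape[1] // 2
        return i, j

    def add(self, ground_xy, band, value):
        """Fuse one raw camera sample that projects to ground point (x, y)."""
        i, j = self._cell(ground_xy)
        if 0 <= i < self.sums.shape[0] and 0 <= j < self.sums.shape[1]:
            self.sums[i, j, band] += value
            self.counts[i, j, band] += 1

    def spectrum(self, ground_xy):
        """Mean spectrum of the cell containing (x, y); NaN where a band was never observed."""
        i, j = self._cell(ground_xy)
        with np.errstate(invalid="ignore", divide="ignore"):
            return self.sums[i, j] / self.counts[i, j]

m = SpectralGroundMap()
m.add((1.2, -0.4), band=7, value=0.31)   # one raw mosaiced pixel, spectral band 7
m.add((1.2, -0.4), band=7, value=0.29)
print(m.spectrum((1.2, -0.4))[7])        # 0.30
```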
ProtoDESI: First On-Sky Technology Demonstration for the Dark Energy Spectroscopic Instrument
Fagrelius, Parker; Abareshi, Behzad; Allen, Lori; ...
2018-01-15
The Dark Energy Spectroscopic Instrument (DESI) is under construction to measure the expansion history of the universe using the baryon acoustic oscillations technique. The spectra of 35 million galaxies and quasars over 14,000 square degrees will be measured during a 5-year survey. A new prime focus corrector for the Mayall telescope at Kitt Peak National Observatory will deliver light to 5,000 individually targeted fiber-fed robotic positioners. The fibers in turn feed ten broadband multi-object spectrographs. We describe the ProtoDESI experiment, which was installed and commissioned on the 4-m Mayall telescope from 2016 August 14 to September 30. ProtoDESI was an on-sky technology demonstration with the goal to reduce technical risks associated with aligning optical fibers with targets using robotic fiber positioners and maintaining the stability required to operate DESI. The ProtoDESI prime focus instrument, consisting of three fiber positioners, illuminated fiducials, and a guide camera, was installed behind the existing Mosaic corrector on the Mayall telescope. A fiber view camera was mounted in the Cassegrain cage of the telescope and provided feedback metrology for positioning the fibers. ProtoDESI also provided a platform for early integration of hardware with the DESI Instrument Control System that controls the subsystems, provides communication with the Telescope Control System, and collects instrument telemetry data. Lacking a spectrograph, ProtoDESI monitored the output of the fibers using a fiber photometry camera mounted on the prime focus instrument. ProtoDESI was successful in acquiring targets with the robotically positioned fibers and demonstrated that the DESI guiding requirements can be met.
2010-05-18
ISS023-E-047527 (18 May 2010) --- In the grasp of the station's robotic Canadarm2, the Russian-built Mini-Research Module 1 (MRM-1) is attached to the Earth-facing port of the Zarya Functional Cargo Block (FGB) of the International Space Station. Named Rassvet, Russian for "dawn," the module is the second in a series of new pressurized components for Russia. Rassvet will be used for cargo storage and will provide an additional docking port to the station.
2014-09-01
[Extraction residue from a STEM outreach report: listings of student science-fair projects (e.g., "Wind Turbines and Energy"; "Acceleration of Battery-Powered Cars on Different Surfaces"), mini-demonstrations (a wind tunnel model, egg carton gliders, ring wing gliders), a robotics team section, and a section titled "What Are Wind Tunnels".]
Continuous Shape Estimation of Continuum Robots Using X-ray Images.
Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron
2013-05-06
We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.
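The deformable-surface representation described above, a shape expressed as a linear combination of spatial and temporal basis functions, can be sketched compactly. The example below is only an illustration of that representation: the raised-cosine bases, the coefficient values, and the array shapes are assumptions, and fitting the coefficients to X-ray observations is not shown.

```python
import numpy as np

def basis(u, n):
    """n raised-cosine bumps evenly spaced on [0, 1] (one simple choice of basis)."""
    centers = np.linspace(0, 1, n)
    width = 1.5 / n
    d = np.abs(u[:, None] - centers)
    return np.where(d <= width, np.cos(np.pi * d / (2 * width)) ** 2, 0.0)

def shape(coeffs, s, t):
    """3D backbone points at arc-length samples s and time t.

    coeffs has shape (n_space, n_time, 3): one 3D coefficient per basis pair.
    """
    Bs = basis(s, coeffs.shape[0])                     # (len(s), n_space)
    Bt = basis(np.array([t]), coeffs.shape[1])[0]      # (n_time,)
    return np.einsum("ik,kjd,j->id", Bs, coeffs, Bt)   # (len(s), 3)

rng = np.random.default_rng(2)
coeffs = rng.normal(scale=0.01, size=(6, 4, 3))        # would be estimated from X-ray images
s = np.linspace(0, 1, 50)                              # arc-length samples along the robot
print(shape(coeffs, s, t=0.3).shape)                   # (50, 3) backbone estimate at t = 0.3
```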
Concurrent initialization for Bearing-Only SLAM.
Munguía, Rodrigo; Grau, Antoni
2010-01-01
Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The sensors have a large impact on the algorithm used for SLAM. Early SLAM approaches focused on the use of range sensors such as sonar rings or lasers. However, cameras have become more and more used, because they yield a lot of information and are well adapted for embedded systems: they are light, cheap and power saving. Unlike range sensors which provide range and angular information, a camera is a projective sensor which measures the bearing of image features. Therefore depth information (range) cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms: the Bearing-Only SLAM methods, which mainly rely on special techniques for feature initialization in order to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented which is inspired by having the complementary advantages of the Undelayed and Delayed methods that represent the most common approaches for addressing the problem. The key is to use concurrently two kinds of feature representations for both undelayed and delayed stages of the estimation. The simulation results show that the proposed method surpasses the performance of previous schemes.
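The core difficulty the abstract describes, that a single bearing measurement carries no depth, is usually resolved once two sufficiently separated bearings are available. The 2D sketch below illustrates only that basic delayed-initialization step (triangulating a landmark from two bearing rays), not the paper's concurrent scheme; coordinates and the parallax threshold are assumed.

```python
import numpy as np

def triangulate_bearing(p1, theta1, p2, theta2):
    """Intersect two 2D bearing rays taken from camera positions p1 and p2.

    Returns the landmark position, or None when the rays are nearly parallel
    (insufficient parallax: the landmark's depth is still unobservable).
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < 1e-3:
        return None
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

lm = triangulate_bearing([0, 0], np.deg2rad(45), [2, 0], np.deg2rad(90))
print(lm)   # landmark at roughly (2, 2)
```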
Automatic learning rate adjustment for self-supervising autonomous robot control
NASA Technical Reports Server (NTRS)
Arras, Michael K.; Protzel, Peter W.; Palumbo, Daniel L.
1992-01-01
Described is an application in which an Artificial Neural Network (ANN) controls the positioning of a robot arm with five degrees of freedom by using visual feedback provided by two cameras. This application and the specific ANN model, local linear maps, are based on the work of Ritter, Martinetz, and Schulten. We extended their approach by generating a filtered, average positioning error from the continuous camera feedback and by coupling the learning rate to this error. When the network learns to position the arm, the positioning error decreases and so does the learning rate until the system stabilizes at a minimum error and learning rate. This abolishes the need for a predetermined cooling schedule. The automatic cooling procedure results in a closed-loop control with no distinction between a learning phase and a production phase. If the positioning error suddenly starts to increase due to an internal failure such as a broken joint, or an environmental change such as a camera moving, the learning rate increases accordingly. Thus, learning is automatically activated and the network adapts to the new condition, after which the error decreases again and learning is 'shut off'. The automatic cooling is therefore a prerequisite for the autonomy and the fault tolerance of the system.
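The mechanism described above, a low-pass-filtered positioning error that drives the learning rate so that learning switches itself back on when the error rises, can be sketched in a few lines. The filter constant, gain, and floor value below are assumptions for illustration, not the report's settings.

```python
class ErrorCoupledLearningRate:
    """Learning rate proportional to an exponentially filtered positioning error."""
    def __init__(self, gain=0.5, smoothing=0.95, floor=1e-4):
        self.gain = gain
        self.smoothing = smoothing   # exponential filter constant
        self.floor = floor           # keep a tiny residual rate so adaptation never fully stops
        self.filtered_error = 0.0

    def update(self, positioning_error):
        self.filtered_error = (self.smoothing * self.filtered_error
                               + (1.0 - self.smoothing) * positioning_error)
        return max(self.gain * self.filtered_error, self.floor)

sched = ErrorCoupledLearningRate()
for err in [0.20, 0.12, 0.05, 0.01, 0.01, 0.30]:   # error jumps when, e.g., a camera moves
    print(f"error={err:.2f}  lr={sched.update(err):.4f}")
```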
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.; Greer, Lawrence C.
2011-01-01
A platform has been developed for two or more vehicles with one or more residing within the other (a marsupial pair). This configuration consists of a large, versatile robot that is carrying a smaller, more specialized autonomous operating robot(s) and/or mobile repeaters for extended transmission. The larger vehicle, which is equipped with a ramp and/or a robotic arm, is used to operate over a more challenging topography than the smaller one(s) that may have a more limited inspection area to traverse. The intended use of this concept is to facilitate the insertion of a small video camera and sensor platform into a difficult entry area. In a terrestrial application, this may be a bus or a subway car with narrow aisles or steep stairs. The first field-tested configuration is a tracked vehicle bearing a rigid ramp of fixed length and width. A smaller six-wheeled vehicle approximately 10 in. (25 cm) wide by 12 in. (30 cm) long resides at the end of the ramp within the larger vehicle. The ramp extends from the larger vehicle and is tipped up into the air. Using video feedback from a camera atop the larger robot, the operator at a remote location can steer the larger vehicle to the bus door. Once positioned at the door, the operator can switch video feedback to a camera at the end of the ramp to facilitate the mating of the end of the ramp to the top landing at the upper terminus of the steps. The ramp can be lowered by remote control until its end is in contact with the top landing. At the same time, the end of the ramp bearing the smaller vehicle is raised to minimize the angle of the slope the smaller vehicle has to climb, and further gives the operator a better view of the entry to the bus from the smaller vehicle. Control is passed over to the smaller vehicle and, using video feedback from the camera, it is driven up the ramp, turned oblique into the bus, and then sent down the aisle for surveillance. The demonstrated vehicle was used to scale the steps leading to the interior of a bus whose landing is 44 in. (1.1 m) from the road surface. This vehicle can position the end of its ramp to a surface over 50 in. (1.3 m) above ground level and can drive over rail heights exceeding 6 in. (15 cm). Thus configured, this vehicle can conceivably deliver the smaller robot to the end platform of New York City subway cars from between the rails. This innovation is scalable to other formulations for size, mobility, and surveillance functions. Conceivably the larger vehicle can be configured to traverse unstable rubble and debris to transport a smaller search and rescue vehicle as close as possible to the scene of a disaster such as a collapsed building. The smaller vehicle, tethered or otherwise, and capable of penetrating and traversing within the confined spaces in the collapsed structure, can transport imaging and other sensors to look for victims or other targets.
ARGon³: "3D appearance robot-based gonioreflectometer" at PTB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoepe, A.; Atamas, T.; Huenerhoff, D.
At the Physikalisch-Technische Bundesanstalt, the National Metrology Institute of Germany, a new facility for measuring visual appearance-related quantities has been built up. The acronym ARGon³ stands for "3D appearance robot-based gonioreflectometer". Compared to standard gonioreflectometers, there are two main new features within this setup. First, a photometric luminance camera with a spatial resolution of 28 µm on the device under test (DUT) enables spatially high-resolved measurements of luminance and color coordinates. Second, a line-scan CCD camera mounted to a spectrometer provides measurements of the radiance factor, respectively the bidirectional reflectance distribution function, in the full V(λ) range (360 nm-830 nm) with arbitrary angles of irradiation and detection relative to the surface normal, on a time scale of about 2 min. First goniometric measurements of diffuse reflection within the 3D space above the DUT, with subsequent colorimetric representation of the obtained data, of special effect pigments based on the interference effect are presented.
STS-109 Flight Day 3 Highlights
NASA Technical Reports Server (NTRS)
2002-01-01
This footage from the third day of the STS-109 mission to service the Hubble Space Telescope (HST) begins with the grappling of the HST by the robotic arm of the Columbia Orbiter, operated by Mission Specialist Nancy Currie. During the grappling, numerous angles deliver close-up images of the telescope which appears to be in good shape despite many years in orbit around the Earth. Following the positioning of the HST on its berthing platform in the Shuttle bay, the robotic arm is used to perform an external survey of the telescope. Some cursory details are given about different equipment which will be installed on the HST including a replacement cooling system for the Near Infrared Camera Multi-Object Spectrometer (NICMOS) and the Advanced Camera for Surveys. Following the survey, there is footage of the retraction of both of the telescope's two flexible solar arrays, which was successful. These arrays will be replaced by rigid solar arrays with decreased surface area and increased performance.
Robotic Arm Camera on Mars with Lights On
NASA Technical Reports Server (NTRS)
2008-01-01
This image is a composite view of NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) with its lights on, as seen by the lander's Surface Stereo Imager (SSI). This image combines images taken on the afternoon of Phoenix's 116th Martian day, or sol (September 22, 2008). The RAC is about 8 centimeters (3 inches) tall. The SSI took images of the RAC to test both the light-emitting diodes (LEDs) and cover function. Individual images were taken in three SSI filters that correspond to the red, green, and blue LEDs one at a time. When combined, it appears that all three sets of LEDs are on at the same time. This composite image is not true color. The streaks of color extending from the LEDs are an artifact from saturated exposure. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Assessment of a visually guided autonomous exploration robot
NASA Astrophysics Data System (ADS)
Harris, C.; Evans, R.; Tidey, E.
2008-10-01
A system has been developed to enable a robot vehicle to autonomously explore and map an indoor environment using only visual sensors. The vehicle is equipped with a single camera, whose output is wirelessly transmitted to an off-board standard PC for processing. Visual features within the camera imagery are extracted and tracked, and their 3D positions are calculated using a Structure from Motion algorithm. As the vehicle travels, obstacles in its surroundings are identified and a map of the explored region is generated. This paper discusses suitable criteria for assessing the performance of the system by computer-based simulation and practical experiments with a real vehicle. Performance measures identified include the positional accuracy of the 3D map and the vehicle's location, the efficiency and completeness of the exploration and the system reliability. Selected results are presented and the effect of key system parameters and algorithms on performance is assessed. This work was funded by the Systems Engineering for Autonomous Systems (SEAS) Defence Technology Centre established by the UK Ministry of Defence.
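At the heart of the Structure from Motion step described above is the triangulation of a tracked feature from two camera poses. The sketch below illustrates that single step with OpenCV's linear triangulation; the intrinsic matrix, baseline, and pixel coordinates are assumed values, not parameters from the paper.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])                  # assumed camera intrinsics

# camera 1 at the origin; camera 2 translated 0.5 m along x (the robot has moved sideways)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# one tracked feature, observed in both images (pixel coordinates)
pt1 = np.array([[400.0], [250.0]])
pt2 = np.array([[330.0], [250.0]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)        # homogeneous 3D point, shape (4, 1)
X = (X_h[:3] / X_h[3]).ravel()
print("triangulated point:", X)                      # such points populate the obstacle map
```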
Introducing a Low-Cost Mini-UAV for Thermal- and Multispectral-Imaging
NASA Astrophysics Data System (ADS)
Bendig, J.; Bolten, A.; Bareth, G.
2012-07-01
The trend to minimize electronic devices also accounts for Unmanned Airborne Vehicles (UAVs) as well as for sensor technologies and imaging devices. Consequently, it is not surprising that UAVs are already part of our daily life and the current pace of development will increase civil applications. A well-known and already widespread example is the so-called flying video game based on Parrot's AR.Drone which is remotely controlled by an iPod, iPhone, or iPad (http://ardrone.parrot.com). The latter can be considered as a low-weight and low-cost Mini-UAV. In this contribution a Mini-UAV is considered to weigh less than 5 kg and to be able to carry 0.2 kg to 1.5 kg of sensor payload. While up to now Mini-UAVs like Parrot's AR.Drone are mainly equipped with RGB cameras for videotaping or imaging, the development of such carriage systems clearly also goes to multi-sensor platforms like the ones introduced for larger UAVs (5 to 20 kg) by Jaakkolla et al. (2010) for forestry applications or by Berni et al. (2009) for agricultural applications. The problem when designing a Mini-UAV for multi-sensor imaging is the limitation of payload of up to 1.5 kg and a total weight of the whole system below 5 kg. Consequently, the Mini-UAV without sensors but including navigation system and GPS sensors must weigh less than 3.5 kg. A Mini-UAV system with these characteristics is HiSystems' MK-Okto (www.mikrokopter.de). Total weight including battery without sensors is less than 2.5 kg. Payload of a MK-Okto is approx. 1 kg and maximum speed is around 30 km/h. The MK-Okto can be operated up to a wind speed of less than 19 km/h, which corresponds to Beaufort scale number 3 for wind speed. In our study, the MK-Okto is equipped with a handheld low-weight NEC F30IS thermal imaging system. The F30IS, which was developed for veterinary applications, covers 8 to 13 μm, weighs only 300 g, and captures the temperature range between -20 °C and 100 °C. Flying at a height of 100 m, the camera's image covers an area of approx. 50 by 40 m. The sensor's resolution is 160 x 120 pixel and the field of view is 28° (H) x 21° (V). According to the producer, absolute accuracy for temperature is ±1 °C and the thermal sensitivity is >0.1 K. Additionally, the MK-Okto is equipped with Tetracam's Mini MCA. The Mini MCA in our study is a four band multispectral imaging system. Total weight is 700 g and spectral characteristics can be modified by filters between 400 and 1000 nm. In this study, three bands with a width of 10 nm (green: 550 nm, red: 671 nm, NIR1: 800 nm) and one band of 20 nm width (NIR2: 950 nm) have been used. Even though the MK-Okto is able to carry both sensors at the same time, the imaging systems were used separately for this contribution. First results of a combined thermal- and multispectral MK-Okto campaign in 2011 are presented and evaluated for a sugarbeet field experiment examining pathogens and drought stress.