Sample records for image guided robotic

  1. UROLOGIC ROBOTS AND FUTURE DIRECTIONS

    PubMed Central

    Mozer, Pierre; Troccaz, Jocelyne; Stoianovici, Dan

    2009-01-01

    Purpose of review: Robot-assisted laparoscopic surgery in urology has gained immense popularity with the da Vinci system, but many research teams are working on new robots. The purpose of this paper is to review current urologic robots and present future development directions. Recent findings: Future systems are expected to advance in two directions: improvements of remote manipulation robots and development of image-guided robots. Summary: The final goal of robots is to allow safer and more homogeneous outcomes with less variability in surgeon performance, as well as new tools to perform tasks based on medical transcutaneous imaging, less invasively and at lower cost. Expected improvements for remote systems include augmented reality, haptic feedback, size reduction, and the development of new tools for NOTES surgery. The paradigm of image-guided robots is close to clinical availability, and the most advanced robots are presented with end-user technical assessments. It is also notable that the potential of robots lies much further ahead than the accomplishments of the da Vinci system. The integration of imaging with robotics holds substantial promise, because it can accomplish tasks otherwise impossible. Image-guided robots have the potential to offer a paradigm shift. PMID:19057227

  2. Urologic robots and future directions.

    PubMed

    Mozer, Pierre; Troccaz, Jocelyne; Stoianovici, Dan

    2009-01-01

    Robot-assisted laparoscopic surgery in urology has gained immense popularity with the da Vinci system, but many research teams are working on new robots. The purpose of this study is to review current urologic robots and present future development directions. Future systems are expected to advance in two directions: improvements of remote manipulation robots and development of image-guided robots. The final goal of robots is to allow safer and more homogeneous outcomes with less variability in surgeon performance, as well as new tools to perform tasks on the basis of medical transcutaneous imaging, less invasively and at lower cost. Expected improvements for remote systems include augmented reality, haptic feedback, size reduction, and the development of new tools for natural orifice translumenal endoscopic surgery. The paradigm of image-guided robots is close to clinical availability, and the most advanced robots are presented with end-user technical assessments. It is also notable that the potential of robots lies much further ahead than the accomplishments of the da Vinci system. The integration of imaging with robotics holds substantial promise, because it can accomplish tasks otherwise impossible. Image-guided robots have the potential to offer a paradigm shift.

  3. A magnetic resonance image-guided breast needle intervention robot system: overview and design considerations.

    PubMed

    Park, Samuel Byeongjun; Kim, Jung-Gun; Lim, Ki-Woong; Yoon, Chae-Hyun; Kim, Dong-Jun; Kang, Han-Sung; Jo, Yung-Ho

    2017-08-01

    We developed an image-guided intervention robot system that can be operated in a magnetic resonance (MR) imaging gantry. The system incorporates a bendable needle intervention robot for breast cancer patients that overcomes the space limitations of the MR gantry. Most breast coil designs for breast MR imaging have side openings to allow manual localization. However, for many intervention procedures, the patient must be removed from the gantry. A robotic manipulation system with integrated image guidance software was developed. Our robotic manipulator was designed to be slim, so as to fit between the patient's side and the MR gantry wall. Only non-magnetic materials were used, and an electromagnetic shield was employed for cables and circuits. The image guidance software was built using open-source libraries. In situ feasibility tests were performed in a 3-T MR system. One target point in the breast phantom was chosen by the clinician for each experiment, and our robot moved the needle close to the target point. Without image-guided feedback control, the needle end could not hit the target point (distance = 5 mm) in the first experiment. Using our robotic system with image-guided feedback, the needle reached the target lesion of the breast phantom within 2.3 mm of the same target point. The second experiment was performed using other target points, and the distance between the final needle end point and the target point was 0.8 mm. We successfully developed an MR-guided needle intervention robot for breast cancer patients. Further research will allow the expansion of these interventions.

  4. Body-mounted robotic instrument guide for image-guided cryotherapy of renal cancer

    PubMed Central

    Hata, Nobuhiko; Song, Sang-Eun; Olubiyi, Olutayo; Arimitsu, Yasumichi; Fujimoto, Kosuke; Kato, Takahisa; Tuncali, Kemal; Tani, Soichiro; Tokuda, Junichi

    2016-01-01

    Purpose: Image-guided cryotherapy of renal cancer is an emerging alternative to surgical nephrectomy, particularly for those who cannot sustain the physical burden of surgery. It is well known that the outcome of this therapy depends on the accurate placement of the cryotherapy probe. Therefore, a robotic instrument guide may help physicians aim the cryotherapy probe precisely to maximize the efficacy of the treatment and avoid damage to critical surrounding structures. The objective of this paper was to propose a robotic instrument guide for orienting cryotherapy probes in image-guided cryotherapy of renal cancers. The authors propose a body-mounted robotic guide that is expected to be less susceptible to guidance errors caused by the patient's whole-body motion. Methods: Keeping the device's minimal footprint in mind, the authors developed and validated a body-mounted robotic instrument guide that can maintain the geometrical relationship between the device and the patient's body, even in the presence of the patient's frequent body motions. The guide can orient the cryotherapy probe with the skin incision point as the remote center of motion. The authors' validation studies included an evaluation of the mechanical accuracy and position repeatability of the robotic instrument guide. The authors also performed a mock MRI-guided cryotherapy procedure with a phantom to compare robotically assisted probe placement with a free-hand approach, introducing organ motion to investigate its effect on the accurate placement of the cryotherapy probe. Measurements collected for performance analysis included accuracy and time taken for probe placements. Multivariate analysis was performed to assess whether organ motion, the robotic guide, or both affected these measurements.
Results: The mechanical accuracy and position repeatability of the probe placement using the robotic instrument guide were 0.3 and 0.1 mm, respectively, at a depth of 80 mm. The phantom test indicated that the accuracy of probe placement was significantly better with the robotic instrument guide (4.1 mm) than without the guide (6.3 mm, p<0.001), even in the presence of body motion. When independent organ motion was artificially added, in addition to body motion, the advantage of the robotic instrument guide was no longer statistically significant [6.0 mm with the robotic guide and 5.9 mm without it (p = 0.906)]. When the robotic instrument guide was used, the total time required to complete the procedure was reduced from 19.6 to 12.7 min (p<0.001). Multivariable analysis indicated that the robotic instrument guide, not the organ motion, accounted for the statistical significance. The statistical power obtained was 88% for the accuracy assessment and 99% or higher for the duration measurement. Conclusions: The body-mounted robotic instrument guide allowed the probe to be positioned during image-guided cryotherapy of renal cancer in fewer attempts and in less time than the free-hand approach. The accuracy of cryotherapy probe placement was better with the robotic instrument guide than without it when no organ motion was present; the robotic and free-hand approaches became comparable when organ motion was present. PMID:26843245

  5. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    PubMed Central

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration and kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot is evaluated inside a 3-Tesla MRI scanner using standard imaging sequences: the signal-to-noise ratio (SNR) loss is limited to 15%, and the presence and motion of the robot cause no observable image interference. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root-mean-square (RMS) error in 3D Euclidean distance, based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962
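    The 0.87 mm figure above is a root-mean-square (RMS) of 3D Euclidean tip-to-target distances. A minimal sketch of how such a statistic is computed from segmented needle-tip and planned target coordinates (the coordinates below are illustrative, not data from the study):

```python
import numpy as np

def rms_euclidean_error(tips, targets):
    """Root-mean-square of 3D Euclidean distances between segmented
    needle-tip positions and planned target positions (mm)."""
    tips = np.asarray(tips, dtype=float)
    targets = np.asarray(targets, dtype=float)
    d = np.linalg.norm(tips - targets, axis=1)  # per-placement error
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative coordinates (mm), not data from the study
tips = [[10.2, 5.1, 30.0], [12.0, 4.8, 29.5]]
targets = [[10.0, 5.0, 30.0], [12.0, 5.0, 30.0]]
err = rms_euclidean_error(tips, targets)
```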

  6. Automated dental implantation using image-guided robotics: registration results.

    PubMed

    Sun, Xiaoyan; McKenzie, Frederic D; Bawab, Sebastian; Li, Jiang; Yoon, Yongki; Huang, Jen-K

    2011-09-01

    One of the most important factors affecting the outcome of dental implantation is the accurate insertion of the implant into the patient's jaw bone, which requires a high degree of anatomical accuracy. With the accuracy and stability of robots, image-guided robotics is expected to provide more reliable and successful outcomes for dental implantation. Here, we propose the use of a robot for drilling the implant site in preparation for the insertion of the implant. An image-guided robotic system for automated dental implantation is described in this paper. Patient-specific 3D models are reconstructed from preoperative cone-beam CT images, and implantation planning is performed with these virtual models. A two-step registration procedure is applied to transform the preoperative plan of the implant insertion into intra-operative operations of the robot with the help of a Coordinate Measurement Machine (CMM). Experiments are carried out with a phantom that is generated from the patient-specific 3D model. Fiducial Registration Error (FRE) and Target Registration Error (TRE) values are calculated to evaluate the accuracy of the registration procedure. FRE values are less than 0.30 mm. Final TRE values after the two-step registration are 1.42 ± 0.70 mm (N = 5). The registration results of an automated dental implantation system using image-guided robotics are reported in this paper. Phantom experiments show that the use of a robot in dental implantation is feasible and that the system accuracy is comparable to other similar systems for dental implantation.
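    FRE and TRE are standard point-based registration metrics: a rigid transform is fitted to corresponding fiducials in the two coordinate frames, and residual distances are measured on the fiducials themselves (FRE) or on held-out target points (TRE). A minimal sketch using a least-squares rigid (Kabsch) fit; the coordinates are synthetic, not from the paper:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): returns R, t with dst ~ R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def rms_after_registration(R, t, src, dst):
    """RMS residual distance after applying the fit (FRE on fiducials, TRE on targets)."""
    pred = (R @ np.asarray(src, float).T).T + t
    return float(np.sqrt(np.mean(np.sum((pred - np.asarray(dst, float)) ** 2, axis=1))))

# Synthetic fiducials: image-space points mapped by a known rotation + translation
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
fid_img = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]])
fid_robot = fid_img @ R_true.T + t_true
R, t = rigid_fit(fid_img, fid_robot)
fre = rms_after_registration(R, t, fid_img, fid_robot)  # ~0 on noise-free data
```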

  7. Image-guided robotic surgery.

    PubMed

    Marescaux, Jacques; Soler, Luc

    2004-06-01

    Medical image processing leads to an improvement in patient care by guiding the surgical gesture. Three-dimensional models of patients generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning and surgical simulation, offering the surgeon the opportunity to rehearse the surgical gesture before actually performing it. These two preoperative steps can be used intra-operatively thanks to the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a see-through view of the patient and can also guide the surgeon through real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiologic movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.

  8. Use of an image-guided robotic radiosurgery system for the treatment of canine nonlymphomatous nasal tumors.

    PubMed

    Glasser, Seth A; Charney, Sarah; Dervisis, Nikolaos G; Witten, Matthew R; Ettinger, Susan; Berg, Jason; Joseph, Richard

    2014-01-01

    An image-guided robotic stereotactic radiosurgery (SRS) system can be used to deliver curative-intent radiation in either single fraction or hypofractionated doses. Medical records for 19 dogs with nonlymphomatous nasal tumors treated with hypofractionated image-guided robotic stereotactic body radiotherapy (SBRT), either with or without adjunctive treatment, were retrospectively analyzed for survival and prognostic factors. Median survival time (MST) was evaluated using Kaplan-Meier survival curves. Age, breed, tumor type, stage, tumor size, prescribed radiation dose, and heterogeneity index were analyzed for prognostic significance. Dogs were treated with three consecutive-day, 8-12 gray (Gy) fractions of image-guided robotic SBRT. Overall MST was 399 days. No significant prognostic factors were identified. Acute side effects were rare and mild. Late side effects included one dog with an oronasal fistula and six dogs with seizures. In three of six dogs, seizures were a presenting complaint prior to SBRT. The cause of seizures in the remaining three dogs could not be definitively determined due to lack of follow-up computed tomography (CT) imaging. The seizures could have been related to either progression of disease or late radiation effect. Results indicate that image-guided robotic SBRT, either with or without adjunctive therapy, for canine nonlymphomatous nasal tumors provides comparable survival times (STs) to daily fractionated megavoltage radiation with fewer required fractions and fewer acute side effects.
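    The median survival time is read off a Kaplan-Meier curve as the first time the survival estimate falls to 0.5 or below. A compact sketch of that calculation (the cohort below is toy data, not the study's 19 dogs):

```python
def km_median_survival(times, events):
    """Kaplan-Meier median: smallest time t where S(t) <= 0.5, or None.

    times  -- follow-up time for each subject (e.g. days)
    events -- True if the subject died at that time, False if censored
    """
    s, at_risk = 1.0, len(times)
    for t in sorted(set(times)):
        deaths = sum(1 for tt, ev in zip(times, events) if tt == t and ev)
        leaving = sum(1 for tt in times if tt == t)  # deaths + censored at t
        if deaths:
            s *= 1.0 - deaths / at_risk              # product-limit step
            if s <= 0.5:
                return t
        at_risk -= leaving
    return None  # curve never reached 0.5 (heavy censoring)

# Toy cohort: deaths at 120, 399, and 610 days, one dog censored at 200 days
mst = km_median_survival([120, 200, 399, 610], [True, False, True, True])
```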

  9. Toward image guided robotic surgery: system validation.

    PubMed

    Herrell, Stanley D; Kwartowitz, David Morgan; Milhoua, Paul M; Galloway, Robert L

    2009-02-01

    Navigation for current robotic assisted surgical techniques is primarily accomplished through a stereo pair of laparoscopic camera images. These images provide standard optical visualization of the surface but no subsurface information. Image guidance methods allow the visualization of subsurface information relative to the current position of tracked tools. A robotic image guided surgical system was designed and implemented based on our previous laboratory studies. A series of experiments using tissue-mimicking phantoms with injected target lesions was performed. The surgeon was asked to resect "tumor" tissue with and without the augmentation of image guidance using the da Vinci robotic surgical system. Resections were performed and compared to an ideal resection based on the radius of the tumor measured from preoperative computerized tomography. A quantity called the resection ratio (the volume of resected tissue relative to the ideal resection) was calculated for each of 13 trials and compared. The mean +/- SD resection ratio of procedures augmented with image guidance was smaller than that of procedures without image guidance (3.26 +/- 1.38 vs 9.01 +/- 1.81, p <0.01). Additionally, procedures using image guidance were shorter (average 8 vs 13 minutes). It was demonstrated that there is a benefit from the augmentation of laparoscopic video with updated preoperative images. Incorporating our image guided system into the da Vinci robotic system improved overall tissue resection, as measured by our metric. Adding image guidance to the da Vinci robotic surgery system may result in improvements such as decreased removal of benign tissue while maintaining an appropriate surgical margin.
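    The resection ratio normalizes the excised volume by an ideal resection derived from the tumor radius on preoperative CT. A small sketch, under the assumption that the ideal resection is a sphere of the measured radius plus an optional margin (the paper's exact definition may differ, and all numbers here are hypothetical):

```python
import math

def ideal_resection_volume(tumor_radius_mm, margin_mm=0.0):
    """Volume of a sphere covering the tumor plus a surgical margin (mm^3)."""
    r = tumor_radius_mm + margin_mm
    return 4.0 / 3.0 * math.pi * r ** 3

def resection_ratio(resected_volume_mm3, tumor_radius_mm, margin_mm=0.0):
    """Resected tissue volume relative to the ideal resection (1.0 = ideal)."""
    return resected_volume_mm3 / ideal_resection_volume(tumor_radius_mm, margin_mm)

# Hypothetical numbers: 10 mm tumor radius, 13,000 mm^3 actually resected
ratio = resection_ratio(13000.0, 10.0)
```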

  10. [Image guided and robotic treatment--the advance of cybernetics in clinical medicine].

    PubMed

    Fosse, E; Elle, O J; Samset, E; Johansen, M; Røtnes, J S; Tønnessen, T I; Edwin, B

    2000-01-10

    The introduction of advanced technology in hospitals has shifted treatment practice towards more image-guided and minimally invasive procedures. Modern computer and communication technology opens the way for robot-aided and pre-programmed intervention. Several robotic systems are in clinical use today, both in microsurgery and in major cardiac and orthopedic operations. As this trend develops, professions new to this context, such as physicists, mathematicians, and cybernetics engineers, will become increasingly important in the treatment of patients.

  11. A networked modular hardware and software system for MRI-guided robotic prostate interventions

    NASA Astrophysics Data System (ADS)

    Su, Hao; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Cole, Gregory; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare; Fischer, Gregory S.

    2012-02-01

    Magnetic resonance imaging (MRI) provides high-resolution multi-parametric imaging, excellent soft tissue contrast, and interactive image updates, making it an ideal modality for diagnosing prostate cancer and guiding surgical tools. Although a substantial armamentarium of apparatuses and systems has been developed to assist surgical diagnosis and therapy for MRI-guided procedures over the last decade, a unified method to develop high-fidelity robotic systems, in terms of accuracy, dynamic performance, size, robustness, and modularity, that work inside a closed-bore MRI scanner still remains a challenge. In this work, we develop and evaluate an integrated modular hardware and software system to support the surgical workflow of intra-operative MRI, with percutaneous prostate intervention as an illustrative case. Specifically, the distinct apparatuses and methods include: 1) a robot controller system for precision closed-loop control of piezoelectric motors, 2) robot control interface software that connects the 3D Slicer navigation software and the robot controller to exchange robot commands and coordinates using the OpenIGTLink open network communication protocol, and 3) MRI scan plane alignment to the planned path and imaging of the needle as it is inserted into the target location. A preliminary experiment with an ex-vivo phantom validates the system workflow and MRI compatibility, and shows that the robotic system has better than 0.01 mm positioning accuracy.

  12. The utility of indocyanine green fluorescence imaging during robotic adrenalectomy.

    PubMed

    Colvin, Jennifer; Zaidi, Nisar; Berber, Eren

    2016-08-01

    Indocyanine green (ICG) has been used for medical imaging since the 1950s, but has more recently become available for use in minimally invasive surgery owing to improvements in technology. This study investigates the use of ICG fluorescence to guide an accurate dissection by delineating the borders of adrenal tumors during robotic adrenalectomy (RA). This prospective study compared the conventional robotic view with ICG fluorescence imaging in 40 consecutive patients undergoing RA. Independent, non-blinded observers assessed how accurately ICG fluorescence delineated the borders of adrenal tumors compared to the conventional robotic view. A total of 40 patients underwent 43 adrenalectomies. ICG imaging was superior, equivalent, or inferior to the conventional robotic view in 46.5% (n = 20), 25.6% (n = 11), and 27.9% (n = 12) of the procedures, respectively. On univariate analysis, the only parameter that predicted the superiority of ICG imaging over the conventional robotic view was the tumor type, with adrenocortical tumors being delineated more accurately on ICG imaging. This study demonstrates the utility of ICG to guide the dissection and removal of adrenal tumors during RA. A simple, reproducible method is reported, with a detailed description of its utility based on tumor type, approach, and side. J. Surg. Oncol. 2016;114:153-156. © 2016 Wiley Periodicals, Inc.

  13. Magnetic resonance-compatible robotic and mechatronics systems for image-guided interventions and rehabilitation: a review study.

    PubMed

    Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos

    2007-01-01

    The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D) MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.

  14. Does Needle Rotation Improve Lesion Targeting?

    PubMed Central

    Badaan, Shadi; Petrisor, Doru; Kim, Chunwoo; Mozer, Pierre; Mazilu, Dumitru; Gruionu, Lucian; Patriciu, Alex; Cleary, Kevin; Stoianovici, Dan

    2011-01-01

    Background: Image-guided robots are manipulators that operate based on medical images. Perhaps the most common class of image-guided robot is the robot for needle interventions. Typically, these robots actively position and/or orient a needle guide, but needle insertion is still done by the physician. While this arrangement may have safety advantages and keeps the physician in control of needle insertion, actuated needle drivers can incorporate other useful features. Methods: We first present a new needle driver that can actively insert and rotate a needle. With this device we investigate the use of needle rotation in controlled in-vitro experiments performed with the specially developed revolving needle driver. Results: These experiments show that needle rotation can improve targeting and may reduce errors by as much as 70%. Conclusion: The new needle driver provides a unique kinematic architecture that enables insertion with a compact mechanism. Perhaps the most interesting conclusion of the study is that lesions of soft tissue organs may not be perfectly targeted with a needle without using special techniques, either manually or with a robotic device. The results of this study show that needle rotation may be an effective method of reducing targeting errors. PMID:21360796

  15. Environmental Recognition and Guidance Control for Autonomous Vehicles using Dual Vision Sensor and Applications

    NASA Astrophysics Data System (ADS)

    Moriwaki, Katsumi; Koike, Issei; Sano, Tsuyoshi; Fukunaga, Tetsuya; Tanaka, Katsuyuki

    We propose a new method of environmental recognition around an autonomous vehicle using a dual vision sensor and navigation control based on binocular images. As an application of these techniques, we aim to develop a guide robot that can play the role of a guide dog as an aid to people such as the visually impaired or the elderly. This paper presents a recognition algorithm that finds the line of a series of Braille blocks and the boundary line between a sidewalk and a roadway where a difference in level exists, from binocular images obtained by a pair of parallel-arrayed CCD cameras. This paper also presents a tracking algorithm with which the guide robot traces along a series of Braille blocks and avoids obstacles and unsafe areas in the path of the person accompanying the guide robot.

  16. Design, development, and evaluation of an MRI-guided SMA spring-actuated neurosurgical robot

    PubMed Central

    Ho, Mingyen; Kim, Yeongjin; Cheng, Shing Shin; Gullapalli, Rao; Desai, Jaydev P.

    2015-01-01

    In this paper, we present our work on the development of a magnetic resonance imaging (MRI)-compatible Minimally Invasive Neurosurgical Intracranial Robot (MINIR) comprising shape memory alloy (SMA) spring actuators and a tendon-sheath mechanism. We present detailed modeling and analysis along with experimental results of the characterization of the SMA spring actuators. Furthermore, to demonstrate image-feedback control, we used images obtained from a camera to control the motion of the robot, so that continuous MR images could eventually be used to control the robot motion. Since the image tracking algorithm may fail in some situations, we also developed a temperature feedback control scheme that serves as a backup controller for the robot. Experimental results demonstrated that both image feedback and temperature feedback can be used to control the motion of MINIR. A series of MRI compatibility tests were performed on the robot, and the results demonstrated that the robot is MRI compatible and that no significant visual image distortion was observed in the MR images during robot operation. PMID:26622075

  17. Robotically assisted small animal MRI-guided mouse biopsy

    NASA Astrophysics Data System (ADS)

    Wilson, Emmanuel; Chiodo, Chris; Wong, Kenneth H.; Fricke, Stanley; Jung, Mira; Cleary, Kevin

    2010-02-01

    Small mammals, namely mice and rats, play an important role in biomedical research. Imaging, in conjunction with accurate therapeutic agent delivery, has tremendous value in small animal research since it enables serial, non-destructive testing of animals and facilitates the study of biomarkers of disease progression. The small size of organs in mice lends some difficulty to accurate biopsies and therapeutic agent delivery. Image guidance with the use of robotic devices should enable more accurate and repeatable targeting for biopsies and delivery of therapeutic agents, as well as the ability to acquire tissue from a pre-specified location based on image anatomy. This paper presents our work in integrating a robotic needle guide device, specialized stereotaxic mouse holder, and magnetic resonance imaging, with a long-term goal of performing accurate and repeatable targeting in anesthetized mice studies.

  18. Development of a Meso-Scale SMA-Based Torsion Actuator for Image-Guided Procedures.

    PubMed

    Sheng, Jun; Gandhi, Dheeraj; Gullapalli, Rao; Simard, J Marc; Desai, Jaydev P

    2017-02-01

    This paper presents the design, modeling, and control of a meso-scale torsion actuator based on shape memory alloy (SMA) for image-guided surgical procedures. Developing a miniature torsion actuator is challenging, but it opens the possibility of significantly enhancing the robot agility and maneuverability. The proposed torsion actuator is bi-directionally actuated by a pair of antagonistic SMA torsion springs through alternate Joule heating and natural cooling. The torsion actuator is integrated into a surgical robot prototype to demonstrate its working performance in the humid environment under C-Arm CT image guidance.

  19. Development of a Meso-Scale SMA-Based Torsion Actuator for Image-Guided Procedures

    PubMed Central

    Sheng, Jun; Gandhi, Dheeraj; Gullapalli, Rao; Simard, J. Marc; Desai, Jaydev P.

    2016-01-01

    This paper presents the design, modeling, and control of a meso-scale torsion actuator based on shape memory alloy (SMA) for image-guided surgical procedures. Developing a miniature torsion actuator is challenging, but it opens the possibility of significantly enhancing the robot agility and maneuverability. The proposed torsion actuator is bi-directionally actuated by a pair of antagonistic SMA torsion springs through alternate Joule heating and natural cooling. The torsion actuator is integrated into a surgical robot prototype to demonstrate its working performance in the humid environment under C-Arm CT image guidance. PMID:28210189

  20. Development of a Pneumatic Robot for MRI-guided Transperineal Prostate Biopsy and Brachytherapy: New Approaches

    PubMed Central

    Song, Sang-Eun; Cho, Nathan B.; Fischer, Gregory; Hata, Nobuhito; Tempany, Clare; Fichtinger, Gabor; Iordachita, Iulian

    2011-01-01

    Magnetic resonance imaging (MRI)-guided prostate biopsy and brachytherapy have been introduced to enhance cancer detection and treatment. For accurate needle positioning, a number of robotic assistants have been developed. However, problems exist due to the strong magnetic field and limited workspace. Pneumatically actuated robots have shown minimal interference with the MR environment, but the confined workspace limits optimal robot design, and controllability is often poor. To overcome this problem, a simple external damping mechanism using timing belts was developed, and a 1-DOF mechanism test indicated sufficient positioning accuracy. Based on the damping mechanism and a modular system design approach, a new workspace-optimized 4-DOF parallel robot was developed for MRI-guided prostate biopsy and brachytherapy. A preliminary evaluation of the robot was conducted using a previously developed pneumatic controller, and satisfactory results were obtained. PMID:21399734

  21. Advancements in Magnetic Resonance–Guided Robotic Interventions in the Prostate

    PubMed Central

    Macura, Katarzyna J.; Stoianovici, Dan

    2011-01-01

    Magnetic resonance imaging (MRI) provides more detailed anatomical images of the prostate than transrectal ultrasound imaging. Therefore, for diagnostic or therapeutic intervention in the prostate gland, MRI guidance offers the possibility of more precise targeting, which may be crucial to the success of prostate interventions. However, access within the scanner is limited for manual instrument handling, and the MR environment is the most demanding among all imaging modalities with respect to the instrumentation used. A solution to this problem is the use of MR-compatible robots purposely designed to operate within the space and environmental restrictions inside the MR scanner, allowing real-time interventions. Building an MRI-compatible robot is a very challenging engineering task because, in addition to the material restrictions that MRI instruments have, the robot requires actuators and sensors that limit the types of energy that can be used. Several important design problems have to be overcome before a successful MR-compatible robot application can be built. A number of MR-compatible robots, ranging from simple manipulators to fully automated systems, have been developed, proposing ingenious solutions to the design challenge. Several systems have already been tested clinically for prostate biopsy and brachytherapy. As the technology matures, precise image guidance for prostate interventions performed or assisted by specialized MR-compatible robotic devices may provide a uniquely accurate solution for guiding the intervention directly based on MR findings and feedback. Such an instrument would become a valuable clinical tool for biopsies directly targeting imaged tumor foci and for delivering tumor-centered focal therapy. PMID:19512852

  2. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the Robot Operating System (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated into ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between ROS-based devices and medical image computing platforms. Performance tests demonstrated that the bridge could successfully stream transforms, strings, points, and images at 30 fps in both directions. The data transfer latency was <1.2 ms for transforms, strings, and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enables cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of the advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
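    The kind of message framing the OpenIGTLink protocol uses for streaming transforms can be illustrated with a short sketch. This is a simplified encoder following the published OpenIGTLink v1 layout (58-byte big-endian header, 48-byte TRANSFORM body of 12 float32 values); the 64-bit body CRC is left at zero here, and `pack_igtl_transform` is an illustrative helper name, not part of the ROS-IGTL-Bridge API:

    ```python
    import struct

    def pack_igtl_transform(device_name, matrix3x4, timestamp=0):
        """Pack a 3x4 rigid transform into an OpenIGTLink v1 TRANSFORM message.

        matrix3x4: three rows of [r, r, r, t]. The body stores the rotation
        columns first, then the translation, as big-endian float32; the
        64-bit body CRC is left at zero in this sketch.
        """
        # Body: R11,R21,R31, R12,R22,R32, R13,R23,R33, TX,TY,TZ
        cols = [matrix3x4[r][c] for c in range(3) for r in range(3)]
        trans = [matrix3x4[r][3] for r in range(3)]
        body = struct.pack(">12f", *(cols + trans))
        # Header: version, type name (12 bytes), device name (20 bytes),
        # timestamp, body size, CRC64 -- 58 bytes total, big-endian.
        header = struct.pack(">H12s20sQQQ",
                             1,                          # protocol version
                             b"TRANSFORM",               # message type
                             device_name.encode("ascii"),
                             timestamp,
                             len(body),
                             0)                          # CRC omitted here
        return header + body
    ```

    A real bridge would send such messages over the TCP/IP socket in both directions; the low per-message size (106 bytes for a transform) is consistent with the sub-millisecond transform latency reported above.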

  3. Virtual wall-based haptic-guided teleoperated surgical robotic system for single-port brain tumor removal surgery.

    PubMed

    Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young

    2017-01-01

    This article presents haptic-guided teleoperation for a tumor removal surgical robotic system, the so-called SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even tissue seated deep inside the brain, and to remove it with full maneuverability. For a safe and accurate operation that removes only tumor tissue completely while minimizing damage to normal tissue, virtual wall-based haptic guidance combined with medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and haptic-guided teleoperation using a virtual wall are described in the context of SIROMAN. A series of experiments using a simplified virtual wall was performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory improved by 57% compared to teleoperation without it. Tissue removal performance also improved by 21% (p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
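    A virtual wall of this kind is commonly rendered as a spring-like restoring force once the tool crosses the boundary. The sketch below shows that standard penalty formulation for a planar wall; the plane representation, stiffness value, and function name are illustrative assumptions, not details of the SIROMAN implementation:

    ```python
    def virtual_wall_force(tip, wall_point, wall_normal, stiffness=500.0):
        """Spring-like restoring force for a planar virtual wall.

        tip, wall_point, wall_normal: 3-vectors, with wall_normal a unit
        vector pointing into the allowed workspace. Returns zero force while
        the tool tip stays on the allowed side, and a force proportional to
        the penetration depth, directed along the normal, once it crosses.
        """
        # Signed distance of the tip from the wall plane.
        d = sum((tip[i] - wall_point[i]) * wall_normal[i] for i in range(3))
        if d >= 0.0:          # still inside the allowed region
            return (0.0, 0.0, 0.0)
        penetration = -d      # how far the tip has crossed the wall
        return tuple(stiffness * penetration * wall_normal[i] for i in range(3))
    ```

    Fed back to the master device, this force is what the operator feels as the "wall"; higher stiffness gives a harder boundary at the cost of stability.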

  4. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image; therefore, at a higher level, these assumptions are verified using slower, more reliable methods. The hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  5. Endocavity Ultrasound Probe Manipulators

    PubMed Central

    Stoianovici, Dan; Kim, Chunwoo; Schäfer, Felix; Huang, Chien-Ming; Zuo, Yihe; Petrisor, Doru; Han, Misop

    2014-01-01

    We developed two structurally similar manipulators for medical endocavity ultrasound probes with 3 and 4 degrees of freedom (DoF). These robots allow ultrasound scanning for 3-D imaging and enable robot-assisted image-guided procedures. Both robots use remote center of motion kinematics, characteristic of medical robots. The 4-DoF robot provides unrestricted manipulation of the endocavity probe. With the 3-DoF robot the insertion motion of the probe must be adjusted manually, but the device is simpler and may also be used to manipulate external-body probes. The robots enabled a novel surgical approach of using intraoperative image-based navigation during robot-assisted laparoscopic prostatectomy (RALP), performed with concurrent use of two robotic systems (Tandem, T-RALP). Thus far, a clinical trial evaluating safety and feasibility has been performed successfully on 46 patients. This paper describes the architecture and design of the robots, the two prototypes, control features related to safety, preclinical experiments, and the T-RALP procedure. PMID:24795525

  6. Assistive technology for ultrasound-guided central venous catheter placement.

    PubMed

    Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana

    2018-01-01

    This study evaluated existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined, pointing to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projection of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials validating the effectiveness of these devices.

  7. Image-Guided Surgical Robotic System for Percutaneous Reduction of Joint Fractures.

    PubMed

    Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2017-11-01

    Complex joint fractures often require an open surgical procedure, which is associated with extensive soft tissue damage and longer hospitalization and rehabilitation times. Percutaneous techniques can potentially mitigate these risks, but their application to joint fractures is limited by the current sub-optimal 2D intra-operative imaging (fluoroscopy) and by the high forces involved in fragment manipulation (due to the presence of soft tissue, e.g., muscles), which may result in fracture malreduction. Integration of robotic assistance and 3D image guidance can potentially overcome these issues. The authors propose an image-guided surgical robotic system for the percutaneous treatment of knee joint fractures: the robot-assisted fracture surgery (RAFS) system. It allows simultaneous manipulation of two bone fragments, provides a safer robot-bone fixation system, and includes a traction-performing robotic manipulator. The system has led to a novel clinical workflow and has been tested both in the laboratory and in clinically relevant cadaveric trials. The RAFS system was tested on 9 cadaver specimens and was able to reduce 7 out of 9 distal femur fractures (T- and Y-shape 33-C1) with acceptable accuracy (≈1 mm, ≈5°), demonstrating its applicability to fixing knee joint fractures. This study paves the way for novel technologies for the percutaneous treatment of complex fractures, including hip, ankle, and shoulder, and thus represents a step toward minimally invasive fracture surgery.

  8. Accurate three-dimensional virtual reconstruction of surgical field using calibrated trajectories of an image-guided medical robot

    PubMed Central

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2014-01-01

    Brain tumor margin removal is challenging because diseased tissue is often visually indistinguishable from healthy tissue. Leaving residual tumor leads to decreased survival, and removing normal tissue causes life-long neurological deficits. Thus, a surgical robotics system with a high degree of dexterity, accurate navigation, and highly precise resection is an ideal candidate for image-guided removal of fluorescently labeled brain tumor cells. For imaging, we developed a scanning fiber endoscope (SFE) which acquires concurrent reflectance and fluorescence wide-field images at high resolution. This miniature flexible endoscope was affixed to the arm of a RAVEN II surgical robot, providing programmable motion with feedback control using stereo-pair surveillance cameras. To verify the accuracy of the three-dimensional (3-D) reconstructed surgical field, a multimodal physical-sized model of a debulked brain tumor was used to obtain the 3-D locations of residual tumor for robotic path planning to remove fluorescent cells. Such reconstruction is repeated intraoperatively during margin clean-up, so algorithm efficiency and accuracy are important to the robotically assisted surgery. Experimental results indicate that the time for creating this 3-D surface can be reduced to one-third by using known trajectories of the robot arm, and that the error of the reconstructed phantom is within 0.67 mm on average compared to the model design. PMID:26158071

  9. Robot-assisted real-time magnetic resonance image-guided transcatheter aortic valve replacement.

    PubMed

    Miller, Justin G; Li, Ming; Mazilu, Dumitru; Hunt, Tim; Horvath, Keith A

    2016-05-01

    Real-time magnetic resonance imaging (rtMRI)-guided transcatheter aortic valve replacement (TAVR) offers improved visualization, real-time imaging, and pinpoint accuracy with device delivery. Unfortunately, performing a TAVR in an MRI scanner can be a difficult task owing to limited space and an awkward working environment. Our solution was to design an MRI-compatible robot-assisted device to insert and deploy a self-expanding valve from a remote computer console. We present our preliminary results in a swine model. We used an MRI-compatible robotic arm and developed a valve delivery module. A 12-mm trocar was inserted in the apex of the heart via a subxiphoid incision. The delivery device and nitinol stented prosthesis were mounted on the robot. Two continuous real-time imaging planes provided a virtual real-time 3-dimensional reconstruction. The valve was deployed remotely by the surgeon via a graphical user interface. In this acute nonsurvival study, 8 swine underwent robot-assisted rtMRI TAVR for evaluation of feasibility. Device deployment took a mean of 61 ± 5 seconds. Postdeployment necropsy was performed to confirm correlation between imaging and actual valve positions. These results demonstrate the feasibility of robot-assisted TAVR using rtMRI guidance. This approach may eliminate some of the challenges of performing a procedure while working inside an MRI scanner and may improve the success of TAVR. It provides superior visualization during the insertion process, pinpoint accuracy of deployment, and, potentially, communication between the imaging device and the robotic module to prevent incorrect or misaligned deployment. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  10. Positional calibration of an ultrasound image-guided robotic breast biopsy system.

    PubMed

    Nelson, Thomas R; Tran, Amy; Fakourfar, Hourieh; Nebeker, Jakob

    2012-03-01

    Precision biopsy of small lesions is essential in providing high-quality patient diagnosis and management. Localization depends on high-quality imaging. We have developed a dedicated, fully automatic volume breast ultrasound (US) imaging system for early breast cancer detection. This work focuses on development of an image-guided robotic biopsy system that is integrated with the volume breast US system for performing minimally invasive breast biopsies. The objective of this work was to assess the positional accuracy of the robotic system for breast biopsy. We have adapted a compact robotic arm for performing breast biopsy. The arm incorporates a force torque sensor and is modified to accommodate breast biopsy sampling needles mounted on the robot end effector. Volume breast US images are used as input to a targeting algorithm that provides the physician with control of biopsy device guidance and trajectory optimization. In this work, the positional accuracy was evaluated using (1) a light-emitting diode (LED) mounted on the end effector and (2) a LED mounted on the end of a biopsy needle, each of which was imaged for each robot controller position as part of mapping the positional accuracy throughout a volume that would contain the breast. We measured the error in each location and the cumulative error. Robotic device performance over the volume provided mean accuracy ± SD of 0.76 ± 0.13 mm (end effector) and 0.55 ± 0.13 mm (needle sample location), sufficient for a targeting accuracy within ±1 mm, which is suitable for clinical use. Depth positioning error also was small: 0.38 ± 0.03 mm. Reproducibility was excellent with less than 0.5% variation. Overall accuracy and reproducibility of the compact robotic device were excellent, well within clinical biopsy performance requirements. Volume breast US data provide high-quality input to a biopsy sampling algorithm under physician control. Robotic devices may provide more precise device placement, assisting physicians with biopsy procedures.

  11. [Principles of MR-guided interventions, surgery, navigation, and robotics].

    PubMed

    Melzer, A

    2010-08-01

    The application of magnetic resonance imaging (MRI) as an imaging technique in interventional and surgical procedures provides a new dimension of precise, soft tissue-oriented procedures without exposure to ionizing radiation or to nephrotoxic, allergenic, iodine-containing contrast agents. The technical capabilities of MRI in combination with interventional devices and systems, navigation, and robotics are discussed.

  12. Pneumatically Operated MRI-Compatible Needle Placement Robot for Prostate Interventions

    PubMed Central

    Fischer, Gregory S.; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Mewes, Philip W.; Tempany, Clare M.; Hata, Nobuhiko; Fichtinger, Gabor

    2011-01-01

    Magnetic Resonance Imaging (MRI) has the potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. The strong magnetic field prevents the use of conventional mechatronics, and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot performs needle insertion under real-time 3T MR image guidance; workspace requirements, MR compatibility, and workflow have been evaluated on phantoms. The paper explains the robot mechanism and controller design and presents results of a preliminary evaluation of the system. PMID:21686038

  13. Pneumatically Operated MRI-Compatible Needle Placement Robot for Prostate Interventions.

    PubMed

    Fischer, Gregory S; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Mewes, Philip W; Tempany, Clare M; Hata, Nobuhiko; Fichtinger, Gabor

    2008-06-13

    Magnetic Resonance Imaging (MRI) has the potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. The strong magnetic field prevents the use of conventional mechatronics, and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot performs needle insertion under real-time 3T MR image guidance; workspace requirements, MR compatibility, and workflow have been evaluated on phantoms. The paper explains the robot mechanism and controller design and presents results of a preliminary evaluation of the system.

  14. Development and preliminary evaluation of an ultrasonic motor actuated needle guide for 3T MRI-guided transperineal prostate interventions

    NASA Astrophysics Data System (ADS)

    Song, Sang-Eun; Tokuda, Junichi; Tuncali, Kemal; Tempany, Clare; Hata, Nobuhiko

    2012-02-01

    Image-guided prostate interventions have been accelerated by Magnetic Resonance Imaging (MRI) and robotic technologies in the past few years. However, transrectal ultrasound (TRUS)-guided procedures still constitute the vast majority in clinical practice, due to the engineering and clinical complexity of MRI-guided robotic interventions. Consequently, the great advantages and increasing availability of MRI have not been utilized to their full capacity in the clinic. To let patients benefit from the advantages of MRI, we developed an MRI-compatible motorized needle guide device, the "Smart Template," which resembles a conventional prostate template and performs MRI-guided prostate interventions with minimal changes to the clinical procedure. The requirements and specifications of the Smart Template were identified from our latest MRI-guided intervention system, which has been used clinically in manual mode for prostate biopsy. The Smart Template consists of vertical and horizontal crossbars that are driven by two ultrasonic motors via timing-belt and miter-gear transmissions. Navigation software that controls the crossbar positions to provide needle insertion positions was also developed. The software can be operated independently or interactively with 3D Slicer, an open-source navigation software package developed for prostate intervention. As a preliminary evaluation, MRI distortion and SNR tests were conducted. Significant MRI distortion was found close to the threaded brass alloy components of the template; however, the affected volume lay outside the clinical region of interest. SNR values over routine MRI scan sequences for prostate biopsy indicated insignificant image degradation in the presence of the robotic system and during actuation of the ultrasonic motors.

  15. Robotic active positioning for magnetic resonance-guided high-intensity focused ultrasound

    NASA Astrophysics Data System (ADS)

    Xiao, Xu; Huang, Zhihong; Volovick, Alexander; Melzer, Andreas

    2012-11-01

    Magnetic resonance (MR)-guided high-intensity focused ultrasound (HIFU) is a noninvasive method for producing thermal necrosis and cavitation at the position of tumors with high accuracy. Because the typical size of the HIFU focus is much smaller than the targeted tumor or other tissues, multiple sonications and focus repositioning become necessary for HIFU treatment. To reach a much wider range, manual repositioning or MR-compatible mechanical actuators can be used. Repositioning is a time-consuming procedure because it requires a series of MR images to detect the transducer and markers pre-placed on the mechanical devices. We combined an active tracking technique with the MR-guided HIFU system. In this work, the robotic system used is the MR-compatible InnoMotion robot (IBSMM, Engineering spol. s r.o. / Ltd, Czech Republic), originally designed for MR-guided needle biopsy. The precision and positioning speed of the combined robotic HIFU system are evaluated in this study. Compared to existing MR-guided HIFU systems, the combined robotic system with active tracking offers the potential for HIFU treatment over a larger spatial range and at a faster speed.

  16. Precision instrument placement using a 4-DOF robot with integrated fiducials for minimally invasive interventions

    NASA Astrophysics Data System (ADS)

    Stenzel, Roland; Lin, Ralph; Cheng, Peng; Kronreif, Gernot; Kornfeld, Martin; Lindisch, David; Wood, Bradford J.; Viswanathan, Anand; Cleary, Kevin

    2007-03-01

    Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited view of the interventional field and the exact position of surgical instruments. We present an image-guided platform for precision placement of surgical instruments based upon a small four degree-of-freedom robot (B-RobII; ARC Seibersdorf Research GmbH, Vienna, Austria). This platform includes a custom instrument guide with an integrated spiral fiducial pattern as the robot's end-effector, and it uses intra-operative computed tomography (CT) to register the robot to the patient directly before the intervention. The physician can then use a graphical user interface (GUI) to select a path for percutaneous access, and the robot will automatically align the instrument guide along this path. Potential anatomical targets include the liver, kidney, prostate, and spine. This paper describes the robotic platform, workflow, software, and algorithms used by the system. To demonstrate the algorithmic accuracy and suitability of the custom instrument guide, we also present results from experiments as well as estimates of the maximum error between target and instrument tip.

  17. Magnetic resonance imaging properties of multimodality anthropomorphic silicone rubber phantoms for validating surgical robots and image guided therapy systems

    NASA Astrophysics Data System (ADS)

    Cheung, Carling L.; Looi, Thomas; Drake, James; Kim, Peter C. W.

    2012-02-01

    The development of image-guided robotic and mechatronic platforms for medical applications requires a phantom model for initial testing. Finding an appropriate phantom becomes challenging when the targeted patient population is pediatric, particularly infants, neonates, or fetuses. Our group is currently developing a pediatric-sized surgical robot that operates under fused MRI and laparoscopic video guidance. To support this work, we describe a method for designing and manufacturing silicone rubber organ phantoms for testing the robotics and the image fusion system. A surface model of the organ is obtained and converted into a mold that is then rapid-prototyped using a 3D printer. The mold is filled with a solution containing a particular ratio of silicone rubber to slacker additive to achieve a specific set of tactile and imaging characteristics in the phantom. The expected MRI relaxation times of different ratios of silicone rubber to slacker additive are quantified experimentally so that the imaging properties of the phantom can be matched to those of the organ it represents. Samples of silicone rubber and slacker additive mixed in ratios ranging from 1:0 to 1:1.5 were prepared and scanned using inversion recovery and spin echo sequences with varying TI and TE, respectively, in order to fit curves estimating the expected T1 and T2 relaxation times of each ratio. A set of infant-sized abdominal organs was prepared, which were successfully sutured by the robot and imaged using different modalities.
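    The T1 side of such curve fitting uses the standard inversion-recovery magnitude model, |S0·(1 − 2·exp(−TI/T1))|. A minimal sketch is below; the coarse grid search stands in for a proper nonlinear least-squares fit, and the function name and grid are illustrative assumptions, since the paper's actual fitting routine is not described:

    ```python
    import math

    def fit_t1(ti_values, signals, s0=1.0, t1_grid=range(50, 3001)):
        """Estimate T1 (ms) from inversion-recovery magnitude data.

        Model: |S0 * (1 - 2*exp(-TI/T1))|, the standard inversion-recovery
        signal equation. Scans candidate T1 values and returns the one that
        minimizes the sum of squared residuals against the measured signals.
        """
        def residual(t1):
            return sum(
                (abs(s0 * (1.0 - 2.0 * math.exp(-ti / t1))) - s) ** 2
                for ti, s in zip(ti_values, signals))
        return min(t1_grid, key=residual)
    ```

    The T2 fit proceeds analogously with the spin-echo decay model S0·exp(−TE/T2) over the varying TE values.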

  18. A Filtering Approach for Image-Guided Surgery With a Highly Articulated Surgical Snake Robot.

    PubMed

    Tully, Stephen; Choset, Howie

    2016-02-01

    The objective of this paper is to introduce a probabilistic filtering approach to estimate the pose and internal shape of a highly flexible surgical snake robot during minimally invasive surgery. Our approach renders a depiction of the robot that is registered to preoperatively reconstructed organ models to produce a 3-D visualization that can be used for surgical feedback. Our filtering method estimates the robot shape using an extended Kalman filter that fuses magnetic tracker data with kinematic models that define the motion of the robot. Using Lie derivative analysis, we show that this estimation problem is observable; thus, the shape and configuration of the robot can be successfully recovered with a sufficient number of magnetic tracker measurements. We validate this study with benchtop and in vivo image-guidance experiments in which the surgical robot was driven along the epicardial surface of a porcine heart. This paper introduces a filtering approach for shape estimation that can be used for image guidance during minimally invasive surgery. The methods being introduced in this paper enable informative image guidance for highly articulated surgical robots, which benefits the advancement of robotic surgery.
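    The predict/update cycle at the heart of such a Kalman filter can be shown in one dimension. This is a didactic scalar sketch with made-up noise variances, fusing repeated noisy "tracker" readings of a single pose term; the paper's actual extended Kalman filter replaces the scalar state with the full pose/shape vector and a kinematic motion model:

    ```python
    def kalman_1d(measurements, q=0.01, r=0.25, x0=0.0, p0=1.0):
        """Scalar Kalman filter over a sequence of noisy measurements.

        q: process noise variance (how much the true value may drift per
        step); r: measurement noise variance of the tracker. Returns the
        filtered estimate after each measurement.
        """
        x, p, out = x0, p0, []
        for z in measurements:
            p += q                  # predict: uncertainty grows with motion
            k = p / (p + r)         # Kalman gain: trust in this measurement
            x += k * (z - x)        # update estimate toward the measurement
            p *= (1.0 - k)          # posterior variance shrinks
            out.append(x)
        return out
    ```

    With enough measurements the estimate converges on the underlying value, mirroring the paper's observability result that sufficient tracker readings recover the robot's shape.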

  19. Toward Intraoperative Image-Guided Transoral Robotic Surgery

    PubMed Central

    Liu, Wen P.; Reaugamornrat, Sureerat; Deguet, Anton; Sorger, Jonathan M.; Siewerdsen, Jeffrey H.; Richmon, Jeremy; Taylor, Russell H.

    2014-01-01

    This paper presents the development and evaluation of video augmentation on the stereoscopic da Vinci S system with intraoperative image guidance for base-of-tongue tumor resection in transoral robotic surgery (TORS). The proposed workflow for image-guided TORS begins by identifying and segmenting critical oropharyngeal structures (e.g., the tumor and adjacent arteries and nerves) from preoperative computed tomography (CT) and/or magnetic resonance (MR) imaging. These preoperative planning data can be deformably registered to the intraoperative endoscopic view using mobile C-arm cone-beam computed tomography (CBCT) [1, 2]. Augmentation of the TORS endoscopic video to delineate surgical targets and critical structures has the potential to improve navigation, spatial orientation, and confidence in tumor resection. Experiments in animal specimens achieved a statistically significant improvement in target localization error when comparing the proposed image guidance system to simulated current practice. PMID:25525474

  20. MR guided FUS therapy with a Robotic Assistance System

    NASA Astrophysics Data System (ADS)

    Jenne, Jürgen W.; Krafft, Axel J.; Maier, Florian; Rauschenberg, Jaane; Semmler, Wolfhard; Huber, Peter E.; Bock, Michael

    2009-04-01

    Magnetic resonance imaging guided focused ultrasound surgery (MRgFUS) is a highly precise method for ablating tissue non-invasively. To date, there is only one commercial MRgFUS system available, and only a few are at the prototype stage. The objective of this ongoing project is to establish an MRgFUS therapy unit as an add-on for a commercially available robotic assistance system originally designed for percutaneous needle interventions in whole-body MR scanners.

  1. Mechanical Validation of an MRI Compatible Stereotactic Neurosurgery Robot in Preparation for Pre-Clinical Trials.

    PubMed

    Nycz, Christopher J; Gondokaryono, Radian; Carvalho, Paulo; Patel, Nirav; Wartenberg, Marek; Pilitsis, Julie G; Fischer, Gregory S

    2017-09-01

    The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. From our previous work we have added a 2 degree of freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and performed an evaluation of the mechanism's accuracy and its effect on MR image quality. The result of this work is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with accuracies of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting it along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot's presence in the MRI bore, ≤10.3% reduction in SNR from running the robot's motors during a scan, and no visible paramagnetic artifacts.

  2. Operation and force analysis of the guide wire in a minimally invasive vascular interventional surgery robot system

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Wang, Hongbo; Sun, Li; Yu, Hongnian

    2015-03-01

    Developing a robot system for minimally invasive surgery is significant; however, existing minimally invasive surgery robots are not applicable in practical operations due to their limited functionality and weak perception. A novel wire feeder is proposed for minimally invasive vascular interventional surgery. It is used for assisting surgeons in delivering a guide wire, balloon, and stent into a specific lesion location. In contrast to existing wire feeders, the motion methods for delivering and rotating the guide wire in a blood vessel are described, and their mechanical realization is presented. A new resistance force detection method is given in detail: the change in resistance force can help the operator feel a blockage or embolism in front of the guide wire. The driving torque for rotating the guide wire is developed at different positions. Using the CT reconstruction image and extracted vessel paths, the path equation of the blood vessel is obtained. Combined with the shape of the guide wire outside the blood vessel, the full bending equation of the guide wire is obtained; this serves as a risk criterion during the delivery process, making operations safer and man-machine interaction more reliable. In summary, a novel surgical robot for feeding a guide wire is designed, and a risk criterion for the system is given.

  3. Robotic System for MRI-Guided Stereotactic Neurosurgery

    PubMed Central

    Li, Gang; Cole, Gregory A.; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Pilitsis, Julie G.; Fischer, Gregory S.

    2015-01-01

    Stereotaxy is a neurosurgical technique that can take several hours to reach a specific target, typically utilizing a mechanical frame and guided by preoperative imaging. An error in any one of the numerous steps, or a deviation of the target anatomy from the preoperative plan such as brain shift (up to 20 mm), may affect the targeting accuracy and thus the treatment effectiveness. Moreover, because the procedure is typically performed through a small burr hole in the skull that prevents tissue visualization, the intervention is essentially “blind” for the operator, with limited means of intraoperative confirmation, which may reduce accuracy and safety. The presented system is intended to address the clinical needs for enhanced efficiency, accuracy, and safety of image-guided stereotactic neurosurgery for Deep Brain Stimulation (DBS) lead placement. The work describes a magnetic resonance imaging (MRI)-guided, robotically actuated stereotactic neural intervention system for deep brain stimulation procedures, which offers the potential of reducing procedure duration while improving targeting accuracy and enhancing safety. This is achieved through simultaneous robotic manipulation of the instrument and interactively updated in situ MRI guidance that enables visualization of the anatomy and the interventional instrument. During simultaneous actuation and imaging, the system demonstrated less than 15% signal-to-noise ratio (SNR) variation and less than 0.20% geometric distortion artifact, without affecting the usability of the images for visualizing and guiding the procedure. Optical tracking and MRI phantom experiments, performed following the clinical workflow of the prototype system, corroborate the targeting accuracy: a 3-axis root-mean-square error of 1.38 ± 0.45 mm in tip position and 2.03° ± 0.58° in insertion angle. PMID:25376035
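
    The 3-axis root-mean-square targeting error quoted above is a standard accuracy metric. A minimal sketch of how such a figure is computed from paired planned and measured tip positions (the point lists are illustrative, not the study's data):

```python
import numpy as np

def rms_error(targets, reached):
    """Root-mean-square of 3D tip-position errors.

    targets, reached: (N, 3) arrays of planned and measured positions
    in the same units (e.g. mm).
    """
    d = np.linalg.norm(np.asarray(targets, float) - np.asarray(reached, float), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Illustrative use with made-up phantom measurements (mm):
planned = [[10.0, 20.0, 30.0], [12.0, 18.0, 31.0]]
measured = [[10.8, 20.0, 30.6], [12.0, 19.1, 31.0]]
print(f"RMS tip error: {rms_error(planned, measured):.2f} mm")
```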

  4. CT-guided robotically-assisted infiltration of foot and ankle joints.

    PubMed

    Wiewiorski, Martin; Valderrabano, Victor; Kretzschmar, Martin; Rasch, Helmut; Markus, Tanja; Dziergwa, Severine; Kos, Sebastian; Bilecen, Deniz; Jacob, Augustinus Ludwig

    2009-01-01

    It was our aim to describe a CT-guided, robotically-assisted infiltration technique for diagnostic injections in foot and ankle orthopaedics. CT-guided, mechatronically-assisted joint infiltration was performed on 16 patients referred to the orthopaedic department for diagnostic foot and ankle assessment. All interventions were performed using an INNOMOTION assistance device on a multislice CT scanner in an image-guided therapy suite. Successful infiltration was defined as CT localization of contrast media in the target joint. Additionally, pre- and post-interventional VAS pain scores were assessed. All injections (16/16 joints) were technically successful, and contrast media deposit was documented in all targeted joints. Significant relief of pain was noted by all 16 patients (p < 0.01). CT-guided robotically-assisted intervention is an exact, reliable and safe method for diagnostic infiltration of midfoot and hindfoot joints. Its high accuracy and feasibility in a clinical environment make it a viable alternative to the commonly used fluoroscopy-guided procedures.

  5. A MR-conditional High-torque Pneumatic Stepper Motor for MRI-guided and Robot-assisted Intervention

    PubMed Central

    Chen, Yue; Kwok, Ka-Wai; Tse, Zion Tsz Ho

    2015-01-01

    Magnetic resonance imaging allows detailed visualization of pathological and morphological changes in soft tissue, which is attracting increasing attention to MRI-guided intervention; hence, MR-conditional actuation has been widely investigated for the development of image-guided and robot-assisted surgical devices used under MRI. This paper presents a simple design for an MR-conditional stepper motor that provides precise, high-torque actuation without adversely affecting MR image quality. The stepper motor consists of two MR-conditional pneumatic cylinders and the corresponding supporting structures. Alternately pressurizing the cylinders drives the motor in 3.6° steps, with the motor coupled to a planetary gearbox. Experimental studies were conducted to validate its dynamic performance; a maximum output torque of 800 mNm was achieved. Motor accuracy as a function of two independent factors, operating speed and step size, was also investigated. The motor was tested within a Siemens 3T MRI scanner, and the image artifact and signal-to-noise ratio (SNR) were evaluated to study its MRI compliancy. The results show that the presented pneumatic stepper motor caused a 2.35% SNR reduction in MR images, with no observable artifact beyond the motor body itself. The tests also establish a standard for evaluating motor capability for later incorporation into motorized devices used in robot-assisted surgery under MRI. PMID:24957635
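
    The step geometry above translates directly into a commanded step count. A minimal sketch, assuming the 3.6° figure applies at the motor shaft and using a hypothetical gearbox reduction (the abstract does not state the ratio):

```python
STEP_DEG = 3.6      # per-step motor rotation stated in the abstract
GEAR_RATIO = 25     # hypothetical planetary gearbox reduction (not from the paper)

def steps_for(output_deg, ratio=GEAR_RATIO, step=STEP_DEG):
    """Full motor steps commanded for a desired output-shaft rotation.

    The gearbox divides each motor step, so the output resolution is
    step/ratio degrees per step.
    """
    return round(output_deg * ratio / step)

print(steps_for(90))  # steps for a 90° output-shaft rotation
```

With this (assumed) 25:1 reduction, the angular resolution at the output shaft would be 3.6°/25 = 0.144° per step, illustrating why a gearbox is paired with a coarse pneumatic stepper.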

  6. CT fluoroscopy-guided robotically-assisted lung biopsy

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Fichtinger, Gabor; Taylor, Russell H.; Banovac, Filip; Cleary, Kevin

    2006-03-01

    Lung biopsy is a common interventional radiology procedure. One of the difficulties in performing the lung biopsy is that lesions move with respiration. This paper presents a new robotically assisted lung biopsy system for CT fluoroscopy that can automatically compensate for the respiratory motion during the intervention. The system consists of a needle placement robot to hold the needle on the CT scan plane, a radiolucent Z-frame for registration of the CT and robot coordinate systems, and a frame grabber to obtain the CT fluoroscopy image in real-time. The CT fluoroscopy images are used to noninvasively track the motion of a pulmonary lesion in real-time. The position of the lesion in the images is automatically determined by the image processing software and the motion of the robot is controlled to compensate for the lesion motion. The system was validated under CT fluoroscopy using a respiratory motion simulator. A swine study was also done to show the feasibility of the technique in a respiring animal.
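
    The abstract does not detail the image-processing step that localizes the lesion. One common approach to this kind of real-time target tracking is template matching by normalized cross-correlation over a small search window around the previous position; a pure-NumPy sketch of that idea (illustrative only, not the paper's software):

```python
import numpy as np

def track(frame, template, prev_rc, search=10):
    """Locate a template in a frame by normalized cross-correlation.

    Searches only a window of +/- `search` pixels around the previous
    (row, col) position, mirroring the small inter-frame motion seen
    in fluoroscopy. Returns the best-matching top-left (row, col).
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_rc = -np.inf, prev_rc
    r0, c0 = prev_rc
    for r in range(max(0, r0 - search), min(frame.shape[0] - th, r0 + search) + 1):
        for c in range(max(0, c0 - search), min(frame.shape[1] - tw, c0 + search) + 1):
            patch = frame[r:r + th, c:c + tw]
            pz = patch - patch.mean()
            denom = tn * np.sqrt((pz ** 2).sum())
            if denom == 0:
                continue  # flat patch: correlation undefined
            score = (t * pz).sum() / denom
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```

In a motion-compensation loop, the displacement between the tracked position and the planned target would then be converted to a robot offset command each frame.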

  7. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information about the workspace. In this paper, the use of 3D ToF cameras to guide a robot arm is analyzed. To this end, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method determines the appropriate integration time for the range camera so that depth information is measured precisely.
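
    As a point of reference, the core of a position-based visual servoing scheme of this kind is a proportional law that drives the 3D feature error to zero. A minimal sketch (the gain is a hypothetical tuning value, and the paper's adaptive calibration machinery is omitted):

```python
import numpy as np

LAMBDA = 0.5  # proportional servo gain (hypothetical tuning)

def servo_velocity(p_current, p_goal, lam=LAMBDA):
    """Classic position-based visual servoing law.

    p_current: 3D feature position measured by the range camera.
    p_goal:    desired 3D feature position.
    Returns a Cartesian velocity command proportional to the error,
    driving it exponentially to zero.
    """
    return -lam * (np.asarray(p_current, float) - np.asarray(p_goal, float))

print(servo_velocity([1.0, 0.0, 2.0], [0.0, 0.0, 1.0]))
```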

  8. A robotic C-arm cone beam CT system for image-guided proton therapy: design and performance.

    PubMed

    Hua, Chiaho; Yao, Weiguang; Kidani, Takao; Tomida, Kazuo; Ozawa, Saori; Nishimura, Takenori; Fujisawa, Tatsuya; Shinagawa, Ryousuke; Merchant, Thomas E

    2017-11-01

    A ceiling-mounted robotic C-arm cone beam CT (CBCT) system was developed for use with a 190° proton gantry system and a 6-degree-of-freedom robotic patient positioner. We report on the mechanical design, system accuracy, image quality, image guidance accuracy, imaging dose, workflow, safety and collision avoidance. The robotic CBCT system couples a rotating C-ring to the C-arm concentrically, with a kV X-ray tube and a flat-panel imager mounted to the C-ring. CBCT images are acquired with flex correction and up to 360° rotation for a 53 cm field of view. The system was designed for clinical use with three imaging locations. Anthropomorphic phantoms were imaged to evaluate the image guidance accuracy. The position accuracy and repeatability of the robotic C-arm were high (<0.5 mm), as measured with a high-accuracy laser tracker. The isocentric accuracy of the C-ring rotation was within 0.7 mm. The coincidence of the CBCT imaging and radiation isocentres was better than 1 mm. The average image guidance accuracy was within 1 mm and 1° for the anthropomorphic phantoms tested. Daily volumetric imaging for proton patient positioning was specified for routine clinical practice. Our novel gantry-independent robotic CBCT system provides high-accuracy volumetric image guidance for proton therapy. Advances in knowledge: Ceiling-mounted robotic CBCT provides a viable alternative to CT on-rails for partial-gantry and fixed-beam proton systems, with the added advantage of acquiring images at the treatment isocentre.

  9. Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.

    PubMed

    Beyl, T; Nicolai, P; Mönnich, H; Raczkowksy, J; Wörn, H

    2012-01-01

    In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only consider mapping the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated into our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore enriches the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from preoperatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest, as well as help prevent damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.
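
    A forbidden-region virtual fixture of the kind described is often rendered as a stiff virtual spring that activates when the tool tip penetrates a segmented surface. A minimal sketch for a planar virtual wall (the stiffness value is hypothetical, and this is a generic rendering scheme, not OP:Sense's actual implementation):

```python
import numpy as np

K_WALL = 500.0  # virtual wall stiffness in N/m (hypothetical tuning)

def fixture_force(tip, plane_point, plane_normal, k=K_WALL):
    """Spring force of a planar forbidden-region virtual fixture.

    plane_normal points toward the allowed side. When the tool tip
    penetrates the plane, a force proportional to the penetration
    depth pushes it back out; otherwise the force is zero.
    """
    n = np.asarray(plane_normal, float)
    n = n / np.linalg.norm(n)
    depth = float(np.dot(np.asarray(plane_point, float) - np.asarray(tip, float), n))
    return k * depth * n if depth > 0 else np.zeros(3)
```

In a telemanipulation loop, this fixture force would simply be summed with the measured instrument forces before being sent to the haptic input device.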

  10. A fully actuated robotic assistant for MRI-guided prostate biopsy and brachytherapy

    NASA Astrophysics Data System (ADS)

    Li, Gang; Su, Hao; Shang, Weijian; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fischer, Gregory S.

    2013-03-01

    Intra-operative medical imaging enables the incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high-resolution, real-time, high-soft-tissue-contrast imaging without ionizing radiation. However, most current image-guided approaches can access only static preoperative images for guidance, which cannot provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI pose great challenges to developing robotic systems that can operate inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for actuated percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g., taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm.

  11. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm extracts two image windows and locates the line centroid in each; knowing that these points lie on the ground plane, a mathematical and geometric relationship is established between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side: one camera guides the robot, and when it loses track of the line on its side, the control system automatically switches to the other camera. The test-bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
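
    Once the two window centroids have been back-projected to ground coordinates, the line angle and the robot's perpendicular offset follow from plane geometry. A minimal sketch of that final step, taking the robot centroid as the origin of the ground frame (the projection from image to ground coordinates, which depends on the camera calibration, is assumed done):

```python
import math

def line_pose(p1, p2):
    """Heading of the line and the robot's perpendicular offset from it.

    p1, p2: ground-plane (x, y) points recovered from the two image
    windows, in a frame whose origin is the robot centroid.
    Returns (angle in radians, unsigned distance from the origin to
    the infinite line through p1 and p2).
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    angle = math.atan2(dy, dx)
    # |d x p1| / |d|: perpendicular distance from the origin to the line
    dist = abs(dx * p1[1] - dy * p1[0]) / math.hypot(dx, dy)
    return angle, dist

print(line_pose((1.0, 0.0), (1.0, 2.0)))
```

A steering controller would then combine the heading error and the offset, e.g. as a weighted sum, to steer the robot back over the line.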

  12. Enabling image fusion for a CT guided needle placement robot

    NASA Astrophysics Data System (ADS)

    Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.

    2017-03-01

    Purpose: This study presents the development and integration of hardware and software that enable ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having a real-time US image registered to a previously acquired intraoperative CT image provides more anatomic information during needle insertion, making it possible to target hard-to-see lesions, avoid critical structures invisible on CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot to hold and track an abdominal US transducer. This 4-degree-of-freedom (DOF) arm is designed to attach to the robot end-effector; it is locked by default and is released by the press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm was calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the previously acquired CT. Results: The accuracy of the end-effector before and after passive arm calibration was 7.07 ± 4.14 mm and 1.74 ± 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.
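
    Point-based calibration steps like the passive-arm calibration above are commonly solved as a least-squares rigid registration between corresponding point sets (the Horn/Kabsch method via SVD). A minimal sketch of that generic technique, not the OncoNav implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points. Solves
    min sum ||R src_i + t - dst_i||^2 over rotations R and
    translations t (Kabsch algorithm).
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

Residuals of the registered points against their targets then give exactly the kind of millimetre accuracy figures reported above.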

  13. Optical Flow-Based State Estimation for Guided Projectiles

    DTIC Science & Technology

    2015-06-01

    Computer Vision and Image Understanding. 2012;116(5):606–633. 3. Corke P, Lobo J, Dias J. An introduction to inertial and visual sensing. The...International Journal of Robotics Research. 2007;26(6):519–535. 4. Hutchinson S, Hager GD, Corke PI. A tutorial on visual servo control. Robotics and

  14. The current status of robot-assisted radical prostatectomy

    PubMed Central

    Dasgupta, Prokar; Kirby, Roger S.

    2009-01-01

    Robot-assisted radical prostatectomy (RARP) is a rapidly evolving technique for the treatment of localized prostate cancer. In the United States, over 65% of radical prostatectomies are robot-assisted, although the acceptance of this technology in Europe and the rest of the world has been somewhat slower. This article reviews the current literature on RARP with regard to oncological, continence and potency outcomes: the so-called 'trifecta'. Preliminary data appear to show an advantage of RARP over open prostatectomy, with reduced blood loss, decreased pain, early mobilization, shorter hospital stay and lower margin rates. Most studies show good postoperative continence and potency with RARP; however, this needs to be viewed in the context of the paucity of randomized data available in the literature. There is no definitive evidence to show an advantage over standard laparoscopy, but the fact that this technique has reached parity with laparoscopy within 5 years is encouraging. Finally, evolving techniques of single-port robotic prostatectomy, laser-guided robotics, catheter-free prostatectomy and image-guided robotics are discussed. PMID:19050687

  15. [Robot-aided training in rehabilitation].

    PubMed

    Hachisuka, Kenji

    2010-02-01

    Recently, new training techniques that involve the use of robots have been used in the rehabilitation of patients with hemiplegia and paraplegia. Robots used for training the arm include the MIT-MANUS, Arm Trainer, mirror-image motion enabler (MIME) robot, and the assisted rehabilitation and measurement (ARM) Guide. Robots that are used for lower-limb training are the Rehabot, Gait Trainer, Lokomat, LOPES Exoskeleton Robot, and Gait Assist Robot. Robot-aided therapy has enabled the functional training of the arm and the lower limbs in an effective, easy, and comfortable manner. Therefore, with this type of therapy, the patients can repeatedly undergo sufficient and accurate training for a prolonged period. However, evidence of the benefits of robot-aided training has not yet been established.

  16. Imaging-guided thoracoscopic resection of a ground-glass opacity lesion in a hybrid operating room equipped with a robotic C-arm CT system.

    PubMed

    Hsieh, Chen-Ping; Hsieh, Ming-Ju; Fang, Hsin-Yueh; Chao, Yin-Kai

    2017-05-01

    The intraoperative identification of small pulmonary nodules through video-assisted thoracoscopic surgery remains challenging. Although preoperative CT-guided nodule localization is commonly used to detect tumors during video-assisted thoracoscopic surgery (VATS), this approach carries inherent risks. We report the case of a patient with stage I lung cancer presenting as an area of ground-glass opacity (GGO) in the right upper pulmonary lobe. He successfully underwent a single-stage, CT-guided localization and removal of the pulmonary nodule within a hybrid operating room (OR) equipped with a robotic C-arm.

  17. TU-FG-BRB-11: Design and Evaluation of a Robotic C-Arm CBCT System for Image-Guided Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, C; Yao, W; Farr, J

    Purpose: To describe the design and performance of a ceiling-mounted robotic C-arm CBCT system for image-guided proton therapy. Methods: Uniquely different from the traditional C-arm CBCT used in interventional radiology, the imaging system was designed to provide volumetric image guidance for patients treated on a 190-degree proton gantry system with a 6-degree-of-freedom (DOF) robotic patient positioner. Mounting the robotic arms to ceiling rails, rather than to the gantry or nozzle, provides flexibility in imaging locations (isocenter, iso + 27 cm in X, iso + 100 cm in Y) in the room and easier upgrades as technology advances. A kV X-ray tube and a 43 × 43 cm flat-panel imager were mounted to a rotating C-ring (87 cm diameter), which is coupled to the C-arm concentrically. Both the C-arm and the robotic arm remain stationary during imaging to maintain high position accuracy. The source-to-axis and source-to-imager distances are 100 and 150 cm, respectively. A 14:1 focused anti-scatter grid and a bowtie filter are used for image acquisition. A unique automatic collimator device with 4 independent blades for adjusting the field of view and reducing patient dose has also been developed. Results: Sub-millimeter position accuracy and repeatability of the robotic C-arm were measured with a laser tracker. High-quality CBCT images for positioning can be acquired with a weighted CTDI of 3.6 mGy (head in 200° full-fan mode: 100 kV, 20 mA, 20 ms, 10 fps) to 8.7 mGy (pelvis in 360° half-fan mode: 125 kV, 42 mA, 20 ms, 10 fps). Image guidance accuracy was <1 mm (3D vector) with automatic 3D-3D registration for anthropomorphic head and pelvis phantoms. Since November 2015, 22 proton therapy patients have undergone daily CBCT imaging for 6-DOF positioning. Conclusion: Decoupled from the gantry and nozzle, this CBCT system provides a unique solution for volumetric image guidance with half/partial proton gantry systems. We demonstrated that daily CBCT can be integrated into proton therapy for pre-treatment position verification.

  18. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ogunmolu, O; Gans, N; Jiang, S

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.

  19. Technology transfer: Imaging tracker to robotic controller

    NASA Technical Reports Server (NTRS)

    Otaguro, M. S.; Kesler, L. O.; Land, Ken; Erwin, Harry; Rhoades, Don

    1988-01-01

    The transformation of an imaging tracker into a robotic controller is described. A multimode tracker was developed for fire-and-forget missile systems. The tracker locks on to target images within an acquisition window, using multiple image tracking algorithms to provide guidance commands to missile control systems. This basic tracker technology is used, with the addition of a ranging algorithm based on sizing a cooperative target, to perform autonomous guidance and control of a platform for an Advanced Development Project on automation and robotics; a ranging tracker is required to provide the positioning necessary for robotic control. A simple functional demonstration of the feasibility of this approach was performed and is described. More realistic demonstrations are under way at NASA-JSC. In particular, this modified tracker, or robotic controller, will be used to autonomously guide the Manned Maneuvering Unit (MMU) to targets such as disabled astronauts or tools as part of the EVA Retriever efforts. It will also be used to control the orbiter's Remote Manipulator System (RMS) in autonomous approach and positioning demonstrations. These efforts are also discussed.
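
    Ranging by sizing a cooperative target reduces, under a pinhole camera model, to similar triangles: the target's apparent width in pixels shrinks in proportion to its distance. A minimal sketch with hypothetical calibration values (the actual tracker's algorithm is not specified in the abstract):

```python
FOCAL_PX = 800.0   # focal length in pixels (hypothetical calibration)
TARGET_M = 0.5     # known physical width of the cooperative target, metres (hypothetical)

def range_from_size(width_px, f=FOCAL_PX, w=TARGET_M):
    """Pinhole-model range estimate from the target's apparent width.

    width_px: measured width of the target in the image, in pixels.
    Returns the estimated range in metres: range = f * w / width_px.
    """
    return f * w / width_px

print(range_from_size(100.0))  # range when the target spans 100 px
```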

  20. [Computerization and robotics in medical practice].

    PubMed

    Dervaderics, J

    1997-10-26

    The article outlines all the principles used in computing, including non-electrical and analog computers and artificial intelligence, followed by examples. The principles and medical utilization of virtual reality are also mentioned. Surgical planning, image-guided surgery, robotic surgery, telepresence and telesurgery, and telemedicine implemented partially via the Internet are discussed.

  1. Construction of a high-tech operating room for image-guided surgery using VR.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Otake, Yoshito; Hayashibe, Mitsuhiro; Kobayashi, Susumu; Nezu, Takehiko; Sakai, Haruo; Umezawa, Yuji

    2005-01-01

    This project aimed to construct an operating room implementing high-dimensional (3D, 4D) medical imaging and medical virtual reality techniques that would enable clinical tests of new surgical procedures. We designed and constructed such an operating room at Dai-san Hospital, the Jikei University School of Medicine, Tokyo, Japan. The room was equipped with various facilities for image-guided surgery, robotic surgery and telesurgery. In this report, we describe an outline of our "high-tech operating room" and future plans.

  2. Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.

    PubMed

    Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W

    2012-05-01

    Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.

  3. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    NASA Astrophysics Data System (ADS)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities such as CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source, open-architecture multimodal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, such as Stradx or In-Vivo, exist today. Although these systems have been found useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system with other functionalities (e.g., dual-view visualization, registration, real-time tracking, segmentation) to rapidly create their own medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.

  4. Magnetic resonance-guided prostate interventions.

    PubMed

    Haker, Steven J; Mulkern, Robert V; Roebuck, Joseph R; Barnes, Agnieska Szot; Dimaio, Simon; Hata, Nobuhiko; Tempany, Clare M C

    2005-10-01

    We review our experience using an open 0.5-T magnetic resonance (MR) interventional unit to guide procedures in the prostate. This system allows access to the patient and real-time MR imaging simultaneously and has made it possible to perform prostate biopsy and brachytherapy under MR guidance. We review MR imaging of the prostate and its use in targeted therapy, and describe our use of image processing methods such as image registration to further facilitate precise targeting. We describe current developments with a robot assist system being developed to aid radioactive seed placement.

  5. Principles and advantages of robotics in urologic surgery.

    PubMed

    Renda, Antonio; Vallancien, Guy

    2003-04-01

    Although the available minimally invasive surgical techniques (ie, laparoscopy) have clear advantages, these procedures continue to cause problems for patients. Surgical tools are limited by set axes of movement, restricting the degree of freedom available to the surgeon. In addition, depth perception is lost with the use of two-dimensional viewing systems. As surgeons view a "virtual" target on a television screen, they are hampered by decreased sensory input and a concurrent loss of dexterity. The development of robotic assistance systems in recent years could be the key to overcoming these difficulties. Using robotic systems, surgeons can experience a more natural and ergonomic surgical "feel." Surgical assistance, dexterity and precision enhancement, systems networking, and image-guided therapy are among the benefits offered by surgical robots. In return, the surgeon gains a shorter learning curve, reduced fatigue, and the opportunity to perform complex procedures that would be difficult using conventional laparoscopy. With the development of image-guided technology, robotic systems will become useful tools for surgical training and simulation. Remote surgery is not a routine procedure, but several teams are working on this and experiencing good results. However, economic concerns are the major drawbacks of these systems; before remote surgery becomes routinely feasible, the clinical benefits must be balanced with high investment and running costs.

  6. "MRI Stealth" robot for prostate interventions.

    PubMed

    Stoianovici, Dan; Song, Danny; Petrisor, Doru; Ursu, Daniel; Mazilu, Dumitru; Muntener, Michael; Schar, Michael; Patriciu, Alexandru

    2007-01-01

    The paper reports an important achievement in MRI instrumentation: a pneumatic, fully actuated robot located within the scanner alongside the patient and operating under remote control based on the images. Previous MRI robots commonly used piezoelectric actuation, limiting their compatibility. Pneumatics is an ideal choice for MRI compatibility because it is decoupled from electromagnetism, but pneumatic actuators have historically been difficult to control. This achievement was possible due to a recent technology breakthrough, the invention of a new type of pneumatic motor, PneuStep (1), designed for the robot reported here with uncompromised MRI compatibility, high precision, and medical safety. MrBot is one of the "MRI stealth" robots today (the second is described in this issue by Zangos et al.). Both of these systems are also multi-imager compatible, being able to operate with the imager of choice or across imaging modalities. For MRI compatibility the robot is constructed exclusively of nonmagnetic and dielectric materials such as plastics, ceramics, crystals, and rubbers, and is electricity free. Light-based encoding is used for feedback, so that all electric components are distally located outside the imager's room. MRI robots are modern, digital medical instruments in line with advanced imaging equipment and methods. These allow for accessing patients within closed-bore scanners and performing interventions under direct (in-scanner) imaging feedback. MRI robots could, for example, allow biopsy of small lesions imaged with cutting-edge cancer imaging methods, or precise deployment of localized therapy at cancer foci. Our robot is the first to show the feasibility of fully automated in-scanner interventions. It is customized for the prostate and operates transperineally for needle interventions. It can accommodate various needle drivers for different percutaneous procedures such as biopsy, thermal ablation, or brachytherapy.
The first needle driver is customized for fully automated low-dose radiation seed brachytherapy. This paper gives an introduction to the challenges of MRI robot compatibility and presents the solutions adopted in making the MrBot. Its multi-imager compatibility and other preclinical tests are included. The robot shows the technical feasibility of MRI-guided prostate interventions, yet its clinical utility is still to be determined.

  7. Robotic intrafractional US guidance for liver SABR: System design, beam avoidance, and clinical imaging.

    PubMed

    Schlosser, Jeffrey; Gong, Ren Hui; Bruder, Ralf; Schweikard, Achim; Jang, Sungjune; Henrie, John; Kamaya, Aya; Koong, Albert; Chang, Daniel T; Hristov, Dimitre

    2016-11-01

    To present a system for robotic 4D ultrasound (US) imaging concurrent with radiotherapy beam delivery and estimate the proportion of liver stereotactic ablative body radiotherapy (SABR) cases in which robotic US image guidance can be deployed without interfering with clinically used VMAT beam configurations. The image guidance hardware comprises a 4D US machine, an optical tracking system for measuring US probe pose, and a custom-designed robot for acquiring hands-free US volumes. In software, a simulation environment incorporating the LINAC, couch, planning CT, and robotic US guidance hardware was developed. Placement of the robotic US hardware was guided by a target visibility map rendered on the CT surface by using the planning CT to simulate US propagation. The visibility map was validated in a prostate phantom and evaluated in patients by capturing live US from imaging positions suggested by the visibility map. In 20 liver SABR patients treated with VMAT, the simulation environment was used to virtually place the robotic hardware and US probe. Imaging targets were either planning target volumes (PTVs, range 5.9-679.5 ml) or gross tumor volumes (GTVs, range 0.9-343.4 ml). Presence or absence of mechanical interference with the LINAC, couch, and patient body, as well as interference with treatment beams, was recorded. For PTV targets, robotic US guidance without mechanical interference was possible in 80% of the cases and guidance without beam interference was possible in 60% of the cases. For the smaller GTV targets, these proportions were 95% and 85%, respectively. GTV size (1/20), elongated shape (1/20), and depth (1/20) were the main factors limiting the availability of noninterfering imaging positions. The robotic US imaging system was deployed in two liver SABR patients during CT simulation with successful acquisition of 4D US sequences in different imaging positions.
This study indicates that for VMAT liver SABR, robotic US imaging of a relevant internal target may be possible in 85% of the cases while using treatment plans currently deployed in the clinic. With beam replanning to account for the presence of robotic US guidance, intrafractional US may be an option for 95% of the liver SABR cases.
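The beam-interference test used in this kind of analysis, deciding whether a treatment beam from a given node is blocked by the robotic US hardware, can be sketched as a segment-versus-bounding-sphere check with the 20 mm motion margin mentioned above. This is a simplification of the paper's projection-based method; the function name and the sphere obstacle model are illustrative assumptions:

```python
import numpy as np

def beam_blocked(source, target, obstacle_center, obstacle_radius, margin_mm=20.0):
    """Check whether the segment from a beam node (source) to the target passes
    within (obstacle_radius + margin_mm) of an obstacle, here modelled as a
    bounding sphere around the robot/probe assembly."""
    s, t, c = (np.asarray(x, dtype=float) for x in (source, target, obstacle_center))
    d = t - s
    seg_len2 = float(d @ d)
    # Parameter of the closest point on the segment to the obstacle centre
    u = 0.0 if seg_len2 == 0.0 else float(np.clip((c - s) @ d / seg_len2, 0.0, 1.0))
    closest = s + u * d
    return float(np.linalg.norm(c - closest)) <= obstacle_radius + margin_mm
```

A full implementation would project the actual robot and probe geometry against the PTV silhouette, but the margin-inflated distance test captures the core geometric decision per beam node.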

  8. Stochastic approach to error estimation for image-guided robotic systems.

    PubMed

    Haidegger, Tamas; Győri, Sándor; Benyo, Balazs; Benyó, Zoltán

    2010-01-01

    Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking: a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new, probability-distribution-based method is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy at the point of interest, allowing for the more accurate use of advanced application-oriented concepts such as Virtual Fixtures. It also becomes feasible to consider different surgical scenarios with varying weighting factors.
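The chained-transformation error analysis described above lends itself to a Monte Carlo sketch: perturb each homogeneous transform (e.g. registration, then tracking) with small random rigid-body errors and observe the resulting distribution of the target-point error. This is an illustrative stochastic approach, not the authors' specific probability-distribution method; the noise models and function names are assumptions:

```python
import numpy as np

def rand_pose_error(rng, trans_sd_mm, rot_sd_rad):
    """Sample a small random rigid-body error as a 4x4 homogeneous matrix
    (random rotation axis, normally distributed angle and translation)."""
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = rng.normal(0.0, rot_sd_rad)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    # Rodrigues' rotation formula
    R = np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = rng.normal(0.0, trans_sd_mm, size=3)
    return T

def simulate_target_error(T_chain, error_sds, point, n_samples=2000, seed=0):
    """Monte Carlo estimate of the target-point error after chaining noisy
    transforms. T_chain: nominal 4x4 transforms; error_sds: per-transform
    (translation_sd_mm, rotation_sd_rad). Returns (mean, std) of the error."""
    rng = np.random.default_rng(seed)
    p = np.append(np.asarray(point, dtype=float), 1.0)
    nominal = p.copy()
    for T in T_chain:
        nominal = T @ nominal
    errors = np.empty(n_samples)
    for i in range(n_samples):
        q = p.copy()
        for T, (t_sd, r_sd) in zip(T_chain, error_sds):
            q = (T @ rand_pose_error(rng, t_sd, r_sd)) @ q
        errors[i] = np.linalg.norm(q[:3] - nominal[:3])
    return errors.mean(), errors.std()
```

Unlike a scalar error bound, the sampled distribution reflects how rotational errors are amplified by the lever arm to the point of interest.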

  9. Evaluation of microsurgical tasks with OCT-guided and/or robot-assisted ophthalmic forceps

    PubMed Central

    Yu, Haoran; Shen, Jin-Hui; Shah, Rohan J.; Simaan, Nabil; Joos, Karen M.

    2015-01-01

    Real-time intraocular optical coherence tomography (OCT) visualization of tissues with surgical feedback can enhance retinal surgery. An intraocular 23-gauge B-mode forward-imaging co-planar OCT-forceps, coupling connectors and algorithms were developed to form a unique ophthalmic surgical robotic system. Approach to the surface of a phantom or goat retina by a manual or robotic-controlled forceps, with and without real-time OCT guidance, was performed. Efficiency of lifting phantom membranes was examined. Placing the co-planar OCT imaging probe internal to the surgical tool reduced instrument shadowing and permitted constant tracking. Robotic assistance together with real-time OCT feedback improved depth perception accuracy. The first-generation integrated OCT-forceps was capable of peeling membrane phantoms despite smooth tips. PMID:25780736

  10. Minimally invasive positioning robot system of femoral neck hollow screw implants based on x-ray error correction

    NASA Astrophysics Data System (ADS)

    Zou, Yunpeng; Xu, Ying; Hu, Lei; Guo, Na; Wang, Lifeng

    2017-01-01

    Addressing the high failure rate, high radiation dose, and poor positioning accuracy of conventional femoral neck surgery, this article develops a new positioning robot system for femoral neck hollow screw implants based on X-ray error correction, built on the X-ray projection principle and the kinematics of the 6-DOF (degree of freedom) serial robot UR (Universal Robots). Compared with computer-assisted navigation systems, this system offers better positioning accuracy and simpler operation; because it requires no additional optical tracking equipment, it also reduces cost considerably. During surgery, the surgeon plans the operation path and the pose of the marker needle from the patient's anteroposterior and lateral X-ray images, then calculates the pixel ratio from the actual length of the marker line and its length in the image. From the relative position between the operation path and the guide pin, together with the fixed relationship between the guide pin and the UR robot, the required robot motion is computed, and the UR robot drives the positioning guide pin to the operation path. The guide pin is then checked against the planned path; if the two do not coincide, the previous steps are repeated until they do, completing the positioning procedure. To verify the positioning accuracy, an error analysis was performed on thirty bone-model experiments. The results show that the motion accuracy of the UR robot is 0.15 mm and the overall error is within 0.8 mm. To verify the clinical feasibility of the system, three clinical cases were analyzed. Over the whole positioning process, the X-ray irradiation time was 2-3 s, the number of fluoroscopic images was 3-5, and the total positioning time was 7-10 min. The results show that this system can accurately complete femoral neck positioning surgery while greatly reducing the X-ray exposure of medical staff and patients. In summary, it has significant value for clinical application.
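The pixel-ratio calibration step described above, deriving millimetres per pixel from a marker line of known physical length and converting an image-space offset into robot travel, can be sketched as follows (hypothetical function names; the real system combines paired anteroposterior and lateral views to recover 3-D motion):

```python
def pixel_ratio(marker_len_mm, marker_len_px):
    """Millimetres per pixel, calibrated from a marker of known physical length
    visible in the same X-ray image as the guide pin."""
    return marker_len_mm / marker_len_px

def image_offset_to_robot_motion(target_px, guide_tip_px, mm_per_px):
    """Convert a 2-D image-space offset (pixels) between the planned target and
    the current guide-pin tip into millimetres of in-plane robot travel."""
    dx = (target_px[0] - guide_tip_px[0]) * mm_per_px
    dy = (target_px[1] - guide_tip_px[1]) * mm_per_px
    return dx, dy
```

The iterate-and-check loop in the abstract then repeats this correction from fresh fluoroscopic images until the guide pin and the planned path coincide.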

  11. 3T MR-guided in-bore transperineal prostate biopsy: A comparison of robotic and manual needle-guidance templates.

    PubMed

    Tilak, Gaurie; Tuncali, Kemal; Song, Sang-Eun; Tokuda, Junichi; Olubiyi, Olutayo; Fennessy, Fiona; Fedorov, Andriy; Penzkofer, Tobias; Tempany, Clare; Hata, Nobuhiko

    2015-07-01

    To demonstrate the utility of a robotic needle-guidance template device as compared to a manual template for in-bore 3T transperineal magnetic resonance imaging (MRI)-guided prostate biopsy. This two-arm mixed retrospective-prospective study included 99 cases of targeted transperineal prostate biopsies. The biopsy needles were aimed at suspicious foci noted on multiparametric 3T MRI using a manual template (historical control) as compared with a robotic template. The following data were obtained: the accuracy of average and closest needle placement to the focus, histologic yield, percentage of cancer volume in positive core samples, complication rate, and time to complete the procedure. In all, 56 cases were performed using the manual template and 43 cases were performed using the robotic template. The mean accuracy of the best needle placement attempt was better in the robotic group (2.39 mm) than in the manual group (3.71 mm, P < 0.027). The mean core procedure time was shorter in the robotic group (90.82 min) than in the manual group (100.63 min, P < 0.030). The percentage of cancer volume in positive core samples was higher in the robotic group (P < 0.001). Cancer yields and complication rates were not statistically different between the two subgroups (P = 0.557 and P = 0.172, respectively). The robotic needle-guidance template facilitates accurate placement of biopsy needles in MRI-guided core biopsy of prostate cancer. © 2014 Wiley Periodicals, Inc.

  12. MRI-guided robotics at the U of Houston: evolving methodologies for interventions and surgeries.

    PubMed

    Tsekos, Nikolaos V

    2009-01-01

    Currently, we witness the rapid evolution of minimally invasive surgeries (MIS) and image-guided interventions (IGI), which offer improved patient management and cost effectiveness. It is well recognized that sustaining and expanding this paradigm shift will require new computational methodology that integrates sensing with multimodal imaging, actively controlled robotic manipulators, the patient, and the operator. Such an approach would include (1) assessing in real time the tissue deformation secondary to the procedure and to physiologic motion, (2) monitoring the tool(s) in 3D, and (3) updating on the fly information about the pathophysiology of the targeted tissue. With those capabilities, real-time image guidance may facilitate a paradigm shift and methodological leap from "keyhole" visualization (i.e., endoscopy or laparoscopy) to one that uses a volumetric and informationally rich perception of the Area of Operation (AoO). This capability may eventually enable IGI and MIS of a wider range and level of complexity.

  13. “MRI Stealth” robot for prostate interventions

    PubMed Central

    STOIANOVICI, DAN; SONG, DANNY; PETRISOR, DORU; URSU, DANIEL; MAZILU, DUMITRU; MUNTENER, MICHAEL; SCHAR, MICHAEL; PATRICIU, ALEXANDRU

    2011-01-01

    The paper reports an important achievement in MRI instrumentation: a pneumatic, fully actuated robot located within the scanner alongside the patient and operating under remote control based on the images. Previous MRI robots commonly used piezoelectric actuation, limiting their compatibility. Pneumatics is an ideal choice for MRI compatibility because it is decoupled from electromagnetism, but pneumatic actuators have historically been difficult to control. This achievement was possible due to a recent technology breakthrough, the invention of a new type of pneumatic motor, PneuStep (1), designed for the robot reported here with uncompromised MRI compatibility, high precision, and medical safety. MrBot is one of the “MRI stealth” robots today (the second is described in this issue by Zangos et al.). Both of these systems are also multi-imager compatible, being able to operate with the imager of choice or across imaging modalities. For MRI compatibility the robot is constructed exclusively of nonmagnetic and dielectric materials such as plastics, ceramics, crystals, and rubbers, and is electricity free. Light-based encoding is used for feedback, so that all electric components are distally located outside the imager’s room. MRI robots are modern, digital medical instruments in line with advanced imaging equipment and methods. These allow for accessing patients within closed-bore scanners and performing interventions under direct (in-scanner) imaging feedback. MRI robots could, for example, allow biopsy of small lesions imaged with cutting-edge cancer imaging methods, or precise deployment of localized therapy at cancer foci. Our robot is the first to show the feasibility of fully automated in-scanner interventions. It is customized for the prostate and operates transperineally for needle interventions. It can accommodate various needle drivers for different percutaneous procedures such as biopsy, thermal ablation, or brachytherapy.
The first needle driver is customized for fully automated low-dose radiation seed brachytherapy. This paper gives an introduction to the challenges of MRI robot compatibility and presents the solutions adopted in making the MrBot. Its multi-imager compatibility and other preclinical tests are included. The robot shows the technical feasibility of MRI-guided prostate interventions, yet its clinical utility is still to be determined. PMID:17763098

  14. Improved Image-Guided Laparoscopic Prostatectomy

    DTIC Science & Technology

    2011-08-01

    standard daVinci tool. The ultrasound probe is driven by a Sonix RP ultrasound system (Ultrasonix Medical Corp., Richmond BC Canada), which provides...probe (Intuitive Surgical, Sunnyvale, CA) was integrated with the daVinci surgical system for use in Robot-Assisted Laparoscopic Prostatectomy (RALP...laparoscopy using the daVinci Surgical System (Intuitive Surgical, Sunnyvale, CA). The surgical robot introduces many benefits, including three

  15. System Integration and In Vivo Testing of a Robot for Ultrasound Guidance and Monitoring During Radiotherapy.

    PubMed

    Sen, Hasan Tutkun; Bell, Muyinatu A Lediju; Zhang, Yin; Ding, Kai; Boctor, Emad; Wong, John; Iordachita, Iulian; Kazanzides, Peter

    2017-07-01

    We are developing a cooperatively controlled robot system for image-guided radiation therapy (IGRT) in which a clinician and robot share control of a 3-D ultrasound (US) probe. IGRT involves two main steps: 1) planning/simulation and 2) treatment delivery. The goals of the system are to provide guidance for patient setup and real-time target monitoring during fractionated radiotherapy of soft tissue targets, especially in the upper abdomen. To compensate for soft tissue deformations created by the probe, we present a novel workflow where the robot holds the US probe on the patient during acquisition of the planning computerized tomography image, thereby ensuring that planning is performed on the deformed tissue. The robot system introduces constraints (virtual fixtures) to help to produce consistent soft tissue deformation between simulation and treatment days, based on the robot position, contact force, and reference US image recorded during simulation. This paper presents the system integration and the proposed clinical workflow, validated by an in vivo canine study. The results show that the virtual fixtures enable the clinician to deviate from the recorded position to better reproduce the reference US image, which correlates with more consistent soft tissue deformation and the possibility for more accurate patient setup and radiation delivery.
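The virtual-fixture idea described above, letting the clinician deviate from the probe position recorded at simulation while being gently pulled back toward it, can be sketched as a deadbanded spring force. This is a minimal sketch; the actual controller also uses the recorded contact force and the reference US image, and every parameter value below is an assumption:

```python
import numpy as np

def virtual_fixture_force(current_pos_mm, recorded_pos_mm,
                          stiffness_n_per_mm=0.5, deadband_mm=1.0):
    """Spring-like guidance force (N) pulling the US probe toward the position
    recorded on the simulation day. Zero inside the deadband, so the clinician
    can deviate slightly to better reproduce the reference US image."""
    delta = np.asarray(recorded_pos_mm, dtype=float) - np.asarray(current_pos_mm, dtype=float)
    dist = float(np.linalg.norm(delta))
    if dist <= deadband_mm:
        return np.zeros(3)
    # Linear spring acting only on the displacement beyond the deadband
    return stiffness_n_per_mm * (dist - deadband_mm) * (delta / dist)
```

In a cooperative (hands-on) control loop this force would be summed with the admittance response to the clinician's applied force, biasing, rather than overriding, the shared motion.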

  16. An ultra-high field strength MR image-guided robotic needle delivery system for in-bore small animal interventions.

    PubMed

    Gravett, Matthew; Cepek, Jeremy; Fenster, Aaron

    2017-11-01

    The purpose of this study was to develop and validate an image-guided robotic needle delivery system for accurate and repeatable needle targeting procedures in mouse brains inside the 12 cm inner diameter gradient coil insert of a 9.4 T MR scanner. Many preclinical research techniques require accurate needle deliveries to soft tissues, including brain tissue. Soft tissues are optimally visualized in MR images, which offer high soft tissue contrast as well as a range of unique imaging techniques, including functional, spectroscopic, and thermal imaging. However, there are currently no solutions for delivering needles to small animal brains inside the bore of an ultra-high field MR scanner. This paper describes the mechatronic design, evaluation of MR compatibility, registration technique, mechanical calibration, and quantitative validation of the in-bore image-guided needle targeting accuracy and repeatability, and demonstrates the system's ability to deliver needles in situ. Our six degree-of-freedom, MR compatible, mechatronic system was designed to fit inside the bore of a 9.4 T MR scanner and is actuated using a combination of piezoelectric and hydraulic mechanisms. The MR compatibility and targeting accuracy of the needle delivery system were evaluated to ensure that the system is precisely calibrated to perform the needle targeting procedures. A semi-automated image registration is performed to link the robot coordinates to the MR coordinate system. Soft tissue targets can be accurately localized in MR images, followed by automatic alignment of the needle trajectory to the target. Intra-procedure visualization of the needle target location and the needle was confirmed through MR images after needle insertion. The effects of geometric distortions and signal noise were found to be below the threshold that would impact the accuracy of the system.
The system was found to have negligible effect on the MR image signal noise and geometric distortion. The system was mechanically calibrated and the mean image-guided needle targeting and needle trajectory accuracies were quantified in an image-guided tissue mimicking phantom experiment to be 178 ± 54 μm and 0.27 ± 0.65°, respectively. An MR image-guided system for in-bore needle deliveries to soft tissue targets in small animal models has been developed. The results of the needle targeting accuracy experiments in phantoms indicate that this system has the potential to deliver needles to the smallest soft tissue structures relevant in preclinical studies, at a wide variety of needle trajectories. Future work in the form of a fully-automated needle driver with precise depth control would benefit this system in terms of its applicability to a wider range of animal models and organ targets. © 2017 American Association of Physicists in Medicine.
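Linking robot coordinates to the MR coordinate system, as in the semi-automated registration above, is commonly done as a point-based rigid registration over fiducials visible in both frames; a standard least-squares (Kabsch/SVD) solution is sketched below. The specific solver is an assumption, since the abstract does not name one:

```python
import numpy as np

def register_rigid(robot_pts, mr_pts):
    """Least-squares rigid registration (Kabsch/SVD) mapping robot coordinates
    to MR coordinates: returns R, t such that R @ p_robot + t ~= p_mr."""
    A = np.asarray(robot_pts, dtype=float)
    B = np.asarray(mr_pts, dtype=float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)           # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def fre(R, t, robot_pts, mr_pts):
    """Fiducial registration error (RMS, same units as the inputs)."""
    A = np.asarray(robot_pts, dtype=float)
    B = np.asarray(mr_pts, dtype=float)
    resid = (A @ R.T + t) - B
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))
```

The RMS fiducial registration error gives a quick sanity check on the calibration before any needle targeting is attempted.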

  17. MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement.

    PubMed

    Fischer, Gregory S; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Dimaio, Simon P; Tempany, Clare M; Hata, Nobuhiko; Fichtinger, Gabor

    2008-06-01

    Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of prostate and surrounding tissue, thus granting potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the benefits cannot be readily harnessed for interventional procedures due to difficulties that surround the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, the system integration, and presents results of preliminary evaluation of the system.
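The reported 5% bound on SNR loss suggests a simple compatibility metric: compare ROI-based SNR with and without the robot present. A sketch, assuming the common mean-signal-over-background-noise-standard-deviation definition (the exact ROI protocol used in the paper is an assumption):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Image SNR: mean intensity of a signal ROI divided by the standard
    deviation of a background-noise ROI (one common definition; protocols vary)."""
    return float(np.mean(signal_roi)) / float(np.std(noise_roi))

def snr_loss_percent(snr_baseline, snr_with_robot):
    """Percentage SNR loss attributable to the robot's presence or motion."""
    return 100.0 * (snr_baseline - snr_with_robot) / snr_baseline
```

Evaluating this across the standard prostate sequences, with the robot absent, present, and moving, reproduces the kind of compatibility figure quoted above.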

  18. MO-DE-210-03: Ultrasound imaging is an attractive method for image guided radiation treatment (IGRT), by itself or to complement other imaging modalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, K.

    Ultrasound imaging is an attractive method for image-guided radiation treatment (IGRT), by itself or to complement other imaging modalities. It is inexpensive, portable, and provides good soft tissue contrast. For challenging soft tissue targets such as pancreatic cancer, ultrasound imaging can be used in combination with pre-treatment MRI and/or CT to transfer important anatomical features for target localization at the time of treatment. The non-invasive and non-ionizing nature of ultrasound imaging is particularly powerful for intra-fraction localization and monitoring. Recognizing these advantages, efforts are being made to incorporate novel robotic approaches to position and manipulate the ultrasound probe during irradiation. These recent enabling developments hold the potential to bring ultrasound imaging to a new level of IGRT applications. However, many challenges, not limited to image registration, robotic deployment, probe interference, and image acquisition rate, need to be addressed to realize the full potential of IGRT with ultrasound imaging. Learning Objectives: Understand the benefits and limitations of using ultrasound to augment MRI and/or CT for motion monitoring during radiation therapy delivery. Understand passive and active robotic approaches to implementing ultrasound imaging for intra-fraction monitoring. Understand issues of probe interference with radiotherapy treatment. Understand the critical clinical workflow for effective and reproducible IGRT using ultrasound guidance. The work of X.L. is supported in part by Elekta; J.W. and K.D. are supported in part by NIH grant R01 CA161613 and by Elekta; D.H. is supported in part by NIH grant R41 CA174089.

  19. An MRI-Guided Telesurgery System Using a Fabry-Perot Interferometry Force Sensor and a Pneumatic Haptic Device.

    PubMed

    Su, Hao; Shang, Weijian; Li, Gang; Patel, Niravkumar; Fischer, Gregory S

    2017-08-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The slave robot consists of a piezoelectrically actuated 6-degree-of-freedom (DOF) robot for needle placement with an integrated fiber optic force sensor (1-DOF axial force measurement) using the Fabry-Perot interferometry (FPI) sensing principle; it is configured to operate inside the bore of the MRI scanner during imaging. By leveraging the advantages of pneumatic and piezoelectric actuation in force and position control respectively, we have designed a pneumatically actuated master robot (haptic device) with strain gauge based force sensing that is configured to operate the slave from within the scanner room during imaging. The slave robot follows the insertion motion of the haptic device while the haptic device displays the needle insertion force as measured by the FPI sensor. Image interference evaluation demonstrates that the telesurgery system presents a signal to noise ratio reduction of less than 17% and less than 1% geometric distortion during simultaneous robot motion and imaging. Teleoperated needle insertion and rotation experiments were performed to reach 10 targets in a soft tissue-mimicking phantom with 0.70 ± 0.35 mm Cartesian space error.
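The master-slave coupling described above, with the slave needle following the master's insertion motion while the haptic device displays the FPI-measured force, reduces per control cycle to two small mappings. This sketch adds an illustrative safety clip; the scale and limit values are assumptions, not from the paper:

```python
def slave_insertion_command(master_depth_mm, motion_scale=1.0):
    """Slave needle-depth command that follows the master's insertion motion,
    optionally scaled (e.g. for fine positioning)."""
    return motion_scale * master_depth_mm

def reflected_force(fpi_force_n, force_scale=1.0, f_max_n=5.0):
    """Needle-insertion force (as measured by the fiber-optic FPI sensor)
    reflected to the pneumatic haptic device, clipped to a safe maximum."""
    f = force_scale * fpi_force_n
    return max(-f_max_n, min(f_max_n, f))
```

In the real system both mappings run inside the scanner room at the haptic loop rate, so MRI compatibility of every component in the loop is what makes the teleoperation possible during imaging.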

  20. Interventional robotic systems: Applications and technology state-of-the-art

    PubMed Central

    CLEARY, KEVIN; MELZER, ANDREAS; WATSON, VANCE; KRONREIF, GERNOT; STOIANOVICI, DAN

    2011-01-01

    Many different robotic systems have been developed for invasive medical procedures. In this article we will focus on robotic systems for image-guided interventions such as biopsy of suspicious lesions, interstitial tumor treatment, or needle placement for spinal blocks and neurolysis. Medical robotics is a young and evolving field and the ultimate role of these systems has yet to be determined. This paper presents four interventional robotics systems designed to work with MRI, CT, fluoroscopy, and ultrasound imaging devices. The details of each system are given along with any phantom, animal, or human trials. The systems include the AcuBot for active needle insertion under CT or fluoroscopy, the B-Rob systems for needle placement using CT or ultrasound, the INNOMOTION for MRI and CT interventions, and the MRBot for MRI procedures. Following these descriptions, the technology issues of image compatibility, registration, patient movement and respiration, force feedback, and control mode are briefly discussed. It is our belief that robotic systems will be an important part of future interventions, but more research and clinical trials are needed. The possibility of performing new clinical procedures that the human cannot achieve remains an ultimate goal for medical robotics. Engineers and physicians should work together to create and validate these systems for the benefits of patients everywhere. PMID:16754193

  1. PROPOSAL FOR A SIMPLE AND EFFICIENT MONTHLY QUALITY MANAGEMENT PROGRAM ASSESSING THE CONSISTENCY OF ROBOTIC IMAGE-GUIDED SMALL ANIMAL RADIATION SYSTEMS

    PubMed Central

    Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.

    2015-01-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (± 3 %) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981

  2. Proposal for a Simple and Efficient Monthly Quality Management Program Assessing the Consistency of Robotic Image-Guided Small Animal Radiation Systems.

    PubMed

    Brodin, N Patrik; Guha, Chandan; Tomé, Wolfgang A

    2015-11-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first 6-mo experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis.
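The tolerance set reported above maps naturally onto a table-driven monthly QA check. A sketch follows; the metric keys and dictionary layout are illustrative, not taken from the paper's MATLAB program:

```python
# Action thresholds mirroring the QMP tolerances reported above (keys are illustrative).
TOLERANCES = {
    "dose_output_pct": 1.0,   # +/- 1% output constancy (TG-61 based)
    "resolution_mm": 0.2,     # +/- 0.2 mm CBCT image resolution
    "localization_mm": 0.5,   # +/- 0.5 mm CBCT-guided target localization
    "beam_time_s": 2.0,       # +/- 2 s (~3%) dose-calculation consistency
}

def qmp_check(measured_deviations):
    """Return pass/fail per metric; False flags a deviation beyond its
    action threshold for follow-up before the next treatment session."""
    return {name: abs(dev) <= TOLERANCES[name]
            for name, dev in measured_deviations.items()}
```

Automating the comparison this way is what lets the program flag drifts on a monthly or bi-monthly cadence without user-dependent analysis.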

  3. SU-G-JeP3-03: Effect of Robot Pose On Beam Blocking for Ultrasound Guided SBRT of the Prostate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, S; Schlaefer, A; Kuhlemann, I

    Purpose: Ultrasound presents a fast, volumetric image modality for real-time tracking of abdominal organ motion. However, ultrasound transducer placement during radiation therapy is challenging. Recently, approaches using robotic arms for intra-treatment ultrasound imaging have been proposed. Good and reliable imaging requires placing the transducer close to the PTV. We studied the effect of a seven-degrees-of-freedom robot on the feasible beam directions. Methods: For five CyberKnife prostate treatment plans we established viewports for the transducer, i.e., points on the patient surface with a soft tissue view towards the PTV. Choosing a feasible transducer pose and using the kinematic redundancy of the KUKA LBR iiwa robot, we considered three robot poses. Poses 1 to 3 had the elbow point anterior, superior, and inferior, respectively. For each pose and each beam starting point, the projections of robot and PTV were computed. We added a 20 mm margin accounting for organ/beam motion. The number of nodes for which the PTV was partially or fully blocked was established. Moreover, the cumulative overlap for each of the poses and the minimum overlap over all poses were computed. Results: The fully and partially blocked nodes ranged from 12% to 20% and 13% to 27%, respectively. Typically, pose 3 caused the fewest blocked nodes. The cumulative overlap ranged from 19% to 29%. Taking the minimum overlap, i.e., considering moving the robot's elbow while maintaining the transducer pose, the cumulative overlap was reduced to 16% to 18% and was 3% to 6% lower than for the best individual pose. Conclusion: Our results indicate that it is possible to identify feasible ultrasound transducer poses and to use the kinematic redundancy of a 7-DOF robot to minimize the impact of the imaging subsystem on the feasible beam directions for ultrasound-guided and motion-compensated SBRT. Research partially funded by DFG grants ER 817/1-1 and SCHL 1844/3-1.
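The node-blocking statistics reported above (fully versus partially blocked beam nodes, and the per-node minimum overlap obtainable from the robot's kinematic redundancy) can be sketched as below. The data layout is illustrative; the real analysis works on projected robot/PTV silhouettes per beam node:

```python
def blocked_fractions(overlaps):
    """overlaps: per-beam-node fraction of the PTV blocked (0..1) for one robot
    pose. Returns (fully_blocked_fraction, partially_or_fully_blocked_fraction)."""
    n = len(overlaps)
    fully = sum(1 for o in overlaps if o >= 1.0) / n
    partial = sum(1 for o in overlaps if o > 0.0) / n
    return fully, partial

def min_overlap_per_node(overlaps_by_pose):
    """Per-node minimum overlap over all elbow poses, i.e. the best achievable
    blocking when the robot's kinematic redundancy is exploited at each node
    while the transducer pose is held fixed."""
    return [min(vals) for vals in zip(*overlaps_by_pose)]
```

Taking the per-node minimum before summing is exactly why the elbow-moving strategy yields a lower cumulative overlap than any single fixed pose.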

  4. Smart tissue anastomosis robot (STAR): a vision-guided robotics system for laparoscopic suturing.

    PubMed

    Leonard, Simon; Wu, Kyle L; Kim, Yonjae; Krieger, Axel; Kim, Peter C W

    2014-04-01

    This paper introduces the smart tissue anastomosis robot (STAR). Currently, the STAR is a proof-of-concept for a vision-guided robotic system featuring an actuated laparoscopic suturing tool capable of executing running sutures from image-based commands. The STAR tool is designed around a commercially available laparoscopic suturing tool that is attached to a custom-made motor stage, and the STAR supervisory control architecture enables a surgeon to select and track incisions and the placement of stitches. The STAR supervisory-control interface provides two modes: a manual mode that enables a surgeon to specify the placement of each stitch, and an automatic mode that automatically computes equally spaced stitches based on an incision contour. Our experiments on planar phantoms demonstrate that the STAR in either mode is more accurate, up to four times more consistent and five times faster than surgeons using a state-of-the-art robotic surgical system, four times faster than surgeons using a manual Endo360(°)®, and nine times faster than surgeons using manual laparoscopic tools.
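The automatic mode's equal spacing of stitches along an incision contour can be sketched as arc-length interpolation over a polyline (an illustration only; STAR's actual image-processing pipeline is not reproduced here):

```python
import math

def equally_spaced_stitches(contour, n):
    """Return n stitch targets at equal arc-length intervals along a
    2-D polyline `contour` (list of (x, y) vertices), n >= 2."""
    # cumulative arc length at each vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    targets = [i * total / (n - 1) for i in range(n)]
    points, seg = [], 0
    for t in targets:
        # advance to the segment containing arc length t
        while seg < len(contour) - 2 and cum[seg + 1] < t:
            seg += 1
        span = cum[seg + 1] - cum[seg]
        u = (t - cum[seg]) / span if span else 0.0
        (x0, y0), (x1, y1) = contour[seg], contour[seg + 1]
        points.append((x0 + u * (x1 - x0), y0 + u * (y1 - y0)))
    return points
```

In manual mode the surgeon would supply each target directly; the automatic mode replaces that with the computed list.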

  5. Design of a Teleoperated Needle Steering System for MRI-guided Prostate Interventions

    PubMed Central

    Seifabadi, Reza; Iordachita, Iulian; Fichtinger, Gabor

    2013-01-01

    Accurate needle placement plays a key role in the success of prostate biopsy and brachytherapy. During percutaneous interventions, the prostate gland rotates and deforms, which may cause significant target displacement. In these cases, a straight needle trajectory is not sufficient for precise targeting. Although needle spinning and fast insertion may be helpful, they do not entirely resolve the issue. We propose robot-assisted bevel-tip needle steering under MRI guidance as a potential solution to compensate for the target displacement. MRI is chosen for its superior soft tissue contrast in prostate imaging. Due to the confined workspace of the MRI scanner and the requirement for the clinician to be present inside the MRI room during the procedure, we designed an MRI-compatible 2-DOF haptic device to command the needle steering slave robot which operates inside the scanner. The needle steering slave robot was designed to be integrated with a previously developed pneumatically actuated transperineal robot for MRI-guided prostate needle placement. We describe design challenges and present the conceptual design of the master and slave robots and the associated controller. PMID:24649480

  6. Mechanical Validation of an MRI Compatible Stereotactic Neurosurgery Robot in Preparation for Pre-Clinical Trials

    PubMed Central

    Nycz, Christopher J; Gondokaryono, Radian; Carvalho, Paulo; Patel, Nirav; Wartenberg, Marek; Pilitsis, Julie G; Fischer, Gregory S

    2018-01-01

    The use of magnetic resonance imaging (MRI) for guiding robotic surgical devices has shown great potential for performing precisely targeted and controlled interventions. To fully realize these benefits, devices must work safely within the tight confines of the MRI bore without negatively impacting image quality. Here we expand on previous work exploring MRI-guided robots for neural interventions by presenting the mechanical design and assessment of a device for positioning, orienting, and inserting an interstitial ultrasound-based ablation probe. From our previous work we have added a 2 degree of freedom (DOF) needle driver for use with the aforementioned probe, revised the mechanical design to improve strength and function, and performed an evaluation of the mechanism’s accuracy and effect on MR image quality. The result of this work is a 7-DOF MRI robot capable of positioning a needle tip and orienting its axis with an accuracy of 1.37 ± 0.06 mm and 0.79° ± 0.41°, inserting it along its axis with an accuracy of 0.06 ± 0.07 mm, and rotating it about its axis to an accuracy of 0.77° ± 1.31°. This was accomplished with no significant reduction in SNR caused by the robot’s presence in the MRI bore, ≤ 10.3% reduction in SNR from running the robot’s motors during a scan, and no visible paramagnetic artifacts. PMID:29696097

  7. Multi-imager compatible actuation principles in surgical robotics.

    PubMed

    Stoianovici, D

    2005-01-01

    Today's most successful surgical robots are perhaps surgeon-driven systems, such as the daVinci (Intuitive Surgical Inc., USA, www.intuitivesurgical.com). These have already enabled surgery that was unattainable with classic instrumentation; however, at their present level of development, they have limited utility. The drawback of these systems is that they are independent self-contained units, and as such, they do not directly take advantage of patient data. The potential of these new surgical tools lies much further ahead. Integration with medical imaging and information is needed for these devices to achieve their true potential. Surgical robots and especially their subclass of image-guided systems require special design, construction and control compared to industrial types, due to the special requirements of the medical and imaging environments. Imager compatibility raises significant engineering challenges for the development of robotic manipulators with respect to imager access, safety, ergonomics, and above all the non-interference with the functionality of the imager. These apply to all known medical imaging types, but are especially challenging for achieving compatibility with the class of MRI systems. Even though a large majority of robotic components may be redesigned to be constructed of MRI compatible materials, for other components such as the motors used in actuation, prescribing MRI compatible materials alone is not sufficient. The electromagnetic motors most commonly used in robotic actuation, for example, are incompatible by principle. As such, alternate actuation principles using "intervention friendly" energy should be adopted and/or devised for these special surgical and radiological interventions. This paper defines the new concept of Multi-Imager Compatibility of surgical manipulators and describes its requirements. Subsequently, the paper gives several recommendations and proposes new actuation principles for this concept. 
Several implementations have been constructed and tested, and the results are presented here. This is the first paper addressing these issues. Copyright 2005 Robotic Publications Ltd.

  8. Clinical acceptance and accuracy assessment of spinal implants guided with SpineAssist surgical robot: retrospective study.

    PubMed

    Devito, Dennis P; Kaplan, Leon; Dietl, Rupert; Pfeiffer, Michael; Horne, Dale; Silberstein, Boris; Hardenbrook, Mitchell; Kiriyanthan, George; Barzilay, Yair; Bruskin, Alexander; Sackerer, Dieter; Alexandrovsky, Vitali; Stüer, Carsten; Burger, Ralf; Maeurer, Johannes; Donald, Gordon D; Gordon, Donald G; Schoenmayr, Robert; Friedlander, Alon; Knoller, Nachshon; Schmieder, Kirsten; Pechlivanis, Ioannis; Kim, In-Se; Meyer, Bernhard; Shoham, Moshe

    2010-11-15

    Retrospective, multicenter study of robotically guided spinal implant insertions. Clinical acceptance of the implants was assessed by intraoperative radiograph, and when available, postoperative computed tomography (CT) scans were used to determine placement accuracy. To verify the clinical acceptance and accuracy of robotically guided spinal implants and compare them to those of unguided free-hand procedures. The SpineAssist surgical robot has been used to guide implants and guide-wires to predefined locations in the spine. SpineAssist, which to the best of the authors' knowledge is currently the sole robot providing surgical assistance in positioning tools in the spine, guided over 840 cases in 14 hospitals between June 2005 and June 2009. Clinical acceptance of 3271 pedicle screws and guide-wires inserted in 635 reported cases was assessed by intraoperative fluoroscopy, while placement accuracy of 646 pedicle screws inserted in 139 patients was measured using postoperative CT scans. Screw placements were found to be clinically acceptable in 98% of the cases when assessed intraoperatively by fluoroscopic images. Measurements derived from postoperative CT scans demonstrated that 98.3% of the screws fell within the safe zone, of which 89.3% were completely within the pedicle and 9% breached the pedicle by up to 2 mm. The remaining 1.4% of the screws breached between 2 and 4 mm, while only 2 screws (0.3%) deviated by more than 4 mm from the pedicle wall. Neurologic deficits were observed in 4 cases; however, following revisions, no permanent nerve damage was encountered, in contrast to the 0.6% to 5% rate of neurologic damage reported in the literature. SpineAssist offers enhanced performance in spinal surgery compared with free-hand surgery, by increasing placement accuracy and reducing neurologic risks. In addition, 49% of the cases reported herein used a percutaneous approach, highlighting the contribution of SpineAssist in procedures without anatomic landmarks.

  9. Navigation of a robot-integrated fluorescence laparoscope in preoperative SPECT/CT and intraoperative freehand SPECT imaging data: a phantom study

    NASA Astrophysics Data System (ADS)

    van Oosterom, Matthias Nathanaël; Engelen, Myrthe Adriana; van den Berg, Nynke Sjoerdtje; KleinJan, Gijs Hendrik; van der Poel, Henk Gerrit; Wendler, Thomas; van de Velde, Cornelis Jan Hadde; Navab, Nassir; van Leeuwen, Fijs Willem Bernhard

    2016-08-01

    Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the 1-cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.

  10. Phase-Discriminating Capacitive Sensor System

    NASA Technical Reports Server (NTRS)

    Vranish, John M.; Rahim, Wadi

    1993-01-01

    Crosstalk eliminated by maintaining voltages on all electrodes at same amplitude, phase, and frequency. Each output feedback-derived control voltage, change of which indicates proximity-induced change in capacitance of associated sensing electrode. Sensors placed close together, enabling imaging of sort. Images and/or output voltages used to guide robots in proximity to various objects.

  11. A cadaver study of mastoidectomy using an image-guided human-robot collaborative control system.

    PubMed

    Yoo, Myung Hoon; Lee, Hwan Seo; Yang, Chan Joo; Lee, Seung Hwan; Lim, Hoon; Lee, Seongpung; Yi, Byung-Ju; Chung, Jong Woo

    2017-10-01

    Surgical precision would be better achieved with the development of an anatomical monitoring and controlling robot system than by traditional surgery techniques alone. We evaluated the feasibility of robot-assisted mastoidectomy in terms of duration, precision, and safety. Human cadaveric study. We developed a multi-degree-of-freedom robot system for a surgical drill with a balancing arm. The drill system is manipulated by the surgeon, the motion of the drill burr is monitored by the image-guided system, and the brake is controlled by the robotic system. The system also includes an alarm as well as the brake to help avoid unexpected damage to vital structures. Experimental mastoidectomy was performed in 11 temporal bones of six cadavers. Parameters including duration and safety were assessed, as well as intraoperative damage, which was judged via pre- and postoperative computed tomography. The duration of mastoidectomy in our study was comparable with that required for chronic otitis media patients. Although minor damage, such as dura exposure without tearing, was noted, no critical damage to the facial nerve or other important structures was observed. When the brake system was set to 1 mm from the facial nerve, the average postoperative bone thickness over the facial nerve was 1.39, 1.41, 1.22, 1.41, and 1.55 mm in the lateral and posterior pyramidal portions and the anterior, lateral, and posterior mastoid portions, respectively. Mastoidectomy can be successfully performed using our robot-assisted system while maintaining a pre-set limit of 1 mm in most cases. This system may thus be useful for less experienced surgeons. NA.
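The 1 mm brake limit described above amounts to a distance check between the tracked burr tip and the segmented facial nerve. A minimal sketch, assuming positions in millimetres in image coordinates (the paper's actual control software is not reproduced):

```python
import math

SAFETY_MARGIN_MM = 1.0  # pre-set limit used in the cadaver study

def min_distance(tip, structure_points):
    """Smallest Euclidean distance from the burr tip to a segmented
    structure, represented here as a cloud of 3-D points."""
    return min(math.dist(tip, p) for p in structure_points)

def should_brake(tip, facial_nerve_points, margin=SAFETY_MARGIN_MM):
    """Engage the drill brake once the tip reaches the safety margin."""
    return min_distance(tip, facial_nerve_points) <= margin
```

In the actual system this check would run continuously against the image-guided tracking stream, with the alarm triggered at a larger threshold than the brake.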

  12. A multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors: a preclinical study.

    PubMed

    Li, Dongrui; Cheng, Zhigang; Chen, Gang; Liu, Fangyi; Wu, Wenbo; Yu, Jie; Gu, Ying; Liu, Fengyong; Ren, Chao; Liang, Ping

    2018-04-03

    To test the accuracy and efficacy of the multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors in phantom and animal models. To evaluate and compare the influence of intervention experience on robot-assisted and ultrasound-controlled ablation procedures. Accuracy tests on a rigid body/phantom model with a respiratory movement simulation device, and microwave ablation tests on porcine liver tumor/rabbit liver cancer models, were performed either with the robot we designed or with traditional ultrasound guidance, by physicians with or without intervention experience. In the accuracy tests performed by the physicians without intervention experience, the insertion accuracy and efficiency of the robot-assisted group were higher than those of the ultrasound-guided group, with statistically significant differences. In the microwave ablation tests performed by the physicians without intervention experience, a better complete ablation rate was achieved when applying the robot. In the microwave ablation tests performed by the physicians with intervention experience, there was no statistically significant difference in insertion number or total ablation time between the robot-assisted and ultrasound-controlled groups. Evaluation by the NASA-TLX suggested that the robot-assisted insertion and microwave ablation process was more comfortable for physicians with or without experience. The multimodality imaging-compatible insertion robot with a respiratory motion calibration module designed for ablation of liver tumors could increase insertion accuracy and ablation efficacy, and minimize the influence of the physicians' experience. The ablation procedure could be more comfortable, with less stress, with the application of the robot.

  13. MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement

    PubMed Central

    Fischer, Gregory S.; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; DiMaio, Simon P.; Tempany, Clare M.; Hata, Nobuhiko; Fichtinger, Gabor

    2010-01-01

    Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of prostate and surrounding tissue, thus granting potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the benefits cannot be readily harnessed for interventional procedures due to difficulties that surround the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, the system integration, and presents results of preliminary evaluation of the system. PMID:21057608

  14. Image fusion and navigation platforms for percutaneous image-guided interventions.

    PubMed

    Rajagopal, Manoj; Venkatesan, Aradhana M

    2016-04-01

    Image-guided interventional procedures, particularly image guided biopsy and ablation, serve an important role in the care of the oncology patient. The need for tumor genomic and proteomic profiling, early tumor response assessment and confirmation of early recurrence are common scenarios that may necessitate successful biopsies of targets, including those that are small, anatomically unfavorable or inconspicuous. As image-guided ablation is increasingly incorporated into interventional oncology practice, similar obstacles are posed for the ablation of technically challenging tumor targets. Navigation tools, including image fusion and device tracking, can enable abdominal interventionalists to more accurately target challenging biopsy and ablation targets. Image fusion technologies enable multimodality fusion and real-time co-displays of US, CT, MRI, and PET/CT data, with navigational technologies including electromagnetic tracking, robotic, cone beam CT, optical, and laser guidance of interventional devices. Image fusion and navigational platform technology is reviewed in this article, including the results of studies implementing their use for interventional procedures. Pre-clinical and clinical experiences to date suggest these technologies have the potential to reduce procedure risk, time, and radiation dose to both the patient and the operator, with a valuable role to play for complex image-guided interventions.

  15. Videoexoscopic real-time intraoperative navigation for spinal neurosurgery: a novel co-adaptation of two existing technology platforms, technical note.

    PubMed

    Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph

    2018-06-01

    Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point-reference tool, because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic BrightMatter Drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary but powerful capability of this system is the projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit visualization of reconstructed navigation and surgical field images simultaneously. By utilizing navigated instruments, this configuration has the ability to support live image-guided surgery, or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic BrightMatter Drive. We were able to successfully co-adapt both platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. Disc shavers and trials were also used under RTN during the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery, or real-time navigation. 
Off-axis projection also allows upright neutral cervical spine operative ergonomics for the surgeons and improved surgical team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.

  16. Robotic Camera Assistance and Its Benefit in 1033 Traditional Laparoscopic Procedures: Prospective Clinical Trial Using a Joystick-guided Camera Holder.

    PubMed

    Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter

    2014-11-01

    Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: the camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare robotic assistance with human assistance. 1033 consecutive laparoscopic procedures were performed assisted by the SoloAssist®. Failures and aborts were documented, and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse angles. One case of a short circuit was reported. An emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons preferred robotic to human assistance, mostly because of the steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures. Emergency shutdown was necessary in only three cases. Some minor weak spots were identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable, and further development is desirable, but it cannot fully replace a human assistant.

  17. Virtual remote center of motion control for needle placement robots.

    PubMed

    Boctor, Emad M; Webster, Robert J; Mathieu, Herve; Okamura, Allison M; Fichtinger, Gabor

    2004-01-01

    We present an algorithm that enables percutaneous needle-placement procedures to be performed with unencoded, unregistered, minimally calibrated robots while removing the constraint of placing the needle tip on a mechanically enforced Remote Center of Motion (RCM). The algorithm requires only online tracking of the surgical tool and a five-degree-of-freedom (5-DOF) robot comprising three prismatic DOF and two rotational DOF. An incremental adaptive motion control cycle guides the needle to the insertion point and also orients it to align with the target-entry-point line. The robot executes RCM motion without having a physically constrained fulcrum point. The proof-of-concept prototype system achieved 0.78 mm translation accuracy and 1.4 degrees rotational accuracy (this is within the tracker accuracy) within 17 iterative steps (0.5-1 s). This research enables robotic assistant systems for image-guided percutaneous procedures to be prototyped/constructed more quickly and less expensively than has been previously possible. Since the clinical utility of such systems is clear and has been demonstrated in the literature, our work may help promote widespread clinical adoption of this technology by lowering system cost and complexity.
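The incremental adaptive motion cycle can be sketched in simplified 3-D vector form (illustrative only; the paper's controller and robot kinematics are not reproduced): each cycle translates the needle tip a bounded step toward the entry point and nudges the needle axis toward the entry-to-target line, so the tip behaves as a virtual fulcrum.

```python
import math

def step_toward(current, goal, max_step):
    """Move a 3-vector a bounded distance straight toward a goal."""
    d = [g - c for g, c in zip(goal, current)]
    n = math.sqrt(sum(x * x for x in d))
    if n <= max_step:
        return list(goal)
    s = max_step / n
    return [c + s * x for c, x in zip(current, d)]

def rcm_iteration(tip, axis, entry, target, step_mm=0.5, step_dir=0.05):
    """One control cycle: the tip converges to the entry point while the
    unit axis converges to the entry-to-target direction (virtual RCM)."""
    new_tip = step_toward(tip, entry, step_mm)
    goal_axis = [t - e for t, e in zip(target, entry)]
    n = math.sqrt(sum(x * x for x in goal_axis))
    goal_axis = [x / n for x in goal_axis]
    new_axis = step_toward(axis, goal_axis, step_dir)
    m = math.sqrt(sum(x * x for x in new_axis))  # renormalize the direction
    return new_tip, [x / m for x in new_axis]
```

Iterating this cycle under tool tracking converges without any mechanically enforced fulcrum, which is the point of the virtual RCM.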

  18. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference were determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using the optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single-isocenter, single-target treatment, and 0.6 ± 0.4 mm for multitarget treatment with a shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed a greater than 90% pass rate for all cases using a gamma criterion of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.
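The 3%/1 mm gamma criterion used for the dosimetric evaluation can be illustrated on 1-D dose profiles (a simplified, globally normalized version; clinical gamma analysis operates on 2-D or 3-D dose grids):

```python
import math

def gamma_pass_rate(ref, meas, spacing_mm, dose_tol=0.03, dist_tol_mm=1.0):
    """Fraction of reference points whose gamma index is <= 1, using
    global normalization to the reference maximum dose."""
    dmax = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = math.inf
        for j, dm in enumerate(meas):
            dd = (dm - dr) / (dose_tol * dmax)      # dose-difference term
            dx = (j - i) * spacing_mm / dist_tol_mm  # distance-to-agreement term
            best = min(best, math.hypot(dd, dx))
        passed += best <= 1.0
    return passed / len(ref)
```

A measured profile within 3% of the reference (or within 1 mm of a matching dose) passes; the study reports this pass rate exceeding 90% on film.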

  19. Stereotactic robot-assisted MRI-guided laser thermal ablation of radiation necrosis in the posterior cranial fossa: technical note.

    PubMed

    Chan, Alvin Y; Tran, Diem Kieu T; Gill, Amandip S; Hsu, Frank P K; Vadera, Sumeet

    2016-10-01

    Laser interstitial thermal therapy (LITT) is a minimally invasive procedure used to treat a variety of intracranial lesions. Utilization of robotic assistance with stereotactic procedures has gained attention due to potential for advantages over conventional techniques. The authors report the first case in which robot-assisted MRI-guided LITT was used to treat radiation necrosis in the posterior fossa, specifically within the cerebellar peduncle. The use of a stereotactic robot allowed the surgeon to perform LITT using a trajectory that would be extremely difficult with conventional arc-based techniques. A 60-year-old man presented with facial weakness and brainstem symptoms consistent with radiation necrosis. He had a history of anaplastic astrocytoma that was treated with CyberKnife radiosurgery 1 year prior to presentation, and he did well for 11 months until his symptoms recurred. The location and form of the lesion precluded excision but made the patient a suitable candidate for LITT. The location and configuration of the lesion required a trajectory for LITT that was too low for arc-based stereotactic navigation, and thus the ROSA robot (Medtech) was used. Using preoperative MRI acquisitions, the lesion in the posterior fossa was targeted. Bone fiducials were used to improve accuracy in registration, and the authors obtained an intraoperative CT image that was then fused with the MR image by the ROSA robot. They placed the laser applicator and then ablated the lesion under real-time MR thermometry. There were no complications, and the patient tolerated the procedure well. Postoperative 2-month MRI showed complete resolution of the lesion, and the patient had some improvement in symptoms.

  20. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system design was minimized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibration was conducted. The root mean square (RMS) error of the calibration over the 0–10 N range was 0.318 N compared to the theoretical model, which has been proven sufficient for robot control and teleoperation. PMID:25126153
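Calibration of this kind, fitting a linear map from sensor readout to applied force and reporting the RMS residual against the model, can be sketched as follows (the linear model and data here are illustrative, not the paper's):

```python
def fit_line(x, y):
    """Ordinary least squares for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def rms_error(x, y, a, b):
    """Root-mean-square residual of the fitted model over the data."""
    return (sum((yi - (a * xi + b)) ** 2
                for xi, yi in zip(x, y)) / len(x)) ** 0.5
```

Here `x` would be the FPI readout and `y` the reference force from a load cell; the paper reports the analogous residual as 0.318 N over 0-10 N.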

  1. Robotic digital subtraction angiography systems within the hybrid operating room.

    PubMed

    Murayama, Yuichi; Irie, Koreaki; Saguchi, Takayuki; Ishibashi, Toshihiro; Ebara, Masaki; Nagashima, Hiroyasu; Isoshima, Akira; Arakawa, Hideki; Takao, Hiroyuki; Ohashi, Hiroki; Joki, Tatsuhiro; Kato, Masataka; Tani, Satoshi; Ikeuchi, Satoshi; Abe, Toshiaki

    2011-05-01

    Fully equipped high-end digital subtraction angiography (DSA) within the operating room (OR) environment has emerged as a new trend in the fields of neurosurgery and vascular surgery. To describe initial clinical experience with a robotic DSA system in the hybrid OR. A newly designed robotic DSA system (Artis zeego; Siemens AG, Forchheim, Germany) was installed in the hybrid OR. The system consists of a multiaxis robotic C arm and surgical OR table. In addition to conventional neuroendovascular procedures, the system was used as an intraoperative imaging tool for various neurosurgical procedures such as aneurysm clipping and spine instrumentation. Five hundred one neurosurgical procedures were successfully conducted in the hybrid OR with the robotic DSA. During surgical procedures such as aneurysm clipping and arteriovenous fistula treatment, intraoperative 2-/3-dimensional angiography and C-arm-based computed tomographic images (DynaCT) were easily performed without moving the OR table. Newly developed virtual navigation software (syngo iGuide; Siemens AG) can be used in frameless navigation and in access to deep-seated intracranial lesions or needle placement. This newly developed robotic DSA system provides safe and precise treatment in the fields of endovascular treatment and neurosurgery.

  2. An open-source framework for testing tracking devices using Lego Mindstorms

    NASA Astrophysics Data System (ADS)

    Jomier, Julien; Ibanez, Luis; Enquobahrie, Andinet; Pace, Danielle; Cleary, Kevin

    2009-02-01

    In this paper, we present an open-source framework for testing tracking devices in surgical navigation applications. At the core of image-guided intervention systems is the tracking interface that handles communication with the tracking device and gathers tracking information. Given that the correctness of tracking information is critical for protecting patient safety and for ensuring the successful execution of an intervention, the tracking software component needs to be thoroughly tested on a regular basis. Furthermore, with the widespread use of extreme programming methodology, which emphasizes continuous and incremental testing of application components, testing design becomes critical. While it is easy to automate most of the testing process, it is often more difficult to test components that require manual intervention, such as a tracking device. Our framework consists of a robotic arm built from a set of Lego Mindstorms and an open-source toolkit written in C++ to control the robot movements and assess the accuracy of the tracking devices. The application program interface (API) is cross-platform and runs on Windows, Linux, and MacOS. We applied this framework to the continuous testing of the Image-Guided Surgery Toolkit (IGSTK), an open-source toolkit for image-guided surgery, and showed that regression testing on tracking devices can be performed at low cost and significantly improves the quality of the software.
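A regression check of the kind such a framework automates can be sketched as follows (IGSTK's actual API is not reproduced; the names here are hypothetical): command the robot through known poses, read back the tracker's reported positions, and assert the worst-case error stays within tolerance.

```python
import math

def tracking_errors(commanded, reported):
    """Euclidean error between each commanded and tracker-reported position."""
    return [math.dist(c, r) for c, r in zip(commanded, reported)]

def regression_pass(commanded, reported, tol_mm=1.0):
    """True when every reported position is within tolerance of its command."""
    return max(tracking_errors(commanded, reported)) <= tol_mm
```

Run on every build, this turns a manual bench test into an automated pass/fail gate, which is the value of driving the tracker with a repeatable robotic arm.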

  3. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or of several different objects inside it: the lower the entropy, the higher the probability that the image contains a single object; conversely, the higher the entropy, the higher the probability that it contains several objects. Consequently, we propose the use of the entropy of images captured by the robot not only for landmark search and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. To validate the proposal we defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
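    The entropy criterion in this record can be made concrete: treat the image's gray-level histogram as a probability distribution and compute its Shannon entropy. A minimal sketch (the paper's exact formulation may differ):

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy (in bits) of an image's gray-level histogram.

    `img` is assumed to hold intensities in [0, 1]; a flat histogram
    (many distinct gray levels, i.e. a cluttered scene) yields high
    entropy, while a near-uniform image yields low entropy.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log2(p)).sum())
```

    In the navigation loop described above, a low-entropy window would suggest a single candidate object (a possible landmark), while a high-entropy window would suggest clutter to treat as potential obstacles.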

  4. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or of several different objects inside it: the lower the entropy, the higher the probability that the image contains a single object; conversely, the higher the entropy, the higher the probability that it contains several objects. Consequently, we propose the use of the entropy of images captured by the robot not only for landmark search and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. To validate the proposal we defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394

  5. Autonomous surgical robotics using 3-D ultrasound guidance: feasibility study.

    PubMed

    Whitman, John; Fronheiser, Matthew P; Ivancevich, Nikolas M; Smith, Stephen W

    2007-10-01

    The goal of this study was to test the feasibility of using a real-time 3D (RT3D) ultrasound scanner with a transthoracic matrix array transducer probe to guide an autonomous surgical robot. Employing a fiducial alignment mark on the transducer to orient the robot's frame of reference and using simple thresholding algorithms to segment the 3D images, we tested the accuracy of using the scanner to automatically direct a robot arm that touched two needle tips together within a water tank. RMS measurement error was 3.8% or 1.58 mm for an average path length of 41 mm. Using these same techniques, the autonomous robot also performed simulated needle biopsies of a cyst-like lesion in a tissue phantom. This feasibility study shows the potential for 3D ultrasound guidance of an autonomous surgical robot for simple interventional tasks, including lesion biopsy and foreign body removal.
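    The guidance loop this record describes — threshold the 3D volume, locate the target, hand its coordinates to the robot — can be sketched as follows. This is a toy illustration, not the authors' code; the voxel-to-robot mapping assumes the two frames are axis-aligned, a simplification of the fiducial-based registration the study actually used.

```python
import numpy as np

def segment_target(volume, threshold):
    """Binary-threshold a 3D ultrasound volume and return the centroid
    (in voxel coordinates) of the bright target, or None if nothing passes."""
    mask = volume > threshold
    if not mask.any():
        return None
    idx = np.argwhere(mask)           # (N, 3) array of voxel indices
    return idx.mean(axis=0)           # centroid in voxel coordinates

def voxel_to_robot(centroid_vox, voxel_size_mm, origin_mm):
    """Map a voxel-space centroid into the robot frame, assuming the robot
    frame is aligned with the scanner axes (a simplifying assumption)."""
    return np.asarray(origin_mm, float) + \
        np.asarray(centroid_vox, float) * np.asarray(voxel_size_mm, float)
```

    A real system would replace the simple threshold with speckle-robust segmentation and the axis-aligned mapping with a full calibrated rigid transform.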

  6. Design, analysis and control of a novel tendon-driven magnetic resonance-guided robotic system for minimally invasive breast surgery.

    PubMed

    Jiang, Shan; Lou, Jinlong; Yang, Zhiyong; Dai, Jiansheng; Yu, Yan

    2015-09-01

    Biopsy and brachytherapy for small core breast cancer remain difficult problems in the field of cancer treatment. This research develops a magnetic resonance imaging-guided high-precision robotic system for breast puncture treatment. First, a 5-degree-of-freedom tendon-based surgical robotic system is introduced in detail. This is followed by the kinematic analysis and dynamic modeling of the robotic system, in which a mathematical dynamic model is established using the Lagrange method and a lumped-parameter tendon model is used to identify the nonlinear gain of the tendon-sheath transmission system. Based on the dynamic models, an adaptive proportional-integral-derivative controller with friction compensation is proposed for accurate position control. Through simulations using different sinusoidal input signals, we observe that the sinusoidal tracking error at 1/2π Hz is 0.41 mm. Finally, experiments on tendon-sheath transmission and needle insertion performance are conducted, which show that the insertion precision is 0.68 mm in a laboratory environment. © IMechE 2015.
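    Why friction compensation matters for positioning accuracy can be seen in a one-dimensional toy model: under plain PD control, Coulomb friction in a tendon-sheath transmission leaves a steady-state "stiction band" of width roughly f_c/kp around the target, while a feedforward term of the friction magnitude cancels it. The sketch below is a deliberately simplified stand-in for the paper's adaptive PID controller; all gains and parameters are illustrative.

```python
import numpy as np

def simulate(kp, kd, f_c, compensate, t_end=3.0, dt=1e-3, target=1.0):
    """Simulate a 1-DOF unit mass under PD control with Coulomb friction.

    When `compensate` is True, a feedforward force of magnitude `f_c` is
    added in the direction of the position error, mimicking friction
    compensation in a tendon-sheath drive. Returns the final |error|.
    """
    x, v, m = 0.0, 0.0, 1.0
    for _ in range(int(t_end / dt)):
        e = target - x
        u = kp * e - kd * v
        if compensate:
            u += f_c * np.sign(e)         # feedforward friction cancellation
        # Coulomb friction with stiction: the mass stays stuck while the
        # applied force cannot exceed the friction level.
        if abs(v) < 1e-6 and abs(u) <= f_c:
            a = 0.0
        else:
            a = (u - f_c * np.sign(v if abs(v) > 1e-6 else u)) / m
        v += a * dt
        x += v * dt
    return abs(target - x)
```

    Running the model with and without compensation shows the uncompensated controller stalling near the stiction band (about f_c/kp = 0.04 here) while the compensated one converges essentially to the target.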

  7. 3-D ultrasound guidance of surgical robotics: a feasibility study.

    PubMed

    Pua, Eric C; Fronheiser, Matthew P; Noble, Joanna R; Light, Edward D; Wolf, Patrick D; von Allmen, Daniel; Smith, Stephen W

    2006-11-01

    Laparoscopic ultrasound has seen increased use as a surgical aid in general, gynecological, and urological procedures. The application of real-time, three-dimensional (RT3D) ultrasound to these laparoscopic procedures may increase information available to the surgeon and serve as an additional intraoperative guidance tool. The integration of RT3D with recent advances in robotic surgery also can increase automation and ease of use. In this study, a 1-cm diameter probe for RT3D has been used laparoscopically for in vivo imaging of a canine. The probe, which operates at 5 MHz, was used to image the spleen, liver, and gall bladder as well as to guide surgical instruments. Furthermore, the three-dimensional (3-D) measurement system of the volumetric scanner used with this probe was tested as a guidance mechanism for a robotic linear motion system in order to assess the feasibility of RT3D/robotic surgery integration. Using images acquired with the 3-D laparoscopic ultrasound device, coordinates were acquired by the scanner and used to direct a robotically controlled needle toward desired in vitro targets as well as targets in a post-mortem canine. The rms error for these measurements was 1.34 mm using optical alignment and 0.76 mm using ultrasound alignment.

  8. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S.

    2014-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. This system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber-optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, where the slave robot resides in the scanner bore and the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system. It has a position tracking error of 0.318 mm and a sine wave force tracking error of 2.227 N. PMID:25126446

  9. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback.

    PubMed

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S

    2013-01-01

    This paper presents a surgical master-slave teleoperation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. This system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber-optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid-actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, where the slave robot resides in the scanner bore and the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are demonstrated to validate the tracking performance of the integrated system. It has a position tracking error of 0.318 mm and a sine wave force tracking error of 2.227 N.

  10. Robotics in keyhole transcranial endoscope-assisted microsurgery: a critical review of existing systems and proposed specifications for new robotic platforms.

    PubMed

    Marcus, Hani J; Seneci, Carlo A; Payne, Christopher J; Nandi, Dipankar; Darzi, Ara; Yang, Guang-Zhong

    2014-03-01

    Over the past decade, advances in image guidance, endoscopy, and tube-shaft instruments have allowed for the further development of keyhole transcranial endoscope-assisted microsurgery, utilizing smaller craniotomies and minimizing exposure and manipulation of unaffected brain tissue. Although such approaches offer the possibility of shorter operating times, reduced morbidity and mortality, and improved long-term outcomes, the technical skills required to perform such surgery are inevitably greater than for traditional open surgical techniques, and they have not been widely adopted by neurosurgeons. Surgical robotics, which has the ability to improve visualization and increase dexterity, therefore has the potential to enhance surgical performance. To evaluate the role of surgical robots in keyhole transcranial endoscope-assisted microsurgery. The technical challenges faced by surgeons utilizing keyhole craniotomies were reviewed, and a thorough appraisal of presently available robotic systems was performed. Surgical robotic systems have the potential to incorporate advances in augmented reality, stereoendoscopy, and jointed-wrist instruments, and therefore to significantly impact the field of keyhole neurosurgery. To date, over 30 robotic systems have been applied to neurosurgical procedures. The vast majority of these robots are best described as supervisory controlled, and are designed for stereotactic or image-guided surgery. Few telesurgical robots are suitable for keyhole neurosurgical approaches, and none are in widespread clinical use in the field. New robotic platforms in minimally invasive neurosurgery must possess clear and unambiguous advantages over conventional approaches if they are to achieve significant clinical penetration.

  11. Robot Service and Repair. Teacher's Guide.

    ERIC Educational Resources Information Center

    Pittsburg State Univ., KS. Kansas Vocational Curriculum Dissemination Center.

    This document is a teacher's guide for teaching a course on robot service and repair. The guide is organized in four units covering the following topics: introduction to robots, power supply, robot control systems, and service and repair. Each unit contains several lesson plans on the unit topic. Lesson plans consist of objectives, tools and…

  12. A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot

    PubMed Central

    Zhang, Liguo; Sun, Jianguo; Yin, Guisheng; Zhao, Jing; Han, Qilong

    2015-01-01

    In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where the structured light sources are always disturbed by various noises, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) sensor to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noises in structured light images. An adaptive monochromatic space is applied to preprocess the image with ambient noises. In the monochromatic image, the laser stripe obtained is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and the proposed algorithm contributes to the weld line inspection and the robot navigation. PMID:26110403
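    The final step described — extracting stripe centre points from the segmented image — is commonly done with a per-column intensity-weighted centroid, which gives sub-pixel centre estimates. A minimal sketch of that step only (the adaptive monochromatic preprocessing and deconvolution stages in the paper are omitted):

```python
import numpy as np

def stripe_centers(img, min_intensity=0.2):
    """Estimate the laser-stripe centre row in each image column as the
    intensity-weighted centroid of that column. Columns with no pixel
    above `min_intensity` yield NaN (no stripe detected there)."""
    rows = np.arange(img.shape[0], dtype=float)
    centers = np.full(img.shape[1], np.nan)
    for c in range(img.shape[1]):
        col = img[:, c]
        w = np.where(col >= min_intensity, col, 0.0)   # suppress background
        if w.sum() > 0:
            centers[c] = (rows * w).sum() / w.sum()
    return centers
```

    The resulting per-column centres trace the laser line; fitting the two arms of the cross pattern to these points would then recover the weld-line pose.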

  13. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.

  14. Intraoperative optical coherence tomography of the cerebral cortex using a 7 degree-of-freedom robotic arm

    NASA Astrophysics Data System (ADS)

    Reyes Perez, Robnier; Jivraj, Jamil; Yang, Victor X. D.

    2017-02-01

    Optical Coherence Tomography (OCT) provides a high-resolution imaging technique with limited depth penetration. The current use of OCT is limited to relatively small areas of tissue for anatomical structure diagnosis or minimally invasive guided surgery. In this study, we propose to image a large area of the surface of the cerebral cortex. This experiment aims to evaluate the potential difficulties encountered when applying OCT imaging to large and irregular surface areas. The current state-of-the-art OCT imaging technology uses scanning systems with at most 3 degrees-of-freedom (DOF) to obtain a 3D image representation of the sample tissue. We propose the use of a 7 DOF industrial robotic arm to increase the scanning capabilities of our OCT. Such a system will be capable of acquiring data from large samples of tissue that are too irregular for conventional methods. Advantages and disadvantages of our system are discussed.

  15. A novel semi-robotized device for high-precision 18F-FDG-guided breast cancer biopsy.

    PubMed

    Hellingman, D; Teixeira, S C; Donswijk, M L; Rijkhorst, E J; Moliner, L; Alamo, J; Loo, C E; Valdés Olmos, R A; Stokkel, M P M

    To assess the 3D geometric sampling accuracy of a new PET-guided system for breast cancer biopsy (BCB) from areas within the tumour with high 18F-FDG uptake. In the context of the European Union project MammoCare, a prototype semi-robotic stereotactic BCB device was incorporated into a dedicated high-resolution PET detector for breast imaging. The system consists of 2 stacked rings, each containing 12 plane detectors, forming a dodecagon with a 186 mm aperture for 3D reconstruction (1 mm³ voxels). A vacuum-assisted biopsy needle attached to a robot-controlled arm was used. To test the accuracy of needle placement, the needle tip was labelled with 18F-FDG and positioned at 78 target coordinates distributed over a 35 mm × 24 mm × 28 mm volume within the PET detector field-of-view. At each position images were acquired from which the needle positioning accuracy was calculated. Additionally, phantom-based biopsy proofs, as well as MammoCare images of 5 breast cancer patients, were evaluated for the 3D automated locating of 18F-FDG uptake areas within the tumour. Needle positioning tests revealed an average accuracy of 0.5 mm (range 0-1 mm), 0.6 mm (range 0-2 mm), and 0.4 mm (range 0-2 mm) for the x/y/z-axes, respectively. Furthermore, the MammoCare system was able to visualize and locate small (<10 mm) regions with high 18F-FDG uptake within the tumour suitable for PET-guided biopsy after being located by the 3D automated application. Accuracy testing demonstrated the high precision of this semi-automatic 3D PET-guided system for breast cancer core needle biopsy. Its clinical feasibility evaluation in breast cancer patients scheduled for neo-adjuvant chemotherapy will follow. Copyright © 2016 Elsevier España, S.L.U. y SEMNIM. All rights reserved.

  16. Real-time 3D ultrasound guidance of autonomous surgical robot for shrapnel detection and breast biopsy

    NASA Astrophysics Data System (ADS)

    Rogers, Albert J.; Light, Edward D.; von Allmen, Daniel; Smith, Stephen W.

    2009-02-01

    Two studies have been conducted using real time 3D ultrasound and an automated robot system for carrying out surgical tasks. The first task is to perform a breast lesion biopsy automatically after detection by ultrasound. Combining 3D ultrasound with traditional mammography allows real time guidance of the biopsy needle. Image processing techniques analyze volumes to calculate the location of a target lesion. This position was converted into the coordinate system of a three axis robot which moved a needle probe to touch the lesion. The second task is to remove shrapnel from a tissue phantom autonomously. In some emergency situations, shrapnel detection in the body is necessary for quick treatment. Furthermore, small or uneven shrapnel geometry may hinder location by typical ultrasound imaging methods. Vibrations and small displacements can be induced in ferromagnetic shrapnel by a variable electromagnet. We used real time 3D color Doppler to locate this motion for 2 mm long needle fragments and determined the 3D position of the fragment in the scanner coordinates. The rms error of the image guided robot for 5 trials was 1.06 mm for this task which was accomplished in 76 seconds.
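    Converting a target found "in the scanner coordinates" into the robot's coordinate system requires a calibrated rigid transform, typically estimated from corresponding fiducial positions seen in both frames. A standard least-squares solution (the Kabsch algorithm) is sketched below; this is a generic, well-known method, not necessarily the registration used in the study.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B
    via the Kabsch algorithm. A and B are (N, 3) arrays of corresponding
    fiducial positions, e.g. in scanner and robot coordinates."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

    Once (R, t) are known, any Doppler-localized fragment position p in scanner coordinates maps to the robot frame as R @ p + t.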

  17. Active point out-of-plane ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Guo, Xiaoyu; Zhang, Haichong K.; Kang, Hyunjae; Etienne-Cummings, Ralph; Boctor, Emad M.

    2015-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common intraoperative medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the transducer and the ultrasound image. Point-based phantoms are considered to be accurate, but their calibration framework assumes that the point is in the image plane. In this work, we present the use of an active point phantom and a calibration framework that accounts for the elevational uncertainty of the point. Given the lateral and axial position of the point in the ultrasound image, we approximate a circle in the axial-elevational plane with a radius equal to the axial position. The standard approach transforms all of the imaged points to be a single physical point. In our approach, we minimize the distances between the circular subsets of each image, with them ideally intersecting at a single point. We ran simulations for noiseless and noisy cases, presenting results on out-of-plane estimation errors, calibration estimation errors, and point reconstruction precision. We also performed an experiment using a robot arm as the tracker, resulting in a point reconstruction precision of 0.64 mm.
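    The core optimization in this record — find the point where the per-image circular loci ideally intersect — can be illustrated with a Gauss-Newton least-squares solve. The sketch below reduces the problem to 2D circles in a common frame for clarity; the actual calibration works with circles transformed by each tracked pose.

```python
import numpy as np

def intersect_circles(centers, radii, iters=200):
    """Find the point p minimizing sum_i (||p - c_i|| - r_i)^2 by
    Gauss-Newton, i.e. the best common intersection of a set of circles
    with centers `centers` (N, 2) and radii `radii` (N,)."""
    centers = np.asarray(centers, float)
    radii = np.asarray(radii, float)
    p = centers.mean(axis=0) + 1e-3           # initial guess near the data
    for _ in range(iters):
        d = p - centers                        # (N, 2) offsets to each center
        dist = np.linalg.norm(d, axis=1)
        resid = dist - radii                   # signed distance-to-circle residuals
        J = d / dist[:, None]                  # Jacobian of ||p - c_i|| w.r.t. p
        step, *_ = np.linalg.lstsq(J, -resid, rcond=None)
        p = p + step
        if np.linalg.norm(step) < 1e-12:
            break
    return p
```

    With noisy radii the same solver returns the least-squares point, which is what makes the formulation robust to the elevational uncertainty of the active point.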

  18. Study of Image Qualities From 6D Robot-Based CBCT Imaging System of Small Animal Irradiator.

    PubMed

    Sharma, Sunil; Narayanasamy, Ganesh; Clarkson, Richard; Chao, Ming; Moros, Eduardo G; Zhang, Xin; Yan, Yulong; Boerma, Marjan; Paudel, Nava; Morrill, Steven; Corry, Peter; Griffin, Robert J

    2017-01-01

    To assess the quality of cone beam computed tomography images obtained by a robotic arm-based and image-guided small animal conformal radiation therapy device. The small animal conformal radiation therapy device is equipped with a 40 to 225 kV X-ray tube mounted on a custom made gantry, a 1024 × 1024 pixel flat-panel detector (200 μm resolution), and a programmable 6 degrees of freedom robot for cone beam computed tomography imaging and conformal delivery of radiation doses. A series of 2-dimensional radiographic projection images were recorded in cone beam mode by placing and rotating microcomputed tomography phantoms on the "palm" of the robotic arm. Reconstructed images were studied for image quality (spatial resolution, image uniformity, computed tomography number linearity, voxel noise, and artifacts). Geometric accuracy was measured to be 2%, corresponding to 0.7 mm accuracy on a Shelley microcomputed tomography QA phantom. Qualitative resolution of reconstructed axial computed tomography slices using the resolution coils was within 200 μm. Quantitative spatial resolution was found to be 3.16 lp/mm. Uniformity of the system was measured within 34 Hounsfield units on a QRM microcomputed tomography water phantom. Computed tomography numbers measured using the linearity plate were linear with material density (R² > 0.995). Cone beam computed tomography images of the QRM multidisk phantom had minimal artifacts. Results showed that the small animal conformal radiation therapy device is capable of producing high-quality cone beam computed tomography images for precise and conformal small animal dose delivery. With its high-caliber imaging capabilities, the small animal conformal radiation therapy device is a powerful tool for small animal research.

  19. MO-DE-210-07: Investigation of Treatment Interferences of a Novel Robotic Ultrasound Radiotherapy Guidance System with Clinical VMAT Plans for Liver SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, R; Bruder, R; Schweikard, A

    Purpose: To evaluate the proportion of liver SBRT cases in which robotic ultrasound image guidance concurrent with beam delivery can be deployed without interfering with clinically used VMAT beam configurations. Methods: A simulation environment incorporating LINAC, couch, planning CT, and robotic ultrasound guidance hardware was developed. Virtual placement of the robotic ultrasound hardware was guided by a target visibility map rendered on the CT surface. The map was computed on GPU by using the planning CT to simulate ultrasound propagation and attenuation along rays connecting skin surface points to a rasterized imaging target. The visibility map was validated in a prostate phantom experiment by capturing live ultrasound images of the prostate from different phantom locations. In 20 liver SBRT patients treated with VMAT, the simulation environment was used to place the robotic hardware and ultrasound probe at imaging locations indicated on the visibility map. Imaging targets were either the entire PTV (range 5.9–679.5 ml) or the entire GTV (range 0.9–343.4 ml). Presence or absence of mechanical collisions with LINAC, couch, and patient body, as well as interference with treated beams, was recorded. Results: For PTV targets, robotic ultrasound guidance without mechanical collision was possible in 80% of the cases and guidance without beam interference was possible in 60% of the cases. For the smaller GTV targets, these proportions were 95% and 85%, respectively. GTV size (1/20), elongated shape (1/20), and depth (1/20) were the main factors limiting the availability of non-interfering imaging positions. Conclusion: This study indicates that for VMAT liver SBRT, robotic ultrasound tracking of a relevant internal target would be possible in 85% of cases while using treatment plans currently deployed in the clinic. With beam re-planning in accordance with the presence of robotic ultrasound guidance, intra-fractional ultrasound guidance may be an option for 95% of the liver SBRT cases. This project was funded by NIH Grant R41CA174089.
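    The visibility-map computation described here — simulating ultrasound propagation and attenuation along rays from skin points to the target — reduces, in its simplest form, to a line integral of attenuation through the CT volume, with a skin point marked visible when the integral stays below a threshold. A crude CPU sketch of that idea (the actual GPU implementation is more sophisticated; all parameter values are illustrative):

```python
import numpy as np

def target_visible(att, skin, target, n_samples=200, max_att=3.0):
    """Decide whether an ultrasound beam from `skin` can reach `target`
    by integrating the per-voxel attenuation volume `att` along the ray."""
    skin, target = np.asarray(skin, float), np.asarray(target, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = skin[None, :] + ts[:, None] * (target - skin)[None, :]
    idx = np.clip(np.round(pts).astype(int), 0, np.array(att.shape) - 1)
    seg_len = np.linalg.norm(target - skin) / (n_samples - 1)
    total = att[idx[:, 0], idx[:, 1], idx[:, 2]].sum() * seg_len
    return bool(total < max_att)

def visibility_map(att, skin_points, target):
    """Boolean visibility for each candidate probe location on the skin."""
    return [target_visible(att, s, target) for s in skin_points]
```

    Painting these booleans onto the skin surface yields exactly the kind of map used above to steer virtual placement of the robotic probe away from ribs and air gaps.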

  20. Experientially guided robots [for planet exploration]

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1974-01-01

    This paper argues that an experientially guided robot is necessary to successfully explore far-away planets. Such a robot is characterized as having sense organs which receive sensory information from its environment and motor systems which allow it to interact with that environment. The sensorimotor information which it receives is organized into an experiential knowledge structure, and this knowledge in turn is used to guide the robot's future actions. A summary is presented of a problem solving system which is being used as a test bed for developing such a robot. The robot currently engages in the behaviors of visual tracking, focusing down, and looking around in a simulated Martian landscape. Finally, some unsolved problems are outlined whose solutions are necessary before an experientially guided robot can be produced. These problems center around organizing the motivational and memory structure of the robot and understanding its high-level control mechanisms.

  1. MRI-guided procedures in various regions of the body using a robotic assistance system in a closed-bore scanner: preliminary clinical experience and limitations.

    PubMed

    Moche, Michael; Zajonz, Dirk; Kahn, Thomas; Busse, Harald

    2010-04-01

    To present the clinical setup and workflow of a robotic assistance system for image-guided interventions in a conventional magnetic resonance imaging (MRI) environment and to report our preliminary clinical experience with percutaneous biopsies in various body regions. The MR-compatible, servo-pneumatically driven, robotic device (Innomotion) fits into the 60-cm bore of a standard MR scanner. The needle placement (n = 25) accuracy was estimated by measuring the 3D deviation between needle tip and prescribed target point in a phantom. Percutaneous biopsies in six patients and different body regions were planned by graphically selecting entry and target points on intraoperatively acquired roadmap MR data. For insertion depths between 29 and 95 mm, the average 3D needle deviation was 2.2 ± 0.7 mm (range 0.9-3.8 mm). Patients with a body mass index of up to approximately 30 kg/m² fitted into the bore with the device. Clinical work steps and limitations are reported for the various applications. All biopsies were diagnostic and could be completed without any major complications. Median planning and intervention times were 25 (range 20-36) and 44 (36-68) minutes, respectively. Preliminary clinical results in a standard MRI environment suggest that the presented robotic device provides accurate guidance for percutaneous procedures in various body regions. Shorter procedure times may be achievable by optimizing technical and workflow aspects. © 2010 Wiley-Liss, Inc.

  2. Technical vision for robots

    NASA Astrophysics Data System (ADS)

    1985-01-01

    A new invention by scientists who have copied the structure of the human eye will help replace a telescope-watching human astronomer with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, the artificial eye, with dimensions close to those of a human eye, discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited to serve as the eye's outer coat, lens, and vitreous body were used. The retina has been replaced with a bundle of very fine glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.

  3. Toward real-time tumor margin identification in image-guided robotic brain tumor resection

    NASA Astrophysics Data System (ADS)

    Hu, Danying; Jiang, Yang; Belykh, Evgenii; Gong, Yuanzheng; Preul, Mark C.; Hannaford, Blake; Seibel, Eric J.

    2017-03-01

    For patients with malignant brain tumors (glioblastomas), a safe maximal resection of tumor is critical for an increased survival rate. However, complete resection of the cancer is hard to achieve due to the invasive nature of these tumors, where the margins blur from frank tumor into more normal brain tissue that single cells or clusters of malignant cells may nevertheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative, contrast-enhanced visual information about the tumor in neurosurgery. The current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resections of such tumors. Real-time tumor margin identification in NIR imaging could help both surgeons and patients by reducing the operation time and space required by other imaging modalities such as intraoperative MRI, and it has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying the tumor boundaries in an ex-vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were achieved iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. Results demonstrated the feasibility of detecting brain tumor margins in quasi-real time; the approach has the potential to yield improved-precision brain tumor resection techniques or even robotic interventions in the future.
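    The Chan-Vese model used in this record minimizes an energy that balances two region-mean fidelity terms against a contour-length penalty. Its data-fidelity core can be sketched with a simple alternating update; the curvature/length regularisation of the full level-set formulation is omitted here for brevity, so this is a simplification, not the authors' method.

```python
import numpy as np

def chan_vese_two_phase(img, iters=20):
    """Piecewise-constant two-phase segmentation: alternately update the
    region means (c1, c2) and reassign each pixel to the region whose mean
    is closer — the data term of the Chan-Vese energy, without the
    contour-length regularisation."""
    img = np.asarray(img, float)
    mask = img > img.mean()               # initial partition of the image
    for _ in range(iters):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if (new_mask == mask).all():      # converged: partition is stable
            break
        mask = new_mask
    return mask
```

    On noisy fluorescence images the length term the full model adds acts as a smoothness prior on the contour, which is why the level-set formulation is preferred over this bare data-term version in practice.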

  4. 2D–3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery

    PubMed Central

    Liu, Wen Pei; Otake, Yoshito; Azizian, Mahdi; Wagner, Oliver J.; Sorger, Jonathan M.; Armand, Mehran; Taylor, Russell H.

    2015-01-01

Purpose C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. When cone-beam computed tomography (CBCT) scans are available, 2D–3D registration can link intraoperative radiographs to them for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumoral fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. Methods Intensity-based 2D–3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of C-arm systems currently available for clinical use. Results The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, resulting in an angular difference of Δθ ≈ 30°. The two C-arm systems provided a mean TRE ≤ 2.5 mm and ≤ 2.0 mm, respectively (i.e., comparable to standard clinical intraoperative navigation systems). Conclusions C-arm 3D localization from dual 2D–3D registered radiographs was feasible and applicable for intraoperative image guidance during da Vinci robotic thoracic interventions using the proposed workflow. Studies of tissue deformation and further in vivo experiments are required before clinical evaluation of this system. PMID:25503592
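The reported mean TRE numbers are straightforward to compute once corresponding target points are available in both frames. A minimal sketch, with the caveat that the fiducial coordinates and the residual registration error below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def rigid_transform(points, R, t):
    """Apply rotation R and translation t to an Nx3 array of points."""
    return points @ R.T + t

def tre_mean(targets_true, targets_est):
    """Mean target registration error: mean Euclidean distance (mm)."""
    return float(np.linalg.norm(targets_true - targets_est, axis=1).mean())

# hypothetical peritumoral fiducials (mm) and a small residual
# registration error: 1 degree about z plus a 0.5 mm shift
targets = np.array([[10.0, 20.0, 30.0],
                    [15.0, 25.0, 28.0],
                    [12.0, 18.0, 33.0]])
theta = np.deg2rad(1.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
estimated = rigid_transform(targets, R, np.array([0.5, 0.0, 0.0]))
print(round(tre_mean(targets, estimated), 3))
```

For sub-millimetre residual transforms like this one, the mean TRE stays well under the 2–2.5 mm bounds reported for the two C-arm systems.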

  5. Smith predictor-based robot control for ultrasound-guided teleoperated beating-heart surgery.

    PubMed

    Bowthorpe, Meaghan; Tavakoli, Mahdi; Becher, Harald; Howe, Robert

    2014-01-01

Performing surgery on fast-moving heart structures while the heart is freely beating is next to impossible, yet the ability to do so would greatly benefit patients. By controlling a teleoperated robot to continuously follow the heart's motion, the heart can be made to appear stationary: the surgeon operates on a seemingly stationary heart when in reality it is freely beating. The heart's motion is measured from ultrasound images and thus involves a non-negligible delay due to image acquisition and processing, estimated at 150 ms, which, if not compensated for, can cause the teleoperated robot's end-effector (i.e., the surgical tool) to collide with and puncture the heart. This research proposes the use of a Smith predictor to compensate for this time delay when calculating the reference position for the teleoperated robot. The results suggest that heart motion tracking is improved: introducing the Smith predictor significantly decreases both the mean absolute error and the mean integrated square error in making the distance between the robot's end-effector and the heart follow the surgeon's motion.
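The delay-compensation idea can be sketched with a discrete-time toy model. The assumptions are all illustrative rather than the paper's: a first-order plant stands in for the robot dynamics, a simple proportional controller for the motion controller, a 15-sample measurement delay for the imaging latency, and the plant model inside the predictor is exact.

```python
import numpy as np

def simulate(K, d, n=200, a=0.9, b=0.1, use_smith=True):
    """Toy first-order plant y[k+1] = a*y[k] + b*u[k-d] whose input acts
    with a d-sample delay (the image acquisition/processing latency). The
    Smith predictor feeds back the undelayed model output plus the delayed
    model mismatch, so the controller effectively sees a delay-free plant."""
    r = 1.0                                 # step reference position
    y = np.zeros(n)                         # plant output
    ym = np.zeros(n)                        # internal (delay-free) model
    u = np.zeros(n)                         # control history
    for k in range(n - 1):
        if use_smith:
            fb = ym[k] + (y[k] - (ym[k - d] if k >= d else 0.0))
        else:
            fb = y[k]                       # naive feedback of delayed plant
        u[k] = K * (r - fb)
        y[k + 1] = a * y[k] + b * (u[k - d] if k >= d else 0.0)
        ym[k + 1] = a * ym[k] + b * u[k]
    return y

y_smith = simulate(K=5.0, d=15)
y_naive = simulate(K=5.0, d=15, use_smith=False)
print(round(y_smith[-1], 2))   # settles near bK/(1 - a + bK) = 0.83
```

With the predictor, the response is a delayed copy of the delay-free closed loop and settles smoothly; with naive feedback the same gain drives the loop unstable, which is the collision risk the paper is guarding against.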

  6. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot.

    PubMed

    Bengochea-Guevara, José M; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-02-24

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them.
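As a rough illustration of the vision-guided control stage, the sketch below evaluates a tiny Mamdani-style fuzzy rule base mapping lateral offset and heading error to a steering correction. The membership shapes, the rule combination, and the steering values are assumptions made for the example, not the two controllers designed in the study.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_steer(offset_m, heading_rad):
    """Minimal Mamdani-style sketch: fuzzify lateral offset (m) and heading
    error (rad) into left/centre/right sets, OR the two inputs per set,
    then defuzzify by a weighted mean of corrective steering actions."""
    sets = {"left":   (-1.0, -0.5, 0.0),
            "centre": (-0.5,  0.0, 0.5),
            "right":  ( 0.0,  0.5, 1.0)}
    action = {"left": 0.6, "centre": 0.0, "right": -0.6}  # rad, steer back
    num = den = 0.0
    for name, (a, b, c) in sets.items():
        mu = max(tri(offset_m, a, b, c), tri(heading_rad, a, b, c))
        num += mu * action[name]
        den += mu
    return num / den if den else 0.0

print(round(fuzzy_steer(0.3, 0.1), 3))   # → -0.257 (drifted right: steer left)
```

The same scaffolding extends naturally to a full rule table over the two inputs; here a single membership family is shared between them to keep the sketch short.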

  7. Merge Fuzzy Visual Servoing and GPS-Based Planning to Obtain a Proper Navigation Behavior for a Small Crop-Inspection Robot

    PubMed Central

    Bengochea-Guevara, José M.; Conesa-Muñoz, Jesus; Andújar, Dionisio; Ribeiro, Angela

    2016-01-01

    The concept of precision agriculture, which proposes farming management adapted to crop variability, has emerged in recent years. To effectively implement precision agriculture, data must be gathered from the field in an automated manner at minimal cost. In this study, a small autonomous field inspection vehicle was developed to minimise the impact of the scouting on the crop and soil compaction. The proposed approach integrates a camera with a GPS receiver to obtain a set of basic behaviours required of an autonomous mobile robot to inspect a crop field with full coverage. A path planner considered the field contour and the crop type to determine the best inspection route. An image-processing method capable of extracting the central crop row under uncontrolled lighting conditions in real time from images acquired with a reflex camera positioned on the front of the robot was developed. Two fuzzy controllers were also designed and developed to achieve vision-guided navigation. A method for detecting the end of a crop row using camera-acquired images was developed. In addition, manoeuvres necessary for the robot to change rows were established. These manoeuvres enabled the robot to autonomously cover the entire crop by following a previously established plan and without stepping on the crop row, which is an essential behaviour for covering crops such as maize without damaging them. PMID:26927102

  8. Preoperative fiducial coil placement facilitates robot-assisted laparoscopic excision of retroperitoneal small solitary metastasis of kidney cancer.

    PubMed

    Agrawal, Vineet; Sharma, Ashwani; Wu, Guan

    2014-11-01

    Image-guided fiducial markers are being used in surgery, especially in spine and breast surgery, and radiotherapy, allowing localization of tumor sites precisely. We report a case of fiducial coil use in a man undergoing a robot-assisted laparoscopic resection of a metastatic nodule under the ipsilateral diaphragm after robot-assisted partial nephrectomy performed 2 years ago for a left upper pole renal tumor. The fiducial coil facilitated the localization of the lesion, which would otherwise have been challenging because of its small size and location. In addition, the fiducial coil was helpful to avoid cutting into the lesion directly. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Image-guided techniques in renal and hepatic interventions.

    PubMed

    Najmaei, Nima; Mostafavi, Kamal; Shahbazi, Sahar; Azizian, Mahdi

    2013-12-01

    Development of new imaging technologies and advances in computing power have enabled the physicians to perform medical interventions on the basis of high-quality 3D and/or 4D visualization of the patient's organs. Preoperative imaging has been used for planning the surgery, whereas intraoperative imaging has been widely employed to provide visual feedback to a clinician when he or she is performing the procedure. In the past decade, such systems demonstrated great potential in image-guided minimally invasive procedures on different organs, such as brain, heart, liver and kidneys. This article focuses on image-guided interventions and surgery in renal and hepatic surgeries. A comprehensive search of existing electronic databases was completed for the period of 2000-2011. Each contribution was assessed by the authors for relevance and inclusion. The contributions were categorized on the basis of the type of operation/intervention, imaging modality and specific techniques such as image fusion and augmented reality, and organ motion tracking. As a result, detailed classification and comparative study of various contributions in image-guided renal and hepatic interventions are provided. In addition, the potential future directions have been sketched. With a detailed review of the literature, potential future trends in development of image-guided abdominal interventions are identified, namely, growing use of image fusion and augmented reality, computer-assisted and/or robot-assisted interventions, development of more accurate registration and navigation techniques, and growing applications of intraoperative magnetic resonance imaging. Copyright © 2012 John Wiley & Sons, Ltd.

  10. A simple approach to a vision-guided unmanned vehicle

    NASA Astrophysics Data System (ADS)

    Archibald, Christopher; Millar, Evan; Anderson, Jon D.; Archibald, James K.; Lee, Dah-Jye

    2005-10-01

This paper describes the design and implementation of a vision-guided autonomous vehicle that represented BYU in the 2005 Intelligent Ground Vehicle Competition (IGVC), in which autonomous vehicles navigate a course marked with white lines while avoiding obstacles consisting of orange construction barrels, white buckets, and potholes. Our project began in the context of a senior capstone course in which multi-disciplinary teams of five students were responsible for the design, construction, and programming of their own robots. Each team received a computer motherboard, a camera, and a small budget for the purchase of additional hardware, including a chassis and motors. The resource constraints resulted in a simple vision-based design that processes the sequence of images from the single camera to determine motor controls. Color segmentation separates white and orange from each image, and the segmented image is then examined using a 10x10 grid system, effectively creating a low-resolution picture for each of the two colors. Depending on its position, each filled grid square influences the selection of an appropriate turn magnitude. Motor commands determined from the white and orange images are then combined to yield the final motion command for each video frame. We describe the complete algorithm and the robot hardware, and we present results that show the overall effectiveness of our control approach.
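The grid-based steering idea can be sketched as follows. The cell-fill threshold and the row/column weights below are invented for illustration; the abstract does not give the BYU team's actual turn-magnitude values.

```python
import numpy as np

def grid_occupancy(mask, n=10):
    """Downsample a binary segmentation mask to an n x n occupancy grid:
    a cell is 'filled' when enough of its pixels belong to the colour class."""
    h, w = mask.shape
    cells = mask.reshape(n, h // n, n, w // n).mean(axis=(1, 3))
    return cells > 0.25          # fill threshold is an assumed value

def turn_command(grid):
    """Toy steering rule (assumed weights, not the team's exact tables):
    filled cells push the robot away from them; rows nearer the robot
    (bottom of the image) and cells farther from the centre line weigh more."""
    n = grid.shape[0]
    cols = np.arange(n) - (n - 1) / 2.0        # lateral position of each cell
    rows = np.arange(1, n + 1)[:, None] / n    # bottom rows count more
    return float(-(grid * rows * cols).sum())  # negative = steer left

# obstacle mask occupying the lower-right of a 100x100 segmented image
mask = np.zeros((100, 100), dtype=bool)
mask[60:, 70:] = True
grid = grid_occupancy(mask)
print(round(turn_command(grid), 1))   # → -35.7 (obstacle on right: steer left)
```

Running the same rule on the white-line grid and the orange-obstacle grid and summing the two commands mirrors the combination step described in the abstract.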

  11. Robotic-assisted real-time MRI-guided TAVR: from system deployment to in vivo experiment in swine model.

    PubMed

    Chan, Joshua L; Mazilu, Dumitru; Miller, Justin G; Hunt, Timothy; Horvath, Keith A; Li, Ming

    2016-10-01

    Real-time magnetic resonance imaging (rtMRI) guidance provides significant advantages during transcatheter aortic valve replacement (TAVR) as it provides superior real-time visualization and accurate device delivery tracking. However, performing a TAVR within an MRI scanner remains difficult due to a constrained procedural environment. To address these concerns, a magnetic resonance (MR)-compatible robotic system to assist in TAVR deployments was developed. This study evaluates the technical design and interface considerations of an MR-compatible robotic-assisted TAVR system with the purpose of demonstrating that such a system can be developed and executed safely and precisely in a preclinical model. An MR-compatible robotic surgical assistant system was built for TAVR deployment. This system integrates a 5-degrees of freedom (DoF) robotic arm with a 3-DoF robotic valve delivery module. A user interface system was designed for procedural planning and real-time intraoperative manipulation of the robot. The robotic device was constructed of plastic materials, pneumatic actuators, and fiber-optical encoders. The mechanical profile and MR compatibility of the robotic system were evaluated. The system-level error based on a phantom model was 1.14 ± 0.33 mm. A self-expanding prosthesis was successfully deployed in eight Yorkshire swine under rtMRI guidance. Post-deployment imaging and necropsy confirmed placement of the stent within 3 mm of the aortic valve annulus. These phantom and in vivo studies demonstrate the feasibility and advantages of robotic-assisted TAVR under rtMRI guidance. This robotic system increases the precision of valve deployments, diminishes environmental constraints, and improves the overall success of TAVR.

  12. Intraoperative tumor localization and tissue distinction during robotic adrenalectomy using indocyanine green fluorescence imaging: a feasibility study.

    PubMed

    Sound, Sara; Okoh, Alexis K; Bucak, Emre; Yigitbas, Hakan; Dural, Cem; Berber, Eren

    2016-02-01

    To investigate the feasibility of a method for intraoperative tumor localization and tissue distinction during robotic adrenalectomy (RA) via indocyanine green (ICG) imaging under near-infrared light. Ten patients underwent RA. After exposure of the retroperitoneal space, but before adrenal dissection was started, ICG was given intravenously (IV). Fluorescence Firefly™ imaging was performed at 1-, 5-, 10-, and 20-min time points. The precision with which the borders of the adrenal tissue were distinguished with ICG imaging was compared to that with the conventional robotic view. The number and the total volume of injections for each patient were recorded. There were six male and four female patients. Diagnosis was primary hyperaldosteronism in four patients and myelolipoma, adrenocortical neoplasm, adrenocortical hyperplasia, Cushing's syndrome, pheochromocytoma, and metastasis in one patient each. Procedures were done through a robotic lateral transabdominal approach in nine and through a robotic posterior retroperitoneal approach in one patient. Dose per injection ranged between 2.5 and 6.3 mg and total dose per patient 7.5-18.8 mg. The adrenal gland took up the dye in 1 min, with contrast between adrenal mass and surrounding retroperitoneal fat becoming most distinguished at 5 min. Fluorescence of adrenal tissue lasted up to 20 min after injection. Overall, ICG imaging was felt to help with the conduct of operation in 8 out of 10 procedures. There were no conversions to open or morbidity. There were no immediate or delayed adverse effects attributable to IV ICG administration. In this pilot study, we demonstrated the feasibility and safety of ICG imaging in a small group of patients undergoing RA. We described a method that enabled an effective fluorescence imaging to localize the adrenal glands and guide dissection. Future research is necessary to study how this imaging affects perioperative outcomes.

  13. A Guide for Developing Human-Robot Interaction Experiments in the Robotic Interactive Visualization and Experimentation Technology (RIVET) Simulation

    DTIC Science & Technology

    2016-05-01

This DTIC record is fragmentary. The recoverable snippets identify the report as ARL-TR-7683 (US Army Research Laboratory, May 2016), approved for public release with unlimited distribution, and cite Kunkler (2006) on the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback) as well as Davies's review of robotics in surgery (Proceedings of the Institution of Mechanical Engineers, Part H).

  14. Robotically assisted velocity-sensitive triggered focused ultrasound surgery

    NASA Astrophysics Data System (ADS)

    Maier, Florian; Brunner, Alexander; Jenne, Jürgen W.; Krafft, Axel J.; Semmler, Wolfhard; Bock, Michael

    2012-11-01

    Magnetic Resonance (MR) guided Focused Ultrasound Surgery (FUS) of abdominal organs is challenging due to breathing motion and limited patient access in the MR environment. In this work, an experimental robotically assisted FUS setup was combined with a MR-based navigator technique to realize motion-compensated sonications and online temperature imaging. Experiments were carried out in a static phantom, during periodic manual motion of the phantom without triggering, and with triggering to evaluate the triggering method. In contrast to the non-triggered sonication, the results of the triggered sonication show a confined symmetric temperature distribution. In conclusion, the velocity sensitive navigator can be employed for triggered FUS to compensate for periodic motion. Combined with the robotic FUS setup, flexible treatment of abdominal targets might be realized.

  15. Design and validation of an MR-conditional robot for transcranial focused ultrasound surgery in infants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Karl D., E-mail: karl.price@sickkids.ca

Purpose: Current treatment of intraventricular hemorrhage (IVH) involves cerebral shunt placement or an invasive brain surgery. Magnetic resonance-guided focused ultrasound (MRgFUS) applied to the brains of pediatric patients presents an opportunity to treat IVH in a noninvasive manner, termed “incision-less surgery.” Current clinical and research focused ultrasound systems lack the capability to perform neonatal transcranial surgeries due to either range-of-motion or dexterity requirements. A novel robotic system is proposed to position a focused ultrasound transducer accurately above the head of a neonatal patient inside an MRI machine to deliver the therapy. Methods: A clinical Philips Sonalleve MRgFUS system was expanded to perform transcranial treatment. A five degree-of-freedom MR-conditional robot was designed and manufactured using MR-compatible materials. The robot electronics and control were integrated into existing Philips electronics and software interfaces. The user commands the position of the robot with a graphical user interface and is presented with real-time MR imaging of the patient throughout the surgery. The robot is validated through a series of experiments that characterize accuracy, signal-to-noise ratio degradation of an MR image as a result of the robot, MR imaging artifacts generated by the robot, and the robot’s ability to operate in a representative surgical environment inside an MR machine. Results: Experimental results show the robot responds reliably within an MR environment, has achieved 0.59 ± 0.25 mm accuracy, does not produce severe MR-imaging artifacts, has a workspace providing sufficient coverage of a neonatal brain, and can manipulate a 5 kg payload. A full system demonstration shows these characteristics apply in an application environment. Conclusions: This paper presents a comprehensive look at the process of designing and validating a new robot from concept to implementation for use in an MR environment. An MR-conditional robot has been designed and manufactured to design specifications. The system has demonstrated its feasibility as a platform for MRgFUS interventions for neonatal patients. The success of the system in experimental trials suggests that it is ready for validation of the transcranial intervention in animal studies.

  16. Robot Service and Repair. Student Guide.

    ERIC Educational Resources Information Center

    Pittsburg State Univ., KS. Kansas Vocational Curriculum Dissemination Center.

    This document is a student guide for a course on robot service and repair. It is organized in four units covering the following topics: introduction to robots, power supply, robot control systems, and service and repair. Each unit contains several lesson plans on the unit topic. Lesson plans consist of lesson objectives, lists of teaching aids and…

  17. Robotically assisted MRgFUS system

    NASA Astrophysics Data System (ADS)

    Jenne, Jürgen W.; Krafft, Axel J.; Maier, Florian; Rauschenberg, Jaane; Semmler, Wolfhard; Huber, Peter E.; Bock, Michael

    2010-03-01

Magnetic resonance imaging guided focused ultrasound surgery (MRgFUS) is a highly precise method to ablate tissue non-invasively. The objective of this ongoing work is to establish an MRgFUS therapy unit consisting of a specially designed FUS applicator as an add-on to a commercial robotic assistance system originally designed for percutaneous needle interventions in whole-body MRI systems. The fully MR-compatible robotic assistance system InnoMotion™ (Synthes Inc., West Chester, USA; formerly InnoMedic GmbH, Herxheim, Germany) offers six degrees of freedom. The developed add-on FUS treatment applicator features a fixed-focus ultrasound transducer (f = 1.7 MHz; f' = 68 mm, NA = 0.44, elliptically shaped -6-dB focus: 8.1 mm length, Ø = 1.1 mm) embedded in a water-filled flexible bellow. A Mylar® foil is used as the acoustic window, encompassed by a dedicated MRI loop coil. For FUS application, the therapy unit is directly connected to the head of the robotic system, and the treatment region is targeted from above. A newly developed in-house software tool allowed complete remote control of the MRgFUS-robot system and online analysis of MRI thermometry data. The system's ability to perform therapeutically relevant focal spot scanning was tested in a closed-bore clinical 1.5 T MR scanner (Magnetom Symphony, Siemens AG, Erlangen, Germany) in animal experiments with pigs. The FUS therapy procedure was performed entirely under MRI guidance, including initial therapy planning, online MR thermometry, and final contrast-enhanced imaging for lesion detection. The in vivo trials showed the MRgFUS-robot system to be highly MR compatible. MR-guided focal spot scanning experiments were performed and a well-defined pattern of thermal tissue lesions was created. A total in vivo positioning accuracy of the US focus better than 2 mm was estimated, which is comparable to existing MRgFUS systems. The newly developed FUS-robotic system offers accurate, highly flexible focus positioning.
With its access to the patient from above, it provides a wide range of flexibility for reaching acoustic targets. As a next step, a motion-correction unit should be integrated.

  18. A long arm for ultrasound: a combined robotic focused ultrasound setup for magnetic resonance-guided focused ultrasound surgery.

    PubMed

    Krafft, Axel J; Jenne, Jürgen W; Maier, Florian; Stafford, R Jason; Huber, Peter E; Semmler, Wolfhard; Bock, Michael

    2010-05-01

Focused ultrasound surgery (FUS) is a highly precise noninvasive procedure to ablate pathogenic tissue. FUS therapy is often combined with magnetic resonance (MR) imaging, as MR imaging offers excellent target identification and allows for continuous monitoring of FUS-induced temperature changes. As the dimensions of the ultrasound (US) focus are typically much smaller than the targeted volume, multiple sonications and focus repositioning are interleaved to scan the focus over the target volume. Focal scanning can be achieved electronically by using phased-array US transducers or mechanically by using dedicated mechanical actuators. In this study, the authors propose and evaluate the precision of a combined robotic FUS setup to overcome some of the limitations of existing MRgFUS systems. Such systems are typically integrated into the patient table of the MR scanner and can thus apply the US wave only within a limited spatial range from below the patient. The fully MR-compatible robotic assistance system InnoMotion (InnoMedic GmbH, Herxheim, Germany) was originally designed for MR-guided interventions with needles. It offers five pneumatically driven degrees of freedom and can be moved over a wide range within the bore of the magnet. In this work, the robotic system was combined with a fixed-focus US transducer (frequency: 1.7 MHz; focal length: 68 mm; numerical aperture: 0.44) that was integrated into a dedicated, in-house developed treatment unit for FUS application. A series of MR-guided focal scanning procedures was performed in a polyacrylamide-egg white gel phantom to assess the positioning accuracy of the combined FUS setup. In animal experiments with a 3-month-old domestic pig, the system's potential and suitability for MRgFUS were tested. In phantom experiments, a total targeting precision of about 3 mm was found, which is comparable to that of existing MRgFUS systems. Focus positioning could be performed within a few seconds.
During in vivo experiments, a defined pattern of single thermal lesions and a therapeutically relevant confluent thermal lesion could be created. The creation of local tissue necrosis by coagulation was confirmed by post-FUS MR imaging and histological examinations on the treated tissue sample. During all sonications in phantom and in vivo, reliable MR imaging and online MR thermometry could be performed without compromises due to operation of the combined robotic FUS setup. Compared to the existing MRgFUS systems, the combined robotic FUS approach offers a wide range of spatial flexibility so that highly flexible application of the US wave would be possible, for example, to avoid risk structures within the US field. The setup might help to realize new ways of patient access in MRgFUS therapy. The setup is compatible with any closed-bore MR system and does not require an especially designed patient table.

  19. Versatile robotic probe calibration for position tracking in ultrasound imaging.

    PubMed

    Bø, Lars Eirik; Hofstad, Erlend Fagertun; Lindseth, Frank; Hernes, Toril A N

    2015-05-07

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.
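At the core of such a probe calibration is a least-squares rigid fit between corresponding points: sphere centroids segmented from the ultrasound images on one side, and the same positions known from the robot/tracking side on the other. A minimal sketch using the Kabsch algorithm on synthetic, noise-free correspondences (the point values and the ground-truth transform are invented for the check):

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src -> dst. In a probe
    calibration, src would be sphere centroids segmented from the ultrasound
    images and dst the same centroids expressed in the tracking frame via
    the robot's known sphere position."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t

# synthetic check: recover a known rotation (30 deg about z) and translation
rng = np.random.default_rng(1)
src = rng.uniform(-20.0, 20.0, (8, 3))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 10.0])
dst = src @ R_true.T + t_true
R_est, t_est = fit_rigid(src, dst)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # → True True
```

With real segmented centroids the correspondences are noisy, and the same fit then minimizes the residual in the least-squares sense rather than recovering the transform exactly.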

  20. Versatile robotic probe calibration for position tracking in ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Eirik Bø, Lars; Fagertun Hofstad, Erlend; Lindseth, Frank; Hernes, Toril A. N.

    2015-05-01

    Within the field of ultrasound-guided procedures, there are a number of methods for ultrasound probe calibration. While these methods are usually developed for a specific probe, they are in principle easily adapted to other probes. In practice, however, the adaptation often proves tedious and this is impractical in a research setting, where new probes are tested regularly. Therefore, we developed a method which can be applied to a large variety of probes without adaptation. The method used a robot arm to move a plastic sphere submerged in water through the ultrasound image plane, providing a slow and precise movement. The sphere was then segmented from the recorded ultrasound images using a MATLAB programme and the calibration matrix was computed based on this segmentation in combination with tracking information. The method was tested on three very different probes demonstrating both great versatility and high accuracy.

  1. Multispectral Fluorescence Imaging During Robot-assisted Laparoscopic Sentinel Node Biopsy: A First Step Towards a Fluorescence-based Anatomic Roadmap.

    PubMed

    van den Berg, Nynke S; Buckle, Tessa; KleinJan, Gijs H; van der Poel, Henk G; van Leeuwen, Fijs W B

    2017-07-01

During (robot-assisted) sentinel node (SN) biopsy procedures, intraoperative fluorescence imaging can be used to enhance radioguided SN excision. For this, combined pre- and intraoperative SN identification was realized using the hybrid SN tracer indocyanine green-99mTc-nanocolloid. Combining this dedicated SN tracer with a lymphangiographic tracer such as fluorescein may further enhance the accuracy of SN biopsy. Clinical evaluation of a multispectral fluorescence-guided surgery approach using the dedicated SN tracer ICG-99mTc-nanocolloid, the lymphangiographic tracer fluorescein, and a commercially available fluorescence laparoscope. Pilot study in ten patients with prostate cancer. Following ICG-99mTc-nanocolloid administration and preoperative lymphoscintigraphy and single-photon emission computed tomography imaging, the number and location of SNs were determined. Fluorescein was injected intraprostatically immediately after the patient was anesthetized. A multispectral fluorescence laparoscope was used intraoperatively to identify both fluorescent signatures. Multispectral fluorescence imaging during robot-assisted radical prostatectomy with extended pelvic lymph node dissection and SN biopsy. (1) Number and location of preoperatively identified SNs. (2) Number and location of SNs intraoperatively identified via ICG-99mTc-nanocolloid imaging. (3) Rate of intraoperative lymphatic duct identification via fluorescein imaging. (4) Tumor status of excised (sentinel) lymph node(s). (5) Postoperative complications and follow-up. Near-infrared fluorescence imaging of ICG-99mTc-nanocolloid visualized 85.3% of the SNs. In 8/10 patients, fluorescein imaging allowed bright and accurate identification of lymphatic ducts, although higher background staining and tracer washout were observed. The main limitation is the small patient population. Our findings indicate that a lymphangiographic tracer can provide additional information during SN biopsy based on ICG-99mTc-nanocolloid. The study suggests that multispectral fluorescence image-guided surgery is clinically feasible. We evaluated the concept of surgical fluorescence guidance using differently colored dyes that visualize complementary features. In the future this concept may provide better guidance towards diseased tissue while sparing healthy tissue, and could thus improve functional and oncologic outcomes. Copyright © 2016 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  2. Task path planning, scheduling and learning for free-ranging robot systems

    NASA Technical Reports Server (NTRS)

    Wakefield, G. Steve

    1987-01-01

The development of robotics applications for space operations is often restricted by the limited movement available to guided robots. Free-ranging robots can offer greater flexibility than physically guided robots in these applications. Presented here is an object-oriented approach to path planning and task scheduling for free-ranging robots that allows the dynamic determination of paths based on the current environment. The system also provides task learning for repetitive jobs. This approach provides a basis for the design of free-ranging robot systems that are adaptable to various environments and tasks.

  3. Simplifying applications software for vision guided robot implementation

    NASA Technical Reports Server (NTRS)

    Duncheon, Charlie

    1994-01-01

A simple approach to robot applications software is described. The idea is to use commercially available software and hardware wherever possible to minimize system costs, schedules, and risks. The U.S. has been slow in the adoption of robots and flexible automation compared to the flourishing growth of robot implementation in Japan. The U.S. can benefit from this approach because of a more flexible array of vision-guided robot technologies.

  4. Robotically-adjustable microstereotactic frames for image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Kratchman, Louis B.; Fitzpatrick, J. Michael

    2013-03-01

    Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, well within the accuracy required for deep brain stimulation surgery.

  5. A probabilistic model of overt visual attention for cognitive robots.

    PubMed

    Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G

    2010-10-01

    Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which is accompanied by head and eye movements of the robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
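
    The particle-filter machinery this abstract refers to can be illustrated with a minimal sketch. This is not the authors' model: the one-dimensional state, the Gaussian diffusion, and the likelihood function below are simplifying assumptions, but the predict/update/resample cycle is the same basic mechanism used to keep tracking a stimulus while a known camera motion shifts the view.

```python
import math
import random

def attention_step(particles, camera_shift, likelihood, noise=0.1):
    """One predict/update/resample cycle of a particle filter tracking
    a 1D stimulus position. camera_shift is the known displacement of
    the view caused by the head movement; likelihood(p) scores how well
    position p matches the current observation."""
    # Predict: every hypothesis shifts with the camera, plus diffusion.
    moved = [p - camera_shift + random.gauss(0.0, noise) for p in particles]
    # Update: weight each hypothesis by the observation likelihood.
    w = [likelihood(p) for p in moved]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    # Systematic resampling: keep hypotheses in proportion to weight.
    n, out = len(moved), []
    u, cum, idx = random.random() / n, w[0], 0
    for _ in range(n):
        while u > cum and idx < n - 1:
            idx += 1
            cum += w[idx]
        out.append(moved[idx])
        u += 1.0 / n
    return out
```

    With a stationary camera and a Gaussian-shaped likelihood centered on the stimulus, repeated calls concentrate the particle cloud around the true position.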

  6. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis yielded an average distance error of 3 mm and an average orientation error of 2.5 degrees. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
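
    The navigational error reported above, comparing a calculated camera pose against a measured one, reduces to a translation distance plus the angle of the relative rotation. A small illustrative sketch (not the authors' code; the function name and the row-major 3x3 rotation-matrix convention are assumptions):

```python
import math

def pose_error(R_est, t_est, R_true, t_true):
    """Translation distance and rotation-angle error between an
    estimated and a measured camera pose. The angle comes from the
    trace of the relative rotation R_true^T * R_est via
    angle = acos((trace - 1) / 2)."""
    dist = math.dist(t_est, t_true)
    # trace(R_true^T * R_est) summed directly, without forming the product.
    trace = sum(R_true[k][i] * R_est[k][i] for i in range(3) for k in range(3))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0))))
    return dist, angle
```

    For example, a pose offset by 3 mm in depth and rotated 30 degrees about the optical axis yields a 0.003 m distance error and a 30 degree orientation error.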

  7. Preclinical evaluation of an MRI-compatible pneumatic robot for angulated needle placement in transperineal prostate interventions.

    PubMed

    Tokuda, Junichi; Song, Sang-Eun; Fischer, Gregory S; Iordachita, Iulian I; Seifabadi, Reza; Cho, Nathan B; Tuncali, Kemal; Fichtinger, Gabor; Tempany, Clare M; Hata, Nobuhiko

    2012-11-01

    To evaluate the targeting accuracy of a small-profile MRI-compatible pneumatic robot for needle placement that can angulate a needle insertion path into a large accessible target volume. We extended our MRI-compatible pneumatic robot for needle placement to utilize its four degrees-of-freedom (4-DOF) mechanism with two parallel triangular structures and to support transperineal prostate biopsies in a closed-bore magnetic resonance imaging (MRI) scanner. The robot is designed to guide a needle toward a lesion so that a radiologist can manually insert it in the bore. The robot is integrated with navigation software that allows an operator to plan an angulated needle insertion by selecting a target and an entry point. The targeting error was evaluated while the angle between the needle insertion path and the static magnetic field was between -5.7° and 5.7° horizontally and between -5.7° and 4.3° vertically in the MRI scanner, after sterilizing and draping the device. The robot positioned the needle for angulated insertion as specified on the navigation software with an overall targeting error of 0.8 ± 0.5 mm along the horizontal axis and 0.8 ± 0.8 mm along the vertical axis. The two-dimensional root-mean-square targeting error on the axial slices containing the targets was 1.4 mm. Our preclinical evaluation demonstrated that the MRI-compatible pneumatic robot for needle placement, with its capability to angulate the needle insertion path, provides targeting accuracy feasible for clinical MRI-guided prostate interventions. Clinical feasibility has to be established in a clinical study.
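
    For reference, a two-dimensional root-mean-square targeting error like the one quoted above is computed from per-target in-plane offsets on the axial slice. A minimal sketch (the offsets in the example are hypothetical, not the study's data):

```python
import math

def rms_2d_error(offsets):
    """Two-dimensional root-mean-square targeting error from per-target
    in-plane offsets (dx, dy), e.g. in mm, measured on the axial slice."""
    return math.sqrt(sum(dx * dx + dy * dy for dx, dy in offsets) / len(offsets))
```

    A single target missed by 3 mm horizontally and 4 mm vertically gives an RMS error of 5 mm; two targets each missed along one axis by 1 mm give 1 mm.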

  8. Automatic planning of needle placement for robot-assisted percutaneous procedures.

    PubMed

    Belbachir, Esia; Golkar, Ehsan; Bayle, Bernard; Essert, Caroline

    2018-04-18

    Percutaneous procedures allow interventional radiologists to perform diagnoses or treatments guided by an imaging device, typically a computed tomography (CT) scanner with high spatial resolution. To reduce radiation exposure and improve accuracy, robotic assistance to needle insertion is considered in the case of X-ray guided procedures. We introduce a planning algorithm that computes a needle placement compatible with both the patient's anatomy and the accessibility of the robot within the scanner gantry. Our preoperative planning approach is based on inverse kinematics, fast collision detection, and bidirectional rapidly exploring random trees coupled with an efficient strategy of node addition. The algorithm computes the allowed needle entry zones over the patient's skin (accessibility map) from 3D models of the patient's anatomy, the environment (CT, bed), and the robot. The result includes the admissible robot joint path to target the prescribed internal point through the entry point. A retrospective study was performed on 16 patient datasets in different conditions: without robot (WR) and with the robot on the left or the right side of the bed (RL/RR). We provide an accessibility map ensuring a collision-free path of the robot and allowing for a needle placement compatible with the patient's anatomy. The result is obtained in an average time of about 1 min, even in difficult cases. The accessibility maps of RL and RR covered about half of the surface of the WR map on average, which offers a variety of options to insert the needle with the robot. We also measured the average distance between the needle and major obstacles such as the vessels and found that RL and RR produced needle placements almost as safe as WR. The introduced planning method helped us prove that it is possible to use such a "general purpose" redundant manipulator, equipped with a dedicated tool, to perform percutaneous interventions in cluttered spaces like a CT gantry.
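
    The rapidly exploring random tree (RRT) at the heart of such planners can be sketched in its simplest form. The paper uses a bidirectional variant coupled with inverse kinematics and mesh collision detection; the deliberately simplified version below is a single tree in a 10x10 planar workspace with circular obstacles, and every name in it is an assumption made for illustration:

```python
import math
import random

def rrt(start, goal, obstacles, step=0.5, iters=2000, goal_bias=0.1):
    """Minimal 2D RRT: repeatedly sample a point, extend the nearest
    tree node toward it by one step, and reject extensions that enter
    a circular obstacle given as (cx, cy, r). Workspace is [0,10]^2."""
    def free(p):
        return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Occasionally steer straight at the goal to speed convergence.
        sample = goal if random.random() < goal_bias else (
            random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        # Nearest existing node to the sample.
        i = min(range(len(nodes)),
                key=lambda k: math.hypot(nodes[k][0] - sample[0],
                                         nodes[k][1] - sample[1]))
        nx, ny = nodes[i]
        d = math.hypot(sample[0] - nx, sample[1] - ny) or 1e-9
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if not free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < step:
            # Walk the parent links back to the start to recover the path.
            path, j = [goal], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None
```

    A bidirectional version grows a second tree from the goal and tries to connect the two, which is what makes the planner fast enough to produce accessibility maps in about a minute.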

  9. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement, and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration, and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as the robot location coincides with the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge addressed by this project is achieving high estimation accuracy and high computation speed simultaneously, which is difficult for a number of technical reasons.

  10. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    PubMed

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

    Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have expanded accordingly. While spinal robotics and navigation hold promise for improving modern spinal surgery, it remains paramount to demonstrate their superiority over traditional techniques before their use is assimilated amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure to which the patient, surgeon, and ancillary operating room staff are subjected in minimally invasive surgery. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements with a steady hand, often exploiting small working corridors and exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  11. Indirect iterative learning control for a discrete visual servo without a camera-robot model.

    PubMed

    Jiang, Ping; Bamforth, Leon C A; Feng, Zuren; Baruch, John E F; Chen, YangQuan

    2007-08-01

    This paper presents a discrete learning controller for vision-guided robot trajectory imitation with no prior knowledge of the camera-robot model. A teacher demonstrates a desired movement in front of a camera, and then, the robot is tasked to replay it by repetitive tracking. The imitation procedure is considered as a discrete tracking control problem in the image plane, with an unknown and time-varying image Jacobian matrix. Instead of updating the control signal directly, as is usually done in iterative learning control (ILC), a series of neural networks are used to approximate the unknown Jacobian matrix around every sample point in the demonstrated trajectory, and the time-varying weights of local neural networks are identified through repetitive tracking, i.e., indirect ILC. This makes repetitive segmented training possible, and a segmented training strategy is presented to retain the training trajectories solely within the effective region for neural network approximation. However, a singularity problem may occur if an unmodified neural-network-based Jacobian estimation is used to calculate the robot end-effector velocity. A new weight modification algorithm is proposed which ensures invertibility of the estimation, thus circumventing the problem. Stability is further discussed, and the relationship between the approximation capability of the neural network and the tracking accuracy is obtained. Simulations and experiments are carried out to illustrate the validity of the proposed controller for trajectory imitation of robot manipulators with unknown time-varying Jacobian matrices.
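
    Before any neural-network Jacobian estimation is layered on, the core iterative learning control (ILC) idea is a trial-by-trial correction of the input profile by the previous trial's tracking error. The sketch below is a toy illustration, not the paper's indirect ILC: the scalar first-order plant, the P-type update, and the function name are all assumptions chosen to show the mechanism in a few lines.

```python
import math

def ilc_track(reference, gain=1.0, trials=None):
    """P-type iterative learning control on a toy plant
    y[t] = 0.9*y[t-1] + u[t]. Each trial replays the whole trajectory,
    then corrects the input profile with that trial's tracking error:
    u_{k+1}[t] = u_k[t] + gain * e_k[t]. With gain 1 on this plant the
    error operator is nilpotent, so the error vanishes (up to rounding)
    within len(reference) + 1 trials."""
    T = len(reference)
    if trials is None:
        trials = T + 1
    u = [0.0] * T
    err = [0.0] * T
    for _ in range(trials):
        y = 0.0
        for t in range(T):
            y = 0.9 * y + u[t]           # plant response for this trial
            err[t] = reference[t] - y    # tracking error at time t
        u = [u[t] + gain * err[t] for t in range(T)]  # learning update
    return max(abs(e) for e in err)
```

    The same repeat-and-correct structure underlies the paper's approach; there, the correction is routed through an identified, time-varying image Jacobian instead of a fixed scalar gain.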

  12. Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    NASA Technical Reports Server (NTRS)

    Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel

    2015-01-01

    The use of the cognitive capabilities of humans to help guide the autonomy of robotic platforms, in what is typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.

  13. Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor

    NASA Astrophysics Data System (ADS)

    Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang

    2017-01-01

    In order to improve the performance of the magnetic navigation system used by a substation inspection robot, the kinematic characteristics are analyzed based on a simplified magnetic guiding system model, and a simulation is then executed to verify the soundness of the whole analysis procedure. Finally, some suggestions are drawn that will be helpful in guiding the design of future inspection robot systems.

  14. Shoulder-Mounted Robot for MRI-guided arthrography: Accuracy and mounting study.

    PubMed

    Monfaredi, R; Wilson, E; Sze, R; Sharma, K; Azizi, B; Iordachita, I; Cleary, K

    2015-08-01

    A new version of our compact and lightweight patient-mounted MRI-compatible 4 degree-of-freedom (DOF) robot for MRI-guided arthrography procedures is introduced. This robot could convert the traditional two-stage arthrography procedure (fluoroscopy-guided needle insertion followed by a diagnostic MRI scan) to a one-stage procedure, all in the MRI suite. The results of a recent accuracy study are reported. A new mounting technique is proposed and the mounting stability is investigated using optical and electromagnetic tracking on an anthropomorphic phantom. Five volunteer subjects including 2 radiologists were asked to conduct needle insertion in 4 different random positions and orientations within the robot's workspace and the displacement of the base of the robot was investigated during robot motion and needle insertion. Experimental results show that the proposed mounting method is stable and promising for clinical application.

  15. Intensity-based 2D/3D registration for lead localization in robot-guided deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad

    2017-03-01

    Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality to determine lead positioning, however, remains a matter of controversy. Current approaches entail computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D/3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. Accuracy of lead localization determined from 2D/3D registration was compared to conventional 3D/3D registration in a subsequent patient study. The mean Euclidean distance of lead coordinates estimated from intensity-based 2D/3D registration versus flat-panel detector CT 3D/3D registration was 0.7 ± 0.2 mm, with a maximum of 1.2 mm. To further investigate 2D/3D registration, a simulation study was conducted, challenging two observers to visually assess artificially generated 2D/3D registration errors. 95% of the simulated deviations that were visually assessed as sufficient had a registration error below 0.7 mm. In conclusion, intensity-based 2D/3D registration showed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means for intraoperative lead localization.

  16. An image guidance system for positioning robotic cochlear implant insertion tools

    NASA Astrophysics Data System (ADS)

    Bruns, Trevor L.; Webster, Robert J.

    2017-03-01

    Cochlear implants must be inserted carefully to avoid damaging the delicate anatomical structures of the inner ear. This has motivated several approaches to improve the safety and efficacy of electrode array insertion by automating the process with specialized robotic or manual insertion tools. When such tools are used, they must be positioned at the entry point to the cochlea and aligned with the desired entry vector. This paper presents an image guidance system capable of accurately positioning a cochlear implant insertion tool. An optical tracking system localizes the insertion tool in physical space, while a graphical user interface combines this with patient-specific anatomical data to provide error information to the surgeon in real time. Guided by this interface, novice users successfully aligned the tool with a mean accuracy of 0.31 mm.

  17. Image-guided thoracic surgery in the hybrid operation room.

    PubMed

    Ujiie, Hideki; Effat, Andrew; Yasufuku, Kazuhiro

    2017-01-01

    There has been an increase in the use of image-guided technology to facilitate minimally invasive therapy. The next generation of minimally invasive therapy is focused on the advancement and translation of novel image-guided technologies in therapeutic interventions, including surgery, interventional pulmonology, radiation therapy, and interventional laser therapy. To establish the efficacy of different minimally invasive therapies, we have developed a hybrid operating room, known as the guided therapeutics operating room (GTx OR), at the Toronto General Hospital. The GTx OR is equipped with multi-modality image-guidance systems, which feature a dual-source dual-energy computed tomography (CT) scanner, robotic cone-beam CT (CBCT)/fluoroscopy, a high-performance endobronchial ultrasound system, an endoscopic surgery system, a near-infrared (NIR) fluorescence imaging system, and navigation tracking systems. The novel multimodality image-guidance systems allow physicians to quickly and accurately image patients while they are on the operating table. This yields improved outcomes, since physicians are able to use image guidance during their procedures and carry out innovative multi-modality therapeutics. Multiple preclinical translational studies pertaining to innovative minimally invasive technology are being developed in our guided therapeutics laboratory (GTx Lab). The GTx Lab is equipped with similar technology and multimodality image-guidance systems as the GTx OR, and acts as an appropriate platform for translating research into human clinical trials. Through the GTx Lab, we are able to perform basic research, such as the development of image-guided technologies, preclinical model testing, and preclinical imaging, and then translate that research into the GTx OR. This OR allows for the utilization of new technologies in cancer therapy, including molecular imaging and other innovative imaging modalities, and thereby enables a better quality of life for patients, both during and after the procedure. In this article, we describe the capabilities of the GTx systems and discuss the first-in-human technologies used and evaluated in the GTx OR.

  18. Clinical applicability of robot-guided contact-free laser osteotomy in cranio-maxillo-facial surgery: in-vitro simulation and in-vivo surgery in minipig mandibles.

    PubMed

    Baek, K-W; Deibel, W; Marinov, D; Griessen, M; Bruno, A; Zeilhofer, H-F; Cattin, Ph; Juergens, Ph

    2015-12-01

    Lasers were used in medicine soon after their invention. However, it has become possible to excise hard tissue with lasers only recently, and the Er:YAG laser is now established in the treatment of damaged teeth. Recently, experimental studies have investigated its use in bone surgery, where its major advantages are freedom of cutting geometry and precision. However, these advantages become apparent only when the system is used with robotic guidance. The main challenge is the ergonomic integration of the laser and the robot; otherwise the surgeon's space in the operating theatre is obstructed during the procedure. Here we present our first experiences with an integrated, miniaturised laser system guided by a surgical robot. An Er:YAG laser source and the corresponding optical system were integrated into a composite casing that was mounted on a surgical robotic arm. The robot-guided laser system was connected to a computer-assisted preoperative planning and intraoperative navigation system, and the laser osteotome was used in an operating theatre to create defects of different shapes in the mandibles of 6 minipigs. Similar defects were created on the opposite side with a piezoelectric (PZE) osteotome and a conventional drill guided by a surgeon. The performance was analysed from the points of view of workflow, ergonomics, ease of use, and safety features. The integrated robot-guided laser osteotome can be used ergonomically in the operating theatre. The computer-assisted and robot-guided laser osteotome is likely to be suitable for clinical use in ostectomies that require considerable accuracy and individual shape. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

    Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Multimodal US-gamma imaging using collaborative robotics for cancer staging biopsies.

    PubMed

    Esposito, Marco; Busam, Benjamin; Hennersperger, Christoph; Rackerseder, Julia; Navab, Nassir; Frisch, Benjamin

    2016-09-01

    The staging of female breast cancer requires detailed information about the level of cancer spread through the lymphatic system. Common practice to obtain this information for patients with early-stage cancer is sentinel lymph node (SLN) biopsy, where LNs are radioactively identified for surgical removal and subsequent histological analysis. Punch needle biopsy is a less invasive approach but suffers from the lack of combined anatomical and nuclear information. We present and evaluate a system that introduces live collaborative robotic 2D gamma imaging in addition to live 2D ultrasound to identify SLNs in the surrounding anatomy. The system consists of a robotic arm equipped with both a gamma camera and a stereoscopic tracking system that monitors the position of an ultrasound probe operated by the physician. The arm cooperatively places the gamma camera parallel to the ultrasound imaging plane to provide live multimodal visualization and guidance. We validate the system by evaluating the target registration errors between fused nuclear and US image data in a phantom consisting of two spheres, one of which is filled with radioactivity. Medical experts performed punch biopsies on agar-gelatine phantoms with complex configurations of hot and cold lesions to provide a qualitative and quantitative evaluation of the system. The average point registration error for the overlay is [Formula: see text] mm. The time of the entire procedure was reduced by 36%, with 80% of the biopsies being successful. The users' feedback was very positive, and the system was deemed very intuitive, with handling similar to classic US-guided needle biopsy. We present and evaluate the first medical collaborative robotic imaging system. Feedback from potential users for SLN punch needle biopsy is encouraging. Ongoing work investigates the clinical feasibility with more complex and realistic phantoms.

  1. Accurate 3D reconstruction of bony surfaces using ultrasonic synthetic aperture techniques for robotic knee arthroplasty.

    PubMed

    Kerr, William; Rowe, Philip; Pierce, Stephen Gareth

    2017-06-01

    Robotically guided knee arthroplasty systems generally require an individualized, preoperative 3D model of the knee joint. This is typically measured using Computed Tomography (CT), which provides the required accuracy for preoperative surgical intervention planning. Ultrasound imaging presents an attractive alternative to CT, allowing for reductions in cost and the elimination of doses of ionizing radiation, whilst maintaining the accuracy of the 3D model reconstruction of the joint. Traditional phased array ultrasound imaging methods, however, are susceptible to poor resolution and signal-to-noise ratios (SNR). Alleviating these weaknesses by offering superior focusing power, synthetic aperture methods have been investigated extensively within ultrasonic non-destructive testing. Despite this, they have yet to be fully exploited in medical imaging. In this paper, the ability of a robotically deployed ultrasound imaging system based on synthetic aperture methods to accurately reconstruct bony surfaces is investigated. Employing the Total Focussing Method (TFM) and the Synthetic Aperture Focussing Technique (SAFT), two samples were imaged which were representative of the bones of the knee joint: a human-shaped, composite distal femur and a bovine distal femur. Data were captured using a 5 MHz, 128-element 1D phased array, which was manipulated around the samples using a robotic positioning system. Three-dimensional surface reconstructions were then produced and compared with reference models measured using a precision laser scanner. Mean errors of 0.82 mm and 0.88 mm were obtained for the composite and bovine samples, respectively, thus demonstrating the feasibility of the approach to deliver the sub-millimetre accuracy required for the application. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
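
    At its core, the Total Focussing Method is a delay-and-sum over full-matrix-capture data: for every pixel, each transmit-receive element pair contributes the sample whose round-trip time matches that pixel. The pure-Python sketch below illustrates that inner loop only; the 2D geometry, nearest-sample lookup, and all names are simplifying assumptions (a practical implementation would be vectorized and apply an envelope):

```python
import math

def tfm_image(fmc, elements, grid, c, fs):
    """Total Focussing Method delay-and-sum. fmc[i][j] is the A-scan
    recorded on element j when element i fires (sampled at fs);
    elements are x-positions on the z=0 line; grid is a list of (x, z)
    pixels; c is the sound speed."""
    img = []
    for (px, pz) in grid:
        # One-way distance from each element (at z = 0) to this pixel.
        dist = [math.hypot(px - ex, pz) for ex in elements]
        val = 0.0
        for i in range(len(elements)):
            for j in range(len(elements)):
                t = (dist[i] + dist[j]) / c   # round-trip time for pair (i, j)
                k = int(round(t * fs))        # nearest sample index
                if k < len(fmc[i][j]):
                    val += fmc[i][j][k]
        img.append(abs(val))
    return img
```

    Because every element pair focuses on every pixel, a point scatterer produces a sharp coherent sum only at its true location, which is the focusing-power advantage the abstract refers to.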

  2. A highly articulated robotic surgical system for minimally invasive surgery.

    PubMed

    Ota, Takeyoshi; Degani, Amir; Schwartzman, David; Zubiate, Brett; McGarvey, Jeremy; Choset, Howie; Zenati, Marco A

    2009-04-01

    We developed a novel, highly articulated robotic surgical system (CardioARM) to enable minimally invasive intrapericardial therapeutic delivery through a subxiphoid approach. We performed preliminary proof-of-concept studies in a porcine preparation by performing epicardial ablation. CardioARM is a robotic surgical system with an articulated design that provides unlimited but controllable flexibility. The CardioARM consists of serially connected, rigid cylindrical links housing flexible working ports through which catheter-based tools for therapy and imaging can be advanced. The CardioARM is controlled through a computer-driven user interface, which is operated outside the operative field. In six experimental subjects, the CardioARM was introduced percutaneously through subxiphoid access. A commercial 5-French radiofrequency ablation catheter was introduced through the working port, which was then used to guide deployment. In all subjects, regional ("linear") left atrial ablation was successfully achieved without complications. Based on these preliminary studies, we believe that the CardioARM promises to enable deployment of a number of epicardium-based therapies. Improvements in imaging techniques will likely facilitate increasingly complex procedures.

  3. Varying ultrasound power level to distinguish surgical instruments and tissue.

    PubMed

    Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E

    2018-03-01

    We investigate a new framework for surgical instrument detection based on power-varying ultrasound images with simple and efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identify the instrument by estimating an optimal power level and comparing pixel values across images acquired at varying transducer power levels. The proposed framework exploits the physics of the ultrasound imaging system by varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image guidance is motivated by our observation that ultrasound imaging at different power levels exhibits different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus can be utilized, together with images at normal transducer power levels, to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments on phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images through power adjustments.
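
    The pixel-wise comparison idea can be caricatured as a two-image rule: tissue echoes fade as transmit power drops, while the strong specular returns from metal persist. The thresholding sketch below is an illustrative assumption, not the authors' exact processing; it flags pixels that retain most of their intensity at low power.

```python
def instrument_mask(img_high, img_low, ratio=0.6):
    """Pixel-wise comparison of images acquired at a normal (img_high)
    and a lowered (img_low) transducer power level. Pixels whose
    intensity survives the power drop (low/high >= ratio) are flagged
    as candidate instrument pixels; tissue pixels fade and are not."""
    return [[1 if hi > 0 and lo / hi >= ratio else 0
             for hi, lo in zip(row_h, row_l)]
            for row_h, row_l in zip(img_high, img_low)]
```

    Real data would of course need registration between the two acquisitions and a noise floor; the point here is only the power-varying separability the abstract describes.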

  4. UAV-guided navigation for ground robot tele-operation in a military reconnaissance environment.

    PubMed

    Chen, Jessie Y C

    2010-08-01

    A military reconnaissance environment was simulated to examine the performance of ground robotics operators who were instructed to utilise streaming video from an unmanned aerial vehicle (UAV) to navigate their ground robots to the locations of the targets. The effects of participants' spatial ability on their performance and workload were also investigated. Results showed that participants' overall performance (speed and accuracy) was better when they had access to images from larger UAVs with fixed orientations, compared with other UAV conditions (baseline with no UAV, micro air vehicle, and UAV with orbiting views). Participants experienced the highest workload when the UAV was orbiting. Those individuals with higher spatial ability performed significantly better and reported less workload than those with lower spatial ability. The results of the current study will further the understanding of ground robot operators' target search performance based on streaming video from UAVs. The results will also facilitate the implementation of ground/air robots in military environments and will be useful to the future military system design and training community.

  5. Reactive navigation for autonomous guided vehicle using neuro-fuzzy techniques

    NASA Astrophysics Data System (ADS)

    Cao, Jin; Liao, Xiaoqun; Hall, Ernest L.

    1999-08-01

    A neuro-fuzzy control method for navigation of an autonomous guided vehicle robot is described. Robot navigation is defined as the guiding of a mobile robot to a desired destination or along a desired path in an environment characterized by terrain and a set of distinct objects, such as obstacles and landmarks. The robot's autonomous navigation ability and road-following precision are mainly influenced by its control strategy and real-time control performance. Neural network and fuzzy logic control techniques can improve real-time control performance for mobile robots because of their robustness and error-tolerance ability. For a mobile robot to navigate automatically and rapidly, an important factor is to identify and classify the robot's current perceptual environment. In this paper, a new approach to feature identification and classification of the current perceptual environment, based on the analysis of a classifying neural network and a neuro-fuzzy algorithm, is presented. The significance of this work lies in the development of a new method for mobile robot navigation.
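    As a rough illustration of the fuzzy-logic side of such a controller (not the paper's design: the membership breakpoints, the two distance inputs, and the rule set are all illustrative assumptions), a minimal obstacle-avoidance steering rule can be written as:

    ```python
    def mu_near(d, d_near=0.5, d_far=2.0):
        """Membership degree of 'obstacle is near' for a distance reading d
        (meters, hypothetical units): 1 at or below d_near, falling linearly
        to 0 at d_far."""
        return max(0.0, min(1.0, (d_far - d) / (d_far - d_near)))

    def fuzzy_steering(left_dist, right_dist):
        """Defuzzified steering command in [-1, 1].

        Rules: obstacle near on the left -> steer right (+1);
               obstacle near on the right -> steer left (-1);
               both sides far -> go straight (0).
        Output is the weighted average of the rule consequents.
        """
        nl, nr = mu_near(left_dist), mu_near(right_dist)
        far = max(0.0, 1.0 - max(nl, nr))
        w = nl + nr + far
        return (nl * 1.0 + nr * -1.0 + far * 0.0) / w
    ```

    In the paper's framework a neural network would classify the perceptual environment and select or tune such rules; here the rules are fixed for clarity.
    
    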

  6. An Approach for Preoperative Planning and Performance of MR-guided Interventions Demonstrated With a Manual Manipulator in a 1.5T MRI Scanner

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seimenis, Ioannis; Tsekos, Nikolaos V.; Keroglou, Christoforos

    2012-04-15

    Purpose: The aim of this work was to develop and test a general methodology for the planning and performance of robot-assisted, MR-guided interventions. This methodology also includes the employment of software tools with appropriately tailored routines to effectively exploit the capabilities of MRI and address the relevant spatial limitations. Methods: The described methodology consists of: (1) patient-customized feasibility study that focuses on the geometric limitations imposed by the gantry, the robotic hardware, and interventional tools, as well as the patient; (2) stereotactic preoperative planning for initial positioning of the manipulator and alignment of its end-effector with a selected target; and (3) real-time, intraoperative tool tracking and monitoring of the actual intervention execution. Testing was performed inside a standard 1.5T MRI scanner in which the MR-compatible manipulator is deployed to provide the required access. Results: A volunteer imaging study demonstrates the application of the feasibility stage. A phantom study on needle targeting is also presented, demonstrating the applicability and effectiveness of the proposed preoperative and intraoperative stages of the methodology. For this purpose, a manually actuated, MR-compatible robotic manipulation system was used to accurately acquire a prescribed target through alternative approaching paths. Conclusions: The methodology presented and experimentally examined allows the effective performance of MR-guided interventions. It is suitable for, but not restricted to, needle-targeting applications assisted by a robotic manipulation system, which can be deployed inside a cylindrical scanner to provide the required access to the patient facilitating real-time guidance and monitoring.

  7. A gaussian mixture + demons deformable registration method for cone-beam CT-guided robotic transoral base-of-tongue surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Liu, W. P.; Schafer, S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Richmon, J.; Sorger, J.; Siewerdsen, J. H.; Taylor, R. H.

    2013-03-01

    Purpose: An increasingly popular minimally invasive approach to resection of oropharyngeal/base-of-tongue cancer is made possible by a transoral technique conducted with the assistance of a surgical robot. However, the highly deformed surgical setup (neck flexed, mouth open, and tongue retracted) compared to the typical patient orientation in preoperative images poses a challenge to guidance and localization of the tumor target and adjacent critical anatomy. Intraoperative cone-beam CT (CBCT) can account for such deformation, but due to the low contrast of soft tissue in CBCT images, direct localization of the target and critical tissues in CBCT images can be difficult. Such structures may be more readily delineated in preoperative CT or MR images, so a method to deformably register such information to intraoperative CBCT could offer significant value. This paper details the initial implementation of a deformable registration framework to align preoperative images with the deformed intraoperative scene and gives a preliminary evaluation of the geometric accuracy of registration in CBCT-guided transoral robotic surgery (TORS). Method: The deformable registration aligns preoperative CT or MR to intraoperative CBCT by integrating two established approaches. The volume of interest is first segmented (specifically, the region of the tongue from the tip to the hyoid), and a Gaussian mixture (GM) model of surface point clouds is used for rigid initialization (GMRigid) as well as an initial deformation (GMNonRigid). Next, refinement of the registration is performed using the Demons algorithm applied to distance transformations of the GM-registered and CBCT volumes. The registration accuracy of the framework was quantified in preliminary studies using a cadaver emulating preoperative and intraoperative setups. Geometric accuracy of registration was quantified in terms of target registration error (TRE) and surface distance error.
Result: With each step of the registration process, the framework demonstrated improved registration, achieving mean TRE of 3.0 mm following the GM rigid step, 1.9 mm following the GM nonrigid step, and 1.5 mm at the output of the registration process. Analysis of surface distance demonstrated a corresponding improvement of 2.2, 0.4, and 0.3 mm, respectively. The evaluation of registration error revealed accurate alignment in the region of interest for base-of-tongue robotic surgery owing to point-set selection in the GM steps and refinement in the deep aspect of the tongue in the Demons step. Conclusions: A promising framework has been developed for CBCT-guided TORS in which intraoperative CBCT provides a basis for registration of preoperative images to the highly deformed intraoperative setup. The registration framework is invariant to imaging modality (accommodating preoperative CT or MR) and is robust against CBCT intensity variations and artifacts, provided a corresponding segmentation of the volume of interest. The approach could facilitate overlay of preoperative planning data directly in stereo-endoscopic video in support of CBCT-guided TORS.
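    The final refinement step described in this record, Demons applied to distance transforms of segmentations, can be sketched generically. This is a textbook Thirion-style 2D implementation under assumed parameters (iteration count, smoothing), not the authors' registration code:

    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt, gaussian_filter, map_coordinates

    def demons_on_distance_maps(fixed_mask, moving_mask, n_iter=50, smooth=1.0):
        """Thirion-style demons sketch on signed distance transforms of two
        binary 2D segmentations. Returns the displacement field (dy, dx)
        and the mean absolute residual before and after registration."""
        def sdist(mask):
            # signed distance map: positive outside the shape, negative inside
            mask = mask.astype(bool)
            return distance_transform_edt(~mask) - distance_transform_edt(mask)

        F = sdist(fixed_mask)      # fixed (e.g., CBCT-derived) distance map
        M = sdist(moving_mask)     # moving (e.g., preoperative) distance map
        gy, gx = np.gradient(F)    # fixed-image demons force
        yy, xx = np.mgrid[0:F.shape[0], 0:F.shape[1]].astype(float)
        field = np.zeros((2,) + F.shape)

        def warp(field):
            # sample the moving map at the displaced coordinates
            return map_coordinates(M, [yy + field[0], xx + field[1]],
                                   order=1, mode='nearest')

        err_before = np.abs(warp(field) - F).mean()
        for _ in range(n_iter):
            diff = warp(field) - F
            denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
            field[0] -= diff * gy / denom   # Newton-like demons update
            field[1] -= diff * gx / denom
            field[0] = gaussian_filter(field[0], smooth)  # regularize field
            field[1] = gaussian_filter(field[1], smooth)
        err_after = np.abs(warp(field) - F).mean()
        return field, err_before, err_after
    ```

    The paper applies this idea in 3D after the GM rigid/nonrigid initialization; the distance-map formulation is what makes the step insensitive to intensity differences between modalities.
    
    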

  8. Incorporating target registration error into robotic bone milling

    NASA Astrophysics Data System (ADS)

    Siebold, Michael A.; Dillon, Neal P.; Webster, Robert J.; Fitzpatrick, J. Michael

    2015-03-01

    Robots have been shown to be useful in assisting surgeons in a variety of bone drilling and milling procedures. Examples include commercial systems for joint repair or replacement surgeries, with in vitro feasibility recently shown for mastoidectomy. Typically, the robot is guided along a path planned on a CT image that has been registered to the physical anatomy in the operating room, which is in turn registered to the robot. The registrations often take advantage of the high accuracy of fiducial registration, but, because no real-world registration is perfect, the drill guided by the robot will inevitably deviate from its planned path. The extent of the deviation can vary from point to point along the path because of the spatial variation of target registration error. The allowable deviation can also vary spatially based on the necessary safety margin between the drill tip and various nearby anatomical structures along the path. Knowledge of the expected spatial distribution of registration error can be obtained from theoretical models or experimental measurements and used to modify the planned path. The objective of such modifications is to achieve desired probabilities for sparing specified structures. This approach has previously been studied for drilling straight holes but has not yet been generalized to milling procedures, such as mastoidectomy, in which cavities of more general shapes must be created. In this work, we present a general method for altering any path to achieve specified probabilities for any spatial arrangement of structures to be protected. We validate the method via numerical simulations in the context of mastoidectomy.
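    The core idea above, modifying a planned path so that the probability of breaching a protected structure stays below a chosen bound, can be illustrated with a Monte Carlo sketch. The isotropic Gaussian error model, spherical structure, and numeric parameters are illustrative assumptions, not the paper's method:

    ```python
    import numpy as np

    def breach_probability(point, structure_center, structure_radius,
                           tre_sigma, n_samples=20000, rng=None):
        """Estimate the probability that a planned drill point, perturbed by
        zero-mean isotropic Gaussian registration error (std tre_sigma, mm,
        per axis), lands inside a spherical protected structure."""
        rng = np.random.default_rng(0) if rng is None else rng
        samples = point + rng.normal(0.0, tre_sigma, size=(n_samples, 3))
        dist = np.linalg.norm(samples - structure_center, axis=1)
        return np.mean(dist < structure_radius)

    def retract_point(point, structure_center, structure_radius,
                      tre_sigma, p_max=0.01, step=0.1):
        """Move the point away from the structure, along the line joining
        them, until the estimated breach probability drops below p_max."""
        point = np.asarray(point, float)
        direction = point - structure_center
        direction = direction / np.linalg.norm(direction)
        while breach_probability(point, structure_center, structure_radius,
                                 tre_sigma) > p_max:
            point = point + step * direction  # retract in small steps
        return point
    ```

    A full milling plan would apply such an adjustment at every path point, with the spatially varying TRE distribution replacing the constant sigma used here.
    
    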

  9. Blurring the boundaries between frame-based and frameless stereotaxy: feasibility study for brain biopsies performed with the use of a head-mounted robot.

    PubMed

    Grimm, Florian; Naros, Georgios; Gutenberg, Angelika; Keric, Naureen; Giese, Alf; Gharabaghi, Alireza

    2015-09-01

    Frame-based stereotactic interventions are considered the gold standard for brain biopsies, but they have limitations with regard to flexibility and patient comfort because of the bulky head ring attached to the patient. Frameless image guidance systems that use scalp fiducial markers offer more flexibility and patient comfort but provide less stability and accuracy during drilling and biopsy needle positioning. Head-mounted robot-guided biopsies could provide the advantages of these 2 techniques without the downsides. The goal of this study was to evaluate the feasibility and safety of a robotic guidance device, affixed to the patient's skull through a small mounting platform, for use in brain biopsy procedures. This was a retrospective study of 37 consecutive patients who presented with supratentorial lesions and underwent brain biopsy procedures in which a surgical guidance robot was used to determine clinical outcomes and technical procedural operability. The portable head-mounted device was well tolerated by the patients and enabled stable drilling and needle positioning during surgery. Flexible adjustments of predefined paths and selection of new trajectories were successfully performed intraoperatively without the need for manual settings and fixations. The patients experienced no permanent deficits or infections after surgery. The head-mounted robot-guided approach presented here combines the stability of a bone-mounted set-up with the flexibility and tolerability of frameless systems. By reducing human interference (i.e., manual parameter settings, calibrations, and adjustments), this technology might be particularly useful in neurosurgical interventions that necessitate multiple trajectories.

  10. Incorporating Target Registration Error Into Robotic Bone Milling

    PubMed Central

    Siebold, Michael A.; Dillon, Neal P.; Webster, Robert J.; Fitzpatrick, J. Michael

    2015-01-01

    Robots have been shown to be useful in assisting surgeons in a variety of bone drilling and milling procedures. Examples include commercial systems for joint repair or replacement surgeries, with in vitro feasibility recently shown for mastoidectomy. Typically, the robot is guided along a path planned on a CT image that has been registered to the physical anatomy in the operating room, which is in turn registered to the robot. The registrations often take advantage of the high accuracy of fiducial registration, but, because no real-world registration is perfect, the drill guided by the robot will inevitably deviate from its planned path. The extent of the deviation can vary from point to point along the path because of the spatial variation of target registration error. The allowable deviation can also vary spatially based on the necessary safety margin between the drill tip and various nearby anatomical structures along the path. Knowledge of the expected spatial distribution of registration error can be obtained from theoretical models or experimental measurements and used to modify the planned path. The objective of such modifications is to achieve desired probabilities for sparing specified structures. This approach has previously been studied for drilling straight holes but has not yet been generalized to milling procedures, such as mastoidectomy, in which cavities of more general shapes must be created. In this work, we present a general method for altering any path to achieve specified probabilities for any spatial arrangement of structures to be protected. We validate the method via numerical simulations in the context of mastoidectomy. PMID:26692630

  11. Robotic Stereotaxy in Cranial Neurosurgery: A Qualitative Systematic Review.

    PubMed

    Fomenko, Anton; Serletis, Demitre

    2017-12-14

    Modern-day stereotactic techniques have evolved to tackle the neurosurgical challenge of accurately and reproducibly accessing specific brain targets. Neurosurgical advances have been made in synergy with sophisticated technological developments and engineering innovations such as automated robotic platforms. Robotic systems offer a unique combination of dexterity, durability, indefatigability, and precision. To perform a systematic review of robotic integration for cranial stereotactic guidance in neurosurgery. Specifically, we comprehensively analyze the strengths and weaknesses of a spectrum of robotic technologies, past and present, including details pertaining to each system's kinematic specifications and targeting accuracy profiles. Eligible articles on human clinical applications of cranial robotic-guided stereotactic systems between 1985 and 2017 were extracted from several electronic databases, with a focus on stereotactic biopsy procedures, stereoelectroencephalography, and deep brain stimulation electrode insertion. Cranial robotic stereotactic systems feature serial or parallel architectures with 4 to 7 degrees of freedom, and frame-based or frameless registration. Indications for robotic assistance are diversifying, and include stereotactic biopsy, deep brain stimulation and stereoelectroencephalography electrode placement, ventriculostomy, and ablation procedures. Complication rates are low, and mainly consist of hemorrhage. Newer systems benefit from increasing targeting accuracy, intraoperative imaging ability, improved safety profiles, and reduced operating times. We highlight emerging future directions pertaining to the integration of robotic technologies into future neurosurgical procedures. Notably, a trend toward miniaturization, cost-effectiveness, frameless registration, and increasing safety and accuracy characterize successful stereotactic robotic technologies. Copyright © 2017 by the Congress of Neurological Surgeons

  12. [Impact of digital technology on clinical practices: perspectives from surgery].

    PubMed

    Zhang, Y; Liu, X J

    2016-04-09

    Digital medical technologies, or computer-aided medical procedures, refer to imaging, 3D reconstruction, virtual design, 3D printing, navigation-guided surgery, and robot-assisted surgery techniques. These techniques are integrated into conventional surgical procedures to create new clinical protocols known as "digital surgical techniques". Conventional health care is characterized by subjective experience, while digital medical technologies bring quantifiable information, transferable data, repeatable methods, and predictable outcomes into clinical practice. Integrated into clinical practice, digital techniques facilitate surgical care by improving outcomes and reducing risks. Digital techniques are becoming increasingly popular in trauma surgery, orthopedics, neurosurgery, plastic and reconstructive surgery, imaging, and anatomic sciences. Robot-assisted surgery is also evolving and being applied in general surgery, cardiovascular surgery, and orthopedic surgery. The rapid development of digital medical technologies is changing healthcare and clinical practice. It is therefore important for all clinicians to purposefully adapt to these technologies and improve their clinical outcomes.

  13. Variants of guided self-organization for robot control.

    PubMed

    Martius, Georg; Herrmann, J Michael

    2012-09-01

    Autonomous robots can generate exploratory behavior by self-organization of the sensorimotor loop. We show that the behavioral manifold that is covered in this way can be modified in a goal-dependent way without reducing the self-induced activity of the robot. We present three strategies for guided self-organization, namely by using external rewards, a problem-specific error function, or assumptions about the symmetries of the desired behavior. The strategies are analyzed for two different robots in a physically realistic simulation.

  14. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots.

    PubMed

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people's social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people's judgments of robot behavior overlap with or differ from those applied to human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability, and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate: (1) substantially similar judgments of human and robot behavior, both in ascriptions of intentionality/controllability/desirability and in plausibility judgments of behavior explanations; (2) a high level of agreement in judgments of robot behavior, slightly lower than but still largely similar to the agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people's intentional stance toward the robot was in this case very similar to their stance toward the human.

  15. An MRI-Compatible Robotic System With Hybrid Tracking for MRI-Guided Prostate Intervention

    PubMed Central

    Krieger, Axel; Iordachita, Iulian I.; Guion, Peter; Singh, Anurag K.; Kaushal, Aradhana; Ménard, Cynthia; Pinto, Peter A.; Camphausen, Kevin; Fichtinger, Gabor

    2012-01-01

    This paper reports the development, evaluation, and first clinical trials of the access to the prostate tissue (APT) II system—a scanner independent system for magnetic resonance imaging (MRI)-guided transrectal prostate interventions. The system utilizes novel manipulator mechanics employing a steerable needle channel and a novel six degree-of-freedom hybrid tracking method, comprising passive fiducial tracking for initial registration and subsequent incremental motion measurements. Targeting accuracy of the system in prostate phantom experiments and two clinical human-subject procedures is shown to compare favorably with existing systems using passive and active tracking methods. The portable design of the APT II system, using only standard MRI image sequences and minimal custom scanner interfacing, allows the system to be easily used on different MRI scanners. PMID:22009867

  16. Accuracy of S2 Alar-Iliac Screw Placement Under Robotic Guidance.

    PubMed

    Laratta, Joseph L; Shillingford, Jamal N; Lombardi, Joseph M; Alrabaa, Rami G; Benkli, Barlas; Fischer, Charla; Lenke, Lawrence G; Lehman, Ronald A

    Case series. To determine the safety and feasibility of S2 alar-iliac (S2AI) screw placement under robotic guidance. Similar to standard iliac fixation, S2AI screws aid in achieving fixation across the sacropelvic junction and decreasing S1 screw strain. Additionally, the S2AI technique minimizes prominent instrumentation and the need for offset connectors in the fusion construct. Herein, we present an analysis of the largest series of robotic-guided S2AI screws in the literature without any significant author conflicts of interest with the robotics industry. Twenty-three consecutive patients who underwent spinopelvic fixation with 46 S2AI screws under robotic guidance were analyzed from 2015 to 2016. Screws were placed by two senior spine surgeons, along with various fellow or resident surgical assistants, using a proprietary robotic guidance system (Renaissance; Mazor Robotics Ltd., Caesarea, Israel). Screw position and accuracy were assessed on intraoperative CT O-arm scans and analyzed using three-dimensional interactive viewing and manipulation of the images. The average caudal angle in the sagittal plane was 31.0° ± 10.0°. The average horizontal angle in the axial plane using the posterior superior iliac spine as a reference was 42.8° ± 6.6°. The average S1 screw to S2AI screw angle was 11.3° ± 9.9°. Two violations of the iliac cortex were noted, with an average breach distance of 7.9 ± 4.8 mm. One breach was posterior (2.2%) and one was anterior (2.2%). The overall robotic S2AI screw accuracy rate was 95.7%. There were no intraoperative neurologic, vascular, or visceral complications related to the placement of the S2AI screws. Spinopelvic fixation achieved using a bone-mounted miniature robotic-guided S2AI screw insertion technique is safe and reliable. Despite two breaches, no complications related to the placement of the S2AI screws occurred in this series. Level IV, therapeutic. Copyright © 2017 Scoliosis Research Society. Published by Elsevier Inc. All rights reserved.

  17. Design and Development of a High Speed Sorting System Based on Machine Vision Guiding

    NASA Astrophysics Data System (ADS)

    Zhang, Wenchang; Mei, Jiangping; Ding, Yabin

    In this paper, a vision-based control strategy to perform high-speed pick-and-place tasks on an automated product line is proposed, and the relevant control software is developed. A Delta robot controls a suction gripper that grasps disordered objects from one moving conveyor and places them on another in order. A CCD camera captures one image each time the conveyor moves a distance ds, and object positions and shapes are obtained through image processing. A target-tracking method based on a servo motor synchronized with the conveyor is used to perform the high-speed sorting operation in real time. Experiments conducted on the Delta robot sorting system demonstrate the efficiency and validity of the proposed vision-based control strategy.
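    The encoder-synchronized tracking described above can be sketched as follows; the function name, coordinate conventions, and belt-axis parameter are illustrative assumptions. Because the belt moves rigidly, an object detected at one encoder reading can be located later purely from the encoder displacement:

    ```python
    def predict_pick_position(obj_cam_xy, encoder_at_capture, encoder_now,
                              belt_axis=(1.0, 0.0)):
        """Predict an object's current position on the conveyor.

        The camera detects the object at belt coordinates obj_cam_xy when
        the conveyor encoder reads encoder_at_capture. By the time the
        robot picks (encoder_now), the object has advanced by the encoder
        displacement along the unit belt axis.
        """
        dx = encoder_now - encoder_at_capture
        return (obj_cam_xy[0] + dx * belt_axis[0],
                obj_cam_xy[1] + dx * belt_axis[1])
    ```

    Driving the prediction from the encoder rather than a clock is what makes the scheme robust to conveyor speed variations.
    
    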

  18. Optical Design of COATLI: A Diffraction-Limited Visible Imager with Fast Guiding and Active Optics Correction

    NASA Astrophysics Data System (ADS)

    Fuentes-Fernández, J.; Cuevas, S.; Watson, A. M.

    2018-04-01

    We present the optical design of COATLI, a two-channel visible imager for a commercial 50 cm robotic telescope. COATLI will deliver diffraction-limited images (approximately 0.3 arcsec FWHM) in the riz bands, inside a 4.2 arcmin field, and seeing-limited images (approximately 0.6 arcsec FWHM) in the B and g bands, inside a 5 arcmin field, by means of a tip-tilt mirror for fast guiding and a deformable mirror for active optics, both located at two optically transferred pupil planes. The optical design is based on two collimator-camera systems plus a pupil transfer relay, using achromatic doublets of CaF2 and S-FTM16 and one triplet of N-BK7 and CaF2. We discuss the efficiency, tolerancing, thermal behavior, and ghosts. COATLI will be installed at the Observatorio Astronómico Nacional in Sierra San Pedro Mártir, Baja California, Mexico, in 2018.

  19. Suitability of healthcare robots for a dementia unit and suggested improvements.

    PubMed

    Robinson, Hayley; MacDonald, Bruce A; Kerse, Ngaire; Broadbent, Elizabeth

    2013-01-01

    To investigate the suitability of a new eldercare robot (Guide) for people with dementia and their caregivers compared with one that has been successfully used before (Paro), and to generate suggestions for improved robot-enhanced dementia care. Cross-sectional study. A researcher demonstrated both robots in a random order to each staff member alone, or to each resident together with his/her relative(s). The researcher encouraged the participants to interact with each robot and asked staff and relatives a series of open-ended questions about each robot. A secure dementia residential facility in Auckland, New Zealand. Ten people with dementia and 11 of their relatives, and five staff members. Each robot interaction was videotaped and coded for the number of times the resident looked at, smiled, touched, and talked to and about each robot, as well as relative interactions with the resident. Qualitative analysis was used to code the open-ended questions. Residents smiled, touched and talked to Paro significantly more than Guide. Paro was found to be more acceptable to family members, staff, and residents, although many acknowledged that Guide had the potential to be useful if adapted for this population in terms of ergonomics and simplification. Healthcare robots in dementia settings have to be simple and easy to use as well as stimulating and entertaining. This research highlights how eldercare robots may be adapted to have the best effects in dementia settings. It is concluded that Paro's sounds could be modified to be more acceptable to this population. The ergonomic design of Guide could be reviewed and the software application could be simplified and targeted to people with dementia. Copyright © 2013 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  20. Improved Image-Guided Laparoscopic Prostatectomy

    DTIC Science & Technology

    2013-07-01

    Automatic robotic-assisted palpation has been designed, implemented and tested. Two studies have been completed: 1) ex-vivo prostate specimens using... concerned with the additional processing of the specimens. We responded by designing a phantom box to improve the process so that pathologists could... of the study will be presented below, at task 3a. Task 3. Design and build new LAPUS probe (months 13-24). Data from the ex-vivo

  1. Robot-assisted, ultrasound-guided minimally invasive navigation tool for brachytherapy and ablation therapy: initial assessment

    NASA Astrophysics Data System (ADS)

    Bhattad, Srikanth; Escoto, Abelardo; Malthaner, Richard; Patel, Rajni

    2015-03-01

    Brachytherapy and thermal ablation are relatively new approaches in robot-assisted minimally invasive interventions for treating malignant tumors. Ultrasound remains the most favored choice for imaging feedback, its benefits being cost-effectiveness, freedom from ionizing radiation, and easy access in an OR. However, it does not generally provide high-contrast, noise-free images. Distortion occurs when the sound waves pass through a medium that contains air and/or when the target organ is deep within the body. The distorted images make it quite difficult to recognize and localize tumors and surgical tools. Often a tool, such as a bevel-tipped needle, deflects from its path during insertion, making it difficult to detect the needle tip using a single perspective view. Shifting of the target due to cardiac and/or respiratory motion can add further errors in reaching the target. This paper describes a comprehensive system that uses robot dexterity to capture 2D ultrasound images in various pre-determined modes for generating 3D ultrasound images and assists in maneuvering a surgical tool. An interactive 3D virtual reality environment is developed that visualizes the various artifacts present at the surgical site in real time. The system helps avoid image distortion by grabbing images from multiple positions and orientations to provide a 3D view. Using the methods developed for this application, an accuracy of 1.3 mm was achieved in target attainment in an in-vivo experiment subject to tissue motion. Accuracies of 1.36 mm and 0.93 mm, respectively, were achieved in ex-vivo experiments with and without externally induced motion. An ablation-monitoring widget that visualizes the changes during the complete ablation process and enables evaluation of the process in its entirety is integrated.

  2. Motion-compensated hand-held common-path Fourier-domain optical coherence tomography probe for image-guided intervention

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Song, Cheol; Liu, Xuan; Kang, Jin U.

    2013-03-01

    A motion-compensated hand-held common-path Fourier-domain optical coherence tomography imaging probe has been developed for image-guided intervention during microsurgery. A hand-held prototype instrument was designed and fabricated by integrating an imaging fiber probe inside a stainless steel needle attached to the ceramic shaft of a piezoelectric motor housed in an aluminum handle. The fiber probe obtains A-scan images. Distance information extracted from the A-scans is used to track the sample surface, and a fixed distance is maintained by feedback motor control, which effectively compensates for hand tremor and target movements in the axial direction. A graphical user interface, real-time data processing, and visualization based on a CPU-GPU hybrid programming architecture were developed and used in the implementation of this system. To validate the system, free-hand optical coherence tomography images of various samples were obtained. The system can be easily integrated into microsurgical tools and robotics for a wide range of clinical applications. Such tools could offer physicians the freedom to easily image sites of interest with reduced risk and higher image quality.
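    The axial motion-compensation loop can be sketched as follows. The axial resolution, standoff target, proportional gain, and sign convention are illustrative assumptions, not the instrument's actual parameters: the surface distance is taken from the strongest A-scan reflection and fed to a proportional controller that holds a fixed standoff.

    ```python
    import numpy as np

    def surface_distance(ascan, axial_res_um=5.0):
        """Distance to the sample surface, taken as the index of the
        strongest reflection in the A-scan times the axial resolution."""
        return float(np.argmax(ascan)) * axial_res_um

    def motor_command(ascan, target_um=500.0, gain=0.5, axial_res_um=5.0):
        """Proportional correction (micrometers) driving the piezo motor so
        the measured surface distance returns to the target standoff.
        Positive output means the probe tip should advance toward the
        surface's new position (sign convention assumed)."""
        error = surface_distance(ascan, axial_res_um) - target_um
        return gain * error
    ```

    A real controller would add peak-quality checks and rate limiting, but this captures the feedback principle described in the abstract.
    
    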

  3. Percutaneous Steerable Robotic Tool Delivery Platform and Metal MEMS Device for Tissue Manipulation and Approximation: Closure of Patent Foramen Ovale in an Animal Model

    PubMed Central

    Vasilyev, Nikolay V.; Gosline, Andrew H.; Butler, Evan; Lang, Nora; Codd, Patrick J.; Yamauchi, Haruo; Feins, Eric N.; Folk, Chris R.; Cohen, Adam L.; Chen, Richard; Zurakowski, David; del Nido, Pedro J.; Dupont, Pierre E

    2013-01-01

    Background Beating-heart image-guided intracardiac interventions have been evolving rapidly. To extend the domain of catheter-based and transcardiac interventions into reconstructive surgery, a new robotic tool delivery platform (TDP) and tissue approximation device have been developed. Initial results employing these tools to perform patent foramen ovale (PFO) closure are described. Methods and Results A robotic TDP comprising superelastic metal tubes provides the capability of delivering and manipulating tools and devices inside the beating heart. A new device technology is also presented that utilizes a metal-based MicroElectroMechanical Systems (MEMS) manufacturing process to produce fully-assembled and fully-functional millimeter-scale tools. As a demonstration of both technologies, a PFO creation and closure was performed in a swine model. In the first group of animals (N=10), a preliminary study was performed. The procedural technique was validated with a transcardiac handheld delivery platform and epicardial echocardiography, video-assisted cardioscopy, and fluoroscopy. In the second group (N=9), the procedure was performed percutaneously using the robotic TDP under epicardial echocardiography and fluoroscopy imaging. All PFOs were completely closed in the first group. In the second group, the PFO was not successfully created in 1 animal, and the defects were completely closed in 6 of the 8 remaining animals. Conclusions In contrast to existing robotic catheter technologies, the robotic TDP utilizes a combination of stiffness and active steerability along its length to provide the positioning accuracy and force application capability necessary for tissue manipulation. In combination with a MEMS tool technology, it can enable reconstructive procedures inside the beating heart. PMID:23899870

  4. Computed tomography (CT)-compatible remote center of motion needle steering robot: Fusing CT images and electromagnetic sensor data.

    PubMed

    Shahriari, Navid; Heerink, Wout; van Katwijk, Tim; Hekman, Edsko; Oudkerk, Matthijs; Misra, Sarthak

    2017-07-01

    Lung cancer is the most common cause of cancer-related death, and early detection can reduce the mortality rate. Patients with lung nodules greater than 10 mm usually undergo a computed tomography (CT)-guided biopsy. However, aligning the needle with the target is difficult, and the needle tends to deflect from a straight path. In this work, we present a CT-compatible robotic system which can both position the needle at the puncture point and insert and rotate it. The robot has a remote-center-of-motion arm realized through a parallel mechanism. A new needle steering scheme is also developed in which CT images are fused with electromagnetic (EM) sensor data using an unscented Kalman filter. The data fusion allows us to steer the needle using the real-time EM tracker data. The robot design and the steering scheme are validated using three experimental cases. Experimental Cases I and II evaluate the accuracy and CT-compatibility of the robot arm, respectively. In experimental Case III, the needle is steered towards 5 real targets embedded in an anthropomorphic gelatin phantom of the thorax. The mean targeting error for the 5 experiments is 1.78 ± 0.70 mm. The proposed robotic system is shown to be CT-compatible with low targeting error. Small nodule size and large needle diameter are two risk factors that can lead to complications in lung biopsy. Our results suggest that nodules larger than 5 mm in diameter can be targeted using our method, which may result in a lower complication rate. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
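
    The fusion step in this record uses an unscented Kalman filter over the full needle kinematics, which the abstract does not spell out. As a rough illustration of the underlying idea only, here is a minimal one-dimensional Kalman update that blends a CT-derived tip estimate with a noisier real-time EM reading; all numbers and variances are hypothetical, not taken from the study:

```python
def kalman_update(x_prior, var_prior, z, var_meas):
    """Fuse a prior estimate with a new measurement (1-D Kalman update)."""
    k = var_prior / (var_prior + var_meas)   # Kalman gain: weight of the measurement
    x_post = x_prior + k * (z - x_prior)     # corrected estimate
    var_post = (1.0 - k) * var_prior         # fused uncertainty is always smaller
    return x_post, var_post

# Hypothetical numbers: CT-derived tip depth 42.0 mm (variance 0.04 mm^2),
# EM tracker reads 42.6 mm with higher noise (variance 0.16 mm^2).
x, v = kalman_update(42.0, 0.04, 42.6, 0.16)
```

    The actual UKF propagates sigma points through the nonlinear needle model; this linear update only shows how the gain weights the two data sources by their variances, so the fused estimate lands closer to the less noisy CT prior.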

  5. Ultrasound elastography as a tool for imaging guidance during prostatectomy: Initial experience

    PubMed Central

    Fleming, Ioana Nicolaescu; Kut, Carmen; Macura, Katarzyna J.; Su, Li-Ming; Rivaz, Hassan; Schneider, Caitlin; Hamper, Ulrike; Lotan, Tamara; Taylor, Russ; Hager, Gregory; Boctor, Emad

    2012-01-01

    Summary Background During laparoscopic or robot-assisted laparoscopic prostatectomy, the surgeon lacks the tactile feedback that can help tailor the size of the excision. Ultrasound elastography (USE) is an emerging imaging technology which maps the stiffness of tissue. In this paper we evaluate USE as a palpation-equivalent tool for intraoperative image-guided robot-assisted laparoscopic prostatectomy. Material/Methods Two studies were performed: 1) a laparoscopic ultrasound probe was used in a comparative study of manual palpation versus USE in detecting tumor surrogates in synthetic and ex-vivo tissue phantoms; N=25 participants (students) were asked to report the presence, size and depth of these simulated lesions; and 2) a standard ultrasound probe was used to evaluate USE on ex-vivo human prostate specimens (N=10 lesions in N=6 specimens) to differentiate hard versus soft lesions with pathology correlation. Results were validated by pathology findings, and also by in-vivo and ex-vivo MR imaging correlation. Results In the comparative study, USE displayed higher accuracy and specificity in tumor detection (sensitivity=84%, specificity=74%). Tumor diameters and depths were better estimated using USE than with manual palpation. USE also proved consistent in identification of lesions in ex-vivo prostate specimens: hard and soft, malignant and benign, central and peripheral. Conclusions USE is a strong candidate for assisting surgeons by providing palpation-equivalent evaluation of tumor location, boundaries and extracapsular extension. The results encourage us to pursue further testing in the robotic laparoscopic environment. PMID:23111738

  6. Autonomous Robotic Inspection in Tunnels

    NASA Astrophysics Data System (ADS)

    Protopapadakis, E.; Stentoumis, C.; Doulamis, N.; Doulamis, A.; Loupos, K.; Makantasis, K.; Kopsiaftis, G.; Amditis, A.

    2016-06-01

    In this paper, an automatic robotic inspector for tunnel assessment is presented. The proposed platform is able to autonomously navigate within civil infrastructure, capture stereo images and process/analyse them in order to identify defect types. First, cracks are detected via deep learning approaches. Then, a detailed 3D model of the cracked area is created using photogrammetric methods. Finally, laser profiling of the tunnel's lining is performed for a narrow region close to the detected crack, allowing potential deformations to be deduced. The robotic platform consists of an autonomous mobile vehicle and a crane arm, guided by the computer-vision-based crack detector, carrying the ultrasound sensors, stereo cameras and laser scanner. Visual inspection is based on convolutional neural networks, which support the creation of high-level discriminative features for complex non-linear pattern classification. Real-time 3D information is then accurately calculated, and the crack position and orientation are passed to the robotic platform. The entire system has been evaluated in railway and road tunnels, i.e. on the Egnatia Highway and in London Underground infrastructure.
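
    The crack detector in this record is a convolutional neural network, whose elementary operation is a 2-D convolution over image patches. As a toy illustration only (not the authors' network: the kernel, patch, and thresholding are invented here), a single hand-set Laplacian-style kernel already responds strongly to a thin dark crack-like line:

```python
def conv2d(img, kernel):
    """Valid-mode 2-D convolution (strictly, cross-correlation) in pure Python."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# Laplacian-style kernel: strong positive response on thin dark lines.
laplacian = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]
# Toy 5x5 patch: bright lining (1) with a dark vertical crack (0) in the middle.
patch = [[1, 1, 0, 1, 1]] * 5
response = conv2d(patch, laplacian)   # peaks along the crack column
```

    A CNN learns many such kernels (and nonlinear combinations of them) from labelled crack images instead of using one hand-set filter; this sketch only shows the convolution primitive itself.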

  7. Folk-Psychological Interpretation of Human vs. Humanoid Robot Behavior: Exploring the Intentional Stance toward Robots

    PubMed Central

    Thellman, Sam; Silvervarg, Annika; Ziemke, Tom

    2017-01-01

    People rely on shared folk-psychological theories when judging behavior. These theories guide people’s social interactions and therefore need to be taken into consideration in the design of robots and other autonomous systems expected to interact socially with people. It is, however, not yet clear to what degree the mechanisms that underlie people’s judgments of robot behavior overlap with or differ from those applied to human or animal behavior. To explore this issue, participants (N = 90) were exposed to images and verbal descriptions of eight different behaviors exhibited either by a person or a humanoid robot. Participants were asked to rate the intentionality, controllability and desirability of the behaviors, and to judge the plausibility of seven different types of explanations derived from a recently proposed psychological model of lay causal explanation of human behavior. Results indicate (1) substantially similar judgments of human and robot behavior, in terms of both (1a) ascriptions of intentionality/controllability/desirability and (1b) plausibility judgments of behavior explanations; (2a) a high level of agreement in judgments of robot behavior, (2b) slightly lower than, but still largely similar to, the agreement over human behaviors; and (3) systematic differences in judgments concerning the plausibility of goals and dispositions as explanations of human vs. humanoid behavior. Taken together, these results suggest that people’s intentional stance toward the robot was in this case very similar to their stance toward the human. PMID:29184519

  8. Vision-guided micromanipulation system for biomedical application

    NASA Astrophysics Data System (ADS)

    Shim, Jae-Hong; Cho, Sung-Yong; Cha, Dong-Hyuk

    2004-10-01

    In recent years, various studies on biomedical applications of robots have been carried out. In particular, robotic manipulation of biological cells has been studied by many researchers. Most biological cells are roughly spherical in shape. Commercial biological manipulation systems have used only two-dimensional images from optical microscopes, and manipulation of the cells depends mainly on the subjective judgment of the operator. For these reasons, problems arise such as slippage, rupture of the cell membrane, and damage to the pipette tip. To overcome these problems, we have proposed a vision-guided biological cell manipulation system. The newly proposed system makes use of vision and graphics techniques, so that an operator can inject a biological cell systematically and objectively. The proposed system can also measure the contact force that occurs during injection of a biological cell, and the measured force can be transmitted to the operator by the proposed haptic device. Consequently, the proposed system can handle biological cells safely, without damage. This paper presents our vision-guided manipulation techniques and the concept of contact-force sensing. A series of experiments shows that the proposed vision-guided manipulation system is applicable to precision manipulation of biological material such as DNA.

  9. Multipurpose surgical robot as a laparoscope assistant.

    PubMed

    Nelson, Carl A; Zhang, Xiaoli; Shah, Bhavin C; Goede, Matthew R; Oleynikov, Dmitry

    2010-07-01

    This study demonstrates the effectiveness of a new, compact surgical robot at improving laparoscope guidance. Currently, the assistant guiding the laparoscope camera tends to be less experienced and requires physical and verbal direction from the surgeon. Human guidance has disadvantages of fatigue and shakiness leading to inconsistency in the field of view. This study investigates whether replacing the assistant with a compact robot can improve the stability of the surgeon's field of view and also reduce crowding at the operating table. A compact robot based on a bevel-geared "spherical mechanism" with 4 degrees of freedom and capable of full dexterity through a 15-mm port was designed and built. The robot was mounted on the standard railing of the operating table and used to manipulate a laparoscope through a supraumbilical port in a porcine model via a joystick controlled externally by a surgeon. The process was videotaped externally via digital video recorder and internally via laparoscope. Robot position data were also recorded within the robot's motion control software. The robot effectively manipulated the laparoscope in all directions to provide a clear and consistent view of liver, small intestine, and spleen. Its range of motion was commensurate with typical motions executed by a human assistant and was well controlled with the joystick. Qualitative analysis of the video suggested that this method of laparoscope guidance provides highly stable imaging during laparoscopic surgery, which was confirmed by robot position data. Because the robot was table-mounted and compact in design, it increased standing room around the operation table and did not interfere with the workspace of other surgical instruments. The study results also suggest that this robotic method may be combined with flexible endoscopes for highly dexterous visualization with more degrees of freedom.

  10. Intelligent lead: a novel HRI sensor for guide robots.

    PubMed

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper introduces a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a certain desired distance, allowing the user to move freely. It is therefore necessary to acquire control commands and the user's position in real time. We suggest a new sensor fusion system to achieve this objective, which we call the "intelligent lead". The objective of the intelligent lead is to acquire a stable user-to-robot distance, speed-control volume and turn-control volume, even when the robot platform carrying the intelligent lead is shaken on uneven ground. In this paper we describe a precise Extended Kalman Filter (EKF) procedure for this purpose. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit); their measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the robot can reach its goal points while maintaining the appropriate distance between the robot and the user. The results show that the intelligent lead proposed in this paper can serve as a new HRI sensor, combining a joystick and a distance measure, in mobile settings where the robot and the user are moving at the same time.
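
    The abstract's goal of "keeping a certain desired distance" while following the user is, at its simplest, a feedback control problem. The following proportional-control sketch is purely illustrative (the gain, desired distance and saturation are invented here, and the paper's actual controller is not described in the abstract): it maps the EKF-estimated user-to-robot distance to a forward-speed correction for the guide robot:

```python
def speed_command(measured_dist, desired_dist=1.0, gain=0.8, v_max=1.2):
    """Proportional speed correction for a guide robot leading a follower (m, m/s).
    If the user closes in (gap below desired), the robot speeds up to restore
    the gap; if the user falls behind, the correction is negative and the robot
    slows.  Output is saturated at the platform's maximum speed.  All parameter
    values are hypothetical."""
    error = desired_dist - measured_dist
    v = gain * error
    return max(-v_max, min(v_max, v))
```

    A real implementation would add a deadband around the desired distance and combine this with the turn-control volume; the point here is only the sign convention and the saturation.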

  11. End-tidal CO2-guided automated robot CPR system in the pig. Preliminary communication.

    PubMed

    Suh, Gil Joon; Park, Jaeheung; Lee, Jung Chan; Na, Sang Hoon; Kwon, Woon Yong; Kim, Kyung Su; Kim, Taegyun; Jung, Yoon Sun; Ko, Jung-In; Shin, So Mi; You, Kyoung Min

    2018-06-01

    Our aim was to compare the efficacy of an end-tidal CO2-guided automated robot CPR (robot CPR) system with manual CPR and mechanical device CPR. We developed the algorithm of the robot CPR system, which automatically finds the optimal compression position under the guidance of end-tidal CO2 feedback, in swine models of cardiac arrest. Then, 18 pigs after 11 min of cardiac arrest were randomly assigned to one of three groups: robot CPR, LUCAS CPR, and manual CPR (n = 6 each). Return of spontaneous circulation (ROSC) and Neurological Deficit Score 48 h after ROSC were compared. ROSC was achieved in 5, 4, and 3 pigs in the robot CPR, LUCAS CPR, and manual CPR groups, respectively (p = 0.47). Robot CPR showed a significant difference in Neurological Deficit Score 48 h after ROSC compared to manual CPR, whereas LUCAS CPR showed no significant difference over manual CPR (p = 0.01; robot versus manual adjusted p = 0.04, robot versus LUCAS adjusted p = 0.07, manual versus LUCAS adjusted p = 1.00). The end-tidal CO2-guided automated robot CPR system did not significantly improve the ROSC rate in a swine model of cardiac arrest. However, robot CPR showed significant improvement of Neurological Deficit Score 48 h after ROSC compared to manual CPR, while LUCAS CPR showed no significant improvement compared to manual CPR. Copyright © 2018 Elsevier B.V. All rights reserved.
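
    The abstract states that the algorithm "automatically finds the optimal compression position under the guidance of end-tidal CO2 feedback" but does not describe the search itself. One generic way to realize such feedback-guided optimization is a greedy hill-climb over a grid of candidate chest positions; the sketch below is purely illustrative (the search strategy, step size and stubbed capnograph model are all assumptions, not the authors' algorithm):

```python
def hill_climb(etco2_at, start, step=1.0, max_iters=20):
    """Greedy 2-D grid search: move to whichever neighbouring compression
    point yields the highest end-tidal CO2 reading; stop at a local maximum."""
    x, y = start
    for _ in range(max_iters):
        neighbours = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
        best = max(neighbours + [(x, y)], key=etco2_at)
        if best == (x, y):          # no neighbour improves the reading
            break
        x, y = best
    return (x, y)

# Stub feedback with a smooth peak at (2, -1); a real system would read the
# capnograph after a block of compressions at each candidate point.
def etco2(p):
    return 40.0 - (p[0] - 2.0) ** 2 - (p[1] + 1.0) ** 2

found = hill_climb(etco2, start=(0.0, 0.0))
```

    In practice each "evaluation" costs a block of compressions, so the real system would have to bound the number of probed positions far more aggressively than this sketch does.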

  12. Novel dielectric elastomer structure of soft robot

    NASA Astrophysics Data System (ADS)

    Li, Chi; Xie, Yuhan; Huang, Xiaoqiang; Liu, Junjie; Jin, Yongbin; Li, Tiefeng

    2015-04-01

    Inspired by natural invertebrates such as worms and starfish, we propose a novel elastomeric smart structure that can function as a soft robot. The soft robot has a flexible elastomer body and is driven by a dielectric elastomer acting as the muscle. Finite element simulations based on nonlinear field theory are conducted to investigate the working conditions of the structure and to guide the design of the smart structure. The effects of prestretch, structural stiffness and voltage on the performance of the smart structure are investigated. This work can guide the design of soft robots.

  13. Hysterectomy

    MedlinePlus

    ... made in either your abdomen or your vagina. Robotic surgery. Your doctor guides a robotic arm to do ... to six weeks to recover. Vaginal, laparoscopic, or robotic surgery can take from three to four weeks to ...

  14. Augmented Reality Robot-assisted Radical Prostatectomy: Preliminary Experience.

    PubMed

    Porpiglia, Francesco; Fiori, Cristian; Checcucci, Enrico; Amparore, Daniele; Bertolo, Riccardo

    2018-05-01

    To present our preliminary experience with augmented reality robot-assisted radical prostatectomy (AR-RARP). From June to August 2017, candidates for RARP were enrolled and underwent high-resolution multiparametric magnetic resonance imaging (1-mm slices) according to a dedicated protocol. The obtained three-dimensional (3D) reconstruction was integrated into the robotic console to perform AR-RARP. According to the staging at magnetic resonance imaging or reconstruction, in case of cT2 prostate cancer, intrafascial nerve sparing (NS) was performed: a mark was placed on the prostate capsule to indicate the virtual underlying intraprostatic lesion; in case of cT3, standard NS AR-RARP was scheduled with AR-guided biopsy at the level of suspected extracapsular extension (ECE). Prostate specimens were scanned to assess the 3D model concordance. Sixteen patients underwent the intrafascial NS technique (cT2), whereas 14 underwent standard NS plus selective biopsy of suspected ECE (cT3). Final pathology confirmed clinical staging. The positive surgical margin rate was 30% (no positive margins in pT2). In patients whose intraprostatic lesions were marked, final pathology confirmed lesion location. In patients with suspected ECE, AR-guided selective biopsies confirmed the ECE location, with 11 of 14 biopsies (78%) positive for prostate cancer. Scanning of the prostate specimens showed good overlap. The mismatch between 3D reconstruction and scanning ranged from 1 to 5 mm; over 85% of the entire surface, the mismatch was <3 mm. In our preliminary experience, AR-RARP seems to be safe and effective, and the accuracy of the 3D reconstruction is promising. This technology still has limitations: the virtual models are manually oriented and rigid. Future collaborations with bioengineers should allow these limitations to be overcome. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. A Neuromonitoring Approach to Facial Nerve Preservation During Image-guided Robotic Cochlear Implantation.

    PubMed

    Ansó, Juan; Dür, Cilgia; Gavaghan, Kate; Rohrbach, Helene; Gerber, Nicolas; Williamson, Tom; Calvo, Enric M; Balmer, Thomas Wyss; Precht, Christina; Ferrario, Damien; Dettmer, Matthias S; Rösler, Kai M; Caversaccio, Marco D; Bell, Brett; Weber, Stefan

    2016-01-01

    A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. Minimally invasive cochlear implantation is enabled by image-guided, robot-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, and safety mechanisms for protecting this critical structure are therefore required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to distinguish the close distance ranges experienced in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters, in combination with the proposed neuromonitoring system, could reliably detect an imminent collision with the facial nerve. For accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used, and drilling accuracy was assessed on postoperative microcomputed tomographic images. From 29 trajectories analyzed in five different subjects, a correlation between stimulus threshold and drill-to-facial-nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial-nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system provide sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reducing the stimulus threshold to 0.3 mA or lower decreased the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases in which the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or nonexistent damage to the nerve fascicles. Our findings suggest that although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variances in patient-specific anatomy, the correlations at very close distances to the facial nerve and the high levels of specificity would enable a binary-response warning system to be developed using the proposed probe at low stimulation currents.
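
    The abstract's conclusion is that a binary warning system is feasible at low stimulation currents. A sketch of such a decision rule is below; the thresholds (≤0.3 mA, bipolar configuration, ~0.1 mm implied distance) are taken from the abstract, but the rule itself is an illustrative reading, not the authors' implementation:

```python
def facial_nerve_warning(emg_response, stimulus_ma, bipolar):
    """Binary stop/continue decision for the drill.  Per the study's reported
    operating point, an EMG response to a low-current (<= 0.3 mA) bipolar
    stimulus implies the drill is within roughly 0.1 mm of the facial nerve,
    so drilling should stop; anything else is treated as safe to continue."""
    if bipolar and stimulus_ma <= 0.3 and emg_response:
        return "STOP"       # imminent collision with the nerve
    return "CONTINUE"
```

    Note the asymmetry the abstract reports: lowering the current buys specificity (few false stops) at the cost of a very short detection range, which is exactly why this works only as a last-line binary warning rather than a distance estimator.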

  16. Automated Guided Vehicle For Physically Handicapped People - A Cost Effective Approach

    NASA Astrophysics Data System (ADS)

    Kumar, G. Arun, Dr.; Sivasubramaniam, Mr. A.

    2017-12-01

    An Automated Guided Vehicle (AGV) is like a robot that can deliver materials from the supply area to the technician automatically, which is faster and more efficient. The robot can be accessed wirelessly: a technician can directly control the robot to deliver the components, rather than control it via a human operator (over phone, computer, etc.) who has to program the robot or ask a delivery person to make the delivery. The vehicle is guided automatically along its route. To avoid collisions, a proximity sensor is attached to the system; the sensor detects obstacles and can stop the vehicle in their presence. The vehicle can thus avoid accidents, which is very useful in the present industrial trend, where material and equipment handling are automated in an easy, time-saving way.

  17. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype that leads and recognizes a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion, as well as to detect the center of the corridor, is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash bins or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms effectively enable the robot to detect the center of the corridor and the position of its follower correctly.
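
    The corridor-center detection the paper implements with an LRF can be illustrated with simple geometry. The sketch below is a hypothetical reduction, not the authors' algorithm: it uses only the two perpendicular beams of a scan (bearing-to-range map) to estimate the robot's lateral offset from the centreline, whereas a real pipeline would fit lines to many beams:

```python
def corridor_offset(scan):
    """Estimate the robot's lateral offset from the corridor centreline (m).
    `scan` maps bearing in degrees (0 = straight ahead, +90 = left) to range.
    Positive offset means the robot sits left of centre.  Illustrative only:
    assumes the robot is roughly aligned with the corridor axis."""
    left = scan[90]      # perpendicular distance to the left wall
    right = scan[-90]    # perpendicular distance to the right wall
    return (right - left) / 2.0
```

    For example, with the left wall 0.8 m away and the right wall 1.2 m away, the offset is +0.2 m, so the steering logic would nudge the robot right to re-centre it.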

  18. Navigation concepts for MR image-guided interventions.

    PubMed

    Moche, Michael; Trampel, Robert; Kahn, Thomas; Busse, Harald

    2008-02-01

    The ongoing development of powerful magnetic resonance imaging techniques also opens up advanced possibilities for guiding and controlling minimally invasive interventions. Various navigation concepts have been described for practically all regions of the body. The specific advantages and limitations of these concepts largely depend on the magnet design of the MR scanner and the interventional environment. Open MR scanners involve minimal patient transfer, which improves the interventional workflow and reduces the need for coregistration, i.e., the mapping of spatial coordinates between imaging and intervention position. Most diagnostic scanners, in contrast, do not allow the physician to guide the instrument inside the magnet, so the patient needs to be moved out of the bore. Although adequate coregistration and navigation concepts for closed-bore scanners are technically more challenging, many developments are driven by the well-known capabilities of high-field systems and their better economic value. Advanced concepts such as multimodal overlays, augmented reality displays, and robotic assistance devices are still in their infancy but might propel the use of intraoperative navigation. The goal of this work is to give an update on MRI-based navigation and related techniques and to briefly discuss the clinical experience and limitations of some selected systems. Copyright 2008 Wiley-Liss, Inc.

  19. PIR-1 and PIRPL. A Project in Robotics Education. Revised.

    ERIC Educational Resources Information Center

    Schultz, Charles P.

    This paper presents the results of a project in robotics education that included: (1) designing a mobile robot--the Personal Instructional Robot-1 (PIR-1); (2) providing a guide to the purchase and assembly of necessary parts; (3) providing a way to interface the robot with common classroom microcomputers; and (4) providing a language by which the…

  20. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    PubMed

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance from robotic systems in the operating room promises higher accuracy, making demanding surgical interventions (e.g. the direct cochlear access) realisable. Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts; regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether the accuracy needed for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame: a method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed, which allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated in drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance point and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms including imaging; in this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
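
    The admittance feed control described here maps the force the user applies to the robot's structure to a feed velocity along the planned drill path. A minimal admittance law is sketched below; the damping, deadband and speed limit are hypothetical values chosen for illustration, not the parameters used in the paper:

```python
def admittance_feed(force_n, damping=5000.0, deadband=1.0, v_max=0.002):
    """Map the axial force the user applies to the robot structure (N) to a
    feed velocity along the planned drill path (m/s).  A deadband rejects
    sensor noise and incidental contact; the damping term sets how 'heavy'
    the feed feels to the user; the output is saturated and never negative,
    so the drill cannot retract or run away.  Parameter values are
    illustrative, not from the paper."""
    if force_n <= deadband:
        return 0.0
    return min(v_max, (force_n - deadband) / damping)
```

    The design point of admittance (as opposed to pure position) control is that the surgeon keeps an intuitive, hands-on sense of feed progress while the robot constrains the motion to the planned axis.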

  1. Pose estimation of industrial objects towards robot operation

    NASA Astrophysics Data System (ADS)

    Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu

    2017-10-01

    With the advantages of wide range, non-contact operation and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance and other engineering practice. However, owing to complicated industrial environments, outside interference, a lack of object features, camera restrictions and other limitations, visual pose estimation still faces many challenges. Addressing these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape features of objects against an a priori 3D model database, the method recognizes the target; the pose of the object can then be determined using a monocular vision measurement model. The experimental results show that this method can estimate the position of rigid objects from poor image information, providing a guiding basis for the operation of an industrial robot.

  2. [Surgical robotics, short state of the art and prospects].

    PubMed

    Gravez, P

    2003-11-01

    State-of-the-art robotized systems developed for surgery are either remotely controlled manipulators that duplicate the gestures made by the surgeon (endoscopic surgery applications) or automated robots that execute trajectories defined relative to pre-operative medical imaging (neurosurgery and orthopaedic surgery). This generation of systems primarily applies existing robotics technologies (remote handling systems and so-called "industrial robots") to current surgical practice. It has helped validate the huge potential of surgical robotics, but it suffers from several drawbacks, mainly high costs, excessive dimensions and some lack of user-friendliness. Nevertheless, technological progress lets us anticipate the appearance in the near future of miniaturised surgical robots able to assist the surgeon's gestures and to enhance his perception of the operation at hand. Thanks to many in-the-body articulated links, these systems will be able to perform complex minimally invasive gestures without obstructing the operating theatre. They will also combine the ease of manual piloting with the accuracy and increased safety of computer control, guiding the gestures of the human without infringing on his freedom of action. Lastly, they will allow the surgeon to feel the mechanical properties of the tissues being operated on through a genuine "remote palpation" function. Most probably, such technological evolutions will lead the way to redesigned surgical procedures taking place in new operating rooms featuring better integration of all equipment and favouring cooperative work by multidisciplinary and sometimes geographically distributed medical staff.

  3. A new AS-display as part of the MIRO lightweight robot for surgical applications

    NASA Astrophysics Data System (ADS)

    Grossmann, Christoph M.

    2010-02-01

    The DLR MIRO is the second generation of versatile robot arms for surgical applications, developed at the Institute for Robotics and Mechatronics at the Deutsches Zentrum für Luft- und Raumfahrt (DLR) in Oberpfaffenhofen, Germany. With its low weight of 10 kg and dimensions similar to those of the human arm, the MIRO robot can assist the surgeon directly at the operating table, where space is scarce. The planned scope of applications of this robot arm ranges from guiding a laser unit for the precise separation of bone tissue in orthopedics, to positioning holes for bone screws, robot-assisted endoscope guidance, and on to the multi-robot concept for endoscopic minimally invasive surgery. A stereo-endoscope delivers two full HD video streams that can even be augmented with information, e.g. vectors indicating the forces acting on the surgical tool at any given moment. SeeFront's new autostereoscopic 3D display SF 2223, as part of the MIRO assembly, lets the surgeon view the stereo video stream in excellent quality, in real time, and without the need for any viewing aids. The presentation is meant to provide an insight into the principles at the basis of the SeeFront 3D technology and how they allow the creation of autostereoscopic display solutions ranging from the smallest "stamp-sized" displays to 30" desktop versions, all of which provide comfortable freedom of movement for the viewer along with excellent 3D image quality.

  4. [Experimental study of angiography using vascular interventional robot-2(VIR-2)].

    PubMed

    Tian, Zeng-min; Lu, Wang-sheng; Liu, Da; Wang, Da-ming; Guo, Shu-xiang; Xu, Wu-yi; Jia, Bo; Zhao, De-peng; Liu, Bo; Gao, Bao-feng

    2012-06-01

    To verify the feasibility and safety of a new vascular interventional robot system for use in vascular interventional procedures. The vascular interventional robot type-2 (VIR-2) comprises a master-slave catheter propulsion system, an image navigation system and a force feedback system; catheter movement is achieved under automatic control and navigation, with real-time force feedback integrated. An in vitro pre-test in a vascular model was followed by cerebral angiography in a dog. The surgeon controlled the vascular interventional robot remotely to insert the catheter into the intended target, and the catheter positioning error and operation time were evaluated. Both the in vitro pre-test and the animal experiment went smoothly; the catheter could enter any branch of the vasculature, with a positioning error of less than 1 mm. The angiography in the animal was carried out without complication; the success rate of the operation was 100%, the two procedures took 26 and 30 minutes (efficiency slightly improved compared with the VIR-1), and the time staff were exposed to the DSA machine was zero. The resistance measured by the force sensor was displayed to the operator, providing a safety guarantee for the operation. There were no surgical complications. VIR-2 is safe and feasible and achieves remote catheter operation and angiography; the master-slave system preserves the characteristics of the traditional procedure, three-dimensional image guidance makes the operation smoother, and the force feedback device provides remote real-time haptic information for the safety of the operation.

  5. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.
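
    The recognition-phase radius estimate follows from how the cylinder's apparent width grows as the camera advances a known distance. A minimal sketch under a pinhole-camera assumption (the function name, the focal length in pixels, and the numbers below are illustrative, not from the paper):

```python
def estimate_depth_and_radius(w1_px, w2_px, delta_m, f_px):
    """Estimate cylinder depth and radius from apparent-width change.

    w1_px, w2_px : apparent width in the image (pixels) before and after
                   moving delta_m metres straight toward the cylinder.
    f_px         : focal length in pixels (pinhole camera model).
    Returns (initial depth in metres, radius in metres).
    """
    if w2_px <= w1_px:
        raise ValueError("apparent width must grow while approaching")
    # Pinhole model: w = f * W / d, so w1 * d1 = w2 * (d1 - delta).
    d1 = delta_m * w2_px / (w2_px - w1_px)   # initial camera-to-axis depth
    diameter_m = w1_px * d1 / f_px           # true cylinder diameter
    return d1, diameter_m / 2.0

# Example: a 0.1 m wide cylinder 1 m away, 500 px focal length; advancing
# 0.2 m makes the apparent width grow from 50 px to 62.5 px.
depth, radius = estimate_depth_and_radius(50.0, 62.5, 0.2, 500.0)
```

    Solving the two pinhole projections simultaneously removes the need for a second camera or prior knowledge of the cylinder's size.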

  6. Efficient visual grasping alignment for cylinders

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    Monocular information from a gripper-mounted camera is used to servo the robot gripper to grasp a cylinder. The fundamental concept for rapid pose estimation is to reduce the amount of information that needs to be processed during each vision update interval. The grasping procedure is divided into four phases: learn, recognition, alignment, and approach. In the learn phase, a cylinder is placed in the gripper and the pose estimate is stored and later used as the servo target. This is performed once as a calibration step. The recognition phase verifies the presence of a cylinder in the camera field of view. An initial pose estimate is computed and uncluttered scan regions are selected. The radius of the cylinder is estimated by moving the robot a fixed distance toward the cylinder and observing the change in the image. The alignment phase processes only the scan regions obtained previously. Rapid pose estimates are used to align the robot with the cylinder at a fixed distance from it. The relative motion of the cylinder is used to generate an extrapolated pose-based trajectory for the robot controller. The approach phase guides the robot gripper to a grasping position. The cylinder can be grasped with a minimal reaction force and torque when only rough global pose information is initially available.

  7. 4D motion modeling of the coronary arteries from CT images for robotic assisted minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Zhang, Dong Ping; Edwards, Eddie; Mei, Lin; Rueckert, Daniel

    2009-02-01

    In this paper, we present a novel approach for coronary artery motion modeling from cardiac computed tomography (CT) images. The aim of this work is to develop a 4D motion model of the coronaries for image guidance in robotic-assisted totally endoscopic coronary artery bypass (TECAB) surgery. To use pre-operative cardiac images to guide this minimally invasive surgery, it is essential to have a 4D cardiac motion model that can be registered with the stereo endoscopic images acquired intraoperatively by the da Vinci robotic system. Here we investigate the extraction of the coronary arteries and the modelling of their motion from a dynamic sequence of cardiac CT. We use a multi-scale vesselness filter to enhance vessels in the cardiac CT images. The centerlines of the arteries are extracted using a ridge traversal algorithm. With this method the coronaries can be extracted in near real-time, as only local information is used in vessel tracking. To compute the deformation of the coronaries due to cardiac motion, the motion is extracted from a dynamic sequence of cardiac CT. Each time frame in this sequence is registered to the end-diastole time frame using a non-rigid registration algorithm based on free-form deformations. Once the images have been registered, a dynamic motion model of the coronaries is obtained by applying the computed free-form deformations to the extracted coronary arteries. To validate the accuracy of the motion model, we compare the actual position of the coronaries in each time frame with the position predicted by the non-rigid registration. We expect that this motion model of the coronaries can facilitate the planning of TECAB surgery and, through registration with real-time endoscopic video images, reduce the conversion rate from TECAB to conventional procedures.
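
    The vessel-enhancement step can be illustrated at a single scale. Below is a 2-D Frangi-style vesselness sketch; the paper's filter is multi-scale and 3-D, and the parameter values and function name here are illustrative assumptions:

```python
import numpy as np

def vesselness_2d(image, beta=0.5, c=15.0):
    """Single-scale Frangi-style vesselness for bright tubular structures.

    A full implementation would smooth with Gaussians at several scales
    and take the per-pixel maximum over scales.
    """
    gy, gx = np.gradient(image.astype(float))
    hyy, hyx = np.gradient(gy)          # second derivatives (Hessian rows)
    hxy, hxx = np.gradient(gx)
    # Eigenvalues of the symmetric 2x2 Hessian at every pixel.
    tmp = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + tmp)
    l2 = 0.5 * (hxx + hyy - tmp)
    # Order by magnitude: |lam1| <= |lam2| (tube axis vs. cross-section).
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    rb2 = (lam1 / (lam2 + 1e-12)) ** 2        # blob-vs-line ratio, squared
    s2 = lam1 ** 2 + lam2 ** 2                # structureness
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    v[lam2 > 0] = 0.0                         # keep bright-on-dark vessels only
    return v
```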

  8. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the extent possible. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. The overall error of the system studied here remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
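
    The due-to-insertion figure follows from the stated orthogonality assumption, under which the robot error and the needle-tissue error add in quadrature; a minimal check of the arithmetic:

```python
import math

def due_to_insertion_error(overall_mm, before_insertion_mm):
    """Recover the needle-tissue interaction error assuming orthogonal
    components: overall^2 = before_insertion^2 + due_to_insertion^2."""
    return math.sqrt(overall_mm ** 2 - before_insertion_mm ** 2)

# Reported values: 2.5 mm overall error and 1.3 mm robotic (before-insertion)
# error, giving roughly the 2.13 mm interaction error quoted above.
interaction_mm = due_to_insertion_error(2.5, 1.3)
```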

  9. Robot-assisted procedures in pediatric neurosurgery.

    PubMed

    De Benedictis, Alessandro; Trezza, Andrea; Carai, Andrea; Genovese, Elisabetta; Procaccini, Emidio; Messina, Raffaella; Randi, Franco; Cossu, Silvia; Esposito, Giacomo; Palma, Paolo; Amante, Paolina; Rizzi, Michele; Marras, Carlo Efisio

    2017-05-01

    OBJECTIVE During the last 3 decades, robotic technology has rapidly spread across several surgical fields thanks to the continuous evolution of its versatility, stability, dexterity, and haptic properties. Neurosurgery pioneered the development of robotics, with the aim of improving the quality of several procedures requiring a high degree of accuracy and safety. Moreover, robot-guided approaches are of special interest in pediatric patients, who often have altered anatomy and challenging relationships between diseased and eloquent structures. Nevertheless, the use of robots has rarely been reported in children. In this work, the authors describe their experience using the ROSA device (Robotized Stereotactic Assistant) in the neurosurgical management of a pediatric population. METHODS Between 2011 and 2016, 116 children underwent ROSA-assisted procedures for a variety of diseases (epilepsy, brain tumors, intra- or extraventricular and tumor cysts, obstructive hydrocephalus, and movement and behavioral disorders). Each patient received accurate preoperative planning of optimal trajectories, intraoperative frameless registration, surgical treatment using specific instruments held by the robotic arm, and postoperative CT or MR imaging. RESULTS The authors performed 128 consecutive surgeries, including implantation of 386 electrodes for stereo-electroencephalography (36 procedures), neuroendoscopy (42 procedures), stereotactic biopsy (26 procedures), pallidotomy (12 procedures), shunt placement (6 procedures), deep brain stimulation (3 procedures), and stereotactic cyst aspiration (3 procedures). For each procedure, the authors analyzed and discussed accuracy, timing, and complications. CONCLUSIONS To the best of their knowledge, the authors present the largest reported series of pediatric neurosurgical cases assisted by robotic support. 
The ROSA system provided improved safety and feasibility of minimally invasive approaches, thus optimizing the surgical result, while minimizing postoperative morbidity.

  10. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
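
    The evolutionary machinery can be sketched in a few lines. This toy generational loop (truncation selection plus Gaussian mutation on a fixed-length genome) is purely illustrative: the authors' extended genetic algorithm and their recurrent-network genotype differ in important ways.

```python
import random

def evolve(fitness, genome_len=8, pop_size=30, generations=60,
           mut_sigma=0.2, seed=1):
    """Minimal generational GA of the kind used to evolve controller
    weights: keep the fittest quarter, refill with mutated copies."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]          # truncation selection
        pop = parents + [
            [g + rng.gauss(0, mut_sigma) for g in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return max(pop, key=fitness)

# Toy task standing in for controller evaluation: match a target vector.
target = [0.5, -0.3, 0.8, 0.0, -0.7, 0.2, 0.9, -0.1]
best = evolve(lambda w: -sum((a - b) ** 2 for a, b in zip(w, target)))
```

    In the actual work the fitness call would run the candidate network on the robot (or a simulation) and score its visually guided behaviour.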

  11. SU-E-J-114: Towards Integrated CT and Ultrasound Guided Radiation Therapy Using A Robotic Arm with Virtual Springs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, K; Zhang, Y; Sen, H

    Purpose: There is currently an urgent need in radiation therapy for noninvasive and nonionizing soft-tissue target guidance, such as localization before treatment and continuous monitoring during treatment. Ultrasound is a portable, low-cost option that can be easily integrated into the LINAC room. We are developing a cooperatively controlled robot arm that achieves high intrafraction reproducibility when repositioning the ultrasound probe. In this study, we introduce virtual springs (VS) to assist with interfraction probe repositioning and we compare the soft-tissue deformation introduced with VS to the deformation that would exist without them. Methods: Three metal markers were surgically implanted in the kidney of one dog. The dog was anesthetized and immobilized supine in an alpha cradle. The reference ultrasound probe position and force needed to ideally visualize the kidney were defined by an experienced ultrasonographer using the Clarity ultrasound system and the robot's force sensor. For each interfraction study, the dog was removed from the cradle and set up again based on CBCT with bony-anatomy alignment to mimic regular patient setup. The ultrasound probe was automatically returned to the reference position by the robot. To accommodate soft-tissue anatomy changes between setups, the operator used the VS feature to adjust the probe and obtain an ultrasound image matching the reference image. CBCT images were acquired and each interfraction marker location was compared with that of the first interfraction study. Results: Analysis of the marker positions revealed that the kidney was displaced by 18.8 ± 6.4 mm without VS and 19.9 ± 10.5 mm with VS. No statistically significant differences were found between the two procedures. Conclusion: The VS feature is necessary to obtain matching ultrasound images, and it does not introduce further changes to the tissue deformation. Future work will focus on automatic VS based on ultrasound feedback. 
Supported in part by: NCI R01 CA161613; Elekta Sponsored Research.
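
    In control terms, a virtual spring is a restoring force toward the stored reference pose that the cooperative controller sums with the operator's hand force. A minimal translation-only sketch (the stiffness value and function name are illustrative, not from the cited system):

```python
def virtual_spring_force(pos_m, ref_pos_m, stiffness_n_per_m):
    """Restoring force (N) pulling the probe back toward the reference
    position; the operator feels this pull when displacing the probe."""
    return [-stiffness_n_per_m * (p - r) for p, r in zip(pos_m, ref_pos_m)]

# Displacing the probe 5 mm in x against a 200 N/m spring yields a 1 N pull.
f = virtual_spring_force([0.005, 0.0, 0.0], [0.0, 0.0, 0.0], 200.0)
```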

  12. Magnetic resonance imaging compatible remote catheter navigation system with 3 degrees of freedom.

    PubMed

    Tavallaei, M A; Lavdas, M K; Gelman, D; Drangova, M

    2016-08-01

    To facilitate MRI-guided catheterization procedures, we present an MRI-compatible remote catheter navigation system that allows remote navigation of steerable catheters with 3 degrees of freedom. The system consists of a user interface (master), a robot (slave), and an ultrasonic motor control servomechanism. The interventionalist applies conventional motions (axial, radial and plunger manipulations) to an input catheter in the master unit; this user input is measured and used by the servomechanism to control a compact catheter-manipulating robot such that it replicates the interventionalist's input motion on the patient catheter. The performance of the system was evaluated in terms of MRI compatibility (SNR and artifact), feasibility of remote navigation under real-time MRI guidance, and motion replication accuracy. Real-time MRI experiments demonstrated that the catheter was successfully navigated remotely to the desired target references in all 3 degrees of freedom. The system had an absolute value error of [Formula: see text]1 mm in axial catheter motion replication over 30 mm of travel and [Formula: see text] for radial catheter motion replication over [Formula: see text]. The worst-case SNR drop was observed to be [Formula: see text]3 %; the robot did not introduce any artifacts in the MR images. An MRI-compatible compact remote catheter navigation system has been developed that allows remote navigation of steerable catheters with 3 degrees of freedom. The proposed system allows safe and accurate remote catheter navigation within conventional closed-bore scanners, without degrading MR image quality.

  13. A haptic device for guide wire in interventional radiology procedures.

    PubMed

    Moix, Thomas; Ilic, Dejan; Bleuler, Hannes; Zoethout, Jurjen

    2006-01-01

    Interventional Radiology (IR) is a minimally invasive procedure where thin tubular instruments, guide wires and catheters, are steered through the patient's vascular system under X-ray imaging. In order to perform these procedures, a radiologist has to be trained to master hand-eye coordination, instrument manipulation and procedure protocols. The existing simulation systems all have major drawbacks: the use of modified instruments, unrealistic insertion lengths, high inertia of the haptic device that creates a noticeably degraded dynamic behavior or excessive friction that is not properly compensated for. In this paper we propose a quality training environment dedicated to IR. The system is composed of a virtual reality (VR) simulation of the patient's anatomy linked to a robotic interface providing haptic force feedback. This paper focuses on the requirements, design and prototyping of a specific haptic interface for guide wires.

  14. Automating High-Precision X-Ray and Neutron Imaging Applications with Robotics

    DOE PAGES

    Hashem, Joseph Anthony; Pryor, Mitch; Landsberger, Sheldon; ...

    2017-03-28

    Los Alamos National Laboratory and the University of Texas at Austin recently implemented a robotically controlled nondestructive testing (NDT) system for X-ray and neutron imaging. The system is intended to provide accurate measurements for a variety of parts, to track measurement geometry at every imaging location, and is designed for high-throughput applications. It was deployed in a beam port at a nuclear research reactor and in an operational inspection X-ray bay. The nuclear research reactor system consisted of a precision industrial seven-axis robot, a 1.1-MW TRIGA research reactor, and a scintillator-mirror-camera-based imaging system. The X-ray bay system incorporated the same robot, a 225-keV microfocus X-ray source, and a custom flat-panel digital detector. The robotic positioning arm is programmable and allows imaging in multiple configurations, including planar, cylindrical, and other user-defined geometries that provide enhanced engineering evaluation capability. The image acquisition device is coupled with the robot for automated image acquisition. The robot can achieve target positional repeatability within 17 μm in 3-D space. Flexible automation with nondestructive imaging saves costs, reduces dosage, adds imaging techniques, and achieves better-quality results in less time. Specifics regarding the robotic system and the image acquisition and evaluation processes are presented. In conclusion, this paper reviews the comprehensive testing and system evaluation that affirm the feasibility of robotic NDT, presents the system configuration, and reviews results for both X-ray and neutron radiography imaging applications.

  15. Automating High-Precision X-Ray and Neutron Imaging Applications with Robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashem, Joseph Anthony; Pryor, Mitch; Landsberger, Sheldon

    Los Alamos National Laboratory and the University of Texas at Austin recently implemented a robotically controlled nondestructive testing (NDT) system for X-ray and neutron imaging. The system is intended to provide accurate measurements for a variety of parts, to track measurement geometry at every imaging location, and is designed for high-throughput applications. It was deployed in a beam port at a nuclear research reactor and in an operational inspection X-ray bay. The nuclear research reactor system consisted of a precision industrial seven-axis robot, a 1.1-MW TRIGA research reactor, and a scintillator-mirror-camera-based imaging system. The X-ray bay system incorporated the same robot, a 225-keV microfocus X-ray source, and a custom flat-panel digital detector. The robotic positioning arm is programmable and allows imaging in multiple configurations, including planar, cylindrical, and other user-defined geometries that provide enhanced engineering evaluation capability. The image acquisition device is coupled with the robot for automated image acquisition. The robot can achieve target positional repeatability within 17 μm in 3-D space. Flexible automation with nondestructive imaging saves costs, reduces dosage, adds imaging techniques, and achieves better-quality results in less time. Specifics regarding the robotic system and the image acquisition and evaluation processes are presented. In conclusion, this paper reviews the comprehensive testing and system evaluation that affirm the feasibility of robotic NDT, presents the system configuration, and reviews results for both X-ray and neutron radiography imaging applications.

  16. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.

  17. Deformable Image Registration for Cone-Beam CT Guided Transoral Robotic Base of Tongue Surgery

    PubMed Central

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-01-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base of tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam CT (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e., volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC), and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid, and Demons steps was 4.6, 2.1, and 1.7 mm, respectively. The respective ECC was 0.57, 0.70, and 0.73 and NPMI was 0.46, 0.57, and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable for multi-modality registration of preoperative CT or MR with intraoperative CBCT. 
Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base of tongue robotic surgery. PMID:23807549
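
    The TRE figures above reduce to a simple computation once corresponding landmarks are identified in the two images; a minimal sketch (function name and example values are illustrative):

```python
import math

def mean_tre(target_pts, registered_pts):
    """Mean target registration error: average Euclidean distance between
    corresponding landmark positions after registration."""
    dists = [math.dist(a, b) for a, b in zip(target_pts, registered_pts)]
    return sum(dists) / len(dists)

# Two landmarks, one perfectly aligned and one off by 1 mm -> mean TRE 0.5 mm.
tre_mm = mean_tre([(0, 0, 0), (1, 0, 0)], [(0, 0, 1), (1, 0, 0)])
```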

  18. Deformable image registration for cone-beam CT guided transoral robotic base-of-tongue surgery

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Liu, W. P.; Wang, A. S.; Otake, Y.; Nithiananthan, S.; Uneri, A.; Schafer, S.; Tryggestad, E.; Richmon, J.; Sorger, J. M.; Siewerdsen, J. H.; Taylor, R. H.

    2013-07-01

    Transoral robotic surgery (TORS) offers a minimally invasive approach to resection of base-of-tongue tumors. However, precise localization of the surgical target and adjacent critical structures can be challenged by the highly deformed intraoperative setup. We propose a deformable registration method using intraoperative cone-beam computed tomography (CBCT) to accurately align preoperative CT or MR images with the intraoperative scene. The registration method combines a Gaussian mixture (GM) model followed by a variation of the Demons algorithm. First, following segmentation of the volume of interest (i.e. volume of the tongue extending to the hyoid), a GM model is applied to surface point clouds for rigid initialization (GM rigid) followed by nonrigid deformation (GM nonrigid). Second, the registration is refined using the Demons algorithm applied to distance map transforms of the (GM-registered) preoperative image and intraoperative CBCT. Performance was evaluated in repeat cadaver studies (25 image pairs) in terms of target registration error (TRE), entropy correlation coefficient (ECC) and normalized pointwise mutual information (NPMI). Retraction of the tongue in the TORS operative setup induced gross deformation >30 mm. The mean TRE following the GM rigid, GM nonrigid and Demons steps was 4.6, 2.1 and 1.7 mm, respectively. The respective ECC was 0.57, 0.70 and 0.73, and NPMI was 0.46, 0.57 and 0.60. Registration accuracy was best across the superior aspect of the tongue and in proximity to the hyoid (by virtue of GM registration of surface points on these structures). The Demons step refined registration primarily in deeper portions of the tongue further from the surface and hyoid bone. Since the method does not use image intensities directly, it is suitable for multi-modality registration of preoperative CT or MR with intraoperative CBCT. 
Extending the 3D image registration to the fusion of image and planning data in stereo-endoscopic video is anticipated to support safer, high-precision base-of-tongue robotic surgery.

  19. Panoramic optical-servoing for industrial inspection and repair

    NASA Astrophysics Data System (ADS)

    Sallinger, Christian; O'Leary, Paul; Retschnig, Alexander; Kammerhofer, Martin

    2004-05-01

    Recently, specialized robots have been introduced to perform inspection and repair in large cylindrical structures such as ladles, melting furnaces and converters. This paper reports on the image processing system and optical servoing for one such robot. A panoramic image of the vessel's inner surface is produced through coordinated robot motion and image acquisition. The level of projective distortion is minimized by acquiring a high density of images. Normalized phase correlation, calculated via the 2D Fourier transform, is used to determine the shift between the individual images. The narrow strips from the dense image map are then stitched together to build the panorama. The mapping between the panoramic image and the positioning of the robot is established during the stitching of the images, which enables optical feedback: the robot's operator can locate a defect on the surface by selecting the corresponding area of the image, and calculation of the forward and inverse kinematics enables the robot to automatically move to the location on the surface requiring repair. Experimental results using a standard 6R industrial robot have demonstrated the full functionality of the system concept. Finally, test measurements were successfully carried out in a ladle at a temperature of 1100 °C.
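
    The shift estimation between neighbouring strips can be sketched with normalized phase correlation. The implementation below recovers integer translations only; the function name and convention are illustrative, not from the paper's pipeline:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation between two equally sized images.

    Returns (dy, dx) such that a == np.roll(b, (dy, dx), axis=(0, 1))
    for a purely translated pair.
    """
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # normalize: keep phase only
    corr = np.fft.ifft2(cross).real         # delta peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the circular peak position to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx
```

    Because only the phase of the cross-power spectrum is kept, the correlation peak stays sharp even under the illumination changes typical of hot-vessel imagery.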

  20. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    PubMed

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart-platform-based robotic system developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding an endoscope. After a first use on an artificial head model, the system was used on six fresh postmortem bodies provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and setup of the robot take 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot is able to change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for the work.
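
    A Stewart platform is positioned through its inverse kinematics: each of the six leg lengths is the distance between a base anchor and the corresponding platform anchor after the commanded pose is applied. A simplified sketch (translation plus yaw only; the anchor layout and function name are illustrative, not the Kocaeli design):

```python
import math

def stewart_leg_lengths(base_pts, plat_pts, trans, yaw):
    """Leg lengths of a Stewart platform for a pose given as a
    translation (x, y, z) and a yaw rotation about the z axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    lengths = []
    for (bx, by, bz), (px, py, pz) in zip(base_pts, plat_pts):
        # Rotate the platform anchor about z, then translate it.
        wx = c * px - s * py + trans[0]
        wy = s * px + c * py + trans[1]
        wz = pz + trans[2]
        lengths.append(math.dist((wx, wy, wz), (bx, by, bz)))
    return lengths
```

    Commanding the actuators to these lengths realizes the desired endoscope pose; a full implementation would use a complete rotation matrix rather than yaw alone.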

  1. SU-E-T-453: A Novel Daily QA System for Robotic Image Guided Radiosurgery with Variable Aperture Collimator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L; Nelson, B

    Purpose: A novel end-to-end system using a CCD camera and a scintillator-based phantom, capable of measuring the beam-by-beam delivery accuracy of robotic radiosurgery, was developed and reported in our previous work. This work investigates its application to end-to-end daily QA for robotic radiosurgery (CyberKnife) with the Iris variable aperture collimator. Methods: The phantom was first scanned with a CT scanner at 0.625 mm slice thickness and exported to the CyberKnife MultiPlan (v4.6) treatment planning system. An isocentric treatment plan was created consisting of ten beams of 25 monitor units each, using Iris apertures of 7.5, 10, 15, 20, and 25 mm. The plan was delivered six times in two days on the CyberKnife G4 system with fiducial tracking on the four metal fiducials embedded in the phantom, with re-positioning between the measurements. The beam vectors (X, Y, Z) were measured and compared with the plan from the machine delivery file (XML file). The Iris apertures (FWHM) were measured from the beam flux map and compared with the commissioning data. Results: The average beam positioning accuracies of the six deliveries were 0.71 ± 0.40 mm, 0.72 ± 0.44 mm, 0.74 ± 0.42 mm, 0.70 ± 0.40 mm, 0.79 ± 0.44 mm and 0.69 ± 0.41 mm, respectively. Radiation beam width (FWHM) variations were within ±0.05 mm and agreed with the commissioning data within 0.22 mm. The delivery time for the plan is about 7 minutes and the results are available instantly. Conclusion: The experimental results agree with the stated sub-millimeter delivery accuracy of the CyberKnife system. Beam FWHM variations comply with the 0.2 mm accuracy of the Iris collimator at SAD. The XRV-100 system has proven to be a powerful tool for performing end-to-end tests for daily QA of robotic image-guided radiosurgery.
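
    The FWHM comparison in the Results reduces to locating the half-maximum crossings of each measured beam profile; a minimal sketch with linear interpolation (the profile data below are illustrative):

```python
def fwhm(xs, ys):
    """Full width at half maximum of a sampled beam profile, with linear
    interpolation of the rising and falling half-maximum crossings."""
    half = max(ys) / 2.0
    left = right = None
    for i in range(1, len(ys)):
        if left is None and ys[i - 1] < half <= ys[i]:      # rising edge
            t = (half - ys[i - 1]) / (ys[i] - ys[i - 1])
            left = xs[i - 1] + t * (xs[i] - xs[i - 1])
        if ys[i - 1] >= half > ys[i]:                       # falling edge
            t = (ys[i - 1] - half) / (ys[i - 1] - ys[i])
            right = xs[i - 1] + t * (xs[i] - xs[i - 1])
    if left is None or right is None:
        raise ValueError("profile never crosses half maximum")
    return right - left

# Symmetric triangular profile peaking at x = 2: FWHM is 2.0.
width = fwhm([0, 1, 2, 3, 4], [0.0, 1.0, 2.0, 1.0, 0.0])
```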

  2. Multi-Robot Assembly Strategies and Metrics.

    PubMed

    Marvel, Jeremy A; Bostelman, Roger; Falco, Joe

    2018-02-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies.

  3. Multi-Robot Assembly Strategies and Metrics

    PubMed Central

    MARVEL, JEREMY A.; BOSTELMAN, ROGER; FALCO, JOE

    2018-01-01

    We present a survey of multi-robot assembly applications and methods and describe trends and general insights into the multi-robot assembly problem for industrial applications. We focus on fixtureless assembly strategies featuring two or more robotic systems. Such robotic systems include industrial robot arms, dexterous robotic hands, and autonomous mobile platforms, such as automated guided vehicles. In this survey, we identify the types of assemblies that are enabled by utilizing multiple robots, the algorithms that synchronize the motions of the robots to complete the assembly operations, and the metrics used to assess the quality and performance of the assemblies. PMID:29497234

  4. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    PubMed

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If an ultrasound scanning robot control algorithm combined the two, it could benefit from the advantages of each. However, no existing control method for ultrasound scanning robots combines force control with image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed, and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot; this hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that images acquired using the proposed control method can yield a rating score equivalent to images acquired directly by clinicians. However, more work must be completed to verify the proposed control method before it can become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.
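    The conventional hybrid position-force control this record builds on can be sketched with a Raibert/Craig-style selection matrix: position control on some axes, force control on the rest (here, contact force along the probe axis). The axes, gains, and setpoints below are illustrative assumptions, not the paper's values:

    ```python
    import numpy as np

    # Selection matrix: x, y position-controlled; z (probe axis) force-controlled.
    S = np.diag([1.0, 1.0, 0.0])
    Kp = 2.0    # illustrative position gain
    Kf = 0.05   # illustrative force gain

    def hybrid_command(x, x_des, f, f_des):
        """Command = position law on selected axes + force law on the rest."""
        pos_term = Kp * (x_des - x)
        force_term = Kf * (f_des - f)
        return S @ pos_term + (np.eye(3) - S) @ force_term

    x = np.array([0.10, 0.00, 0.05])     # current probe position (m)
    x_des = np.array([0.12, 0.01, 0.05]) # desired in-plane position
    f = np.array([0.0, 0.0, 2.0])        # measured contact force (N)
    f_des = np.array([0.0, 0.0, 5.0])    # desired contact force along probe axis

    u = hybrid_command(x, x_des, f, f_des)
    print(u)   # x, y follow position error; z follows force error
    ```

    The paper's contribution is layering ultrasound image analysis on top of this loop (e.g., adjusting the position setpoint from image feedback); that part is not reproduced here.
    
    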

  5. Diffuse intrinsic pontine gliomas in children: Interest of robotic frameless assisted biopsy. A technical note.

    PubMed

    Coca, H A; Cebula, H; Benmekhbi, M; Chenard, M P; Entz-Werle, N; Proust, F

    2016-12-01

    Diffuse intrinsic pontine gliomas (DIPG) constitute 10-15% of all brain tumors in the pediatric population; prognosis currently remains poor, with an overall survival of 7-14 months. Recently, the indications for DIPG biopsy have been enlarged owing to developments in molecular biology and various ongoing clinical and therapeutic trials. Classically, a biopsy is performed using a frame-based stereotactic procedure, but the workflow can be cumbersome and more complex, especially in children. In this study, the authors present their experience with frameless robotic-guided biopsy of DIPG in a pediatric population. Retrospective study of a series of five consecutive pediatric patients harboring DIPG treated over a 4-year period. All patients underwent frameless robotic-guided biopsy via a transcerebellar approach. Among the 5 patients studied, 3 were male and 2 female, with a median age of 8.6 years [range 5 to 13 years]. Clinical presentation included ataxia, hemiparesis and cranial nerve palsy in all patients. MRI of the lesion showed typical DIPG features (3 of them located in the pons), with hypo-intense signal on T1 and hyper-intense signal on T2 sequences and diffuse gadolinium enhancement. The mean procedure time was 56 minutes (range 45 to 67 minutes). No new postoperative neurological deficits were recorded. Histological diagnosis was achieved in all cases as follows: two anaplastic astrocytomas (grade III), two glioblastomas, and one diffuse astrocytoma (grade III). Frameless robotic-assisted biopsy of DIPG in a pediatric population is an easier, effective, safe and highly accurate method to achieve diagnosis. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  6. JPL Robotics Technology Applicable to Agriculture

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, Suraphol Gabriel; Kyte, L.

    2008-01-01

    This slide presentation describes several technologies developed for robotics that are applicable to agriculture. The technologies discussed are detection of humans to allow safe operation of autonomous vehicles, and vision-guided robotic techniques for shoot selection, separation, and transfer to growth media.

  7. ICAM Robotics Application Guide (RAG)

    DTIC Science & Technology

    1980-04-01

    Excerpted fragments from the guide: "... used for new purposes. Refers to the reprogrammability or multi-task capability of robots. HIERARCHY - A relationship of elements in a structure..."; "... Tech., 1977), 33 pp."; "Attitude of Unions towards Robotization"; Weekley, T. L., "A View of the United Automobile, Aerospace and Agricultural..."

  8. Fundamentals of soft robot locomotion

    PubMed Central

    2017-01-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human–robot interaction and locomotion. Although field applications have emerged for soft manipulation and human–robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. PMID:28539483

  9. Fundamentals of soft robot locomotion.

    PubMed

    Calisti, M; Picardi, G; Laschi, C

    2017-05-01

    Soft robotics and its related technologies enable robot abilities in several robotics domains including, but not exclusively related to, manipulation, manufacturing, human-robot interaction and locomotion. Although field applications have emerged for soft manipulation and human-robot interaction, mobile soft robots appear to remain in the research stage, involving the somewhat conflicting goals of having a deformable body and exerting forces on the environment to achieve locomotion. This paper aims to provide a reference guide for researchers approaching mobile soft robotics, to describe the underlying principles of soft robot locomotion with its pros and cons, and to envisage applications and further developments for mobile soft robotics. © 2017 The Author(s).

  10. Continuous Shape Estimation of Continuum Robots Using X-ray Images.

    PubMed

    Lobaton, Edgar J; Fu, Jinghua; Torres, Luis G; Alterovitz, Ron

    2013-05-06

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot's shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints.
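    The shape representation described above, a deformable curve written as a linear combination of space and time basis functions, can be sketched as follows; the basis choice and coefficients are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def phi(u):
        """Space basis: low-order polynomials in arc length u in [0, 1]."""
        return np.array([np.ones_like(u), u, u**2])

    def psi(t):
        """Time basis: constant plus linear drift."""
        return np.array([1.0, t])

    # C[i, j] couples space basis i with time basis j (one coordinate shown;
    # a full robot shape would carry one such matrix per coordinate).
    C = np.array([[0.0, 0.0],
                  [1.0, 0.1],    # dominant linear shape term, slowly changing
                  [0.2, 0.0]])   # small fixed curvature term

    def shape(u, t):
        """Evaluate one coordinate of the deformable curve at (u, t)."""
        return phi(u).T @ C @ psi(t)

    u = np.linspace(0.0, 1.0, 5)
    print(shape(u, t=0.0))   # curve at time 0
    print(shape(u, t=1.0))   # curve after one time unit
    ```

    In the paper, the coefficients analogous to `C` are estimated from kinematic priors plus features extracted from the X-ray projections; here they are simply fixed to show the representation.
    
    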

  11. Task Analysis and Descriptions of Required Job Competencies for Robotics/Automated Systems Technicians. Final Report. Volume 2. Curriculum Planning Guide.

    ERIC Educational Resources Information Center

    Hull, Daniel M.; Lovett, James E.

    This volume of the final report for the Robotics/Automated Systems Technician (RAST) curriculum project is a curriculum planning guide intended for school administrators, faculty, and student counselors/advisors. It includes step-by-step procedures to help institutions evaluate their community's needs and their capabilities to meet these needs in…

  12. Robotic Needle Guide for Prostate Brachytherapy: Clinical Testing of Feasibility and Performance

    PubMed Central

    Song, Danny Y; Burdette, Everette C; Fiene, Jonathan; Armour, Elwood; Kronreif, Gernot; Deguet, Anton; Zhang, Zhe; Iordachita, Iulian; Fichtinger, Gabor; Kazanzides, Peter

    2010-01-01

    Purpose Optimization of prostate brachytherapy is constrained by tissue deflection of needles and the fixed spacing of template holes. We developed and clinically tested a robotic guide with the goal of allowing greater freedom of needle placement. Methods and Materials The robot consists of a small tubular needle guide attached to a robotically controlled arm. The apparatus is mounted and calibrated to operate in the same coordinate frame as a standard template. Translations in the x and y directions over the perineum of ±40 mm are possible. Needle insertion is performed manually. Results Five patients were treated in an IRB-approved study. Confirmatory measurements of robotic movements for the initial 3 patients using infrared tracking showed a mean error of 0.489 mm (SD 0.328 mm). Fine adjustments in needle positioning were possible when tissue deflection was encountered; adjustments were performed in 54/179 (30.2%) of needles placed, with 36/179 (20.1%) adjustments of >2 mm. Twenty-seven insertions were intentionally altered to positions between the standard template grid to improve the dosimetric plan or avoid structures such as the pubic bone and blood vessels. Conclusions Robotic needle positioning provided a means of compensating for needle deflections as well as the ability to intentionally place needles into areas between the standard template holes. To our knowledge, these results represent the first clinical testing of such a system. Future work will include incorporating direct control of the robot by the physician, adding software algorithms to help avoid robot collisions with the ultrasound probe, and testing the angulation capability in the clinical setting. PMID:20729152

  13. Feasibility study of a hand guided robotic drill for cochleostomy.

    PubMed

    Brett, Peter; Du, Xinli; Zoka-Assadi, Masoud; Coulson, Chris; Reid, Andrew; Proops, David

    2014-01-01

    The concept of a hand-guided robotic drill was inspired by an automated, arm-supported robotic drill recently applied in clinical practice to produce cochleostomies without penetrating the endosteum, ready for insertion of cochlear electrodes. The smart tactile sensing scheme within the drill enables precise control of the state of interaction between tissues and tools in real time. This paper reports development studies of the hand-guided robotic drill, in which the same consistent outcomes, augmentation of surgeon control and skill, and a similar reduction of induced disturbances on the hearing organ are achieved. The device operates with differing presentations of tissues resulting from variation in anatomy, and demonstrates the ability to control or avoid penetration of tissue layers as required and to respond to intended rather than involuntary motion of the surgeon operator. The advantage of a hand-guided over an arm-supported system is that it offers flexibility in adjusting the drilling trajectory. This can be important to initiate cutting on a hard convex tissue surface without slipping, and then to proceed on the desired trajectory after cutting has commenced. The results of trials on phantoms show that drill unit compliance is an important factor in the design.

  14. Intelligent robot control using an adaptive critic with a task control center and dynamic database

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.

    2006-10-01

    The purpose of this paper is to describe the design, development and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can easily be stored in the dynamic database. The multi-task controller also permits wide application. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
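    A minimal illustration of the kinematic-model half of such a dynamic database: a planar two-link arm whose parameters live in a small database record and feed a forward-kinematics routine for position control and simulation. The record fields and link lengths are hypothetical, not taken from the Bearcat Cub:

    ```python
    import math

    # Illustrative "dynamic database" entry holding the kinematic model
    # parameters for one robot (a planar 2R arm).
    robot_db = {
        "name": "mobile-manipulator (illustrative)",
        "link_lengths": [0.4, 0.3],   # m
    }

    def forward_kinematics(db, q1, q2):
        """End-effector (x, y) of a planar 2R arm from joint angles (rad)."""
        l1, l2 = db["link_lengths"]
        x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
        y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
        return x, y

    x, y = forward_kinematics(robot_db, 0.0, math.pi / 2)
    print(f"tip at ({x:.2f}, {y:.2f})")   # elbow bent 90 degrees
    ```

    Tracking control would additionally need the dynamic model (inertias, friction) stored alongside these kinematic parameters, which is exactly the split the abstract describes.
    
    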

  15. Increasing N200 Potentials Via Visual Stimulus Depicting Humanoid Robot Behavior.

    PubMed

    Li, Mengfan; Li, Wei; Zhou, Huihui

    2016-02-01

    Achieving recognizable visual event-related potentials plays an important role in improving the success rate in telepresence control of a humanoid robot via N200 or P300 potentials. The aim of this research is to intensively investigate ways to induce N200 potentials with obvious features by flashing robot images (images with meaningful information) and by flashing pictures containing only solid color squares (pictures with incomprehensible information). Comparative studies have shown that robot images evoke N200 potentials with recognizable negative peaks at approximately 260 ms in the frontal and central areas. The negative peak amplitudes increase, on average, from 1.2 μV, induced by flashing the squares, to 6.7 μV, induced by flashing the robot images. The data analyses support that the N200 potentials induced by the robot image stimuli exhibit recognizable features. Compared with the square stimuli, the robot image stimuli increase the average accuracy rate by 9.92%, from 83.33% to 93.25%, and the average information transfer rate by 24.56 bits/min, from 72.18 bits/min to 96.74 bits/min, in a single repetition. This finding implies that the robot images might provide the subjects with more information to understand the visual stimuli meanings and help them more effectively concentrate on their mental activities.

  16. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research worldwide demonstrates the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. This paper describes the results of applying an original approach for image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
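    Grayscale morphological operations of the kind evaluated here reduce to min/max filtering over a neighborhood. A hardware realization would map each neighborhood to cells of the reconfigurable environment; the plain NumPy sketch below (a software stand-in, not the paper's FPGA design) shows the computation itself over a 3x3 window:

    ```python
    import numpy as np

    def _neighborhood_op(img, op):
        """Apply op (np.min or np.max) over every 3x3 window, edge-padded."""
        pad = np.pad(img, 1, mode="edge")
        stacked = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                            for i in range(3) for j in range(3)])
        return op(stacked, axis=0)

    def erode(img):
        """Grayscale erosion: each pixel becomes the 3x3 neighborhood minimum."""
        return _neighborhood_op(img, np.min)

    def dilate(img):
        """Grayscale dilation: each pixel becomes the 3x3 neighborhood maximum."""
        return _neighborhood_op(img, np.max)

    img = np.array([[0, 0, 0, 0],
                    [0, 9, 9, 0],
                    [0, 9, 9, 0],
                    [0, 0, 0, 0]], dtype=np.uint8)

    print(dilate(img))          # bright square grows by one pixel in all directions
    print(erode(dilate(img)))   # closing: dilation followed by erosion
    ```

    On an FPGA, each 3x3 min/max would be computed by a comparator tree per pixel in parallel, which is what makes this family of operations attractive for reconfigurable real-time vision.
    
    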

  17. Autofluorescence lifetime imaging during transoral robotic surgery: a clinical validation study of tumor detection (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lagarto, João. L.; Phipps, Jennifer E.; Unger, Jakob; Faller, Leta M.; Gorpas, Dimitris; Ma, Dinglong M.; Bec, Julien; Moore, Michael G.; Bewley, Arnaud F.; Yankelevich, Diego R.; Sorger, Jonathan M.; Farwell, Gregory D.; Marcu, Laura

    2017-02-01

    Autofluorescence lifetime spectroscopy is a promising non-invasive label-free tool for characterization of biological tissues and shows potential to report structural and biochemical alterations in tissue owing to pathological transformations. In particular, when combined with fiber-optic based instruments, autofluorescence lifetime measurements can enhance intraoperative diagnosis and provide guidance in surgical procedures. We investigate the potential of a fiber-optic based multi-spectral time-resolved fluorescence spectroscopy instrument to characterize the autofluorescence fingerprint associated with histologic, morphologic and metabolic changes in tissue that can provide real-time contrast between healthy and tumor regions in vivo and guide clinicians during resection of diseased areas during transoral robotic surgery. To provide immediate feedback to the surgeons, we employ tracking of an aiming beam that co-registers our point measurements with the robot camera images and allows visualization of the surgical area augmented with autofluorescence lifetime data in the surgeon's console in real-time. For each patient, autofluorescence lifetime measurements were acquired from normal, diseased and surgically altered tissue, both in vivo (pre- and post-resection) and ex vivo. Initial results indicate tumor and normal regions can be distinguished based on changes in lifetime parameters measured in vivo, when the tumor is located superficially. In particular, results show that autofluorescence lifetime of tumor is shorter than that of normal tissue (p < 0.05, n = 3). If clinical diagnostic efficacy is demonstrated throughout this on-going study, we believe that this method has the potential to become a valuable tool for real-time intraoperative diagnosis and guidance during transoral robot assisted cancer removal interventions.

  18. Vision Guided Intelligent Robot Design And Experiments

    NASA Astrophysics Data System (ADS)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaptation to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert-systems approaches to solving real-world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots, including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box-stacking robot. The experience gained from these and other systems provides insight into what may realistically be expected from the next generation of intelligent machines.

  19. Robot Acquisition of Active Maps Through Teleoperation and Vector Space Analysis

    NASA Technical Reports Server (NTRS)

    Peters, Richard Alan, II

    2003-01-01

    The work performed under this contract was in the area of intelligent robotics. The problem being studied was the acquisition of intelligent behaviors by a robot. The method was to acquire action maps that describe tasks as sequences of reflexive behaviors. Action maps (a.k.a. topological maps) are graphs whose nodes represent sensorimotor states and whose edges represent the motor actions that cause the robot to proceed from one state to the next. The maps were acquired by the robot after being teleoperated or otherwise guided by a person through a task several times. During a guided task, the robot records all its sensorimotor signals. The signals from several task trials are partitioned into episodes of static behavior. The corresponding episodes from each trial are averaged to produce a task description as a sequence of characteristic episodes. The sensorimotor states that indicate episode boundaries become the nodes, and the static behaviors, the edges. It was demonstrated that if compound maps are constructed from a set of tasks then the robot can perform new tasks in which it was never explicitly trained.
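    The map-building step, turning guided trials into a graph whose nodes are sensorimotor states at episode boundaries and whose edges are the behaviors between them, can be sketched as follows; the trial data and labels are invented for illustration:

    ```python
    from collections import defaultdict

    # Each trial: ordered (state_label, behavior_toward_next_state) pairs,
    # as segmented from the recorded sensorimotor signals. The final state
    # has no outgoing behavior.
    trials = [
        [("at_door", "drive_forward"), ("at_table", "grasp"), ("holding", None)],
        [("at_door", "drive_forward"), ("at_table", "grasp"), ("holding", None)],
    ]

    def build_action_map(trials):
        """Graph: state -> set of behaviors (edges) observed leaving it."""
        graph = defaultdict(set)
        for trial in trials:
            for state, behavior in trial:
                if behavior is not None:
                    graph[state].add(behavior)
        return dict(graph)

    amap = build_action_map(trials)
    print(amap)   # {'at_door': {'drive_forward'}, 'at_table': {'grasp'}}
    ```

    The contract work additionally averaged corresponding episodes across trials to get characteristic states, and composed maps from several tasks so that novel state-behavior paths (new tasks) could be followed; this sketch shows only the graph construction.
    
    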

  20. State-Estimation Algorithm Based on Computer Vision

    NASA Technical Reports Server (NTRS)

    Bayard, David; Brugarolas, Paul

    2007-01-01

    An algorithm and software to implement the algorithm are being developed as means to estimate the state (that is, the position and velocity) of an autonomous vehicle, relative to a visible nearby target object, to provide guidance for maneuvering the vehicle. In the original intended application, the autonomous vehicle would be a spacecraft and the nearby object would be a small astronomical body (typically, a comet or asteroid) to be explored by the spacecraft. The algorithm could also be used on Earth in analogous applications -- for example, for guiding underwater robots near such objects of interest as sunken ships, mineral deposits, or submerged mines. It is assumed that the robot would be equipped with a vision system that would include one or more electronic cameras, image-digitizing circuitry, and an image-data-processing computer that would generate feature-recognition data products.

  1. [The operating room of the future].

    PubMed

    Broeders, I A; Niessen, W; van der Werken, C; van Vroonhoven, T J

    2000-01-29

    Advances in computer technology will revolutionize surgical techniques in the next decade. The operating room (OR) of the future will be connected with a laboratory where clinical specialists and researchers prepare image-guided interventions and explore the possibilities of these techniques. Virtual reality is linked to the actual situation in the OR with the aid of navigation instruments. During complicated operations, the images prepared preoperatively will be corrected during the operation on the basis of the information obtained intraoperatively. MRI currently offers the greatest possibilities for image-guided surgery of soft tissues. Simpler techniques such as fluoroscopy and echography will become increasingly integrated into computer-assisted intraoperative navigation. The development of medical robot systems will make microsurgical procedures by the endoscopic route possible. Tele-manipulation systems will also play a part in the training of surgeons. The design and construction of the OR will be adapted to the surgical technology, and will include an information and control unit where preoperative and intraoperative data come together and from which the surgeon operates the instruments. Concepts for the future OR should be adjusted regularly to allow for new surgical technology.

  2. Human guidance of mobile robots in complex 3D environments using smart glasses

    NASA Astrophysics Data System (ADS)

    Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel

    2016-05-01

    In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that aids in reducing interaction time and distractions during interaction with the robot.

  3. Design and Performance Evaluation of Real-time Endovascular Interventional Surgical Robotic System with High Accuracy.

    PubMed

    Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan

    2018-05-15

    Endovascular interventional surgery (EIS) is performed in a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce the radiation to surgeons and the physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allows the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons and two twist switches (to control the catheter manipulators) are presented. The entire robotic system was built on a master-slave control structure communicating over a CAN (Controller Area Network) bus, and the slave side of the robotic system achieved highly accurate control over velocity and displacement using a PID control method. The robotic system passed in vitro and animal experiments. Through functionality evaluation, the manipulators were able to complete interventional surgical motions both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system has the ability to imitate the movements of surgeons and to accomplish the axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
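    The slave-side velocity regulation described (PID control, commanded over CAN in the actual system) can be sketched as a simple loop against a toy first-order plant. The gains and plant dynamics below are illustrative, not the paper's:

    ```python
    # Minimal PID sketch for catheter axial-velocity control on the slave side.
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = 0.0

        def step(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    # Toy first-order plant: the commanded input drags velocity toward it.
    dt = 0.01
    pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
    velocity = 0.0
    for _ in range(2000):                              # 20 s of simulated time
        u = pid.step(setpoint=5.0, measured=velocity)  # target 5 mm/s
        velocity += (u - velocity) * dt                # simple lag dynamics

    print(f"velocity after 20 s: {velocity:.2f} mm/s")
    ```

    In the real system the "plant" is the manipulator's motor driving the catheter, the measurement comes from encoders, and the master console's joystick positions set the velocity setpoint over the CAN bus.
    
    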

  4. Image navigation as a means to expand the boundaries of fluorescence-guided surgery

    NASA Astrophysics Data System (ADS)

    Brouwer, Oscar R.; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L.; Wendler, Thomas; Valdés-Olmos, Renato A.; van der Poel, Henk G.; van Leeuwen, Fijs W. B.

    2012-05-01

    Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.

  5. Minimally invasive paediatric cardiac surgery.

    PubMed

    Bacha, Emile; Kalfa, David

    2014-01-01

    The concept of minimally invasive surgery for congenital heart disease in paediatric patients is broad, and has the aim of reducing the trauma of the operation at each stage of management. Firstly, in the operating room using minimally invasive incisions, video-assisted thoracoscopic and robotically assisted surgery, hybrid procedures, image-guided intracardiac surgery, and minimally invasive cardiopulmonary bypass strategies. Secondly, in the intensive-care unit with neuroprotection and 'fast-tracking' strategies that involve early extubation, early hospital discharge, and less exposure to transfused blood products. Thirdly, during postoperative mid-term and long-term follow-up by providing the children and their families with adequate support after hospital discharge. Improvement of these strategies relies on the development of new devices, real-time multimodality imaging, aids to instrument navigation, miniaturized and specialized instrumentation, robotic technology, and computer-assisted modelling of flow dynamics and tissue mechanics. In addition, dedicated multidisciplinary co-ordinated teams involving congenital cardiac surgeons, perfusionists, intensivists, anaesthesiologists, cardiologists, nurses, psychologists, and counsellors are needed before, during, and after surgery to go beyond apparent technological and medical limitations with the goal to 'treat more while hurting less'.

  6. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix B: ROBSIM programmer's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  7. Robot-assisted ultrasound imaging: overview and development of a parallel telerobotic system.

    PubMed

    Monfaredi, Reza; Wilson, Emmanuel; Azizi Koutenaei, Bamshad; Labrecque, Brendan; Leroy, Kristen; Goldie, James; Louis, Eric; Swerdlow, Daniel; Cleary, Kevin

    2015-02-01

Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound (US) imaging. We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six-degree-of-freedom parallel-mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. This parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate, along with force control. Our experimental results confirmed good mechanical system performance, with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.

  8. On the reproducibility of expert-operated and robotic ultrasound acquisitions.

    PubMed

    Kojcev, Risto; Khakzar, Ashkan; Fuerst, Bernhard; Zettinig, Oliver; Fahkry, Carole; DeJong, Robert; Richmon, Jeremy; Taylor, Russell; Sinibaldi, Edoardo; Navab, Nassir

    2017-06-01

We present an evaluation of the reproducibility of measurements performed using robotic ultrasound imaging in comparison with expert-operated sonography. Robotic imaging for interventional procedures may be a valuable contribution, but requires reproducibility for its acceptance in clinical routine. We study this by comparing repeated measurements based on robotic and expert-operated ultrasound imaging. Robotic ultrasound acquisition is performed in three steps under user guidance: First, the patient is observed using a 3D camera on the robot end effector, and the user selects the region of interest. This allows for automatic planning of the robot trajectory. Next, the robot executes a sweeping motion following the planned trajectory, during which the ultrasound images and tracking data are recorded. As the robot is compliant, deviations from the path are possible, for instance due to patient motion. Finally, the ultrasound slices are compounded to create a volume. Repeated acquisitions can be performed automatically by comparing the previous and current patient surface. After repeated image acquisitions, the measurements based on acquisitions performed by the robotic system and the expert are compared. Within our case series, the expert measured the anterior-posterior, longitudinal, and transversal lengths of both the left and right thyroid lobes in each of the 4 healthy volunteers 3 times, providing 72 measurements. Subsequently, the same procedure was performed using the robotic system, resulting in a cumulative total of 144 clinically relevant measurements. Our results clearly indicated that robotic ultrasound enables more repeatable measurements. A robotic ultrasound platform leads to more reproducible data, which is of crucial importance for planning and executing interventions.

  9. Virtobot--a multi-functional robotic system for 3D surface scanning and automatic post mortem biopsy.

    PubMed

    Ebert, Lars Christian; Ptacek, Wolfgang; Naether, Silvio; Fürst, Martin; Ross, Steffen; Buck, Ursula; Weber, Stefan; Thali, Michael

    2010-03-01

    The Virtopsy project, a multi-disciplinary project that involves forensic science, diagnostic imaging, computer science, automation technology, telematics and biomechanics, aims to develop new techniques to improve the outcome of forensic investigations. This paper presents a new approach in the field of minimally invasive virtual autopsy for a versatile robotic system that is able to perform three-dimensional (3D) surface scans as well as post mortem image-guided soft tissue biopsies. The system consists of an industrial six-axis robot with additional extensions (i.e. a linear axis to increase working space, a tool-changing system and a dedicated safety system), a multi-slice CT scanner with equipment for angiography, a digital photogrammetry and 3D optical surface-scanning system, a 3D tracking system, and a biopsy end effector for automatic needle placement. A wax phantom was developed for biopsy accuracy tests. Surface scanning times were significantly reduced (scanning times cut in half, calibration three times faster). The biopsy module worked with an accuracy of 3.2 mm. Using the Virtobot, the surface-scanning procedure could be standardized and accelerated. The biopsy module is accurate enough for use in biopsies in a forensic setting. The Virtobot can be utilized for several independent tasks in the field of forensic medicine, and is sufficiently versatile to be adapted to different tasks in the future. (c) 2009 John Wiley & Sons, Ltd.

  10. Visual perception system and method for a humanoid robot

    NASA Technical Reports Server (NTRS)

    Chelian, Suhas E. (Inventor); Linn, Douglas Martin (Inventor); Wampler, II, Charles W. (Inventor); Bridgwater, Lyndon (Inventor); Wells, James W. (Inventor); Mc Kay, Neil David (Inventor)

    2012-01-01

    A robotic system includes a humanoid robot with robotic joints each moveable using an actuator(s), and a distributed controller for controlling the movement of each of the robotic joints. The controller includes a visual perception module (VPM) for visually identifying and tracking an object in the field of view of the robot under threshold lighting conditions. The VPM includes optical devices for collecting an image of the object, a positional extraction device, and a host machine having an algorithm for processing the image and positional information. The algorithm visually identifies and tracks the object, and automatically adapts an exposure time of the optical devices to prevent feature data loss of the image under the threshold lighting conditions. A method of identifying and tracking the object includes collecting the image, extracting positional information of the object, and automatically adapting the exposure time to thereby prevent feature data loss of the image.
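The exposure-adaptation step described in the abstract can be sketched as a simple feedback loop (an illustrative reconstruction, not the patented algorithm): if too large a fraction of pixels clips at either end of the intensity range, the exposure time is scaled up or down.

```python
def adapt_exposure(pixels, exposure_ms, max_clipped=0.01, lo=5, hi=250):
    """One iteration of a threshold-lighting auto-exposure loop (illustrative).
    pixels: flat list of 8-bit intensities from the current frame."""
    n = len(pixels)
    under = sum(p <= lo for p in pixels) / n   # fraction crushed to black
    over = sum(p >= hi for p in pixels) / n    # fraction blown to white
    if over > max_clipped:                     # losing highlight features
        return exposure_ms * 0.8               # shorten exposure
    if under > max_clipped:                    # losing shadow features
        return exposure_ms * 1.25              # lengthen exposure
    return exposure_ms                         # frame is usable as-is

# A frame dominated by saturated pixels triggers a shorter exposure
bright_frame = [255] * 90 + [128] * 10
print(adapt_exposure(bright_frame, 10.0))  # → 8.0
```

Repeating this loop frame-to-frame converges on an exposure that preserves feature data under changing light, which is the behavior the patent abstract describes.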

  11. MLESAC Based Localization of Needle Insertion Using 2D Ultrasound Images

    NASA Astrophysics Data System (ADS)

    Xu, Fei; Gao, Dedong; Wang, Shan; Zhanwen, A.

    2018-04-01

In 2D ultrasound images of ultrasound-guided percutaneous needle insertions, it is difficult to determine the positions of the needle axis and tip because of artifacts and other noise. In this work, speckle is treated as the noise of the ultrasound image, and a novel algorithm is presented to detect the needle in a 2D ultrasound image. Firstly, wavelet soft thresholding based on the BayesShrink rule is used to suppress speckle in the ultrasound image. Secondly, Otsu's thresholding method and morphological operations are applied to pre-process the image. Finally, the needle is localized in the 2D ultrasound image using the maximum likelihood estimation sample consensus (MLESAC) algorithm. The experimental results show that the proposed algorithm is effective for estimating the position of the needle axis and tip in ultrasound images. This work is expected to be useful in path planning and robot-assisted needle insertion procedures.
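The final MLESAC localization step can be illustrated with a minimal sketch. The variant below scores each two-point line hypothesis with a truncated quadratic cost (the MSAC simplification of the full MLESAC likelihood) on synthetic points; the function name and parameters are illustrative, not from the paper:

```python
import math, random

def fit_line_mlesac(points, iters=200, sigma=1.0, seed=0):
    """Estimate a 2D line (a, b, c) with a*x + b*y + c = 0 and ||(a, b)|| = 1
    by scoring random two-point hypotheses with a truncated quadratic cost
    (MSAC-style simplification of the MLESAC likelihood)."""
    rng = random.Random(seed)
    thresh2 = (1.96 * sigma) ** 2        # residuals beyond ~2 sigma count as outliers
    best, best_cost = None, float("inf")
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # normal of the line through the two samples
        norm = math.hypot(a, b)
        if norm == 0:
            continue
        a, b = a / norm, b / norm
        c = -(a * x1 + b * y1)
        cost = sum(min((a * x + b * y + c) ** 2, thresh2) for x, y in points)
        if cost < best_cost:
            best, best_cost = (a, b, c), cost
    return best

# Synthetic "needle": points near y = 0.5*x + 2, plus scattered speckle-like outliers
rng = random.Random(1)
needle = [(x, 0.5 * x + 2 + rng.gauss(0, 0.3)) for x in range(30)]
speckle = [(rng.uniform(0, 30), rng.uniform(-10, 25)) for _ in range(15)]
a, b, c = fit_line_mlesac(needle + speckle, sigma=0.3)
slope = -a / b
print(round(slope, 2))  # close to 0.5
```

The truncated cost is what distinguishes this family of estimators from plain RANSAC inlier counting: hypotheses are penalized in proportion to how well inliers fit, not just how many there are.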

  12. Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer

    NASA Astrophysics Data System (ADS)

    Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin

    2017-12-01

An image acquisition device for an inspection robot with adaptive polarization adjustment is proposed. The device comprises the inspection robot body, an image acquisition mechanism, a polarizer, and an automatic polarizer actuator. The image acquisition mechanism is mounted at the front of the robot body to collect image data of substation equipment. The polarizer is fixed to the automatic actuator and installed in front of the image acquisition mechanism, so that the optical axis of the visible-light camera passes perpendicularly through the polarizer, which rotates about that optical axis. Simulation results show that the system resolves image blurring caused by glare, reflections, and shadow, allowing the robot to observe details of the operating status of electrical equipment. Full coverage of the inspection robot's observation targets in the substation is achieved, supporting safe operation of the substation equipment.
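The adaptive rotation regulation can be sketched with Malus's law: reflected glare is largely linearly polarized, so transmitted glare varies as cos² of the angle between the polarizer and the glare polarization, while unpolarized scene light is simply halved. The abstract does not give the control law, so the sweep-and-minimize controller below is a hypothetical illustration:

```python
import math

def transmitted(theta_deg, glare_pol_deg, glare=200.0, scene=80.0):
    """Intensity behind the polarizer: Malus's law for the (polarized) glare
    component plus half of the unpolarized scene light. Illustrative model."""
    theta = math.radians(theta_deg - glare_pol_deg)
    return glare * math.cos(theta) ** 2 + scene / 2.0

def best_polarizer_angle(glare_pol_deg, step_deg=5):
    """Adaptive regulation sketch: sweep the polarizer and keep the angle
    that minimizes transmitted glare, as the inspection robot might."""
    angles = range(0, 180, step_deg)
    return min(angles, key=lambda a: transmitted(a, glare_pol_deg))

print(best_polarizer_angle(30))  # → 120, i.e. crossed with the glare polarization
```

In a deployed system the "intensity" being minimized would be a glare metric computed from the camera image (e.g. the fraction of saturated pixels) rather than a known analytic model.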

  13. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information, including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
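A minimal per-pixel weighted blend illustrates the "enhance, not obfuscate" balance described above; real fusion systems typically use multiresolution (pyramid) methods, and the weights here are illustrative:

```python
def fuse(visual, infrared, w_vis=7, w_ir=3):
    """Per-pixel weighted fusion (illustrative): the visual image dominates
    (w_vis > w_ir), so the infrared band enhances the familiar visual scene
    rather than obfuscating it. Integer weights keep the arithmetic exact."""
    return [min(255, (w_vis * v + w_ir * ir) // (w_vis + w_ir))
            for v, ir in zip(visual, infrared)]

# A warm target hidden by smoke: dark in the visual band, bright in infrared
visual = [10, 10, 10, 200]
infrared = [240, 240, 10, 10]
print(fuse(visual, infrared))  # → [79, 79, 10, 143]
```

The first two fused pixels show the intended effect: a feature invisible in the visual band is lifted above the background, while pixels where infrared adds nothing stay close to their visual values.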

  14. Continuous Shape Estimation of Continuum Robots Using X-ray Images

    PubMed Central

    Lobaton, Edgar J.; Fu, Jinghua; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    We present a new method for continuously and accurately estimating the shape of a continuum robot during a medical procedure using a small number of X-ray projection images (e.g., radiographs or fluoroscopy images). Continuum robots have curvilinear structure, enabling them to maneuver through constrained spaces by bending around obstacles. Accurately estimating the robot’s shape continuously over time is crucial for the success of procedures that require avoidance of anatomical obstacles and sensitive tissues. Online shape estimation of a continuum robot is complicated by uncertainty in its kinematic model, movement of the robot during the procedure, noise in X-ray images, and the clinical need to minimize the number of X-ray images acquired. Our new method integrates kinematics models of the robot with data extracted from an optimally selected set of X-ray projection images. Our method represents the shape of the continuum robot over time as a deformable surface which can be described as a linear combination of time and space basis functions. We take advantage of probabilistic priors and numeric optimization to select optimal camera configurations, thus minimizing the expected shape estimation error. We evaluate our method using simulated concentric tube robot procedures and demonstrate that obtaining between 3 and 10 images from viewpoints selected by our method enables online shape estimation with errors significantly lower than using the kinematic model alone or using randomly spaced viewpoints. PMID:26279960
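The deformable-surface representation, a linear combination of time and space basis functions, can be sketched as follows, using low-order monomial bases as a stand-in for the paper's actual basis functions:

```python
def backbone_coord(s, t, coeffs):
    """Evaluate one coordinate of the robot shape surface
    x(s, t) = sum_ij c[i][j] * B_i(s) * T_j(t),
    with illustrative monomial bases B_i(s) = s**i and T_j(t) = t**j.
    s: normalized arc length along the backbone, t: normalized time."""
    return sum(c_ij * (s ** i) * (t ** j)
               for i, row in enumerate(coeffs)
               for j, c_ij in enumerate(row))

# A backbone that is straight at t = 0 and bends quadratically in s over time:
# x(s, t) = (0.5 * t) * s**2, i.e. the only nonzero coefficient is c[2][1] = 0.5
coeffs = [[0.0], [0.0], [0.0, 0.5]]
print(backbone_coord(1.0, 0.0, coeffs))  # → 0.0 (no deflection before motion)
print(backbone_coord(1.0, 1.0, coeffs))  # → 0.5 (tip deflection at t = 1)
```

Because the surface is linear in the coefficients, fitting them to residuals extracted from a few X-ray projections reduces to a linear least-squares problem, which is what makes the sparse-image estimation tractable.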

  15. Home-Based Versus Laboratory-Based Robotic Ankle Training for Children With Cerebral Palsy: A Pilot Randomized Comparative Trial.

    PubMed

    Chen, Kai; Wu, Yi-Ning; Ren, Yupeng; Liu, Lin; Gaebler-Spira, Deborah; Tankard, Kelly; Lee, Julia; Song, Weiqun; Wang, Maobin; Zhang, Li-Qun

    2016-08-01

    To examine the outcomes of home-based robot-guided therapy and compare it to laboratory-based robot-guided therapy for the treatment of impaired ankles in children with cerebral palsy. A randomized comparative trial design comparing a home-based training group and a laboratory-based training group. Home versus laboratory within a research hospital. Children (N=41) with cerebral palsy who were at Gross Motor Function Classification System level I, II, or III were randomly assigned to 2 groups. Children in home-based and laboratory-based groups were 8.7±2.8 (n=23) and 10.7±6.0 (n=18) years old, respectively. Six-week combined passive stretching and active movement intervention of impaired ankle in a laboratory or home environment using a portable rehabilitation robot. Active dorsiflexion range of motion (as the primary outcome), mobility (6-minute walk test and timed Up and Go test), balance (Pediatric Balance Scale), Selective Motor Control Assessment of the Lower Extremity, Modified Ashworth Scale (MAS) for spasticity, passive range of motion (PROM), strength, and joint stiffness. Significant improvements were found for the home-based group in all biomechanical outcome measures except for PROM and all clinical outcome measures except the MAS. The laboratory-based group also showed significant improvements in all the biomechanical outcome measures and all clinical outcome measures except the MAS. There were no significant differences in the outcome measures between the 2 groups. These findings suggest that the translation of repetitive, goal-directed, biofeedback training through motivating games from the laboratory to the home environment is feasible. The benefits of home-based robot-guided therapy were similar to those of laboratory-based robot-guided therapy. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  16. Organ motion due to respiration: the state of the art and applications in interventional radiology and radiation oncology

    NASA Astrophysics Data System (ADS)

    Cleary, Kevin R.; Mulcahy, Maureen; Piyasena, Rohan; Zhou, Tong; Dieterich, Sonja; Xu, Sheng; Banovac, Filip; Wong, Kenneth H.

    2005-04-01

    Tracking organ motion due to respiration is important for precision treatments in interventional radiology and radiation oncology, among other areas. In interventional radiology, the ability to track and compensate for organ motion could lead to more precise biopsies for applications such as lung cancer screening. In radiation oncology, image-guided treatment of tumors is becoming technically possible, and the management of organ motion then becomes a major issue. This paper will review the state-of-the-art in respiratory motion and present two related clinical applications. Respiratory motion is an important topic for future work in image-guided surgery and medical robotics. Issues include how organs move due to respiration, how much they move, how the motion can be compensated for, and what clinical applications can benefit from respiratory motion compensation. Technology that can be applied for this purpose is now becoming available, and as that technology evolves, the subject will become an increasingly interesting and clinically valuable topic of research.

  17. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the extent possible. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (the before-insertion error) and the error associated with needle-tissue interaction (the due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to it were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super-soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analyzed here, the overall error remained within the acceptable range. PMID:22678990
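The due-to-insertion approximation in the abstract assumes the two error components add in quadrature; the arithmetic checks out:

```python
import math

# Orthogonal error decomposition used in the abstract:
# overall^2 = before_insertion^2 + due_to_insertion^2
overall = 2.5            # mean overall needle-placement error (mm)
before_insertion = 1.3   # robotic-system (before-insertion) error (mm)
due_to_insertion = math.sqrt(overall ** 2 - before_insertion ** 2)
print(round(due_to_insertion, 2))  # → 2.14, matching the reported ~2.13 mm up to rounding
```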

  18. Stormram 4: An MR Safe Robotic System for Breast Biopsy.

    PubMed

    Groenhuis, Vincent; Siepel, Françoise J; Veltman, Jeroen; van Zandwijk, Jordy K; Stramigioli, Stefano

    2018-05-21

    Suspicious lesions in the breast that are only visible on magnetic resonance imaging (MRI) need to be biopsied under MR guidance with high accuracy and efficiency for accurate diagnosis. The aim of this study is to present a novel robotic system, the Stormram 4, and to perform preclinical tests in an MRI environment. Excluding racks and needle, its dimensions are 72 × 51 × 40 mm. The Stormram 4 is driven by two linear and two curved pneumatic stepper motors. The linear motor is capable of exerting 63 N of force at a pressure of 0.65 MPa. In an MRI environment the maximum observed stepping frequency is 30 Hz (unloaded), or 8 Hz when full force is needed. The Stormram 4's mean positioning error is 0.73 ± 0.47 mm in free air, and 1.29 ± 0.59 mm when targeting breast phantoms in MRI. Excluding the off-the-shelf needle, the robot is inherently MR safe. The robot is able to accurately target lesions under MRI guidance, reducing tissue damage and risk of false negatives. These results are promising for clinical experiments, improving the quality of healthcare in the field of MRI-guided breast biopsies.

  19. Task Analysis and Job Descriptions for Robotics/Automated Systems Technicians. Final Report. Volume 1.

    ERIC Educational Resources Information Center

    Hull, Daniel M.; Lovett, James E.

    The Robotics/Automated Systems Technician (RAST) project developed a robotics technician model curriculum for the use of state directors of vocational education and two-year college vocational/technical educators. A baseline management plan was developed to guide the project. To provide awareness, project staff developed a dissemination plan…

  20. Minimally invasive abdominal surgery: lux et veritas past, present, and future.

    PubMed

    Harrell, Andrew G; Heniford, B Todd

    2005-08-01

    Laparoscopic surgery has developed out of multiple technology innovations and the desire to see beyond the confines of the human body. As the instrumentation became more advanced, the application of this technique followed. By revisiting the historical developments that now define laparoscopic surgery, we can possibly foresee its future. A Medline search was performed of all the English-language literature. Further references were obtained through cross-referencing the bibliography cited in each work and using books from the authors' collection. Minimally invasive surgery is becoming important in almost every facet of abdominal surgery. Optical improvements, miniaturization, and robotic technology continue to define the frontier of minimally invasive surgery. Endoluminal resection surgery, image-guided surgical navigation, and remotely controlled robotics are not far from becoming reality. These and advances yet to be described will change laparoscopic surgery just as the electric light bulb did over 100 years ago.

  1. A novel passive/active hybrid robot for orthopaedic trauma surgery.

    PubMed

    Kuang, Shaolong; Leung, Kwok-sui; Wang, Tianmiao; Hu, Lei; Chui, Elvis; Liu, Wenyong; Wang, Yu

    2012-12-01

Image-guided navigation systems (IGNS) have been implemented successfully in orthopaedic trauma surgery procedures because of their ability to help surgeons position and orient hand-held drills at optimal entry points. However, current IGNS cannot prevent drilling tools or instruments from slipping or deviating from the planned trajectory during the drilling process. A method is therefore needed to overcome such problems. A novel passive/active hybrid robot (the HybriDot) for positioning and supporting surgical tools and instruments while drilling and/or cutting in orthopaedic trauma surgery is presented in this paper. This new robot, consisting of a circular prismatic joint and five passive/active back-drivable joints, is designed to fulfill clinical needs. In this paper, a system configuration and three operational modes are introduced and analyzed. Workspace and layout in the operating theatre (OT) are also analyzed in order to validate the structure design. Finally, experiments to evaluate the feasibility of the robot system are described. Analysis, simulation, and experimental results show that the novel structure of the robot can provide an appropriate workspace without risk of collision within OT environments during operation. The back-drivable joint mechanism can provide surgeons with more safety and flexibility in operational modes. The mean square value of the positional accuracy of this robot is 0.811 mm, with a standard deviation (SD) of 0.361 mm; the orientation is accurate to within 2.186°, with an SD of 0.932°. Trials on actual patients undergoing surgery for distal locking of intramedullary nails were successfully conducted in one pass using the robot. This robot has the advantages of having an appropriate workspace, being well designed for human-robot cooperation, and having high accuracy, sufficient rigidity, and easy deployability within the OT for use in common orthopaedic trauma surgery tasks such as screw fixation and drilling assistance. 
Copyright © 2012 John Wiley & Sons, Ltd.

  2. Accuracy of robot-assisted pedicle screw placement for adolescent idiopathic scoliosis in the pediatric population.

    PubMed

    Macke, Jeremy J; Woo, Raymund; Varich, Laura

    2016-06-01

    This is a retrospective review of pedicle screw placement in adolescent idiopathic scoliosis (AIS) patients under 18 years of age who underwent robot-assisted corrective surgery. Our primary objective was to characterize the accuracy of pedicle screw placement with evaluation by computed tomography (CT) after robot-assisted surgery in AIS patients. Screw malposition is the most frequent complication of pedicle screw placement and is more frequent in AIS. Given the potential for serious complications, the need for improved accuracy of screw placement has spurred multiple innovations including robot-assisted guidance devices. No studies to date have evaluated this robot-assisted technique using CT exclusively within the AIS population. Fifty patients were included in the study. All operative procedures were performed at a single institution by a single pediatric orthopedic surgeon. We evaluated the grade of screw breach, the direction of screw breach, and the positioning of the patient for preoperative scan (supine versus prone). Of 662 screws evaluated, 48 screws (7.2 %) demonstrated a breach of greater than 2 mm. With preoperative prone position CT scanning, only 2.4 % of screws were found to have this degree of breach. Medial malposition was found in 3 % of screws, a rate which decreased to 0 % with preoperative prone position scanning. Based on our results, we conclude that the proper use of image-guided robot-assisted surgery can improve the accuracy and safety of thoracic pedicle screw placement in patients with adolescent idiopathic scoliosis. This is the first study to evaluate the accuracy of pedicle screw placement using CT assessment in robot-assisted surgical correction of patients with AIS. In our study, the robot-assisted screw misplacement rate was lower than similarly constructed studies evaluating conventional (non-robot-assisted) procedures. If patients are preoperatively scanned in the prone position, the misplacement rate is further decreased.

  3. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller, and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction on the object in the image is performed, information about the object's location, orientation, and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.

  4. Visual servoing in medical robotics: a survey. Part I: endoscopic and direct vision imaging - techniques and applications.

    PubMed

    Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V

    2014-09-01

    Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.

  5. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    PubMed Central

    Yan, Susu; Bowsher, James; Tough, MengHeng; Cheng, Lin; Yin, Fang-Fang

    2014-01-01

Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT Phantom™), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator or a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for the parallel-hole system was centered at the phantom and encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. 
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction. PMID:25370663

  6. A hardware investigation of robotic SPECT for functional and molecular imaging onboard radiation therapy systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu, E-mail: susu.yan@duke.edu; Tough, MengHeng; Bowsher, James

    Purpose: To construct a robotic SPECT system and to demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch, as a step toward onboard functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was constructed utilizing a gamma camera detector (Digirad 2020tc) and a robot (KUKA KR150 L110 robot). An imaging study was performed with a phantom (PET CT Phantom{sup TM}), which includes five spheres of 10, 13, 17, 22, and 28 mm diameters. The phantom was placed on a flat-top couch. SPECT projections were acquired either with a parallel-hole collimator ormore » a single-pinhole collimator, both without background in the phantom and with background at 1/10th the sphere activity concentration. The imaging trajectories of parallel-hole and pinhole collimated detectors spanned 180° and 228°, respectively. The pinhole detector viewed an off-centered spherical common volume which encompassed the 28 and 22 mm spheres. The common volume for parallel-hole system was centered at the phantom which encompassed all five spheres in the phantom. The maneuverability of the robotic system was tested by navigating the detector to trace the phantom and flat-top table while avoiding collision and maintaining the closest possible proximity to the common volume. The robot base and tool coordinates were used for image reconstruction. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing impact of the flat-top couch on detector radius of rotation. Without background, all five spheres were visible in the reconstructed parallel-hole image, while four spheres, all except the smallest one, were visible in the reconstructed pinhole image. 
With background, three spheres of 17, 22, and 28 mm diameters were readily observed with the parallel-hole imaging, and the targeted spheres (22 and 28 mm diameters) were readily observed in the pinhole region-of-interest imaging. Conclusions: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frames could be an effective means to estimate detector pose for use in SPECT image reconstruction.

  7. Reconciliation of diverse telepathology system designs. Historic issues and implications for emerging markets and new applications.

    PubMed

    Weinstein, Ronald S; Graham, Anna R; Lian, Fangru; Braunhut, Beth L; Barker, Gail R; Krupinski, Elizabeth A; Bhattacharyya, Achyut K

    2012-04-01

Telepathology, the distant service component of digital pathology, is a growth industry. The word "telepathology" was introduced into the English language in 1986. Initially, two different, competing imaging modalities were used for telepathology. These were dynamic (real time) robotic telepathology and static image (store-and-forward) telepathology. In 1989, a hybrid dynamic robotic/static image telepathology system was developed in Norway. This hybrid imaging system bundled these two primary pathology imaging modalities into a single multi-modality pathology imaging system. Similar hybrid systems were subsequently developed and marketed in other countries as well. It is noteworthy that hybrid dynamic robotic/static image telepathology systems provided the infrastructure for the first truly sustainable telepathology services. Since then, impressive progress has been made in developing another telepathology technology, so-called "virtual microscopy" telepathology (also called "whole slide image" telepathology or "WSI" telepathology). Over the past decade, WSI appeared to be emerging as the preferred digital telepathology imaging modality. However, recently, there has been a re-emergence of interest in dynamic-robotic telepathology driven, in part, by concerns over the lack of a means for up-and-down focusing (i.e., Z-axis focusing) using early WSI processors. In 2010, the initial two U.S. patents for robotic telepathology (issued in 1993 and 1994) expired, enabling many digital pathology equipment companies to incorporate dynamic-robotic telepathology modules into their WSI products for the first time. The dynamic-robotic telepathology module provided a solution to the up-and-down focusing issue. WSI and dynamic robotic telepathology are now, rapidly, being bundled into a new class of telepathology/digital pathology imaging system, the "WSI-enhanced dynamic robotic telepathology system".
To date, six major WSI processor equipment companies have embraced the approach and developed WSI-enhanced dynamic-robotic digital telepathology systems, marketed under a variety of labels. Successful commercialization of such systems could help overcome the current resistance of some pathologists to incorporate digital pathology, and telepathology, into their routine and esoteric laboratory services. Also, WSI-enhanced dynamic robotic telepathology could be useful for providing general pathology and subspecialty pathology services to many of the world's underserved populations in the decades ahead. This could become an important enabler for the delivery of patient-centered healthcare in the future.

  8. Comparison of success rates, learning curves, and inter-subject performance variability of robot-assisted and manual ultrasound-guided nerve block needle guidance in simulation.

    PubMed

    Morse, J; Terrasini, N; Wehbe, M; Philippona, C; Zaouter, C; Cyr, S; Hemmerling, T M

    2014-06-01

This study focuses on a recently developed robotic nerve block system and its impact on learning regional anaesthesia skills. We compared success rates, learning curves, performance times, and inter-subject performance variability of robot-assisted vs manual ultrasound (US)-guided nerve block needle guidance. The hypothesis of this study is that robot assistance will result in faster skill acquisition than manual needle guidance. Five co-authors with different experience with nerve blocks and the robotic system performed both manual and robot-assisted, US-guided nerve blocks on two different nerves of a nerve phantom. Ten trials were performed for each of the four procedures. The time taken to move from a shared starting position until the needle was inserted into the target nerve was defined as the performance time. A successful block was defined as the insertion of the needle into the target nerve. Average performance times were compared using analysis of variance; P<0.05 was considered significant. Data are presented as mean (standard deviation). All blocks were successful. There were significant differences in performance times between co-authors for the manual blocks, either superficial (P=0.001) or profound (P=0.0001); no statistical difference between co-authors was noted for the robot-assisted blocks. Linear regression indicated that the average decrease in time between consecutive trials for robot-assisted blocks, 1.8 (1.6) s, was significantly (P=0.007) greater than that for manual blocks, 0.3 (0.3) s. Robot assistance of nerve blocks allows for faster learning of needle guidance than manual positioning and reduces inter-subject performance variability.
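The learning-curve analysis above amounts to fitting a line to per-trial performance times and comparing slopes. A minimal least-squares sketch in Python, with hypothetical timing data (the study's per-trial times are not given in the abstract):

```python
# Least-squares learning-curve slope, as in the study's linear-regression
# analysis of per-trial performance times. The timing series below are
# hypothetical, chosen only to mirror the reported 1.8 s vs 0.3 s
# per-trial decreases.

def learning_slope(times):
    """Fit time = a + b*trial by ordinary least squares; return slope b.

    A negative slope means performance time falls across trials, i.e.,
    skill is being acquired; a more negative slope means faster learning.
    """
    n = len(times)
    trials = range(1, n + 1)
    mean_t = sum(trials) / n
    mean_y = sum(times) / n
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(trials, times))
    sxx = sum((t - mean_t) ** 2 for t in trials)
    return sxy / sxx

robot_times  = [40, 38, 36, 35, 33, 31, 29, 28, 26, 24]  # hypothetical
manual_times = [30, 30, 29, 29, 30, 28, 29, 28, 28, 27]  # hypothetical

# Robot-assisted trials improve faster (steeper negative slope).
assert learning_slope(robot_times) < learning_slope(manual_times) < 0
```

The slope plays the role of the study's "average decrease in time between consecutive trials"; the significance test (ANOVA, P-values) would sit on top of such fits.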

  9. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

Motivated by the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biologically inspired approach based on the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from an allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator performing the corresponding task (visually guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
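The phase-based disparity module described above can be sketched in one dimension: the disparity between two views follows from the local phase difference of complex Gabor responses, d ≈ Δφ/ω, with ω the filter's peak frequency. Signals and filter parameters here are illustrative, not the paper's filter bank:

```python
import cmath
import math

# Minimal 1-D sketch of phase-based disparity estimation in the spirit
# of the paper's Gabor-filter module. Disparity is recovered as
# d = (phi_right - phi_left) / omega from complex Gabor responses.

def gabor_response(signal, center, omega, sigma):
    """Complex response of a 1-D Gabor filter centered at `center`."""
    return sum(v * math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
               * cmath.exp(1j * omega * (x - center))
               for x, v in enumerate(signal))

def phase_disparity(left, right, center, omega, sigma=8.0):
    """Disparity (in pixels) from the left/right local phase difference."""
    dphi = (cmath.phase(gabor_response(right, center, omega, sigma))
            - cmath.phase(gabor_response(left, center, omega, sigma)))
    dphi = (dphi + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    return dphi / omega

# Synthetic "stereo pair": the right view is the left view shifted by 3 px.
omega = 2 * math.pi / 16          # peak frequency of the filter
left = [math.sin(omega * x) for x in range(128)]
right = [math.sin(omega * (x - 3)) for x in range(128)]

assert abs(phase_disparity(left, right, center=64, omega=omega) - 3.0) < 0.5
```

A full system would apply such filters at many positions, orientations, and scales, combining their estimates into a dense disparity map.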

  10. Intra-operative feedback and dynamic compensation for image-guided robotic focal ultrasound surgery.

    PubMed

    Chauhan, S; Amir, H; Chen, G; Hacker, A; Michel, M S; Koehrmann, K U

    2008-11-01

    This paper describes a non-invasive remote temperature measurement technique integrated with a biomechatronic surgery system devised in our laboratory and named FUSBOT (Focal Ultrasound Surgery RoBOT). FUSBOTs use High-Intensity Focused Ultrasound (HIFU) for ablation of cancers/tumors and targets accessible through various soft-tissue acoustic windows in the human body. The focused ultrasound beam parameters are chosen so that biologically significant temperature rises are achieved only within the focal volume. In this paper, FUSBOT(BS), a customized system for breast surgery, is taken as a representative example to demonstrate the implementation and the results of non-invasive feedback during ablation. An 8-axis PC-based controller controls various sub-sections of the system within a safe constrained work envelope. Temperature is a prime target parameter in ablative procedures, and it is of paramount importance that means should be devised for its measurement and control in order to design optimal dose protocols and judge the efficacy of FUS systems. A customized sensory interface is devised and integrated with FUSBOT(BS), and dedicated software algorithms are embedded for surgical planning based on real-time guidance and feedback. Variations in the physical parameters of the tissue interacting with the incident modality are used as surgical feedback. The use of real-time ultrasound imaging and data processed from various sensors to deduce lesion position and thermal feedback during surgery, as integrated with the robotic system for online surgical planning, is described. Dynamic registration algorithms are developed for compensation and re-registration of the robotic end-effector with respect to the target, and representative empirical outcomes for lesion tracking and online temperature estimation in various biological tissues are presented.

  11. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles

    PubMed Central

    Hsu, Bing-Cheng

    2018-01-01

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940

  12. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    PubMed

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.
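The image-based criterion described above can be sketched as fitting Gaussian parameters (mu, sigma) to a surface patch's gray-level distribution and scoring its deviation from reference parameters learned from expert-waxed surfaces. All pixel data and reference values below are hypothetical:

```python
from statistics import mean, stdev

# Minimal sketch of the paper's Gaussian performance criterion: compare
# the (mu, sigma) of a waxed patch's gray-level distribution against a
# reference distribution from surfaces waxed by experienced detailers.
# The reference values and pixel samples are hypothetical.

def gaussian_params(pixels):
    """Mean and standard deviation of the patch's gray levels."""
    return mean(pixels), stdev(pixels)

def waxing_score(pixels, mu_ref, sigma_ref):
    """Lower is better: relative deviation of the patch's Gaussian
    parameters from the expert reference."""
    mu, sigma = gaussian_params(pixels)
    return abs(mu - mu_ref) / mu_ref + abs(sigma - sigma_ref) / sigma_ref

mu_ref, sigma_ref = 180.0, 12.0          # hypothetical expert reference
good_patch = [178, 182, 181, 168, 192, 175, 185, 190, 171, 179]
dull_patch = [120, 150, 140, 95, 160, 130, 110, 145, 100, 155]

# A well-waxed patch sits closer to the expert distribution.
assert waxing_score(good_patch, mu_ref, sigma_ref) < \
       waxing_score(dull_patch, mu_ref, sigma_ref)
```

In the paper's setting, a score of this kind is what the waxing force and dwell time are optimized against.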

  13. TH-C-17A-06: A Hardware Implementation and Evaluation of Robotic SPECT: Toward Molecular Imaging Onboard Radiation Therapy Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, S; Touch, M; Bowsher, J

Purpose: To construct a robotic SPECT system and demonstrate its capability to image a thorax phantom on a radiation therapy flat-top couch. The system has potential for on-board functional and molecular imaging in radiation therapy. Methods: A robotic SPECT imaging system was developed utilizing a Digirad 2020tc detector and a KUKA KR150-L110 robot. An imaging study was performed with the PET CT Phantom, which includes 5 spheres: 10, 13, 17, 22 and 28 mm in diameter. The sphere-to-background concentration ratio of Tc-99m was 6:1. The phantom was placed on a flat-top couch. SPECT projections were acquired with a parallel-hole collimator and a single-pinhole collimator. The robotic system navigated the detector tracing the flat-top table to maintain the closest possible proximity to the phantom. For image reconstruction, detector trajectories were described by six parameters: radius-of-rotation, x and z detector shifts, and detector rotation θ, tilt ϕ and twist γ. These six parameters were obtained from the robotic system by calibrating the robot base and tool coordinates. Results: The robotic SPECT system was able to maneuver parallel-hole and pinhole collimated SPECT detectors in close proximity to the phantom, minimizing the impact of the flat-top couch on the detector-to-COR (center-of-rotation) distance. In acquisitions with background at 1/6th sphere activity concentration, photopeak contamination was heavy, yet the 17, 22, and 28 mm diameter spheres were readily observed with the parallel-hole imaging, and the single, targeted sphere (28 mm diameter) was readily observed in the pinhole region-of-interest (ROI) imaging. Conclusion: Onboard SPECT could be achieved by a robot maneuvering a SPECT detector about patients in position for radiation therapy on a flat-top couch. The robot's inherent coordinate frame could be an effective means to estimate detector pose for use in SPECT image reconstruction. PHS/NIH/NCI grant R21-CA156390-01A1.
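The six-parameter trajectory description above can be sketched as a pose construction. The composition order and axis conventions below are assumptions chosen for illustration; in the actual system these frames are calibrated from the robot's base and tool coordinates:

```python
import math

# Minimal sketch of assembling a detector pose from the six trajectory
# parameters named in the abstract (radius-of-rotation, x/z detector
# shifts, rotation theta, tilt, twist). Axis conventions here are
# assumed: the couch's long axis is y; the detector starts above the
# couch at distance `radius` along +z; in-plane shifts displace the
# detector face before the couch-axis rotation carries it around.

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def detector_pose(radius, shift_x, shift_z, theta, tilt, twist):
    """Return (R, t): detector orientation matrix and center position."""
    R = matmul(matmul(rot_y(theta), rot_x(tilt)), rot_z(twist))
    p = [shift_x, shift_z, radius]          # pose before couch-axis rotation
    Ry = rot_y(theta)
    t = [sum(Ry[i][k] * p[k] for k in range(3)) for i in range(3)]
    return R, t

# A 90-degree rotation about the couch axis carries the detector from
# above the patient (+z) to the side (+x), keeping it at `radius`.
R, t = detector_pose(radius=400.0, shift_x=0.0, shift_z=0.0,
                     theta=math.pi / 2, tilt=0.0, twist=0.0)
assert math.dist(t, [400.0, 0.0, 0.0]) < 1e-9
```

Reconstruction then uses one such (R, t) per projection view instead of assuming an ideal circular orbit.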

  14. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well the images taken by the mobile robot are visualized to the user. To enhance the efficiency and flexibility of this visualization, an image retrieval system over the robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that various relevant images exist due to the variety of viewing conditions. The main contribution of this paper is an efficient retrieval approach, named the location-driven approach, which utilizes the correlation between visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.

  15. Co-robotic ultrasound imaging: a cooperative force control approach

    NASA Astrophysics Data System (ADS)

    Finocchi, Rodolfo; Aalamifar, Fereshteh; Fang, Ting Yun; Taylor, Russell H.; Boctor, Emad M.

    2017-03-01

    Ultrasound (US) imaging remains one of the most commonly used imaging modalities in medical practice. However, due to the physical effort required to perform US imaging tasks, 63-91% of ultrasonographers develop musculoskeletal disorders throughout their careers. The goal of this work is to provide ultrasonographers with a system that facilitates and reduces strain in US image acquisition. To this end, we propose a system for admittance force robot control that uses the six-degree-of-freedom UR5 industrial robot. A six-axis force sensor is used to measure the forces and torques applied by the sonographer on the probe. As the sonographer pushes against the US probe, the robot complies with these forces, following the user's desired path. A one-axis load cell is used to measure contact forces between the patient and the probe in real time. When imaging, the robot augments the axial forces applied by the user, lessening the physical effort required. User studies showed an overall decrease in hand tremor while imaging at high forces, improvements in image stability, and a decrease in difficulty and strenuousness.
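The cooperative behavior described, in which the robot complies with the sonographer's hand forces, is characteristically realized as an admittance law mapping measured force to commanded velocity through a virtual mass-damper. A minimal sketch with illustrative gains (not the paper's values):

```python
# Minimal sketch of an admittance control law, M*dv/dt + B*v = F: the
# force the sonographer applies to the probe (measured by the force
# sensor) is mapped to a commanded robot velocity. Mass, damping, and
# time step are illustrative values, not the system's tuned gains.

def admittance_step(v, force, mass=2.0, damping=20.0, dt=0.01):
    """One Euler step of M*dv/dt + B*v = F; returns the new velocity."""
    dv = (force - damping * v) / mass
    return v + dv * dt

# A steady 10 N push: the commanded velocity converges to the
# steady state F/B = 0.5 m/s, so the robot follows the user's hand.
v = 0.0
for _ in range(2000):
    v = admittance_step(v, force=10.0)
assert abs(v - 10.0 / 20.0) < 1e-3

# When the sonographer releases the probe, the robot coasts to rest.
for _ in range(2000):
    v = admittance_step(v, force=0.0)
assert abs(v) < 1e-3
```

The paper's force augmentation along the probe axis would add a scaled contact-force term on top of such a compliance loop.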

  16. Robotics Projects and Learning Concepts in Science, Technology and Problem Solving

    ERIC Educational Resources Information Center

    Barak, Moshe; Zadok, Yair

    2009-01-01

    This paper presents a study about learning and the problem solving process identified among junior high school pupils participating in robotics projects in the Lego Mindstorm environment. The research was guided by the following questions: (1) How do pupils come up with inventive solutions to problems in the context of robotics activities? (2)…

  17. NASA Robotic Neurosurgery Testbed

    NASA Technical Reports Server (NTRS)

    Mah, Robert

    1997-01-01

The detection of tissue interfaces (e.g., normal tissue, cancer, tumor) has been limited clinically to tactile feedback, temperature monitoring, and the use of a miniature ultrasound probe for tissue differentiation during surgical operations. In neurosurgery, the needle used in standard stereotactic CT (computed tomography) or MRI (magnetic resonance imaging) guided brain biopsy provides no information about the tissue being sampled. The tissue sampled depends entirely upon the accuracy with which the localization provided by the preoperative CT or MRI scan is translated to the intracranial biopsy site. In addition, no information is provided about the tissue being traversed by the needle (e.g., a blood vessel). Hemorrhage caused by the biopsy needle tearing a blood vessel within the brain is the most devastating complication of stereotactic CT/MRI-guided brain biopsy. A robotic neurosurgery testbed has been developed at NASA Ames Research Center as a spin-off of technologies from space, aeronautics, and medical programs. The invention, entitled 'Robotic Neurosurgery Leading to Multimodality Devices for Tissue Identification', is nearing a state ready for commercialization. The devices will: 1) improve the diagnostic accuracy and precision of general surgery, with near-term emphasis on stereotactic brain biopsy; 2) automate tissue identification, with near-term emphasis on stereotactic brain biopsy, to permit remote control of the procedure; and 3) reduce morbidity for stereotactic brain biopsy. The commercial impact of this work is the potential development of a whole new generation of smart surgical tools that increase the safety, accuracy, and efficiency of surgical procedures. Other potential markets include smart surgical tools for tumor ablation in neurosurgery, general exploratory surgery, prostate cancer surgery, and breast cancer surgery.

  18. Optimization of multi-image pose recovery of fluoroscope tracking (FTRAC) fiducial in an image-guided femoroplasty system

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Armand, Mehran; Otake, Yoshito; Taylor, Russell H.

    2011-03-01

Percutaneous femoroplasty [1], or femoral bone augmentation, is a prospective alternative treatment for reducing the risk of fracture in patients with severe osteoporosis. We are developing a surgical robotics system that will assist orthopaedic surgeons in planning and performing a patient-specific augmentation of the femur with bone cement. This collaborative project, sponsored by the National Institutes of Health (NIH), has been the topic of previous publications [2],[3] from our group. This paper presents modifications to the pose recovery of a fluoroscope tracking (FTRAC) fiducial during our process of 2D/3D registration of intraoperative X-ray images to preoperative CT data. We show improved automation of the initial pose estimation as well as lower projection errors with the addition of a multi-image pose optimization step.

  19. Efficacy, safety and outcome of frameless image-guided robotic radiosurgery for brain metastases after whole brain radiotherapy.

    PubMed

    Lohkamp, Laura-Nanna; Vajkoczy, Peter; Budach, Volker; Kufeld, Markus

    2018-05-01

To estimate the efficacy, safety, and outcome of frameless image-guided robotic radiosurgery for the treatment of recurrent brain metastases after whole brain radiotherapy (WBRT), we performed a retrospective single-center analysis including patients with recurrent brain metastases after WBRT who were treated with single-session radiosurgery using the CyberKnife® Radiosurgery System (CKRS) (Accuray Inc., CA) between 2011 and 2016. The primary end point was local tumor control; secondary end points were distant tumor control, treatment-related toxicity, and overall survival. 36 patients with 140 recurrent brain metastases underwent 46 single-session CKRS treatments. Twenty-one patients (58%) had multiple brain metastases. The mean interval between WBRT and CKRS was 2 years (range 0.2-7 years). The median number of treated metastases per treatment session was five (range 1-12), with a mean tumor volume of 1.26 cm³ and a median tumor dose of 18 Gy prescribed to the 70% isodose line. Two patients experienced local tumor recurrence within the first year after treatment, and 13 patients (36%) developed new brain metastases. Nine of these patients underwent one to three additional CKRS treatments. Eight patients (22.2%) showed treatment-related radiation reactions on MRI, three with clinical symptoms. Median overall survival was 19 months after CKRS. The actuarial 1-year local control rate was 94.2%. CKRS has proven locally effective and safe, with a high local tumor control rate and low toxicity. Thus, CKRS offers a reliable salvage treatment option for recurrent brain metastases after WBRT.

  20. Needle-tissue interactive mechanism and steering control in image-guided robot-assisted minimally invasive surgery: a review.

    PubMed

    Li, Pan; Yang, Zhiyong; Jiang, Shan

    2018-06-01

Image-guided robot-assisted minimally invasive surgery is an important medical procedure used for biopsy or local targeted therapy. To reach target regions not accessible with traditional techniques, long, thin flexible needles are inserted into soft tissue, which exhibits large deformation and nonlinear characteristics. The detection results and therapeutic effect, however, are directly influenced by the targeting accuracy of needle steering. For this reason, the needle-tissue interaction mechanism, path planning, and steering control are investigated in this review by searching the literature of the last 10 years, resulting in a comprehensive overview of the existing techniques with their main accomplishments, limitations, and recommendations. Through this analysis, surgical simulation of insertion into multi-layer inhomogeneous tissue, which accurately predicts nonlinear needle deflection and tissue deformation, is identified as a primary aspect to be explored. Path planning of flexible needles is recommended to move toward anatomical and deformable environments that capture tissue deformation. Nonholonomic modeling combined with duty-cycled spinning for needle steering, which tracks the tip position in real time and compensates for deviation errors, is recommended as a future research focus for steering control in anatomical and deformable environments. Graphical abstract: (a) insertion force when the needle is inserted into soft tissue; (b) needle deflection model when the needle is inserted into soft tissue [68]; (c) path planning in anatomical environments [92]; (d) duty-cycled spinning incorporated in nonholonomic needle steering [64].
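The duty-cycled spinning named in the review trades the bevel tip's natural curvature against spin: continuous spinning yields an approximately straight path, no spinning yields maximum curvature, and intermediate duty cycles interpolate between the two. A minimal planar unicycle-model sketch with illustrative parameters:

```python
import math

# Minimal sketch of duty-cycled bevel-tip needle steering: under the
# standard nonholonomic (unicycle) model the tip follows an arc of
# curvature kappa_max when not spinning and a straight path when spun
# continuously; alternating the two gives an effective curvature
#   kappa_eff = kappa_max * (1 - duty_cycle).
# kappa_max, insertion length, and step count are illustrative.

def insert_needle(kappa_max, duty_cycle, length, steps=1000):
    """Integrate the planar tip path; returns the final (x, y) position."""
    kappa = kappa_max * (1.0 - duty_cycle)   # effective curvature
    x = y = theta = 0.0
    ds = length / steps
    for _ in range(steps):
        x += ds * math.cos(theta)
        y += ds * math.sin(theta)
        theta += ds * kappa                  # heading follows the arc
    return x, y

# No spinning: maximum lateral deflection. Full duty cycle: straight.
_, y_full = insert_needle(kappa_max=0.02, duty_cycle=0.0, length=100.0)
_, y_half = insert_needle(kappa_max=0.02, duty_cycle=0.5, length=100.0)
_, y_none = insert_needle(kappa_max=0.02, duty_cycle=1.0, length=100.0)
assert y_full > y_half > y_none == 0.0
```

Steering controllers of the kind the review recommends adjust the duty cycle online, using the tracked tip position to cancel deviation from the planned path.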

  1. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down image-parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed that corrects several sources of visual distortion in a single computing step. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) at the University Carlos III of Madrid.

  2. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down image-parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed that corrects several sources of visual distortion in a single computing step. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) at the University Carlos III of Madrid. PMID:29587392

  3. Single-Command Approach and Instrument Placement by a Robot on a Target

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Cheng, Yang

    2005-01-01

    AUTOAPPROACH is a computer program that enables a mobile robot to approach a target autonomously, starting from a distance of as much as 10 m, in response to a single command. AUTOAPPROACH is used in conjunction with (1) software that analyzes images acquired by stereoscopic cameras aboard the robot and (2) navigation and path-planning software that utilizes odometer readings along with the output of the image-analysis software. Intended originally for application to an instrumented, wheeled robot (rover) in scientific exploration of Mars, AUTOAPPROACH could be adapted to terrestrial applications, notably including the robotic removal of land mines and other unexploded ordnance. A human operator generates the approach command by selecting the target in images acquired by the robot cameras. The approach path consists of multiple legs. Feature points are derived from images that contain the target and are thereafter tracked to correct odometric errors and iteratively refine estimates of the position and orientation of the robot relative to the target on successive legs. The approach is terminated when the robot attains the position and orientation required for placing a scientific instrument at the target. The workspace of the robot arm is then autonomously checked for self/terrain collisions prior to the deployment of the scientific instrument onto the target.

  4. In vivo reproducibility of robotic probe placement for a novel ultrasound-guided radiation therapy system

    PubMed Central

    Lediju Bell, Muyinatu A.; Sen, H. Tutkun; Iordachita, Iulian; Kazanzides, Peter; Wong, John

    2014-01-01

Ultrasound can provide real-time image guidance of radiation therapy, but probe-induced tissue deformations cause local deviations from the treatment plan. If placed during treatment planning, the probe causes streak artifacts in the required computed tomography (CT) images. To overcome these challenges, we propose robot-assisted placement of an ultrasound probe, followed by replacement with a geometrically identical, CT-compatible model probe. In vivo reproducibility was investigated by implanting a canine prostate, liver, and pancreas with three 2.38-mm spherical markers in each organ. The real probe was placed to visualize the markers and subsequently replaced with the model probe. Each probe was automatically removed and returned to the same position or force. Under position control, the median three-dimensional reproducibility of marker positions was 0.6 to 0.7 mm, 0.3 to 0.6 mm, and 1.1 to 1.6 mm in the prostate, liver, and pancreas, respectively. Reproducibility was worse under force control. Probe substitution errors were smallest for the prostate (0.2 to 0.6 mm) and larger for the liver and pancreas (4.1 to 6.3 mm), where force control generally produced larger errors than position control. The results indicate that position control is better suited than force control for this application, and that the robotic approach has potential, particularly for relatively constrained organs, with reproducibility errors smaller than established treatment margins. PMID:26158038

  5. Addressing the Movement of a Freescale Robotic Car Using Neural Network

    NASA Astrophysics Data System (ADS)

    Horváth, Dušan; Cuninka, Peter

    2016-12-01

This article deals with steering a small Freescale robotic car along a predefined guide line. The direction of movement of the robot is controlled by a neural network whose weights (the neurons' memory) are computed by Hebbian learning from truth tables, i.e., learning with a teacher. Reflective infrared sensors serve as the inputs. The results are experiments comparing two methods of line-tracking mobile robot control.
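The Hebbian scheme described, with weights computed from a truth table as learning with a teacher, can be sketched as follows. The sensor coding and the truth table here are illustrative, not the article's actual tables:

```python
# Minimal sketch of supervised Hebbian learning for a line follower:
# weights are accumulated as w_ij += eta * x_i * y_j over a truth table
# mapping a reflective-IR sensor pattern to a steering command. Bipolar
# (+1/-1) coding and the table below are illustrative.

def hebbian_weights(truth_table, eta=1.0):
    """Accumulate Hebbian weight updates over the truth-table rows."""
    n_in = len(truth_table[0][0])
    n_out = len(truth_table[0][1])
    w = [[0.0] * n_in for _ in range(n_out)]
    for x, y in truth_table:
        for j in range(n_out):
            for i in range(n_in):
                w[j][i] += eta * x[i] * y[j]
    return w

def steer(w, sensors):
    """Sign of each output neuron's activation for a sensor pattern."""
    return [1 if sum(wi * s for wi, s in zip(row, sensors)) > 0 else -1
            for row in w]

# Inputs: three IR sensors (left, center, right); outputs: (turn_left,
# turn_right); both bipolar coded.
table = [
    ([ 1, -1, -1], [ 1, -1]),   # line under left sensor  -> turn left
    ([-1,  1, -1], [-1, -1]),   # line under center       -> go straight
    ([-1, -1,  1], [-1,  1]),   # line under right sensor -> turn right
]
w = hebbian_weights(table)

assert steer(w, [1, -1, -1]) == [1, -1]    # recalls "turn left"
assert steer(w, [-1, -1, 1]) == [-1, 1]    # recalls "turn right"
```

Because the weights are computed directly from the table rather than trained iteratively, the controller is fixed before deployment, which suits a small embedded car.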

  6. Laser-only Adaptive Optics Achieves Significant Image Quality Gains Compared to Seeing-limited Observations over the Entire Sky

    NASA Astrophysics Data System (ADS)

    Howard, Ward S.; Law, Nicholas M.; Ziegler, Carl A.; Baranec, Christoph; Riddle, Reed

    2018-02-01

    Adaptive optics laser guide-star systems perform atmospheric correction of stellar wavefronts in two parts: stellar tip-tilt and high-spatial-order laser correction. The requirement of a sufficiently bright guide star in the field-of-view to correct tip-tilt limits sky coverage. In this paper, we show an improvement to effective seeing without the need for nearby bright stars, enabling full sky coverage by performing only laser-assisted wavefront correction. We used Robo-AO, the first robotic AO system, to comprehensively demonstrate this laser-only correction. We analyze observations from four years of efficient robotic operation covering 15000 targets and 42000 observations, each realizing different seeing conditions. Using an autoguider (or a post-processing software equivalent) and the laser to improve effective seeing independent of the brightness of a target, Robo-AO observations show a 39% ± 19% improvement to effective FWHM, without any tip-tilt correction. We also demonstrate that 50% encircled energy performance without tip-tilt correction remains comparable to diffraction-limited, standard Robo-AO performance. Faint-target science programs primarily limited by 50% encircled energy (e.g., those employing integral field spectrographs placed behind the AO system) may see significant benefits to sky coverage from employing laser-only AO.

  7. Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot.

    PubMed

    Shen, Yajing; Wan, Wenfeng; Zhang, Lijun; Yong, Li; Lu, Haojian; Ding, Weili

    2015-12-15

    Image sensing at a small scale is essential in many fields, including microsample observation, defect inspection and material characterization. However, multidirectional imaging of micro objects remains very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multidirectional image sensing in microscopes based on a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically using the proposed forward-backward alignment strategy. After that, multidirectional images of the sample are obtained by rotating the robot through one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical microscopy and scanning electron microscopy, and panoramic images of the samples are produced as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially sample detection, manipulation and characterization at a small scale.
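
    The forward-backward alignment idea can be illustrated with a simple sketch, under the assumption that a feature's lateral offset from the rotation axis flips sign between the forward (0 degree) and backward (180 degree) views; the pixel values and function names are hypothetical, not taken from the paper:

```python
def axis_offset(x_forward, x_backward):
    """Estimate a feature's lateral offset from the rotation axis.

    A point at offset d from the axis appears at x_axis + d in the
    0-degree ("forward") view and at x_axis - d after a 180-degree
    rotation ("backward" view), so the axis projects to the midpoint
    and the offset is half the disparity.
    """
    x_axis = (x_forward + x_backward) / 2.0
    d = (x_forward - x_backward) / 2.0
    return x_axis, d

# A sample feature seen at pixel 320 forward and 280 backward:
axis, offset = axis_offset(320.0, 280.0)
# moving the stage by -offset would bring the feature onto the axis
```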

  8. Experiments in socially guided exploration: lessons learned in building robots that learn with and without human teachers

    NASA Astrophysics Data System (ADS)

    Thomaz, Andrea; Breazeal, Cynthia

    2008-06-01

    We present a learning system, socially guided exploration, in which a social robot learns new tasks through a combination of self-exploration and social interaction. The system's motivational drives, along with social scaffolding from a human partner, bias behaviour to create learning opportunities for a hierarchical reinforcement learning mechanism. The robot is able to learn on its own, but can flexibly take advantage of the guidance of a human teacher. We report the results of an experiment that analyses what the robot learns on its own as compared to being taught by human subjects. We also analyse the video of these interactions to understand human teaching behaviour and the social dynamics of the human-teacher/robot-learner system. With respect to learning performance, human guidance results in a task set that is significantly more focused and efficient at the tasks the human was trying to teach, whereas self-exploration results in a more diverse set. Analysis of human teaching behaviour reveals insights into the social coupling between the human teacher and robot learner, different teaching styles, strong consistency in the kinds and frequency of scaffolding acts across teachers, and nuances in the communicative intent behind positive and negative feedback.

  9. Revisions for screw malposition and clinical outcomes after robot-guided lumbar fusion for spondylolisthesis.

    PubMed

    Schröder, Marc L; Staartjes, Victor E

    2017-05-01

    OBJECTIVE The accuracy of robot-guided pedicle screw placement has been proven to be high, but little is known about the impact of such guidance on clinical outcomes such as the rate of revision surgeries for screw malposition. In addition, there are very few data about the impact of robot-guided fusion on patient-reported outcomes (PROs). Thus, the clinical benefit for the patient is unclear. In this study, the authors analyzed revision rates for screw malposition and changes in PROs following minimally invasive robot-guided pedicle screw fixation. METHODS A retrospective cohort study of patients who had undergone minimally invasive posterior lumbar interbody fusion (MI-PLIF) or minimally invasive transforaminal lumbar interbody fusion was performed. Patients were followed up clinically at 6 weeks, 12 months, and 24 months after treatment and by mailed questionnaire in March 2016 as a final follow-up. Visual analog scale (VAS) scores for back and leg pain severity, Oswestry Disability Index (ODI), screw revisions, and socio-demographic factors were analyzed. A literature review was performed, comparing the incidence of intraoperative screw revisions and revision surgery for screw malposition in robot-guided, navigated, and freehand fusion procedures. RESULTS Seventy-two patients fit the study inclusion criteria and had a mean follow up of 32 ± 17 months. No screws had to be revised intraoperatively, and no revision surgery for screw malposition was needed. In the literature review, the authors found a higher rate of intraoperative screw revisions in the navigated pool than in the robot-guided pool (p < 0.001, OR 9.7). Additionally, a higher incidence of revision surgery for screw malposition was observed for freehand procedures than for the robot-guided procedures (p < 0.001, OR 8.1). 
The VAS score for back pain improved significantly from 66.9 ± 25.0 preoperatively to 30.1 ± 26.8 at the final follow-up, as did the VAS score for leg pain (from 70.6 ± 22.8 to 24.3 ± 28.3) and ODI (from 43.4 ± 18.3 to 16.2 ± 16.7; all p < 0.001). Undergoing PLIF, a high body mass index, smoking status, and a preoperative ability to work were identified as predictors of a reduction in back pain. Length of hospital stay was 2.4 ± 1.1 days and operating time was 161 ± 50 minutes. Ability to work increased from 38.9% to 78.2% of patients (p < 0.001) at the final follow-up, and 89.1% of patients indicated they would choose to undergo the same treatment again. CONCLUSIONS In adults with low-grade spondylolisthesis, the data demonstrated a benefit in using robotic guidance to reduce the rate of revision surgery for screw malposition as compared with other techniques of pedicle screw insertion described in peer-reviewed publications. Larger comparative studies are required to assess differences in PROs following a minimally invasive approach in spinal fusion surgeries compared with other techniques.

  10. System Design and Development of a Robotic Device for Automated Venipuncture and Diagnostic Blood Cell Analysis.

    PubMed

    Balter, Max L; Chen, Alvin I; Fromholtz, Alex; Gorshkov, Alex; Maguire, Tim J; Yarmush, Martin L

    2016-10-01

    Diagnostic blood testing is the most prevalent medical procedure performed in the world and forms the cornerstone of modern health care delivery. Yet blood tests are still predominantly carried out in centralized labs using large-volume samples acquired by manual venipuncture, and no end-to-end solution from blood draw to sample analysis exists today. Our group is developing a platform device that merges robotic phlebotomy with automated diagnostics to rapidly deliver patient information at the site of the blood draw. The system couples an image-guided venipuncture robot, designed to address the challenges of routine venous access, with a centrifuge-based blood analyzer to obtain quantitative measurements of hematology. In this paper, we first present the system design and architecture of the integrated device. We then perform a series of in vitro experiments to evaluate the cannulation accuracy of the system on blood vessel phantoms. Next, we assess the effects of vessel diameter, needle gauge, flow rate, and viscosity on the rate of sample collection. Finally, we demonstrate proof-of-concept of a white cell assay on the blood analyzer using in vitro human samples spiked with fluorescently labeled microbeads.

  11. EVA Robotic Assistant Project: Platform Attitude Prediction

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin M.

    2003-01-01

    The Robotic Systems Technology Branch is currently working on the development of an EVA Robotic Assistant under the sponsorship of the Surface Systems Thrust of the NASA Cross Enterprise Technology Development Program (CETDP). This will be a mobile robot that can follow a field geologist during planetary surface exploration, carry his tools and the samples that he collects, and provide video coverage of his activity. Prior experiments have shown that for such a robot to be useful it must be able to follow the geologist at walking speed over any terrain of interest. Geologically interesting terrain tends to be rough rather than smooth. The commercial mobile robot that was recently purchased as an initial testbed for the EVA Robotic Assistant Project, an ATRV Jr., is capable of faster-than-walking speed outdoors, but it has no suspension. Its wheels, with inflated rubber tires, are attached to axles that are connected directly to the robot body. Any angular motion of the robot produced by driving over rough terrain will directly affect the pointing of the on-board stereo cameras. The resulting image motion is expected to make tracking of the geologist more difficult. This will either require the tracker to search a larger part of the image to find the target from frame to frame or to search mechanically in pan and tilt whenever the image motion is large enough to put the target outside the image in the next frame. This project consists of the design and implementation of a Kalman filter that combines the output of the angular rate sensors and linear accelerometers on the robot to estimate the motion of the robot base. The resulting motion of the stereo camera pair mounted on the robot as it drives over rough terrain is then straightforward to compute. The estimates may then be used, for example, to command the robot's on-board pan-tilt unit to compensate for the camera motion induced by the base movement.
    This has been accomplished in two ways: first, a standalone head stabilizer has been implemented; second, the estimates have been used to influence the search strategy of the stereo tracking algorithm. Studies of a tracked object's image motion indicate that it is suppressed while the robot crosses rough terrain. This work expands the range of speed and surface roughness over which the robot should be able to track and follow a field geologist and accept arm-gesture commands from the geologist.
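
    The sensor-fusion step described above lends itself to a small sketch. The following single-axis angle/bias Kalman filter, fusing gyro rates with accelerometer-derived tilt, is a hedged illustration: the state layout, noise parameters, and one-axis simplification are generic textbook choices, not the project's actual filter.

```python
import numpy as np

def kalman_tilt(gyro_rates, accel_angles, dt, q=1e-4, r=1e-2):
    """1-axis Kalman filter with state [angle, gyro_bias]:
    integrate the (bias-corrected) gyro rate in the predict step,
    correct drift with the accelerometer tilt angle in the update."""
    x = np.zeros(2)                        # [angle, bias]
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    B = np.array([dt, 0.0])
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)
    R = np.array([[r]])
    estimates = []
    for w, a in zip(gyro_rates, accel_angles):
        # predict: angle += (rate - bias) * dt
        x = F @ x + B * w
        P = F @ P @ F.T + Q
        # update with the accelerometer-derived angle
        y = a - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        estimates.append(x[0])
    return np.array(estimates)
```

Feeding the estimated base attitude to a pan-tilt controller, as the abstract describes, is then a matter of commanding the negative of the estimated camera rotation.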

  12. System and method for seamless task-directed autonomy for robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Curtis; Bruemmer, David; Few, Douglas

    Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
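
    A minimal sketch of the control-intermediary idea described above; every name, the distance threshold, and the message format are hypothetical illustrations, not the system's actual interface:

```python
from dataclasses import dataclass

@dataclass
class TaskDesignator:
    x: float
    y: float
    task: str

def autonomy_level(robot_xy, designator, near=1.0):
    """Pick a task-oriented autonomy level from the robot-to-designator
    distance (the near/far rule and threshold are assumptions)."""
    dx = designator.x - robot_xy[0]
    dy = designator.y - robot_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return "low" if dist <= near else "high"

def target_achievement_info(robot_xy, designator):
    """Emit instructions matching the chosen autonomy level."""
    level = autonomy_level(robot_xy, designator)
    if level == "low":
        # low robot initiative: directly guide the robot to the target
        return {"mode": "direct", "waypoint": (designator.x, designator.y)}
    # high robot initiative: hand over the goal and let the robot plan
    return {"mode": "plan", "goal": (designator.x, designator.y),
            "task": designator.task}
```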

  13. Feasibility of telementoring between Baltimore (USA) and Rome (Italy): the first five cases.

    PubMed

    Micali, S; Virgili, G; Vannozzi, E; Grassi, N; Jarrett, T W; Bauer, J J; Vespasiani, G; Kavoussi, L R

    2000-08-01

    Telemedicine is the use of telecommunication technology to deliver healthcare. Telementoring has been developed to allow a surgeon at a remote site to offer guidance and assistance to a less-experienced surgeon. We report on our experience during laparoscopic urologic procedures with mentoring between Rome, Italy, and Baltimore, USA. Over a period of 3 months, two laparoscopic left spermatic vein ligations, one retroperitoneal renal biopsy, one laparoscopic nephrectomy, and one percutaneous access to the kidney were telementored. Transperitoneal laparoscopic cases were performed with the use of AESOP, a robot for remote manipulation of the endoscopic camera. A second robot, PAKY, was used to perform radiologically guided needle orientation and insertion for percutaneous renal access. In addition to controlling the robotic devices, the system provided real-time video display for either the laparoscope or an externally mounted camera located in the operating room, full-duplex audio, telestration over live video, and access to electrocautery for tissue cutting or hemostasis. All procedures were accomplished with an uneventful postoperative course. One technical failure occurred because the robotic device was not properly positioned on the operating table. The round-trip delay of image transmission was less than 1 second. International telementoring is a feasible technique that can enhance surgeon education and decrease the likelihood of complications attributable to inexperience with new operative techniques.

  14. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. 
The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.
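
    The algebraic-distance model mentioned above can be illustrated as follows; this sketch assumes an axis-aligned ellipsoid with a known center and semi-axes (the actual method fits the ellipsoid to the segmented object points, which is omitted here for brevity):

```python
import numpy as np

def algebraic_distance(points, center, semi_axes):
    """Algebraic distance of 3D points from an axis-aligned ellipsoid:
    sum((p_i - c_i)^2 / a_i^2) - 1. Negative inside, zero on the
    surface, positive outside; its magnitude serves as a simple
    object/background cue."""
    p = (np.asarray(points) - center) / semi_axes
    return np.sum(p * p, axis=1) - 1.0

center = np.array([0.0, 0.0, 0.0])
semi_axes = np.array([2.0, 1.0, 1.0])
pts = np.array([[0.0, 0.0, 0.0],   # inside the ellipsoid
                [2.0, 0.0, 0.0],   # on the surface
                [4.0, 0.0, 0.0]])  # outside
d = algebraic_distance(pts, center, semi_axes)
```

Points with small or negative distance are more likely to belong to the object, which is how the cue feeds the segmentation energy.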

  15. A tele-operated mobile ultrasound scanner using a light-weight robot.

    PubMed

    Delgorge, Cécile; Courrèges, Fabien; Al Bassit, Lama; Novales, Cyril; Rosenberger, Christophe; Smith-Guerin, Natalie; Brù, Concepció; Gilabert, Rosa; Vannoni, Maurizio; Poisson, Gérard; Vieyres, Pierre

    2005-03-01

    This paper presents a new tele-operated robotic chain for real-time ultrasound image acquisition and medical diagnosis. This system has been developed in the frame of the Mobile Tele-Echography Using an Ultralight Robot European Project. A light-weight six degrees-of-freedom serial robot, with a remote center of motion, has been specially designed for this application. It holds and moves a real probe on a distant patient according to the expert's gesture and permits image acquisition using a standard ultrasound device. The combination of the robot's mechanical structure and a dedicated control law, particularly near singular configurations, allows good path following and accurate robotized gestures. The choice of compression techniques for image transmission enables a compromise between data rate and quality. These combined approaches to robotics and image processing enable the medical specialist to better control the remote ultrasound probe holder system and to receive stable, good-quality ultrasound images to make a diagnosis via any type of communication link, from terrestrial to satellite. Clinical tests have been performed since April 2003, using either satellite or Integrated Services Digital Network lines with a theoretical bandwidth of 384 Kb/s. They showed the tele-echography system helped to identify 66% of lesions and 83% of symptomatic pathologies.

  16. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique

    PubMed Central

    Wang, Jun-Qiang; Wang, Yu; Feng, Yun; Han, Wei; Su, Yong-Gang; Liu, Wen-Yong; Zhang, Wei-Jun; Wu, Xin-Bao; Wang, Man-Yi; Fan, Yu-Bo

    2017-01-01

    Background: Sacroiliac (SI) screw fixation is a demanding technique, with a high rate of screw malposition due to the complex pelvic anatomy. TiRobot™ is an orthopedic surgery robot which can be used for SI screw fixation. This study aimed to evaluate the accuracy of robot-assisted placement of SI screws compared with a freehand technique. Methods: Thirty patients requiring posterior pelvic ring stabilization were randomized to receive freehand or robot-assisted SI screw fixation, between January 2016 and June 2016 at Beijing Jishuitan Hospital. Forty-five screws were placed at levels S1 and S2. In both groups, the primary end point, screw position, was assessed and classified using postoperative computed tomography. Fisher's exact probability test was used to analyze the screws' positions. Secondary end points, such as duration of trajectory planning, surgical time after reduction of the pelvis, insertion time for guide wire, number of guide wire attempts, and radiation exposure without pelvic reduction, were also assessed. Results: Twenty-three screws were placed in the robot-assisted group and 22 screws in the freehand group; no postoperative complications or revisions were reported. The excellent and good rate of screw placement was 100% in the robot-assisted group and 95% in the freehand group. The distribution of screw positions showed the same superiority (P = 0.009). The fluoroscopy time after pelvic reduction in the robot-assisted group was significantly shorter than that in the freehand group (median [Q1, Q3]: 6.0 [6.0, 9.0] s vs. median [Q1, Q3]: 36.0 [21.5, 48.0] s; χ2 = 13.590, respectively, P < 0.001); no difference in operation time after reduction of the pelvis was noted (χ2 = 1.990, P = 0.158). Time for guide wire insertion was significantly shorter for the robot-assisted group than that for the freehand group (median [Q1, Q3]: 2.0 [2.0, 2.7] min vs. median [Q1, Q3]: 19.0 [15.5, 45.0] min; χ2 = 20.952, respectively, P < 0.001).
    The number of guide wire attempts in the robot-assisted group was significantly smaller than that in the freehand group (median [Q1, Q3]: 1.0 [1.0, 1.0] time vs. median [Q1, Q3]: 7.0 [1.0, 9.0] times; χ2 = 15.771, respectively, P < 0.001). The instrumented SI levels did not differ between the two groups (from S1 to S2, χ2 = 4.760, P = 0.093). Conclusions: Accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible. PMID:29067950

  17. Percutaneous Sacroiliac Screw Placement: A Prospective Randomized Comparison of Robot-assisted Navigation Procedures with a Conventional Technique.

    PubMed

    Wang, Jun-Qiang; Wang, Yu; Feng, Yun; Han, Wei; Su, Yong-Gang; Liu, Wen-Yong; Zhang, Wei-Jun; Wu, Xin-Bao; Wang, Man-Yi; Fan, Yu-Bo

    2017-11-05

    Sacroiliac (SI) screw fixation is a demanding technique, with a high rate of screw malposition due to the complex pelvic anatomy. TiRobot™ is an orthopedic surgery robot which can be used for SI screw fixation. This study aimed to evaluate the accuracy of robot-assisted placement of SI screws compared with a freehand technique. Thirty patients requiring posterior pelvic ring stabilization were randomized to receive freehand or robot-assisted SI screw fixation, between January 2016 and June 2016 at Beijing Jishuitan Hospital. Forty-five screws were placed at levels S1 and S2. In both groups, the primary end point, screw position, was assessed and classified using postoperative computed tomography. Fisher's exact probability test was used to analyze the screws' positions. Secondary end points, such as duration of trajectory planning, surgical time after reduction of the pelvis, insertion time for guide wire, number of guide wire attempts, and radiation exposure without pelvic reduction, were also assessed. Twenty-three screws were placed in the robot-assisted group and 22 screws in the freehand group; no postoperative complications or revisions were reported. The excellent and good rate of screw placement was 100% in the robot-assisted group and 95% in the freehand group. The distribution of screw positions showed the same superiority (P = 0.009). The fluoroscopy time after pelvic reduction in the robot-assisted group was significantly shorter than that in the freehand group (median [Q1, Q3]: 6.0 [6.0, 9.0] s vs. median [Q1, Q3]: 36.0 [21.5, 48.0] s; χ2 = 13.590, respectively, P < 0.001); no difference in operation time after reduction of the pelvis was noted (χ2 = 1.990, P = 0.158). Time for guide wire insertion was significantly shorter for the robot-assisted group than that for the freehand group (median [Q1, Q3]: 2.0 [2.0, 2.7] min vs. median [Q1, Q3]: 19.0 [15.5, 45.0] min; χ2 = 20.952, respectively, P < 0.001).
    The number of guide wire attempts in the robot-assisted group was significantly smaller than that in the freehand group (median [Q1, Q3]: 1.0 [1.0, 1.0] time vs. median [Q1, Q3]: 7.0 [1.0, 9.0] times; χ2 = 15.771, respectively, P < 0.001). The instrumented SI levels did not differ between the two groups (from S1 to S2, χ2 = 4.760, P = 0.093). Accuracy of the robot-assisted technique was superior to that of the freehand technique. Robot-assisted navigation is safe for unstable posterior pelvic ring stabilization, especially in S1, but also in S2. SI screw insertion with robot-assisted navigation is clinically feasible.

  18. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  19. Manipulation of permanent magnetic polymer micro-robots: a new approach towards guided wireless capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Hilbich, D.; Rahbar, A.; Khosla, A.; Gray, B. L.

    2012-10-01

    We present the initial experimental results for manipulating micro-robots featuring permanent magnetic polymer magnets for guided wireless endoscopy applications. The magnetic polymers are fabricated by doping polydimethylsiloxane (PDMS) with permanent isotropic rare earth magnetic powder (MQFP 12-5) with an average particle size of 6 μm. The prepared magnetic nanocomposite polymer (M-NCP) is patterned in the desired shape against a plexiglass mold via soft lithography techniques. It is observed that the fabricated micro-robot magnets have a magnetic field strength of 50 mT and can easily be actuated by applying a field of 8.3 mT (field measured at the capsule's position) and moved at a rate of 5 inches/second.

  20. Robotic Materials Handling in Space: Mechanical Design of the Robot Operated Materials Processing System HitchHiker Experiment

    NASA Technical Reports Server (NTRS)

    Voellmer, George

    1997-01-01

    The Goddard Space Flight Center has developed the Robot Operated Materials Processing System (ROMPS) that flew aboard STS-64 in September, 1994. The ROMPS robot transported pallets containing wafers of different materials from their storage racks to a furnace for thermal processing. A system of tapered guides and compliant springs was designed to deal with the potential misalignments. The robot and all the sample pallets were locked down for launch and landing. The design of the passive lockdown system, and the interplay between it and the alignment system are presented.

  1. Can Robots and Humans Get Along?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    2007-06-01

    Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.

  2. The ROMPS robot in HitchHiker

    NASA Technical Reports Server (NTRS)

    Voellmer, George

    1992-01-01

    The Robotics Branch of the Goddard Space Flight Center is developing a robot that fits inside a Get Away Special can. In the RObotic Materials Processing System (ROMPS) HitchHiker experiment, this robot is used to transport pallets containing wafers of different materials from their storage rack to a halogen lamp furnace for rapid thermal processing in a microgravity environment. It then returns them to their storage rack. A large part of the mechanical design of the robot dealt with the potential misalignment between the various components that are repeatedly mated and demated. A system of tapered guides and compliant springs was designed to work within the robot's force and accuracy capabilities. This paper discusses the above and other robot design issues in detail, and presents examples of ROMPS robot analyses that are applicable to other HitchHiker materials handling missions.

  3. Tactile surface classification for limbed robots using a pressure sensitive robot skin.

    PubMed

    Shill, Jacob J; Collins, Emmanuel G; Coyle, Eric; Clark, Jonathan

    2015-02-02

    This paper describes an approach to terrain identification based on pressure images generated through direct surface contact using a robot skin constructed around a high-resolution pressure sensing array. Terrain signatures for classification are formulated from the magnitude frequency responses of the pressure images. The initial experimental results for statically obtained images show that the approach yields classification accuracies [Formula: see text]. The methodology is extended to accommodate the dynamic pressure images anticipated when a robot is walking or running. Experiments with a one-legged hopping robot yield similar identification accuracies [Formula: see text]. In addition, the accuracies are insensitive to changing robot dynamics (i.e., when using different leg gaits). The paper further shows that the high-resolution capabilities of the sensor enable similarly textured surfaces to be distinguished. A correcting filter is developed to compensate for failures or faults that inevitably occur within the sensing array with continued use. Experimental results show that using the correcting filter can extend the effective operational lifespan of a high-resolution sensing array by more than 6x in the presence of sensor damage. The results presented suggest this methodology can be extended to autonomous field robots, providing a robot with crucial information about the environment that can be used to aid stable and efficient mobility over rough and varying terrains.
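
    The signature-and-classify pipeline above can be sketched as follows; the low-frequency block size and the nearest-centroid classifier are illustrative stand-ins for the paper's actual feature length and classifier:

```python
import numpy as np

def terrain_signature(pressure_image, keep=8):
    """Terrain signature from the magnitude frequency response of a
    pressure image: 2-D FFT magnitude, keeping a low-frequency block
    as the feature vector (the block size is an illustrative choice)."""
    mag = np.abs(np.fft.fft2(pressure_image))
    return mag[:keep, :keep].ravel()

def classify(signature, centroids):
    """Nearest-centroid terrain classification over stored signatures
    (a stand-in for whichever classifier the authors actually used)."""
    return min(centroids,
               key=lambda name: np.linalg.norm(signature - centroids[name]))
```

In use, one signature per known terrain serves as a centroid, and a new pressure image is labeled with the terrain whose signature it most resembles.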

  4. Targeted vs systematic robot-assisted transperineal magnetic resonance imaging-transrectal ultrasonography fusion prostate biopsy.

    PubMed

    Mischinger, Johannes; Kaufmann, Sascha; Russo, Giorgio I; Harland, Niklas; Rausch, Steffen; Amend, Bastian; Scharpf, Marcus; Loewe, Lorenz; Todenhoefer, Tilman; Notohamiprodjo, Mike; Nikolaou, Konstantin; Stenzl, Arnulf; Bedke, Jens; Kruck, Stephan

    2018-05-01

    To evaluate the performance of transperineal robot-assisted (RA) targeted (TB) and systematic (SB) prostate biopsy in primary and repeat biopsy settings. Patients underwent RA biopsy between 2014 and 2016. Before RA-TB, multiparametric magnetic resonance imaging (mpMRI) was performed. Prostate lesions were scored (Prostate Imaging, Reporting and Data System, version 2) and used for RA-TB planning. In addition, RA-SB was performed. Available, whole-gland pathology was analysed. In all, 130 patients were biopsy naive and 72 had had a previous negative transrectal ultrasonography-guided biopsy. In total, 202 patients had suspicious mpMRI lesions. Clinically significant prostate cancer was found in 85% of all prostate cancer cases (n = 123). Total and clinically significant prostate cancer detection rates for RA-TB vs RA-SB were not significantly different at 77% vs 84% and 80% vs 82%, respectively. RA-TB demonstrated a better sampling performance compared to RA-SB (26.4% vs 13.9%; P < 0.001). Transperineal RA-TB and -SB showed similar clinically significant prostate cancer detection rates in primary and repeat biopsy settings. However, RA-TB offered a 50% reduction in biopsy cores. Omitting RA-SB is associated with a significant risk of missing clinically significant prostate cancer. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.

  5. Research on Modeling Technology of Virtual Robot Based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Huo, J. L.; Y Sun, L.; Y Hao, X.

    2017-12-01

    Because of its dangerous working environment, the underwater operation robot for nuclear power stations requires manual teleoperation, and the robot's position and orientation must be guided in real time during operation. In this paper, geometric modeling of the virtual robot and its working environment is accomplished using SolidWorks, realizing accurate modeling and assembly of the robot. LabVIEW is then used to read the model, establish the manipulator's forward and inverse kinematics models, and realize hierarchical modeling of the virtual robot together with computer graphics modeling. Experimental results show that the method studied in this paper can be successfully applied to a robot control system.
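
    The forward and inverse kinematics models mentioned above can be illustrated with a planar two-link sketch; the link lengths and the two-joint simplification are assumptions for the example, since the actual manipulator is not specified in the abstract:

```python
import math

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (illustrative link
    lengths): end-effector position from joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics (elbow-down solution)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A round trip (FK of some joint angles, then IK of the resulting position) recovers the original angles, which is the basic consistency check for such models.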

  6. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents methods to create local maps and to estimate the position of a mobile robot using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system. Every omnidirectional image acquired by the robot is described with a single global-appearance descriptor based on the Radon transform. Two different scenarios are considered. In the first, we assume the existence of a previously built map composed of omnidirectional images captured from known positions; the goal is to estimate which map position is nearest to the robot's current position, using the visual information acquired from that (unknown) position. In the second, we assume a model of the environment composed of omnidirectional images, but with no information about where the images were acquired; the goal is to build a local map and estimate the position of the robot within it. Both methods are tested with different databases (including virtual and real images), taking into consideration changes in the positions of objects in the environment, different lighting conditions, and occlusions. The results show the effectiveness and robustness of both methods. PMID:26501289
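
    The first scenario, localizing by comparing one global descriptor per image against a map of descriptors, can be sketched as follows. The descriptor here is a crude row/column-projection stand-in for the paper's Radon-transform descriptor, chosen only to keep the example self-contained; the nearest-neighbour matching step is the part the paper describes.

```python
import numpy as np

def global_descriptor(image):
    """Simplified global-appearance descriptor: normalized row and column
    sums (a crude stand-in for the Radon-transform profiles in the paper)."""
    rows = image.sum(axis=1).astype(float)
    cols = image.sum(axis=0).astype(float)
    d = np.concatenate([rows, cols])
    n = np.linalg.norm(d)
    return d / n if n > 0 else d

def localize(query_image, map_descriptors):
    """Return the index of the map position whose descriptor is nearest
    (Euclidean distance) to the descriptor of the query image."""
    q = global_descriptor(query_image)
    dists = [np.linalg.norm(q - m) for m in map_descriptors]
    return int(np.argmin(dists))
```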

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorum, O.H.; Hoover, A.; Jones, J.P.

    This paper addresses some issues in the development of sensor-based systems for mobile robot navigation which use range imaging sensors as the primary source for geometric information about the environment. In particular, we describe a model of scanning laser range cameras which takes into account the properties of the mechanical system responsible for image formation and a calibration procedure which yields improved accuracy over previous models. In addition, we describe an algorithm which takes the limitations of these sensors into account in path planning and path execution. In particular, range imaging sensors are characterized by a limited field of view and a standoff distance -- a minimum distance nearer than which surfaces cannot be sensed. These limitations can be addressed by enriching the concept of configuration space to include information about what can be sensed from a given configuration, and using this information to guide path planning and path following.
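
    The idea of folding a sensor's standoff limit into planning can be illustrated with a toy grid planner that refuses to pass within a standoff distance of any obstacle. This is a stand-in for the paper's enriched configuration space: the occupancy grid, BFS search, and Chebyshev standoff metric are illustrative assumptions, not the authors' method.

```python
from collections import deque

def plan_path(grid, start, goal, standoff=1):
    """Grid BFS that keeps the robot at least `standoff` cells away from any
    obstacle cell (grid value 1), mimicking a minimum sensing distance."""
    rows, cols = len(grid), len(grid[0])

    def too_close(r, c):
        # True if any obstacle lies within Chebyshev distance `standoff`.
        for dr in range(-standoff, standoff + 1):
            for dc in range(-standoff, standoff + 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc]:
                    return True
        return False

    frontier = deque([start])
    parent = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            nxt = (nr, nc)
            if (0 <= nr < rows and 0 <= nc < cols
                    and nxt not in parent and not too_close(nr, nc)):
                frontier.append(nxt)
                parent[nxt] = cur
    return None  # no standoff-respecting path exists
```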

  8. Smart mobile robot system for rubbish collection

    NASA Astrophysics Data System (ADS)

    Ali, Mohammed A. H.; Sien Siang, Tan

    2018-03-01

    This paper records the research and procedures involved in developing a smart mobile robot with a detection system for collecting rubbish. The objective is to design a mobile robot that can detect and recognize medium-size rubbish such as drink cans, estimate the position of the rubbish relative to the robot, and approach the rubbish based on that estimated position. The paper reviews types of image processing, detection and recognition methods, and image filters. The project implements an RGB subtraction method as the primary detection system, together with an algorithm for distance measurement based on the image plane. The project is limited to using a computer webcam as the sensor; consequently, the robot can only approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains distinct RGB colour components.
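
    The two building blocks named above, RGB subtraction and image-plane distance estimation, can be sketched as follows. The channel to subtract, the threshold, and the pinhole-model parameters are all hypothetical; the paper does not give its actual constants.

```python
import numpy as np

def detect_red(image, thresh=60):
    """RGB-subtraction detector: flags pixels whose red channel exceeds the
    larger of green and blue by more than `thresh` (8-bit H x W x 3 image)."""
    img = image.astype(int)  # avoid uint8 underflow in the subtraction
    return (img[:, :, 0] - np.maximum(img[:, :, 1], img[:, :, 2])) > thresh

def estimate_distance(pixel_height, real_height_m, focal_px):
    """Pinhole-camera range estimate: distance = f * H / h, where h is the
    object's height in pixels and H its assumed real height in metres."""
    return focal_px * real_height_m / pixel_height
```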

  9. Theoretical neutron damage calculations in industrial robotic manipulators used for non-destructive imaging applications

    DOE PAGES

    Hashem, Joseph; Schneider, Erich; Pryor, Mitch; ...

    2017-01-01

    Our paper describes how to use MCNP to evaluate the rate of material damage in a robot incurred by exposure to a neutron flux. The example used in this work is that of a robotic manipulator installed in a high intensity, fast, and collimated neutron radiography beam port at the University of Texas at Austin's TRIGA Mark II research reactor. Our effort includes taking robotic technologies and using them to automate non-destructive imaging tasks in nuclear facilities where the robotic manipulator acts as the motion control system for neutron imaging tasks. Simulated radiation tests are used to analyze the radiation damage to the robot. Once the neutron damage is calculated using MCNP, several possible shielding materials are analyzed to determine the most effective way of minimizing the neutron damage. Furthermore, neutron damage predictions provide users the means to simulate geometrical and material changes, thus saving time, money, and energy in determining the optimal setup for a robotic system installed in a radiation environment.

  10. Theoretical neutron damage calculations in industrial robotic manipulators used for non-destructive imaging applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashem, Joseph; Schneider, Erich; Pryor, Mitch

    Our paper describes how to use MCNP to evaluate the rate of material damage in a robot incurred by exposure to a neutron flux. The example used in this work is that of a robotic manipulator installed in a high intensity, fast, and collimated neutron radiography beam port at the University of Texas at Austin's TRIGA Mark II research reactor. Our effort includes taking robotic technologies and using them to automate non-destructive imaging tasks in nuclear facilities where the robotic manipulator acts as the motion control system for neutron imaging tasks. Simulated radiation tests are used to analyze the radiation damage to the robot. Once the neutron damage is calculated using MCNP, several possible shielding materials are analyzed to determine the most effective way of minimizing the neutron damage. Furthermore, neutron damage predictions provide users the means to simulate geometrical and material changes, thus saving time, money, and energy in determining the optimal setup for a robotic system installed in a radiation environment.

  11. Embedded mobile farm robot for identification of diseased plants

    NASA Astrophysics Data System (ADS)

    Sadistap, S. S.; Botre, B. A.; Pandit, Harshavardhan; Chandrasekhar; Rao, Adesh

    2013-07-01

    This paper presents the development of a mobile robot used in farms for the identification of diseased plants. It addresses two major aspects of robotics, namely automated navigation and image processing. The robot navigates on the basis of GPS (Global Positioning System) location and data obtained from IR (infrared) sensors to avoid any obstacles in its path. It uses an image processing algorithm to differentiate between diseased and non-diseased plants. A robotic platform consisting of an ARM9 processor, motor drivers, robot mechanical assembly, camera, and infrared sensors has been used. A Mini2440 microcontroller board running an embedded Linux OS (operating system) is employed.
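
    The abstract does not give the navigation algorithm, but GPS-based waypoint navigation of this kind typically steers the robot toward the great-circle bearing of the next waypoint, computed from two fixes. A sketch of that standard bearing calculation (an assumption, not the paper's code):

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees (0 = north, clockwise) from
    the robot's current GPS fix (lat1, lon1) to the waypoint (lat2, lon2)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
```

    The robot would compare this bearing against its compass heading and turn accordingly, deferring to the IR sensors when an obstacle intervenes.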

  12. Design and development of an ultrasound calibration phantom and system

    NASA Astrophysics Data System (ADS)

    Cheng, Alexis; Ackerman, Martin K.; Chirikjian, Gregory S.; Boctor, Emad M.

    2014-03-01

    Image-guided surgery systems are often used to provide surgeons with informational support. Due to several unique advantages such as ease of use, real-time image acquisition, and no ionizing radiation, ultrasound is a common medical imaging modality used in image-guided surgery systems. To perform advanced forms of guidance with ultrasound, such as virtual image overlays or automated robotic actuation, an ultrasound calibration process must be performed. This process recovers the rigid body transformation between a tracked marker attached to the ultrasound transducer and the ultrasound image. A phantom or model with known geometry is also required. In this work, we design and test an ultrasound calibration phantom and software. The two main considerations in this work are utilizing our knowledge of ultrasound physics to design the phantom and delivering an easy to use calibration process to the user. We explore the use of a three-dimensional printer to create the phantom in its entirety without need for user assembly. We have also developed software to automatically segment the three-dimensional printed rods from the ultrasound image by leveraging knowledge about the shape and scale of the phantom. In this work, we present preliminary results from using this phantom to perform ultrasound calibration. To test the efficacy of our method, we match the projection of the points segmented from the image to the known model and calculate a sum squared difference between each point for several combinations of motion generation and filtering methods. The best performing combination of motion and filtering techniques had an error of 1.56 mm and a standard deviation of 1.02 mm.
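
    The efficacy test above matches the points segmented from the image to the known phantom model and computes a sum squared difference. Assuming point correspondences are already established, that error metric reduces to the following sketch (the rigid transform (R, t) stands for a candidate calibration; all inputs are hypothetical):

```python
import numpy as np

def ssd_error(model_pts, image_pts, R, t):
    """Sum of squared distances between model points (N x 3) mapped by a
    candidate calibration (rotation R, translation t) and the corresponding
    points segmented from the ultrasound image."""
    mapped = model_pts @ R.T + t
    return float(np.sum((mapped - image_pts) ** 2))
```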

  13. Developing stereo image based robot control system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suprijadi,; Pambudi, I. R.; Woran, M.

    Applications of image processing have been developed in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with increasing hardware and microprocessor performance, and many fields of science and technology have adopted these methods, especially medicine and instrumentation. New stereovision techniques that produce 3-dimensional images or movies are very interesting, but have few applications in control systems. A stereo image contains pixel disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled robot control system using stereovision. The result shows that the robot moves automatically based on stereovision captures.
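
    The pixel disparity the abstract refers to can be recovered by block matching between rectified stereo images. A minimal brute-force sketch for a single pixel (window size, search range, and the SAD cost are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def disparity_at(left, right, row, col, max_disp=8, half=2):
    """Brute-force block matching: slide a (2*half+1)^2 patch from the left
    image along the same row of the right image and return the disparity
    with the smallest sum of absolute differences (SAD)."""
    patch = left[row - half:row + half + 1, col - half:col + half + 1].astype(int)
    best_cost, best_d = None, 0
    for d in range(max_disp + 1):
        c = col - d
        if c - half < 0:
            break
        cand = right[row - half:row + half + 1, c - half:c + half + 1].astype(int)
        cost = np.abs(patch - cand).sum()
        if best_cost is None or cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

    With calibrated cameras, depth is then proportional to baseline times focal length divided by disparity, which is the geometric quantity a stereovision controller acts on.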

  14. Robotic Assistance for Ultrasound-Guided Prostate Brachytherapy

    PubMed Central

    Fichtinger, Gabor; Fiene, Jonathan P.; Kennedy, Christopher W.; Kronreif, Gernot; Iordachita, Iulian; Song, Danny Y.; Burdette, Everette C.; Kazanzides, Peter

    2016-01-01

    We present a robotically assisted prostate brachytherapy system and test results in training phantoms and Phase-I clinical trials. The system consists of a transrectal ultrasound (TRUS) and a spatially co-registered robot, fully integrated with an FDA-approved commercial treatment planning system. The salient feature of the system is a small parallel robot affixed to the mounting posts of the template. The robot replaces the template interchangeably, using the same coordinate system. Established clinical hardware, workflow and calibration remain intact. In all phantom experiments, we recorded the first insertion attempt without adjustment. All clinically relevant locations in the prostate were reached. Non-parallel needle trajectories were achieved. The pre-insertion transverse and rotational errors (measured with a Polaris optical tracker relative to the template’s coordinate frame) were 0.25mm (STD=0.17mm) and 0.75° (STD=0.37°). In phantoms, needle tip placement errors measured in TRUS were 1.04mm (STD=0.50mm). A Phase-I clinical feasibility and safety trial has been successfully completed with the system. We encountered needle tip positioning errors of a magnitude greater than 4mm in only 2 out of 179 robotically guided needles, in contrast to manual template guidance where errors of this magnitude are much more common. Further clinical trials are necessary to determine whether the apparent benefits of the robotic assistant will lead to improvements in clinical efficacy and outcomes. PMID:18650122

  15. Vision-based obstacle avoidance

    DOEpatents

    Galbraith, John [Los Alamos, NM

    2006-07-18

    A method for allowing a robot to avoid objects along a programmed path: first, a field of view for an electronic imager of the robot is established along a path where the electronic imager obtains the object location information within the field of view; second, a population coded control signal is then derived from the object location information and is transmitted to the robot; finally, the robot then responds to the control signal and avoids the detected object.
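
    The patent abstract mentions a "population coded" control signal derived from object locations. A minimal sketch of population coding with Gaussian tuning curves and a population-vector decode; the neuron count and tuning width here are hypothetical, not taken from the patent:

```python
import math

def population_code(obstacle_angle_deg, n_neurons=8, sigma_deg=30.0):
    """Encode an obstacle bearing as the activities of neurons whose
    preferred directions tile the circle, with Gaussian tuning curves."""
    preferred = [i * 360.0 / n_neurons for i in range(n_neurons)]
    acts = []
    for p in preferred:
        diff = (obstacle_angle_deg - p + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        acts.append(math.exp(-0.5 * (diff / sigma_deg) ** 2))
    return preferred, acts

def decode_direction(preferred, activities):
    """Population-vector decode: activity-weighted circular mean, degrees."""
    x = sum(a * math.cos(math.radians(p)) for p, a in zip(preferred, activities))
    y = sum(a * math.sin(math.radians(p)) for p, a in zip(preferred, activities))
    return math.degrees(math.atan2(y, x)) % 360.0
```

    A robot controller would steer away from the decoded direction; the distributed encoding makes the signal robust to noise in any single unit.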

  16. Robot-guided ankle sensorimotor rehabilitation of patients with multiple sclerosis.

    PubMed

    Lee, Yunju; Chen, Kai; Ren, Yupeng; Son, Jongsang; Cohen, Bruce A; Sliwa, James A; Zhang, Li-Qun

    2017-01-01

    People with multiple sclerosis (MS) often develop symptoms including muscle weakness, spasticity, imbalance, and sensory loss in the lower limbs, especially at the ankle, which result in impaired balance and locomotion and increased risk of falls. Rehabilitation strategies that improve ankle function may improve mobility and safety of ambulation in patients with MS. This pilot study investigated effectiveness of a robot-guided ankle passive-active movement training in reducing motor and sensory impairments and improving balance and gait functions. Seven patients with MS participated in combined passive stretching and active movement training using an ankle rehabilitation robot. Six of the patients finished robotic training 3 sessions per week over 6 weeks for a total of 18 sessions. Biomechanical and clinical outcome evaluations were done before and after the 6-week treatment, and at a follow-up six weeks afterwards. After six-week ankle sensorimotor training, there were increases in active range of motion in dorsiflexion, dorsiflexor and plantar flexor muscle strength, and balance and locomotion (p<0.05). Proprioception acuity showed a trend of improvement. Improvements in four biomechanical outcome measures and two of the clinical outcome measures were maintained at the 6-week follow-up. The study showed the six-week training duration was appropriate to see improvement of range of motion and strength for MS patients with ankle impairment. Robot-guided ankle training is potentially a useful therapeutic intervention to improve mobility in patients with MS. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    PubMed

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92 % of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802 .
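
    The core of shared control is arbitrating between the BMI-derived command and the autonomous grasping command. The study's arbitration policy was more sophisticated than a fixed mix, so the linear blend below is only an illustrative sketch; the fixed assist level is an assumption.

```python
def blend_commands(bmi_vel, auto_vel, assist_level):
    """Linear shared-control blend of two velocity commands (same length).
    assist_level 0.0 gives pure BMI control, 1.0 gives pure autonomous
    control; intermediate values mix the two."""
    a = min(max(assist_level, 0.0), 1.0)  # clamp to [0, 1]
    return tuple((1.0 - a) * b + a * g for b, g in zip(bmi_vel, auto_vel))
```

    In a real system the assist level would rise as the hand nears a detected grasp target and fall when the user's intent diverges from the autonomous plan.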

  18. Equipment and technology in surgical robotics.

    PubMed

    Sim, Hong Gee; Yip, Sidney Kam Hung; Cheng, Christopher Wai Sam

    2006-06-01

    Contemporary medical robotic systems used in urologic surgery usually consist of a computer and a mechanical device to carry out the designated task with an image acquisition module. These systems are typically from one of the two categories: offline or online robots. Offline robots, also known as fixed path robots, are completely automated with pre-programmed motion planning based on pre-operative imaging studies where precise movements within set confines are carried out. Online robotic systems rely on continuous input from the surgeons and change their movements and actions according to the input in real time. This class of robots is further divided into endoscopic manipulators and master-slave robotic systems. Current robotic surgical systems have resulted in a paradigm shift in the minimally invasive approach to complex laparoscopic urological procedures. Future developments will focus on refining haptic feedback, system miniaturization and improved augmented reality and telesurgical capabilities.

  19. The Power of Educational Robotics

    NASA Astrophysics Data System (ADS)

    Cummings, Timothy

    The purpose of this action research project was to investigate the impact a student's participation in educational robotics has on his or her performance in the STEM subjects. This study attempted to utilize educational robotics as a method for increasing student achievement and engagement in STEM subjects. Over the course of 12 weeks, an after-school robotics program was offered to students. Guided by the standards and principles of VEX IQ, a leading resource in educational robotics, students worked in collaboration on creating a design for their robot, building and testing their robot, and competing in the VEX IQ Crossover Challenge. Student data were gathered through a pre-participation survey, observations of the work they performed in robotics club, their performance in STEM subject classes, and analysis of their end-of-the-year report card. Results suggest that the students who participated in robotics club experienced a positive impact on their performance in STEM subject classes.

  20. A Web-Remote/Robotic/Scheduled Astronomical Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Denny, Robert

    2011-03-01

    Traditionally, remote/robotic observatory operating systems have been custom made for each observatory. While data reduction pipelines need to be tailored for each investigation, the data acquisition process (especially for stare-mode optical images) is often quite similar across investigations. Since 1999, DC-3 Dreams has focused on providing and supporting a remote/robotic observatory operating system which can be adapted to a wide variety of physical hardware and optics while achieving the highest practical observing efficiency and safe/secure web browser user controls. ACP Expert consists of three main subsystems: (1) a robotic list-driven data acquisition engine which controls all aspects of the observatory, (2) a constraint-driven dispatch scheduler with a long-term database of requests, and (3) a built-in "zero admin" web server and dynamic web pages which provide a remote capability for immediate execution and monitoring as well as entry and monitoring of dispatch-scheduled observing requests. No remote desktop login is necessary for observing, thus keeping the system safe and consistent. All routine operation is via the web browser. A wide variety of telescope mounts, CCD imagers, guiding sensors, filter selectors, focusers, instrument-package rotators, weather sensors, and dome control systems are supported via the ASCOM standardized device driver architecture. The system is most commonly employed on commercial 1-meter and smaller observatories used by universities and advanced amateurs for both science and art. One current project, the AAVSO Photometric All-Sky Survey (APASS), uses ACP Expert to acquire large volumes of data in dispatch-scheduled mode. In its first 18 months of operation (North then South), 40,307 sky images were acquired in 117 photometric nights, resulting in 12,107,135 stars detected two or more times. These stars had measures in 5 filters. 
The northern station covered 754 fields (6446 square degrees) at least twice, the southern station covered 951 fields (8500 square degrees) at least twice. The database of photometric calibrations is available from AAVSO. The paper will cover the ACP web interface, including the use of AJAX and JSON within a micro-content framework, as well as dispatch scheduler and acquisition engine operation.

  1. Have I Been Here Before? A Method for Detecting Loop Closure With LiDAR

    DTIC Science & Technology

    2015-01-01

    [Abstract garbled in extraction; recoverable fragments:] a mobile robot system has the unfortunate task of exploring a system of austere underground tunnels with only a laser scanner as a guide; techniques for using mobile robots to generate detailed maps of different environments [...] over long durations; this is especially true for applications involving small mobile robots, where sensor drift and inaccuracies can cause significant mistakes.

  2. A simple, inexpensive, and effective implementation of a vision-guided autonomous robot

    NASA Astrophysics Data System (ADS)

    Tippetts, Beau; Lillywhite, Kirt; Fowers, Spencer; Dennis, Aaron; Lee, Dah-Jye; Archibald, James

    2006-10-01

    This paper discusses a simple, inexpensive, and effective implementation of a vision-guided autonomous robot. This implementation is a second-year entry by Brigham Young University students in the Intelligent Ground Vehicle Competition. The objective of the robot was to navigate a course constructed of white boundary lines and orange obstacles for the autonomous competition. A secondhand electric wheelchair, purchased from a local thrift store for $28, served as the robot base. The base was modified to include Kegresse tracks using a friction drum system; this modification allowed the robot to perform better on a variety of terrains, resolving issues with the previous year's design. To control the wheelchair while retaining its robust motor controls, the joystick was simply removed and replaced with a printed circuit board that emulated joystick operation and could receive commands through a serial port connection. Three different algorithms were implemented and compared: a purely reactive approach, a potential-fields approach, and a machine-learning approach. Each algorithm used color segmentation to interpret data from a digital camera and identify the features of the course. This paper will be useful to those interested in implementing an inexpensive vision-based autonomous robot.
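
    Of the three approaches compared, the purely reactive one is the easiest to sketch: given an already color-segmented boolean mask of course features (boundary lines or obstacles), steer away from the side of the image containing more of them. The left/right balance rule below is an illustrative assumption, not the team's actual controller.

```python
import numpy as np

def reactive_steer(mask):
    """Purely reactive rule over a boolean feature mask (H x W): return a
    turn command in [-1, 1], negative meaning turn left, steering the robot
    away from the image half that contains more segmented feature pixels."""
    h, w = mask.shape
    left = mask[:, : w // 2].sum()
    right = mask[:, w // 2:].sum()
    total = left + right
    if total == 0:
        return 0.0  # clear view: drive straight
    return float((left - right) / total)
```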

  3. Multi-Robot, Multi-Target Particle Swarm Optimization Search in Noisy Wireless Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurt Derr; Milos Manic

    Multiple small robots (swarms) can work together using Particle Swarm Optimization (PSO) to perform tasks that are difficult or impossible for a single robot to accomplish. The problem considered in this paper is exploration of an unknown environment with the goal of finding targets at unknown locations using multiple small mobile robots. This work demonstrates the use of a distributed PSO algorithm with a novel adaptive RSS (received signal strength) weighting factor to guide robots in locating targets in high-risk environments. The approach was developed and analyzed on multi-robot single-target and multi-target search, and further evaluated on multi-robot, multi-target search in noisy environments. The experimental results demonstrate how the availability of a radio-frequency signal can significantly affect the time a robot needs to reach a target.
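
    A minimal serial PSO sketch for locating the maximum of an RSS-like fitness function in 2-D. The paper's algorithm is distributed across robots and adds an adaptive RSS weighting factor, neither of which is reproduced here; the inertia and acceleration constants are conventional textbook values.

```python
import random

def pso_search(rss, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm: particles move under inertia plus attraction to
    their personal best and the global best of the fitness `rss`, clamped to
    the square search area `bounds = (lo, hi)`. Returns the best 2-D point."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi), rng.uniform(lo, hi)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [rss(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = rss(pos[i])
            if v > pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v > gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest
```

    With signal strength modeled as decreasing with distance from a transmitter, the swarm converges on the transmitter's location.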

  4. Treatment of early non-small cell lung cancer, stage IA, by image-guided robotic stereotactic radioablation--CyberKnife.

    PubMed

    Brown, William T; Wu, Xiaodong; Amendola, Beatriz; Perman, Mark; Han, Hoke; Fayad, Fahed; Garcia, Silvio; Lewin, Alan; Abitbol, Andre; de la Zerda, Alberto; Schwade, James G

    2007-01-01

    To evaluate the efficacy of using image-guided robotic stereotactic radioablation as an alternative treatment modality for patients with surgically resectable, but medically inoperable, T1 N0 M0, stage IA non-small cell lung cancer. Between January 2004 and May 2006, 19 patients, 11 women and 8 men ranging in age from 52 to 88 years, with stage IA non-small cell lung cancer were treated. Tumor volume ranged from 1.7 to 13 mL. Total doses ranged from 24 to 60 Gy delivered in 3 fractions. Eleven patients received 60 Gy. Real-time target localization was accomplished by radiographic detection of fiducial marker(s) implanted within the tumor combined with respiratory motion tracking. All patients tolerated radioablation well with fatigue as the main side effect. Fourteen patients are alive from 1 to 25 months posttreatment. Four patients died: 2 of comorbid disease and 2 of cancer progression (status post 60 and 55.5 Gy). Three patients developed grade I radiation pneumonitis. Two patients have stable disease. In 3 patients, cancer recurred in the planning treatment volume: in 2 patients after treatment with 60 Gy and in 1 patient after treatment with 55.5 Gy. One patient had local control in the target volume but developed metastasis to the ipsilateral hilum. Nine patients had a complete response and show no evidence of disease. In our early experience, stereotactic radioablation using the CyberKnife system appears to be a safe, minimally invasive, and effective modality for treating early stage lung cancer in patients with medically inoperable disease. Dose escalation and/or increasing the treatment volumes, with the aid of the high conformality of this technique, may help to achieve further improvements in these promising results.

  5. Behavioral similarity measurement based on image processing for robots that use imitative learning

    NASA Astrophysics Data System (ADS)

    Sterpin B., Dante G.; Martinez S., Fernando; Jacinto G., Edwar

    2017-02-01

    In the field of artificial societies, particularly those based on memetics, imitative behavior is essential for the development of cultural evolution. Applying this concept to robotics, a robot can acquire behavioral patterns from another robot through imitative learning. Assuming that the learning process must have an instructor and at least one apprentice, obtaining a quantitative measurement of their behavioral similarity would be potentially useful, especially in artificial social systems focused on cultural evolution. In this paper, the motor behavior of both kinds of robots performing two simple tasks is represented by 2D binary images, which are processed in order to measure behavioral similarity. The results shown here were obtained by comparing several similarity measurement methods for binary images.
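
    One of the simplest similarity measurements between binary behavior images is the Jaccard index, sketched below. The paper compares several measures and does not necessarily use this one; it is shown only as a representative of the family.

```python
import numpy as np

def behavior_similarity(a, b):
    """Jaccard index between two binary images of motor trajectories:
    |intersection| / |union|, so 1.0 means identical traces and 0.0 means
    no overlap. Two empty images are defined as identical."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(a, b).sum() / union)
```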

  6. 2011 Bayou Regional

    NASA Image and Video Library

    2011-03-19

    Students from 38 high school teams in seven states competed for top honors during the 2011 FIRST (For Inspiration and Recognition of Science and Technology) Robotics Bayou Regional competition held March 17-19 in the New Orleans area. In this photo, members of the robotics team from Gulfport High School guide their robot during the annual tournament. The robotics competition is designed to help encourage students to pursue studies and careers in the areas of science, technology, engineering and mathematics. John C. Stennis Space Center is a supporter of FIRST activities and the Bayou Regional tournament.

  7. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.
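
    The classical hand-eye equation AX = XB implies that the rotation axes of paired robot and sensor motions are related by alpha_i = R_X beta_i, so the rotational part of X can be recovered as a Kabsch (SVD) fit between the two axis sets. This is a textbook simplification for illustration only; it does not reproduce the paper's joint formulation over 24 joint parameters and six pose parameters.

```python
import numpy as np

def rotvec(R):
    """Axis-angle (rotation vector) of a rotation matrix, generic case
    (angle away from 0 and pi)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return angle * axis / (2.0 * np.sin(angle))

def rotmat(axis, angle):
    """Rodrigues formula: rotation matrix from an axis and an angle."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def handeye_rotation(RA_list, RB_list):
    """Estimate R_X from motion pairs satisfying R_A R_X = R_X R_B: the
    rotation vectors obey alpha_i = R_X beta_i, so R_X is the Kabsch fit
    mapping the beta axes onto the alpha axes. Needs >= 2 independent axes."""
    A = np.array([rotvec(RA) for RA in RA_list])
    B = np.array([rotvec(RB) for RB in RB_list])
    H = B.T @ A
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

    In practice the translational part is then solved from the same pairs by linear least squares, and a criterion object such as the paper's sphere supplies the measurements.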

  8. Iconic memory-based omnidirectional route panorama navigation.

    PubMed

    Yagi, Yasushi; Imai, Kousuke; Tsuji, Kentaro; Yachida, Masahiko

    2005-01-01

    A route navigation method for a mobile robot with an omnidirectional image sensor is described. The route is memorized from a series of consecutive omnidirectional images of the horizon when the robot moves to its goal. While the robot is navigating to the goal point, input is matched against the memorized spatio-temporal route pattern by using dual active contour models and the exact robot position and orientation is estimated from the converged shape of the active contour models.

  9. A new technique for robot vision in autonomous underwater vehicles using the color shift in underwater imaging

    DTIC Science & Technology

    2017-06-01

    A New Technique for Robot Vision in Autonomous Underwater Vehicles Using the Color Shift in Underwater Imaging, by Jake A. Jones; Master's thesis, June 2017. [Abstract garbled in extraction; recoverable fragment:] developing a technique for underwater robot vision is a key factor in establishing autonomy in underwater vehicles; a new technique is developed and [...]

  10. Small, Lightweight Inspection Robot With 12 Degrees Of Freedom

    NASA Technical Reports Server (NTRS)

    Lee, Thomas S.; Ohm, Timothy R.; Hayati, Samad

    1996-01-01

    Small serpentine robot weighs only 6 lbs. and has link diameter of 1.5 in. Designed to perform inspections. Multiple degrees of freedom enables it to reach around obstacles and through small openings into simple or complexly shaped confined spaces to positions where difficult or impossible to perform inspections by other means. Fiber-optic borescope incorporated into robot arm, with inspection tip of borescope located at tip of arm. Borescope both conveys light along robot arm to illuminate scene inspected at tip and conveys image of scene back along robot arm to external imaging equipment.

  11. [Digital imaging and robotics in endoscopic surgery].

    PubMed

    Go, P M

    1998-05-23

    The introduction of endoscopic surgery has, among other things, influenced technical developments in surgery. Owing to digitalisation, major progress will be made in imaging and in the sophisticated technology sometimes called robotics. Digital storage makes the results of imaging diagnostics (e.g. radiological examination) suitable for transmission via video-conference systems for telediagnostic purposes. The availability of digital video technology also makes possible the processing, storage, and retrieval of moving images. During endoscopic operations, a robot arm may replace the camera operator: the arm does not tire and provides a stable image. The surgeon can operate or address the arm himself, and it can remember fixed image positions to which it can return on command. The next step is to carry out surgical manipulations via a robot arm, which may make operations more patient-friendly. A robot arm can also be controlled remotely: telerobotics. At the Internet site of this journal a number of supplements to this article can be found, for instance three-dimensional (3D) illustrations (the purpose of the 3D spectacles enclosed with this issue) and a quiz (http:@appendix.niwi.knaw.nl).

  12. CHAMP (Camera, Handlens, and Microscope Probe)

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
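The z-stacking mentioned above can be illustrated with a minimal focus-stacking sketch: for each pixel, keep the value from the slice of the focal stack that is locally sharpest. This is a generic illustration, not CHAMP's actual pipeline; the gradient-energy sharpness cue and the synthetic two-slice stack are assumptions for the demo.

```python
import numpy as np

def focus_stack(stack):
    """Fuse a focal stack (n, H, W): keep, per pixel, the value from the
    slice with the highest local gradient energy (a crude sharpness cue)."""
    stack = np.asarray(stack, dtype=float)
    sharp = np.empty_like(stack)
    for i, img in enumerate(stack):
        gy, gx = np.gradient(img)
        sharp[i] = gx ** 2 + gy ** 2
    best = np.argmax(sharp, axis=0)          # index of sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# Synthetic demo: slice 0 is in focus on the left half, slice 1 on the right.
rng = np.random.default_rng(0)
texture = rng.random((8, 16))                # stands in for sharp micro-texture
flat = np.full((8, 16), 0.5)                 # stands in for a defocused region
s0 = np.hstack([texture[:, :8], flat[:, 8:]])
s1 = np.hstack([flat[:, :8], texture[:, 8:]])
fused = focus_stack([s0, s1])
```

Real implementations additionally smooth the per-pixel slice choice over a neighborhood so the fused image is free of speckle.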

  13. Image registration: enabling technology for image guided surgery and therapy.

    PubMed

    Sauer, Frank

    2005-01-01

Imaging looks inside the patient's body, exposing the patient's anatomy beyond what is visible on the surface. Medical imaging has a very successful history in medical diagnosis. It also plays an increasingly important role as an enabling technology for minimally invasive procedures. Interventional procedures (e.g. catheter-based cardiac interventions) are traditionally supported by intra-procedure imaging (X-ray fluoroscopy, ultrasound). There is real-time feedback, but the images provide limited information. Surgical procedures are traditionally supported by pre-operative images (CT, MR). The image quality can be very good; however, the link between images and patient has been lost. In both cases, image registration can play an essential role: augmenting intra-op images with pre-op images, and mapping pre-op images onto the patient's body. We present examples of both approaches from an application-oriented perspective, covering electrophysiology, radiation therapy, and neurosurgery. Ultimately, as the boundaries between interventional radiology and surgery blur, the different methods for image guidance will also merge. Image guidance will draw upon a combination of pre-op and intra-op imaging together with magnetic or optical tracking systems, and will enable precise minimally invasive procedures. The information is registered into a common coordinate system, enabling advanced methods for visualization, such as augmented reality, and advanced methods for therapy delivery, such as robotics.
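Registering pre-op images and tracking data into a common coordinate system is, at its core, a rigid registration problem. A minimal sketch of paired-point rigid registration (the SVD-based Arun/Kabsch method commonly used with fiducial markers; the demo points and transform are synthetic, not from the paper):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t, via the
    SVD-based Arun/Kabsch method on paired fiducial points (n x 3)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Demo: recover a known rotation about z plus a translation from 6 fiducials.
rng = np.random.default_rng(1)
P = rng.random((6, 3))                           # "pre-op" fiducial positions
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -2.0, 5.0])
Q = P @ Rz.T + t_true                            # "intra-op" fiducial positions
R, t = rigid_register(P, Q)
```

With noisy fiducials the same solve minimizes the fiducial registration error in the least-squares sense rather than recovering the transform exactly.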

  14. Robot-Aided Neurorehabilitation: A Pediatric Robot for Ankle Rehabilitation

    PubMed Central

    Michmizos, Konstantinos P.; Rossi, Stefano; Castelli, Enrico; Cappa, Paolo; Krebs, Hermano Igo

    2015-01-01

This paper presents the pediAnklebot, an impedance-controlled, low-friction, backdrivable robotic device developed at the Massachusetts Institute of Technology that trains the ankle of neurologically impaired children aged 6-10 years. The design attempts to overcome the known limitations of lower-extremity robotics and the unknown difficulties of what constitutes an appropriate therapeutic interaction with children. The robot's pilot clinical evaluation is ongoing, and it incorporates our recent findings on ankle sensorimotor control in neurologically intact subjects, namely the speed-accuracy tradeoff, the deviation from an ideally smooth ankle trajectory, and the reaction time. We used these concepts to develop the kinematic and kinetic performance metrics that guided the ankle therapy, in a fashion similar to what we have done for our upper-extremity devices. Here we report on the use of the device in at least 9 training sessions for 3 neurologically impaired children. Results demonstrated a statistically significant improvement in the performance metrics assessing explicit and implicit motor learning. Based on these initial results, we are confident that the device will become an effective tool that harnesses plasticity to guide habilitation during childhood. PMID:25769168
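One of the cues mentioned, deviation from an ideally smooth trajectory, is often quantified against a minimum-jerk reference profile. The paper's exact metrics are not reproduced here; this is a generic sketch of that idea, with illustrative trajectories:

```python
import numpy as np

def min_jerk(x0, xf, t, T):
    """Minimum-jerk (maximally smooth) point-to-point position profile."""
    tau = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def smoothness_deviation(traj, t, T):
    """RMS deviation of a recorded trajectory from the minimum-jerk
    profile between its own endpoints (lower = smoother movement)."""
    ideal = min_jerk(traj[0], traj[-1], t, T)
    return float(np.sqrt(np.mean((traj - ideal) ** 2)))

# Demo: a wiggly movement between the same endpoints scores worse.
t = np.linspace(0.0, 1.0, 101)
ideal = min_jerk(0.0, 10.0, t, 1.0)           # perfectly smooth movement
jerky = ideal + 0.5 * np.sin(8 * np.pi * t)   # same endpoints, extra oscillation
```

A metric like this, computed per movement, gives a scalar smoothness score a therapy controller can track across sessions.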

  15. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot that moves inside a room and watches the elderly. Elderly people at home or in nursing homes require attention at all times, which places a heavy load on caregiving staff; the purpose of our system is to help those staff. A host computer controls the monorail robot, guiding it in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of facial expressions and movements, and the robot sends the images to the host computer, which checks whether something unusual has happened. We propose a simple calibration method for positioning the monorail robot so that it tracks the person's movements and keeps the face at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.
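The abstract does not detail its calibration method, but for a fixed ceiling camera viewing a flat floor, a common minimal approach is a pixel-to-floor homography fitted from a few known point pairs. The DLT least-squares solve below is a generic sketch; the transform and coordinates are illustrative, not the paper's:

```python
import numpy as np

def fit_homography(px, world):
    """Estimate the 3x3 homography H (world ~ H @ [x, y, 1]) from >= 4
    point correspondences via the standard DLT least-squares solve."""
    A = []
    for (x, y), (X, Y) in zip(px, world):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)          # null-space vector = H up to scale

def px_to_floor(H, x, y):
    """Map a camera pixel to floor coordinates through the homography."""
    X, Y, w = H @ np.array([x, y, 1.0])
    return X / w, Y / w

# Hypothetical ground-truth homography standing in for a real calibration.
H_true = np.array([[2.0,  0.1,    5.0],
                   [0.05, 3.0,   -2.0],
                   [1e-3, 2e-4,   1.0]])
px_pts = [(0, 0), (100, 0), (0, 80), (100, 80), (50, 40)]
world_pts = [px_to_floor(H_true, x, y) for x, y in px_pts]
H_fit = fit_homography(px_pts, world_pts)
```

Once fitted, the host computer can convert a detected face position in the ceiling image directly into a floor target for the monorail robot.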

  16. Design of a laser navigation system for the inspection robot used in substation

    NASA Astrophysics Data System (ADS)

    Zhu, Jing; Sun, Yanhe; Sun, Deli

    2017-01-01

To address the deficiencies of the magnetic-guide and RFID parking systems currently used by substation inspection robots, a laser navigation system is designed; the system structure and the methods of map building and positioning are introduced. The system's performance was tested in a 500 kV substation, and the results show that the repeatability of the navigation system is precise enough for the robot to fulfill its inspection tasks.

  17. Inexpensive robots used to teach dc circuits and electronics

    NASA Astrophysics Data System (ADS)

    Sidebottom, David L.

    2017-05-01

    This article describes inexpensive, autonomous robots, built without microprocessors, used in a college-level introductory physics laboratory course to motivate student learning of dc circuits. Detailed circuit descriptions are provided as well as a week-by-week course plan that can guide students from elementary dc circuits, through Kirchhoff's laws, and into simple analog integrated circuits with the motivational incentive of building an autonomous robot that can compete with others in a public arena.

  18. Robotically Driven CT-guided Needle Insertion: Preliminary Results in Phantom and Animal Experiments.

    PubMed

    Hiraki, Takao; Kamegawa, Tetsushi; Matsuno, Takayuki; Sakurai, Jun; Kirita, Yasuzo; Matsuura, Ryutaro; Yamaguchi, Takuya; Sasaki, Takanori; Mitsuhashi, Toshiharu; Komaki, Toshiyuki; Masaoka, Yoshihisa; Matsui, Yusuke; Fujiwara, Hiroyasu; Iguchi, Toshihiro; Gobara, Hideo; Kanazawa, Susumu

    2017-11-01

Purpose To evaluate the accuracy of remote-controlled robotic computed tomography (CT)-guided needle insertion in phantom and animal experiments. Materials and Methods In a phantom experiment, 18 robotic and 18 manual insertions were performed with 19-gauge needles under CT fluoroscopic guidance to evaluate the equivalence of insertion accuracy between the two groups with a 1.0-mm margin. Needle insertion time, CT fluoroscopy time, and radiation exposure were compared by using the Student t test. The animal experiments were approved by the institutional animal care and use committee. In the animal experiment, five robotic insertions each were attempted toward targets in the liver, kidneys, lungs, and hip muscle of three swine by using 19-gauge or 17-gauge needles and conventional CT guidance. The feasibility, safety, and accuracy of robotic insertion were evaluated. Results The mean accuracies of robotic and manual insertion in phantoms were 1.6 and 1.4 mm, respectively. The 95% confidence interval of the mean difference was -0.3 to 0.6 mm. There were no significant differences in needle insertion time, CT fluoroscopy time, or radiation exposure to the phantom between the two methods. Effective dose to the physician during robotic insertion was always 0 μSv, while that during manual insertion was 5.7 μSv on average (P < .001). Robotic insertion was feasible in the animals, with an overall mean accuracy of 3.2 mm and three minor procedure-related complications. Conclusion Robotic insertion exhibited accuracy equivalent to that of manual insertion in phantoms, without radiation exposure to the physician. It was also found to be accurate in an in vivo procedure in animals. © RSNA, 2017 Online supplemental material is available for this article.
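The equivalence analysis reported (the 95% CI of the mean accuracy difference, -0.3 to 0.6 mm, lying within the ±1.0 mm margin) follows the usual confidence-interval-inclusion logic. The sketch below uses a normal-approximation z interval and made-up accuracy data; the study itself would have used its measured values and likely a t-based interval:

```python
import math
from statistics import NormalDist, mean, stdev

def equivalence_ci(a, b, margin, level=0.95):
    """Two-sample equivalence check: conclude equivalence when the CI for
    mean(a) - mean(b) lies strictly inside +/- margin (z approximation)."""
    d = mean(a) - mean(b)
    se = math.sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    z = NormalDist().inv_cdf(0.5 + level / 2)      # 1.96 for a 95% interval
    lo, hi = d - z * se, d + z * se
    return (lo, hi), (-margin < lo and hi < margin)

# Illustrative accuracy samples (mm), loosely echoing the 1.6 vs 1.4 mm means.
robotic = [1.6 + 0.1 * i for i in range(-3, 4)]
manual  = [1.4 + 0.1 * i for i in range(-3, 4)]
(lo, hi), equivalent = equivalence_ci(robotic, manual, margin=1.0)
```

The key point is that equivalence is claimed only when the whole interval, not just the point estimate, sits inside the pre-specified margin.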

  19. Autonomous caregiver following robotic wheelchair

    NASA Astrophysics Data System (ADS)

    Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary

    2011-12-01

In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward a goal while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in terms of communication. From this point of view, we have proposed a robotic wheelchair that moves alongside a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques, converted into voltage levels through a MAX 232 level converter, and sent serially to the microcontroller unit, while the ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, while in manual mode the keypad is used to operate the wheelchair. The microcontroller unit is programmed in C, and the robot connected to it is controlled according to this program. The robot's several motors are activated by motor drivers, which are in effect switches that turn the motors on and off under the control of the microcontroller unit.

  20. Robotics in neurosurgery: which tools for what?

    PubMed

    Benabid, A L; Hoffmann, D; Seigneuret, E; Chabardes, S

    2006-01-01

Robots are tools for bringing the skills of computers to bear on complicated tasks. This has been made possible by the "numerical image explosion", which allows us to easily obtain spatial coordinates, three-dimensional reconstruction, and multimodality imaging including digital subtraction angiography (DSA), computed tomography (CT), magnetic resonance imaging (MRI) and magnetoencephalography (MEG), with high resolution in space, time, and tissue density. Neurosurgical robots currently available at the operational level are described. Future evolutions, indications and ethical aspects are examined.

  1. Morphological computation of multi-gaited robot locomotion based on free vibration.

    PubMed

    Reis, Murat; Yu, Xiaoxiang; Maheshwari, Nandan; Iida, Fumiya

    2013-01-01

    In recent years, there has been increasing interest in the study of gait patterns in both animals and robots, because it allows us to systematically investigate the underlying mechanisms of energetics, dexterity, and autonomy of adaptive systems. In particular, for morphological computation research, the control of dynamic legged robots and their gait transitions provides additional insights into the guiding principles from a synthetic viewpoint for the emergence of sensible self-organizing behaviors in more-degrees-of-freedom systems. This article presents a novel approach to the study of gait patterns, which makes use of the intrinsic mechanical dynamics of robotic systems. Each of the robots consists of a U-shaped elastic beam and exploits free vibration to generate different locomotion patterns. We developed a simplified physics model of these robots, and through experiments in simulation and real-world robotic platforms, we show three distinctive mechanisms for generating different gait patterns in these robots.
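As a toy illustration of locomotion based on free vibration, a single vibration mode of an elastic beam can be lumped into a mass-spring-damper whose free response sets the body's oscillation frequency. The parameters below are arbitrary, not those of the paper's robots:

```python
import math

def free_vibration_freq(m, k, c, x0, dt=1e-4, t_end=5.0):
    """Integrate m*x'' + c*x' + k*x = 0 with semi-implicit Euler and
    estimate the oscillation frequency from zero crossings."""
    x, v, t, crossings = x0, 0.0, 0.0, 0
    while t < t_end:
        v += dt * (-(c * v + k * x) / m)   # update velocity from spring/damper
        x_new = x + dt * v
        if x * x_new < 0:                  # sign change = zero crossing
            crossings += 1
        x, t = x_new, t + dt
    return crossings / (2.0 * t_end)       # two crossings per period

# Lightly damped beam-like mode: frequency should approach sqrt(k/m)/(2*pi).
f_est = free_vibration_freq(m=0.1, k=400.0, c=0.05, x0=0.01)
f_nat = math.sqrt(400.0 / 0.1) / (2 * math.pi)
```

In the paper's setting, driving the body near such a natural frequency is what shapes the distinct gait patterns; this sketch only shows where that frequency comes from.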

  2. System and method for controlling a vision guided robot assembly

    DOEpatents

    Lin, Yhu-Tin; Daro, Timothy; Abell, Jeffrey A.; Turner, III, Raymond D.; Casoli, Daniel J.

    2017-03-07

A method includes the following steps: actuating a robotic arm to perform an action at a start position; moving the robotic arm from the start position toward a first position; determining from a vision process method whether a first part at the first position will be ready to be subjected to a first action by the robotic arm once the robotic arm reaches the first position; commencing execution of the vision process method to determine the position deviation of a second part from a second position and the readiness of the second part to be subjected to a second action by the robotic arm once the robotic arm reaches the second position; and performing the first action on the first part using the robotic arm, with the position deviation of the first part from the first position predetermined by the vision process method.

  3. System for robot-assisted real-time laparoscopic ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Billings, Seth; Deshmukh, Nishikant; Kang, Hyun Jae; Taylor, Russell; Boctor, Emad M.

    2012-02-01

Surgical robots provide many advantages for surgery, including minimal invasiveness, precise motion, high dexterity, and crisp stereovision. One limitation of current robotic procedures, compared to open surgery, is the loss of haptic information for such purposes as palpation, which can be very important in minimally invasive tumor resection. Numerous studies have reported the use of real-time ultrasound elastography, in conjunction with conventional B-mode ultrasound, to differentiate malignant from benign lesions. Several groups (including our own) have reported integration of ultrasound with the da Vinci robot, and ultrasound elastography is a very promising image guidance method for robot-assisted procedures that will further enable the role of robots in interventions where precise knowledge of sub-surface anatomical features is crucial. We present a novel robot-assisted real-time ultrasound elastography system for minimally invasive robot-assisted interventions. Our system combines a da Vinci surgical robot with a non-clinical experimental software interface, a robotically articulated laparoscopic ultrasound probe, and our GPU-based elastography system. Elasticity and B-mode ultrasound images are displayed as picture-in-picture overlays in the da Vinci console. Our system minimizes dependence on human performance factors by incorporating computer-assisted motion control that automatically generates the tissue palpation required for elastography imaging, while leaving high-level control in the hands of the user. In addition to ensuring consistent strain imaging, the elastography assistance mode avoids the cognitive burden of tedious manual palpation. Preliminary tests of the system with an elasticity phantom demonstrate the ability to differentiate simulated lesions of varied stiffness and to clearly delineate lesion boundaries.
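The core computation behind strain imaging of this kind is displacement tracking between pre- and post-compression echo signals, with strain taken as the axial gradient of displacement. Below is a 1-D sketch using integer-lag normalized cross-correlation on synthetic RF-like data; clinical systems use subsample estimators and 2-D kernels, and the signal model here is an assumption:

```python
import numpy as np

def track_displacement(pre, post, win=32, search=25):
    """Estimate per-window axial shift between pre/post-compression signals
    by maximizing integer-lag normalized cross-correlation."""
    centers, shifts = [], []
    for start in range(0, len(pre) - win, win):
        ref = pre[start:start + win]
        ref = (ref - ref.mean()) / (ref.std() + 1e-12)
        best_s, best_ncc = 0, -np.inf
        for s in range(-search, search + 1):
            lo = start + s
            if lo < 0 or lo + win > len(post):
                continue
            seg = post[lo:lo + win]
            seg = (seg - seg.mean()) / (seg.std() + 1e-12)
            ncc = float(ref @ seg) / win        # correlation coefficient
            if ncc > best_ncc:
                best_ncc, best_s = ncc, s
        centers.append(start + win // 2)
        shifts.append(best_s)
    return np.array(centers), np.array(shifts)

# Synthetic demo: 1% uniform compression makes displacement grow with depth,
# so the fitted slope of shift vs. depth recovers the applied strain magnitude.
rng = np.random.default_rng(2)
pre = rng.standard_normal(2048)                  # white-noise stand-in for RF
depth = np.arange(2048)
post = np.interp(0.99 * depth, depth, pre)       # uniformly compressed copy
centers, shifts = track_displacement(pre, post)
est_strain = np.polyfit(centers, shifts, 1)[0]   # ~ 1/0.99 - 1 ~ 0.0101
```

The robot's role in the system above is to generate the compression consistently; the tracking and strain estimation follow this same displacement-gradient logic.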

  4. Application of fluorescence in robotic general surgery: review of the literature and state of the art.

    PubMed

    Marano, Alessandra; Priora, Fabio; Lenti, Luca Matteo; Ravazzoni, Ferruccio; Quarati, Raoul; Spinoglio, Giuseppe

    2013-12-01

    The initial use of the indocyanine green fluorescence imaging system was for sentinel lymph node biopsy in patients with breast or colorectal cancer. Since then, application of this method has received wide acceptance in various fields of surgical oncology, and it has become a valid diagnostic tool for guiding cancer treatment. It has also been employed in numerous conventional surgical procedures with much success and benefit to the patient. The advent of minimally invasive surgery brought with it a new use for fluorescence in helping to improve the safety of these procedures, particularly for single-site procedures. In 2010, a near-infrared camera was integrated into the da Vinci Si System, creating a combination of technical and minimally invasive advantages that have been embraced by several experienced surgeons. The use of fluorescence, although useful, is considered challenging. Only a few studies are currently available on the use of fluorescence in robotic general surgery, whereas many articles have focused on its application in open and laparoscopic surgery. Many of these reports describe promising and satisfactory results, although with some shortcomings. The purpose of this article is to review the current status of the use of fluorescence in general surgery and particularly its role in robotic surgery. We also review potential uses in the future.

  5. Self-Taught Visually-Guided Pointing for a Humanoid Robot

    DTIC Science & Technology

    2006-01-01

Brooks, R., Bryson, J., Marjanovic, M., Stein, L. A., & Wessler, M. (1996), Humanoid Software, Technical report, MIT Artificial Intelligence Lab... Journal of Biomechanics 19, 231-238. Marjanovic, M. (1995), Learning Functional Maps Between Sensorimotor Systems on a Humanoid Robot, Master's thesis, MIT

  6. Three-Dimensional Images For Robot Vision

    NASA Astrophysics Data System (ADS)

    McFarland, William D.

    1983-12-01

Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, robot vision has become a critical need. The "blind" robot, while occupying an economic niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with emphasis on laser-radar-type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.

  7. Mutual interferences and design principles for mechatronic devices in magnetic resonance imaging.

    PubMed

    Yu, Ningbo; Gassert, Roger; Riener, Robert

    2011-07-01

Robotic and mechatronic devices that work compatibly with magnetic resonance imaging (MRI) are applied in diagnostic MRI, image-guided surgery, neurorehabilitation and neuroscience. MRI-compatible mechatronic systems must address the challenges imposed by the scanner's electromagnetic fields. We have developed objective, quantitative evaluation criteria for the device characteristics needed to formulate design guidelines that ensure MRI compatibility in terms of safety, device functionality and image quality. The mutual interferences between an MRI system and mechatronic devices working in its vicinity are modeled and tested. For each interference, the components involved are listed, and a numerical measure of "MRI-compatibility" is proposed. These interferences are categorized into an MRI-compatibility matrix, with each element representing possible interactions between one part of the mechatronic system and one component of the electromagnetic fields. Based on this formulation, design principles for MRI-compatible mechatronic systems are proposed. Furthermore, test methods are developed to examine whether a mechatronic device indeed works without interference within an MRI system. Finally, the proposed MRI-compatibility criteria and design guidelines have been applied to an actual design process, which has been validated by the test procedures. Objective and quantitative MRI-compatibility measures for mechatronic and robotic devices have been established. By applying the proposed design principles, potential problems in safety, device functionality and image quality can be addressed in the design phase to ensure that the mechatronic system fulfills the MRI-compatibility criteria. The new guidelines and test procedures for MRI instrument compatibility provide a rational basis for the design and evaluation of mechatronic devices in various MRI applications. Designers can apply these criteria and use the tests, so that MRI-compatibility results accumulate into an experiential database.

  8. Work-rate-guided exercise testing in patients with incomplete spinal cord injury using a robotics-assisted tilt-table.

    PubMed

    Laubacher, Marco; Perret, Claudio; Hunt, Kenneth J

    2015-01-01

Robotics-assisted tilt-table (RTT) technology allows neurological rehabilitation therapy to be started early, thus alleviating some secondary complications of prolonged bed rest. This study assessed the feasibility of a novel work-rate-guided RTT approach for cardiopulmonary training and assessment in patients with incomplete spinal cord injury (iSCI). Three representative subjects with iSCI at three distinct stages of primary rehabilitation completed an incremental exercise test (IET) and a constant load test (CLT) on a RTT augmented with integrated leg-force and position measurement and visual work-rate feedback. Feasibility assessment focused on: (i) implementation, (ii) limited efficacy testing, (iii) acceptability. (i) All subjects were able to follow the work-rate target profile by adapting their volitional leg effort. (ii) During the IETs, peak oxygen uptake above rest was 304, 467 and 1378 ml/min and peak heart rate (HR) was 46, 32 and 65 beats/min above rest (subjects A, B and C, respectively). During the CLTs, steady-state oxygen uptake increased by 42%, 38% and 162% and HR by 12%, 20% and 29%. (iii) All exercise tests were tolerated well. The novel work-rate-guided RTT intervention is deemed feasible for cardiopulmonary training and assessment in patients with iSCI: substantial cardiopulmonary responses were observed and the approach was found to be tolerable and implementable. Implications for Rehabilitation Work-rate-guided robotics-assisted tilt-table technology is deemed feasible for cardiopulmonary assessment and training in patients with incomplete spinal cord injury. Robotics-assisted tilt-tables might be a good way to start active rehabilitation as early as possible after a spinal cord injury. During training with robotics-assisted devices, the active participation of the patients is crucial to stress the cardiopulmonary system and hence benefit from the training.

  9. Value of C-Arm Cone Beam Computed Tomography Image Fusion in Maximizing the Versatility of Endovascular Robotics.

    PubMed

    Chinnadurai, Ponraj; Duran, Cassidy; Al-Jabbari, Odeaa; Abu Saleh, Walid K; Lumsden, Alan; Bismuth, Jean

    2016-01-01

To report our initial experience and highlight the value of using intraoperative C-arm cone beam computed tomography (CT; DynaCT(®)) image fusion guidance along with steerable robotic endovascular catheter navigation to optimize vessel cannulation. Between May 2013 and January 2015, all patients who underwent endovascular procedures using the DynaCT image fusion technique along with the Hansen Magellan vascular robotic catheter were included in this study. As part of preoperative planning, relevant vessel landmarks were electronically marked in contrast-enhanced multi-slice computed tomography images and stored. At the beginning of the procedure, an intraoperative noncontrast C-arm cone beam CT (syngo DynaCT(®), Siemens Medical Solutions USA Inc.) was acquired in the hybrid suite. Preoperative images were then coregistered to intraoperative DynaCT images using aortic wall calcifications and bone landmarks. Stored landmarks were then overlaid on 2-dimensional (2D) live fluoroscopic images as virtual markers, updated in real time with C-arm and table movements and image zoom. Vascular access and the robotic catheter (Magellan(®), Hansen Medical) were set up per standard practice. Vessel cannulation was performed with the robotic catheter, based on the electronic virtual markers on live fluoroscopy. The impact of 3-dimensional (3D) image fusion guidance on robotic vessel cannulation was evaluated retrospectively by assessing quantitative parameters, such as the number of angiograms acquired before vessel cannulation, and qualitative parameters, such as the accuracy of vessel ostium and centerline markers. All 17 vessels attempted were cannulated successfully in 14 patients using the robotic catheter and image fusion guidance. Median vessel diameter at origin was 5.4 mm (range, 2.3-13 mm), and 12 of 17 (70.6%) vessels had a calcified and/or stenosed origin from the parent vessel. Nine of 17 vessels (52.9%) were cannulated without any contrast injection. Median number of angiograms required before cannulation was 0 (range, 0-2). On qualitative assessment, 14 of 15 vessels (93.3%) had grade 1 accuracy (guidewire inside the virtual ostial marker), and 14 of 14 vessels had grade 1 accuracy for virtual centerlines (centerlines matching the actual vessel trajectory during cannulation). In this small series, the experience of using DynaCT image fusion guidance together with a steerable endovascular robotic catheter indicates that such image fusion strategies can enhance intraoperative 2D fluoroscopy by bringing in preoperative 3D information about vascular stenosis and/or calcification, angulation, and takeoff from the parent vessel, thereby facilitating vessel cannulation. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix A: ROBSIM user's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

The purpose of the Robotics Simulation Program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotics systems. ROBSIM is a program written in FORTRAN 77 for use on a VAX 11/750 computer under the VMS operating system. This user's guide describes the capabilities of the ROBSIM programs, including the system definition function, the analysis tools function and the postprocessor function. The options a user may encounter with each of these executables are explained in detail, and the different program prompts presented to the user are included, along with some useful suggestions concerning appropriate answers. An example interactive run is included for each of the main program services, and some of the capabilities are illustrated.

  11. A comparative analysis and guide to virtual reality robotic surgical simulators.

    PubMed

    Julian, Danielle; Tanaka, Alyssa; Mattingly, Patricia; Truong, Mireille; Perez, Manuela; Smith, Roger

    2018-02-01

    Since the US Food and Drug Administration approved robotically assisted surgical devices for human surgery in 2000, the number of surgeries utilizing this innovative technology has risen. In 2015, approximately 650 000 robot-assisted procedures were performed worldwide. Surgeons must be properly trained to safely transition to using such innovative technology. Multiple virtual reality robotic simulators are now commercially available for educational and training purposes. There is a need for comparative evaluations of these simulators to aid users in selecting an appropriate device for their purposes. We conducted a comparison of the design and capabilities of all dedicated simulators of the da Vinci robot - the da Vinci Skills Simulator (dVSS), dV-Trainer (dVT), Robotic Skills Simulators (RoSS) and the RobotiX Mentor. This paper provides the base specifications of the hardware and software, with an emphasis on the training capabilities of each system. Each simulator contains a large number of training exercises for skills development: dVSS n = 40, dVT n = 65, RoSS n = 52, RobotiX Mentor n = 31. All four offer 3D visual images but use different display technologies. The dVSS leverages the real robotic surgical console to provide visualization, hand controls and foot pedals. The dVT, RoSS and RobotiX Mentor created simulated versions of all of these control systems. Each includes systems management services that allow instructors to collect, export and analyze the scores of students using the simulators. This study provides comparative information on the four simulators' functional capabilities. Each device offers unique advantages and capabilities for training robotic surgeons. Each has been the subject of validation experiments, which have been published in the literature. 
However, those publications do not provide specific details on the capabilities of the simulators, which are necessary for an understanding sufficient to select the one best suited to an organization's needs. This article provides comparative information to assist with that type of selection. Copyright © 2017 John Wiley & Sons, Ltd.

  12. SU-E-J-53: A Phantom Design to Assist Patient Position Verification System in Daily Image-Guided RT and Comprehensive QA Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syh, J; Wu, H

    2015-06-15

Purpose This study implements a homemade novel device with a surface-locking couch index to check the daily radiograph (DR) function of adaPTInsight™, a stereoscopic image-guided system (SIGS), for proton therapy. Comprehensive daily QA checks of proton pencil-beam output, field size, and flatness and symmetry of spots and energy layers follow, using the MatriXX dosimetry device. Methods The IBA MatriXX device, which is also used to perform the SIGS checks, was used to perform daily dosimetry. A set of markers was attached to the surface of the MatriXX device in alignment with the DRR of reconstructed CT images and the daily DR. The novel device allows the MatriXX to be fit into a cradle locked by couch index bars on the couch surface. This keeps the MatriXX at the same XY plane daily, with exact coordinates. Couch height Z is adjusted according to imaging to check isocenter-laser coincidence accuracy. Results adaPTInsight™ provides a robotic couch that moves in a 6-degree-of-freedom coordinate system to align the dosimetry device to within 1.0 mm / 1.0°. The daily constancy was tightened to ± 0.5 mm / 0.3°, compared with 1.0 mm / 1.0° before. For gantry at 0° and couch at all 0° angles (@ Rt ARM 0 setting), measured offsets of the couch systems were ≤ 0.5° in roll, yaw and pitch. Conclusion The simplicity of the novel device makes daily image-guided QA consistent and accurate. The offset of the MatriXX isocenter-laser coincidence was reproducible. Such an easy task not only speeds up the setup but also increases the confidence level in detailed daily comprehensive measurements. The total SIGS alignment time has been shortened, with less setup error. This device will enhance our experience for future QA when a cone-beam CT imaging modality becomes available at the proton therapy center.

  13. Automatic Focus Adjustment of a Microscope

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance

    2005-01-01

    AUTOFOCUS is a computer program for use in a control system that automatically adjusts the position of an instrument arm that carries a microscope equipped with an electronic camera. In the original intended application of AUTOFOCUS, the imaging microscope would be carried by an exploratory robotic vehicle on a remote planet, but AUTOFOCUS could also be adapted to similar applications on Earth. Initially, control software other than AUTOFOCUS brings the microscope to a position above a target to be imaged. Then the instrument arm is moved to lower the microscope toward the target: nominally, the target is approached from a starting distance of 3 cm in 10 steps of 3 mm each. After each step, the image in the camera is subjected to a wavelet transform, which is used to evaluate the texture in the image at multiple scales to determine whether and by how much the microscope is approaching focus. A focus measure is derived from the transform and used to guide the arm to bring the microscope to the focal height. When the analysis reveals that the microscope is in focus, image data are recorded and transmitted.
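    The focus-measure idea can be sketched with a single-level Haar wavelet transform, used here as a hypothetical stand-in for the unspecified wavelet in AUTOFOCUS: the energy of the detail (high-frequency) sub-bands rises as texture sharpens, so it can serve as the focus score that guides the arm.

```python
import numpy as np

def haar_detail_energy(img):
    """Energy of the detail sub-bands of a single-level 2-D Haar
    transform: higher energy indicates sharper texture (better focus)."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return float(np.sum(lh ** 2 + hl ** 2 + hh ** 2))

def box_blur(img, k=5):
    """Crude mean filter standing in for defocus blur."""
    pad = np.pad(img.astype(float), k // 2, mode="edge")
    return np.array([[pad[i:i + k, j:j + k].mean()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))        # richly textured "in focus" frame
blurred = box_blur(sharp)           # the same frame, defocused
score_sharp = haar_detail_energy(sharp)
score_blurred = haar_detail_energy(blurred)
```

    In a stepping loop like the one described above, the arm would stop at the step whose frame maximizes this score.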

  14. Patient body image, self-esteem, and cosmetic results of minimally invasive robotic cardiac surgery.

    PubMed

    İyigün, Taner; Kaya, Mehmet; Gülbeyaz, Sevil Özgül; Fıstıkçı, Nurhan; Uyanık, Gözde; Yılmaz, Bilge; Onan, Burak; Erkanlı, Korhan

    2017-03-01

    Patient-reported outcome measures reveal the quality of surgical care from the patient's perspective. We aimed to compare body image, self-esteem, hospital anxiety and depression, and cosmetic outcomes, using validated tools, between patients undergoing robot-assisted surgery and those undergoing conventional open surgery. This single-center, multidisciplinary, randomized, prospective study of 62 patients who underwent cardiac surgery was conducted at Hospital from May 2013 to January 2015. The patients were divided into two groups: the robotic group (n = 33) and the open group (n = 29). The study employed five different tools to assess body image, self-esteem, and overall patient-rated scar satisfaction. There were statistically significant differences between the groups in self-esteem scores (p = 0.038), body image scores (p = 0.026), overall Observer Scar Assessment Scale scores (p = 0.013), and overall Patient Scar Assessment Scale scores (p = 0.036), in favor of the robotic group, during the postoperative period. Robot-assisted surgery preserved the patient's body image and self-esteem, whereas conventional open surgery lowered these levels, though without causing pathology. Preoperative depression and anxiety levels were reduced by both robot-assisted and conventional open surgery. The groups did not differ significantly in Patient Satisfaction Scores or depression/anxiety scores. The results of this study clearly demonstrate that a minimally invasive approach using robot-assisted surgery has advantages in body image, self-esteem, and cosmetic outcomes over the conventional approach in patients undergoing cardiac surgery. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  15. Autonomous navigation method for substation inspection robot based on travelling deviation

    NASA Astrophysics Data System (ADS)

    Yang, Guoqing; Xu, Wei; Li, Jian; Fu, Chongguang; Zhou, Hao; Zhang, Chuanyou; Shao, Guangting

    2017-06-01

    A new method of edge detection is proposed for the substation environment, enabling autonomous navigation of the substation inspection robot. First, the road image and information are obtained using an image acquisition device. Second, noise in a region of interest selected from the road image is removed with a digital image processing algorithm, edges are extracted with the Canny operator, and the road boundaries are extracted with the Hough transform. Finally, the distances between the robot and the left and right boundaries are calculated, and the travel deviation is obtained. The robot's walking route is controlled according to the travel deviation and a preset threshold. Experimental results show that the proposed method can detect the road area in real time, and the algorithm has high accuracy and stable performance.
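    The final control step (comparing the travel deviation against a preset threshold) can be sketched as follows; the function names and threshold value are illustrative, not taken from the paper.

```python
def travel_deviation(d_left, d_right):
    """Signed offset from the road centre line, computed from the
    distances to the left and right road boundaries; positive means
    the robot has drifted toward the right boundary."""
    return (d_left - d_right) / 2.0

def steering_command(d_left, d_right, threshold=0.1):
    """Discrete walking-route correction against a preset threshold."""
    dev = travel_deviation(d_left, d_right)
    if dev > threshold:
        return "steer_left"      # too close to the right boundary
    if dev < -threshold:
        return "steer_right"     # too close to the left boundary
    return "straight"
```

    For example, with 1.5 m to the left boundary and 0.9 m to the right, the deviation is 0.3 m, exceeding the 0.1 m threshold, so the command is "steer_left".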

  16. Augmented Reality in Neurosurgery: A Review of Current Concepts and Emerging Applications.

    PubMed

    Guha, Daipayan; Alotaibi, Naif M; Nguyen, Nhu; Gupta, Shaurya; McFaul, Christopher; Yang, Victor X D

    2017-05-01

    Augmented reality (AR) superimposes computer-generated virtual objects onto the user's view of the real world. Among medical disciplines, neurosurgery has long been at the forefront of image-guided surgery, and it continues to push the frontiers of AR technology in the operating room. In this systematic review, we explore the history of AR in neurosurgery and examine the literature on current neurosurgical applications of AR. Significant challenges to surgical AR exist, including compounded sources of registration error, impaired depth perception, visual and tactile temporal asynchrony, and operator inattentional blindness. Nevertheless, the ability to accurately display multiple three-dimensional datasets congruently over the area where they are most useful, coupled with future advances in imaging, registration, display technology, and robotic actuation, portend a promising role for AR in the neurosurgical operating room.

  17. Towards a Teleoperated Needle Driver Robot with Haptic Feedback for RFA of Breast Tumors under Continuous MRI1

    PubMed Central

    Kokes, Rebecca; Lister, Kevin; Gullapalli, Rao; Zhang, Bao; MacMillan, Alan; Richard, Howard; Desai, Jaydev P.

    2009-01-01

    Objective The purpose of this paper is to explore the feasibility of developing an MRI-compatible needle driver system for radiofrequency ablation (RFA) of breast tumors under continuous MRI while being teleoperated by a haptic feedback device from outside the scanning room. The developed needle driver prototype was designed and tested for both tumor-targeting capability and RFA. Methods The single degree-of-freedom (DOF) prototype was interfaced with a PHANToM haptic device controlled from outside the scanning room. Experiments were performed to demonstrate MRI-compatibility and position-control accuracy with hydraulic actuation, along with an experiment to determine the PHANToM’s ability to guide the RFA tool to a tumor nodule within a phantom breast tissue model while continuously imaging within the MRI and receiving force feedback from the RFA tool. Results Hydraulic actuation is shown to be a feasible actuation technique for operation in an MRI environment. The design is MRI-compatible in all aspects except for force sensing in the directions perpendicular to the direction of motion. Experiments confirm that the user is able to distinguish healthy from cancerous tissue in a phantom model when provided with both visual (imaging) feedback and haptic feedback. Conclusion The teleoperated 1-DOF needle driver system presented in this paper demonstrates the feasibility of implementing an MRI-compatible robot for RFA of breast tumors with haptic feedback capability. PMID:19303805

  18. Intelligent navigation and accurate positioning of an assist robot in indoor environments

    NASA Astrophysics Data System (ADS)

    Hua, Bin; Rama, Endri; Capi, Genci; Jindai, Mitsuru; Tsuri, Yosuke

    2017-12-01

    A robot's navigation and accurate positioning in indoor environments are still challenging tasks, especially in applications that assist disabled and/or elderly people in museum or art-gallery environments. In this paper, we present a human-like navigation method in which neural networks control the wheelchair robot to reach the goal location safely by imitating the supervisor's motions, and to position itself at the intended location. In a museum-like environment, the mobile robot starts navigation from various positions, using a low-cost camera to track the target picture and a laser range finder for safe navigation. Results show that a neural controller trained with the Conjugate Gradient Backpropagation algorithm gives a robust response, guiding the mobile robot accurately to the goal position.
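    The imitation idea can be illustrated with a deliberately tiny stand-in: a linear steering model fitted by plain batch gradient descent to reproduce a hypothetical supervisor's commands. The paper's actual controller is a neural network trained with Conjugate Gradient Backpropagation; the features and weights below are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical supervisor demonstrations: each row holds
# (target offset in the image, lateral obstacle distance).
X = rng.uniform(-1.0, 1.0, size=(200, 2))
# The supervisor's steering rule that the robot should imitate.
y = 0.8 * X[:, 0] - 0.3 * X[:, 1]

w = np.zeros(2)                            # learned steering weights
for _ in range(500):                       # plain batch gradient descent
    grad = 2.0 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad
```

    After training, the model reproduces the supervisor's mapping; a real controller would add hidden layers and the conjugate-gradient update in place of the fixed learning rate.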

  19. Percutaneous needle placement using laser guidance: a practical solution

    NASA Astrophysics Data System (ADS)

    Xu, Sheng; Kapoor, Ankur; Abi-Jaoudeh, Nadine; Imbesi, Kimberly; Hong, Cheng William; Mazilu, Dumitru; Sharma, Karun; Venkatesan, Aradhana M.; Levy, Elliot; Wood, Bradford J.

    2013-03-01

    In interventional radiology, various navigation technologies have emerged aiming to improve the accuracy of device deployment and potentially the clinical outcomes of minimally invasive procedures. While these technologies' performance has been explored extensively, their impact on daily clinical practice remains undetermined due to the additional cost and complexity, modification of standard devices (e.g. electromagnetic tracking), and different levels of experience among physicians. Taking these factors into consideration, a robotic laser guidance system for percutaneous needle placement is developed. The laser guidance system projects a laser guide line onto the skin entry point of the patient, helping the physician to align the needle with the planned path of the preoperative CT scan. To minimize changes to the standard workflow, the robot is integrated with the CT scanner via optical tracking. As a result, no registration between the robot and CT is needed. The robot can compensate for the motion of the equipment and keep the laser guide line aligned with the biopsy path in real-time. Phantom experiments showed that the guidance system can benefit physicians at different skill levels, while clinical studies showed improved accuracy over conventional freehand needle insertion. The technology is safe, easy to use, and does not involve additional disposable costs. It is our expectation that this technology can be accepted by interventional radiologists for CT guided needle placement procedures.

  20. Virtobot 2.0: the future of automated surface documentation and CT-guided needle placement in forensic medicine.

    PubMed

    Ebert, Lars Christian; Ptacek, Wolfgang; Breitbeck, Robert; Fürst, Martin; Kronreif, Gernot; Martinez, Rosa Maria; Thali, Michael; Flach, Patricia M

    2014-06-01

    In this paper we present the second prototype of a robotic system for use in forensic medicine. The system performs automated surface documentation using photogrammetry and optical surface scanning, as well as image-guided post-mortem needle placement for tissue sampling, liquid sampling, or the placement of guide wires. The upgraded system includes workflow optimizations, an automatic tool-change mechanism, a new software module for trajectory planning, and a fully automatic computed tomography data-set registration algorithm. We tested the placement accuracy of the system using a needle phantom with radiopaque markers as targets. The system is routinely used for surface documentation, with 24 surface documentations performed over the course of 11 months. In accuracy tests for needle placement using a biopsy phantom, the Virtobot placed introducer needles with an accuracy of 1.4 mm (±0.9 mm). The second Virtobot prototype mainly streamlines the workflow, increases the level of automation, and offers an easier user interface. These upgrades make the Virtobot a potentially valuable tool for case documentation in a scalpel-free setting that uses purely imaging techniques and minimally invasive procedures, and they are the next step toward the future of virtual autopsy.

  1. Open core control software for surgical robots.

    PubMed

    Arata, Jumpei; Kozuka, Hiroaki; Kim, Hyung Wook; Takesue, Naoyuki; Vladimirov, B; Sakaguchi, Masamichi; Tokuda, Junichi; Hata, Nobuhiko; Chinzei, Kiyoyuki; Fujimoto, Hideo

    2010-05-01

    Today, patients and doctors in the operating room are surrounded by many medical devices, a result of recent advances in medical technology. However, these cutting-edge devices work independently and do not collaborate with each other, even though collaboration between devices such as navigation systems and medical imaging devices is becoming very important for accomplishing complex surgical tasks (for example, a tumor-removal procedure in neurosurgery while checking the tumor location). Several surgical robots have been commercialized and are becoming common, but they remain closed to collaboration with external medical devices. A cutting-edge "intelligent surgical robot" becomes possible only through collaboration among surgical robots, various kinds of sensors, navigation systems, and so on. At the same time, most academic software for surgical robots is "home-made" within individual research institutions and not open to the public. Open-source control software for surgical robots can therefore benefit the field. From these perspectives, we developed the Open Core Control software for surgical robots to overcome these challenges. In general, control software has hardware dependencies arising from actuators, sensors, and various kinds of internal devices, so it cannot be used on different types of robots without modification. The structure of the Open Core Control software, however, can be reused across robot types by abstracting the hardware-dependent parts. In addition, network connectivity is crucial for collaboration between advanced medical devices: OpenIGTLink is adopted in the Interface class, which communicates with external medical devices. It is also essential to maintain stable operation despite asynchronous data transactions over the network, and several techniques for this purpose were introduced in the Open Core Control software.
    A virtual fixture is a well-known technique that acts as a "force guide", supporting operators in performing precise manipulation with a master-slave robot. A virtual fixture for precise and safe surgery was implemented on the system to demonstrate high-level collaboration between a surgical robot and a navigation system. The virtual fixture extension is not itself part of the Open Core Control system, but such a function cannot be realized without tight collaboration between cutting-edge medical devices. Using the virtual fixture, operators can pre-define an accessible area on the navigation system, and the area information is transferred to the robot; the surgical console then generates a reflection force when the operator tries to leave the pre-defined accessible area during surgery. The Open Core Control software was implemented on a surgical master-slave robot, and stable operation was observed in a motion test. The tip of the surgical robot was displayed on a navigation system by connecting the robot to a 3D position sensor through OpenIGTLink. The accessible area was pre-defined before the operation, and the virtual fixture was displayed as a "force guide" on the surgical console. The system also showed stable performance in a duration test with network disturbance. In this paper, the design of the Open Core Control software for surgical robots and the implementation of the virtual fixture were described. The software was implemented on a surgical robot system, showed stable performance in high-level collaborative tasks, and is being developed as a widely usable platform for surgical robots. Safety is essential for the control software of such complex medical devices.
    It is important to follow global specifications such as the FDA guidance "General Principles of Software Validation" and IEC 62304, and, to comply with these regulations, to develop a self-test environment. A test environment is therefore under development to test various kinds of interference in the operating room, such as noise from an electric knife, with safety standards such as ISO 13849 and IEC 61508 in mind. The Open Core Control software is being developed in an open-source manner and is available on the Internet. Standardization of software interfaces is becoming a major trend in this field, and from this perspective the Open Core Control software can be expected to contribute to it.
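    The "force guide" behaviour can be sketched as a simple forbidden-region virtual fixture: zero force inside the pre-defined accessible area, and a spring-like reflection force once the tool crosses its boundary. The spherical region and stiffness value below are illustrative assumptions; the paper does not specify the fixture geometry.

```python
import numpy as np

def virtual_fixture_force(tool_pos, centre, radius, stiffness=200.0):
    """Reflection force (N) pushing the slave tool back inside a
    spherical accessible region of the given radius (m)."""
    offset = np.asarray(tool_pos, float) - np.asarray(centre, float)
    dist = float(np.linalg.norm(offset))
    if dist <= radius:
        return np.zeros(3)                  # inside the accessible area
    penetration = dist - radius
    return -stiffness * penetration * offset / dist   # points back inside

f_inside = virtual_fixture_force([0.0, 0.0, 0.02], [0, 0, 0], 0.05)
f_outside = virtual_fixture_force([0.0, 0.0, 0.06], [0, 0, 0], 0.05)
```

    A tool 1 cm outside a 5 cm sphere feels a 2 N force toward the centre; the navigation system would supply the region, and the console would render the force to the operator.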

  2. A Magnetic Resonance Compatible Soft Wearable Robotic Glove for Hand Rehabilitation and Brain Imaging.

    PubMed

    Hong Kai Yap; Kamaldin, Nazir; Jeong Hoon Lim; Nasrallah, Fatima A; Goh, James Cho Hong; Chen-Hua Yeow

    2017-06-01

    In this paper, we present the design, fabrication, and evaluation of a soft wearable robotic glove that can be used with functional magnetic resonance imaging (fMRI) during hand rehabilitation and task-specific training. The soft wearable robotic glove, called MR-Glove, consists of two major components: (a) a set of soft pneumatic actuators and (b) a glove. The soft pneumatic actuators, made of silicone elastomers, generate bending motion and actuate finger joints upon pressurization. The device is MR-compatible, as it contains no ferromagnetic materials and operates pneumatically. Our results show that the device did not cause artifacts in fMRI images during hand rehabilitation and task-specific exercises. This study demonstrates the possibility of using fMRI and an MR-compatible soft wearable robotic device to study brain activity and motor performance during hand rehabilitation, and to unravel the functional effects of rehabilitation robotics on brain stimulation.

  3. A magnetic compatible supernumerary robotic finger for functional magnetic resonance imaging (fMRI) acquisitions: Device description and preliminary results.

    PubMed

    Hussain, Irfan; Santarnecchi, Emiliano; Leo, Andrea; Ricciardi, Emiliano; Rossi, Simone; Prattichizzo, Domenico

    2017-07-01

    Supernumerary robotic limbs are a recently introduced class of wearable robots that, unlike traditional prostheses and exoskeletons, aim at adding extra effectors (i.e., arms, legs, or fingers) to the human user rather than substituting for or enhancing the natural ones. However, it is still unknown whether the use of supernumerary robotic limbs leads to specific neural modifications in brain dynamics. The illusion of owning a body part has already been demonstrated in many experimental observations, such as those relying on multisensory integration (e.g., the rubber hand illusion), prostheses, and even virtual reality. In this paper we present a description of a novel magnetic-compatible supernumerary robotic finger, together with preliminary observations from two functional magnetic resonance imaging (fMRI) experiments in which brain activity was measured before and after a period of training with the robotic device, and during the use of the novel MRI-compatible version of the supernumerary robotic finger. Results showed that the use of the MR-compatible robotic finger is safe and does not produce artifacts in MRI images. Moreover, training with the supernumerary robotic finger recruits a network of motor-related cortical regions (i.e., primary and supplementary motor areas), the same motor network engaged by fully physiological voluntary motor gestures.

  4. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. A neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling, and rotation. Features for detecting the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different directions of the end-effector, a neural network with generalization capability can handle unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the end-effector, and a second feedforward network module estimates the motion from a sequence of images and controls the movements of the end-effector. Combining the two neural networks with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  5. Research and Development of Target Recognition and Location Crawling Platform based on Binocular Vision

    NASA Astrophysics Data System (ADS)

    Xu, Weidong; Lei, Zhu; Yuan, Zhang; Gao, Zhenqing

    2018-03-01

    The application of visual recognition technology to industrial robot pick-and-place operations is one of the key tasks in robotics research. To improve the efficiency and intelligence of material sorting on the production line, and especially to realize the sorting of scattered items, a robot target-recognition and positioning/grasping platform based on binocular vision was researched and developed. Images are collected by a binocular camera and preprocessed; the Harris operator is used to detect corners, the Canny operator to extract edges, and Hough transform with chain-code recognition to identify the target in the image. The platform then obtains the coordinates of each vertex of the target, calculates the spatial position and posture of the target item, determines the information needed for grasping, and transmits it to the robot to control the grasping operation. Finally, this method was applied to the parcel-handling problem in an express-sorting process. The experimental results show that the platform can effectively solve the sorting of loose parts, achieving efficient and intelligent sorting.
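    The corner-detection stage can be sketched with a minimal Harris response computed directly from image gradients; the k value and the 3x3 summation window are conventional defaults, not values from the paper.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the structure tensor summed over a 3x3 window at each pixel."""
    img = img.astype(float)
    iy, ix = np.gradient(img)                 # image gradients
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):                               # 3x3 windowed sum
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Demo: a bright square; its corners should score higher than its edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

    The response is strongly positive at the square's corner (both gradient directions present), negative along its edges, and zero in flat regions; in practice the paper's pipeline would threshold R and take local maxima as corner points.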

  6. Autonomous robot software development using simple software components

    NASA Astrophysics Data System (ADS)

    Burke, Thomas M.; Chung, Chan-Jin

    2004-10-01

    Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.

  7. Preparing for High Technology: Robotics Programs. Research & Development Series No. 233.

    ERIC Educational Resources Information Center

    Ashley, William; And Others

    This guide is one of three developed to provide guidelines, information, and resources useful in planning and developing postsecondary technician training programs in high technology. It is specifically intended for program planners and developers in the initial stages of planning a new program or specialized option in robotics. (Two companion…

  8. Fifth Grade Students' Understanding of Ratio and Proportion in an Engineering Robotics Program

    ERIC Educational Resources Information Center

    Ortiz, Araceli Martinez

    2010-01-01

    The research described in this dissertation explores the impact of utilizing a LEGO-robotics integrated engineering and mathematics program to support fifth grade students' learning of ratios and proportion in an extracurricular program. The research questions guiding this research study were (1) how do students' test results compare for students…

  9. Robotics-Control Technology. Technology Learning Activity. Teacher Edition. Technology Education Series.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This document contains the materials required for presenting an 8-day competency-based technology learning activity (TLA) designed to introduce students in grades 6-10 to advances and career opportunities in the field of robotics-control technology. The guide uses hands-on exploratory experiences into which activities to help students develop…

  10. Damage detection in hazardous waste storage tank bottoms using ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Cobb, Adam C.; Fisher, Jay L.; Bartlett, Jonathan D.; Earnest, Douglas R.

    2018-04-01

    Detecting damage in storage tanks is performed commercially using a variety of techniques. The most commonly used inspection technologies are magnetic flux leakage (MFL), conventional ultrasonic testing (UT), and leak testing. MFL and UT typically involve manual or robotic scanning of a sensor along the metal surfaces to detect cracks or corrosion wall loss. For inspection of the tank bottom, however, the storage tank is commonly emptied to allow interior access for the inspection system. While there are costs associated with emptying a storage tank for inspection that can be justified in some scenarios, there are situations where emptying the tank is impractical. Robotic, submersible systems have been developed for inspecting these tanks, but there are some storage tanks whose contents are so hazardous that even the use of these systems is untenable. Thus, there is a need to develop an inspection strategy that does not require emptying the tank or insertion of the sensor system into the tank. This paper presents a guided wave system for inspecting the bottom of double-shelled storage tanks (DSTs), with the sensor located on the exterior side-wall of the vessel. The sensor used is an electromagnetic acoustic transducer (EMAT) that generates and receives shear-horizontal guided plate waves using magnetostriction principles. The system operates by scanning the sensor around the circumference of the storage tank and sending guided waves into the tank bottom at regular intervals. The data from multiple locations are combined using the synthetic aperture focusing technique (SAFT) to create a color-mapped image of the vessel thickness changes. The target application of the system described is inspection of DSTs located at the Hanford site, which are million-gallon vessels used to store nuclear waste. Other vessels whose exterior walls are accessible would also be candidates for inspection using the described approach. 
    Experimental results are shown from tests on multiple mockups of the DSTs that are being used to develop the sensor system.
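    The SAFT reconstruction described above amounts to delay-and-sum focusing: each image pixel accumulates, from every sensor position, the A-scan sample at that pixel's round-trip travel time. A deliberately small synthetic sketch follows (arbitrary units, a single point reflector, idealized one-sample echoes; none of these details are from the Hanford system).

```python
import numpy as np

def saft_image(scans, positions, grid_x, grid_z, c=1.0, fs=1.0):
    """Delay-and-sum SAFT over a pixel grid (nearest-sample lookup).
    c is the wave speed, fs the sampling rate (both arbitrary here)."""
    img = np.zeros((len(grid_z), len(grid_x)))
    for scan, x0 in zip(scans, positions):
        for ix, x in enumerate(grid_x):
            for iz, z in enumerate(grid_z):
                r = np.hypot(x - x0, z)             # one-way path length
                idx = int(round(2.0 * r / c * fs))  # round-trip sample
                if idx < len(scan):
                    img[iz, ix] += scan[idx]
    return img

# Synthetic data: a point reflector at (x, z) = (2, 3), five sensor stops.
positions = [0.0, 1.0, 2.0, 3.0, 4.0]
scans = []
for x0 in positions:
    scan = np.zeros(16)
    scan[int(round(2.0 * np.hypot(2.0 - x0, 3.0)))] = 1.0  # echo arrival
    scans.append(scan)

grid_x = [0.0, 1.0, 2.0, 3.0, 4.0]
grid_z = [1.0, 2.0, 3.0, 4.0, 5.0]
img = saft_image(scans, positions, grid_x, grid_z)
```

    The echoes add coherently only at the reflector's pixel, which is why combining data from multiple scan locations sharpens the thickness map compared with any single A-scan.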

  11. Depth perception camera for autonomous vehicle applications

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp

    2013-05-01

    An imager is described that can measure the distance from each pixel to the point on the object that is in focus at that pixel. Since it provides numeric distance information from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, eliminating the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at that pixel; light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. Each light guide contains a p-n junction, a pair of contacts along its length, and light-sensing elements along the length. The device uses ambient light, which is coherent only within spherical-shell-shaped light packets one coherence length thick. Each frequency component of the broadband light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.

  12. How do walkers avoid a mobile robot crossing their way?

    PubMed

    Vassallo, Christian; Olivier, Anne-Hélène; Souères, Philippe; Crétual, Armel; Stasse, Olivier; Pettré, Julien

    2017-01-01

    Robots and humans increasingly have to share the same environment. To steer robots safely and conveniently among humans, it is necessary to understand how humans interact with them. This work focuses on collision avoidance between a human and a robot during locomotion. With previous results on human obstacle avoidance in mind, as well as the main principles that guide collision-avoidance strategies, we observe how humans adapt a goal-directed locomotion task when they have to interact with a mobile robot. Our results show differences in the strategy humans use to avoid a robot compared with avoiding another human: humans prefer to give way to the robot, even when they are likely to pass first at the beginning of the interaction. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  14. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and an optional lighting element. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  15. Conference on Space and Military Applications of Automation and Robotics

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Topics addressed include: robotics; deployment strategies; artificial intelligence; expert systems; sensors and image processing; robotic systems; guidance, navigation, and control; aerospace and missile system manufacturing; and telerobotics.

  16. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms applied to images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm locates the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular; moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were arranged radially around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot, so image transformation was required for self-localization. Second, we transformed the omni-directional images into panoramic images, correcting the distortion of the white lines. The interest points that form the corners of the landmarks were then located using the features from accelerated segment test (FAST) algorithm, which examines a circle of sixteen pixels surrounding each corner candidate and serves as a high-speed feature detector at real-time frame rates. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were applied to the corners obtained from the FAST algorithm to localize the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error on a soccer field measuring 600 cm x 400 cm.
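    The trilateration positioning mentioned in this abstract can be illustrated with a short sketch. This is not the authors' implementation; it assumes planar landmark coordinates and noise-free distance estimates, and linearizes the circle equations by subtracting the first from the rest:

```python
import numpy as np

def trilaterate(landmarks, dists):
    """Planar trilateration: solve for (x, y) from distances to 3+ known landmarks.

    Each landmark i gives a circle (x - xi)^2 + (y - yi)^2 = di^2. Subtracting
    the first circle from the others cancels the quadratic terms, leaving a
    linear system solved here by least squares.
    """
    L = np.asarray(landmarks, dtype=float)
    d = np.asarray(dists, dtype=float)
    # Linear system: 2*(Li - L0) . p = d0^2 - di^2 + |Li|^2 - |L0|^2
    A = 2.0 * (L[1:] - L[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(L[1:] ** 2, axis=1) - np.sum(L[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

    With more than three landmarks the same least-squares form averages out small distance errors, which is why over-determined trilateration is common in practice.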

  17. SU-F-BRA-04: Prostate HDR Brachytherapy with Multichannel Robotic System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, F Maria; Podder, T; Yu, Y

    Purpose: High-dose-rate (HDR) brachytherapy is gradually becoming popular in treating patients with prostate cancers. However, placement of the HDR needles at desired locations in the patient is challenging. Application of a robotic system may improve the accuracy of the clinical procedure. This experimental study evaluates the feasibility of using a multichannel robotic system for prostate HDR brachytherapy. Methods: In this experimental study, the robotic system employed was a 6-DOF Multichannel Image-guided Robotic Assistant for Brachytherapy (MIRAB), which was designed and fabricated for prostate seed implantation. The MIRAB has the provision of rotating 16 needles while inserting them. Ten prostate HDR brachytherapy needles were simultaneously inserted using MIRAB into a commercially available prostate phantom. After inserting the needles into the prostate phantom at the desired locations, 2-mm-thick CT slices were obtained for dosimetric planning. An HDR plan was generated using the Oncentra planning system with a total prescription dose of 34 Gy in 4 fractions. Plan quality was evaluated considering dose coverage to the prostate and the planning target volume (PTV), with a 3-mm margin around the prostate, as well as the dose limits to the organs at risk (OARs), following the American Brachytherapy Society (ABS) guidelines. Results: From the CT scan, it was observed that the needles were inserted straight into the desired locations and were adequately spaced and distributed for a clinically acceptable HDR plan. Coverage to the PTV and prostate was about 91% (V100 = 91%) and 96% (V100 = 96%), respectively. Dose to 1 cc of the urethra, rectum, and bladder was within the ABS-specified limits. Conclusion: The MIRAB was able to insert multiple needles simultaneously into the prostate precisely. By controlling the MIRAB to insert all ten needles into the prostate phantom, we achieved robotic HDR brachytherapy successfully.
Further study assessing the system's performance and reliability is in progress.

  18. Proposal of Path Following and Arrival Judgement Methods Using Target Vector for Teleoperation of a Mobile Robot on Uneven Ground by Image Pointing

    NASA Astrophysics Data System (ADS)

    Tamura, Sho; Maeyama, Shoichi

    Rescue robots have been actively developed since the Hanshin-Awaji (Kobe) earthquake. Recently, rescue robots have also been developed to reduce the risk of secondary disasters in NBC terror attacks and critical accidents. Against this background, a project to develop a mobile RT system for collapsed buildings was started, and this research is part of that project. Image pointing is a useful control interface for a rescue robot because it allows the robot to be controlled by a simple operation. However, the conventional method cannot work on rough terrain. In this research, we propose a system that controls the robot so that it arrives at the target position on rough terrain. The system converts the pointed destination into a target vector and controls the 3D-localized robot to follow that vector. Finally, the proposed system was evaluated through remote-control experiments with a mobile robot on a slope, and its feasibility was confirmed.

  19. Mobile robots traversability awareness based on terrain visual sensory data fusion

    NASA Astrophysics Data System (ADS)

    Shirkhodaie, Amir

    2007-04-01

    In this paper, we present methods that significantly improve a robot's awareness of its terrain traversability conditions. Terrain traversability awareness is achieved by associating terrain image appearances from different poses and fusing information extracted from multimodality imaging and range sensor data to localize and cluster environment landmarks. Initially, we describe methods for extracting salient features of the terrain for the purpose of landmark registration from two or more images taken from different via points along the trajectory path of the robot. Image registration is applied as a means of overlaying two or more views of the same terrain scene taken from different viewpoints; the registration geometrically aligns salient landmarks of the two images (the reference and sensed images). A similarity-matching technique is proposed for matching the salient terrain landmarks. Secondly, we present three terrain classifier models, based on rule-based, supervised neural network, and fuzzy logic approaches, for classifying terrain condition under uncertainty and mapping the robot's terrain perception to apt traversability measures. This paper addresses the technical challenges and navigational skill requirements of mobile robots for traversability path planning in natural terrain environments similar to Mars surface terrains. We describe different methods for detecting salient terrain features based on imaging texture analysis techniques, and present three competing techniques for terrain traversability assessment of mobile robots navigating unstructured natural terrain environments: a rule-based terrain classifier, a neural-network-based terrain classifier, and a fuzzy-logic terrain classifier. Each proposed terrain classifier divides a region of natural terrain into finite sub-terrain regions and classifies terrain condition exclusively within each sub-terrain region based on terrain spatial and textural cues.
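    A rule-based terrain classifier of the kind this abstract describes can be sketched in a few lines. The cue names and thresholds below are purely illustrative placeholders, not values from the paper:

```python
def classify_terrain(roughness, slope_deg, obstacle_density):
    """Toy rule-based traversability classifier for one sub-terrain region.

    roughness and obstacle_density are assumed normalized to [0, 1];
    slope_deg is the estimated local slope in degrees. All thresholds
    are hypothetical, chosen only to show the rule-cascade structure.
    """
    if slope_deg > 30.0 or obstacle_density > 0.5:
        return "untraversable"      # hard constraints fail outright
    if roughness > 0.6 or slope_deg > 15.0:
        return "marginal"           # passable but costly for path planning
    return "traversable"
```

    The neural-network and fuzzy-logic classifiers the paper compares replace this hard thresholding with learned or graded memberships over the same cues.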

  20. Remote-controlled vision-guided mobile robot system

    NASA Astrophysics Data System (ADS)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while the vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle-avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data are processed by a high-speed tracking device, which communicates to the computer the X, Y coordinates of blobs along the lane markers. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outdoor test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.

  1. Use of near infrared fluorescence during robot-assisted laparoscopic partial nephrectomy.

    PubMed

    Cornejo-Dávila, V; Nazmy, M; Kella, N; Palmeros-Rodríguez, M A; Morales-Montor, J G; Pacheco-Gahbler, C

    2016-04-01

    Partial nephrectomy is the treatment of choice for T1a tumours. The open approach is still the standard method. Robot-assisted laparoscopic surgery offers advantages that are applicable to partial nephrectomy, such as the use of the Firefly® system with near-infrared fluorescence. To demonstrate the implementation of fluorescence in nephron-sparing surgery. This case concerned a 37-year-old female smoker with obesity. The patient had a right kidney tumour measuring 31 mm, which was found using tomography. She therefore underwent robot-assisted laparoscopic partial nephrectomy, with a warm ischaemia time of 22 minutes and the use of fluorescence with the Firefly® system to guide the resection. There were no complications. The tumour was a pT1aN0M0 renal cell carcinoma, with negative margins. Robot-assisted renal laparoscopic surgery is employed for nephron-sparing surgery, with good oncological and functional results. The combination of the Firefly® technology and intraoperative ultrasound can more accurately delimit the extent of the lesion, increase the negative margins and decrease the ischaemia time. Near-infrared fluorescence in robot-assisted partial nephrectomy is useful for guiding the tumour resection and can potentially improve the oncological and functional results. Copyright © 2015 AEU. Published by Elsevier España, S.L.U. All rights reserved.

  2. Robotic retroperitoneal partial nephrectomy: a step-by-step guide.

    PubMed

    Ghani, Khurshid R; Porter, James; Menon, Mani; Rogers, Craig

    2014-08-01

    To describe a step-by-step guide for successful implementation of the retroperitoneal approach to robotic partial nephrectomy (RPN). PATIENTS AND METHODS: The patient is placed in the flank position and the table fully flexed to increase the space between the 12th rib and the iliac crest. Access to the retroperitoneal space is obtained using a balloon-dilating device. Ports include a 12-mm camera port, two 8-mm robotic ports and a 12-mm assistant port placed in the anterior axillary line cephalad to the anterior superior iliac spine, and 7-8 cm caudal to the ipsilateral robotic port. Positioning and port placement strategies for a successful technique include: (i) docking the robot directly over the patient's head, parallel to the spine; (ii) incision for the camera port ≈1.9 cm (1 fingerbreadth) above the iliac crest, lateral to the triangle of Petit; (iii) Seldinger-technique insertion of a kidney-shaped balloon dilator into the retroperitoneal space; (iv) maximising the distance between all ports; (v) ensuring the camera arm is placed in the outer part of the 'sweet spot'. The retroperitoneal approach to RPN permits direct access to the renal hilum, no need for bowel mobilisation and excellent visualisation of posteriorly located tumours. © 2014 The Authors. BJU International © 2014 BJU International.

  3. A robotic orbital emulator with lidar-based SLAM and AMCL for multiple entity pose estimation

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Xiang, Xingyu; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2018-05-01

    This paper revises and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can produce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic-arm motions. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The planar movements are emulated by the omni-wheeled robots, while the up-down motions are performed by a stepper-motor-controlled ball along a rod (the robotic arm) attached to each robot. Lidar-only measurements are used to estimate the pose information of the multiple robots. SLAM (simultaneous localization and mapping) runs on one robot to generate the map and compute that robot's pose. Based on the SLAM map maintained by that robot, the other robots run the adaptive Monte Carlo localization (AMCL) method to estimate their poses. The controller is designed to guide the robot to follow a given orbit, and controllability is analyzed using a feedback-linearization method. Experiments are conducted to show the convergence of AMCL and the orbit-tracking performance.
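    The AMCL method cited above builds on Monte Carlo (particle filter) localization. A minimal one-dimensional predict-weight-resample cycle, assuming a single range measurement to a wall at a known position rather than the paper's full lidar model, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measured_range, wall_x, noise=0.1):
    """One predict-weight-resample cycle of Monte Carlo localization (1-D sketch).

    particles: array of candidate x positions; control: commanded displacement;
    measured_range: sensed range to a wall at known position wall_x.
    """
    # Predict: apply the motion model with additive Gaussian noise
    particles = particles + control + rng.normal(0.0, noise, particles.shape)
    # Weight: Gaussian likelihood of the range measurement for each particle
    expected = wall_x - particles
    w = np.exp(-0.5 * ((expected - measured_range) / noise) ** 2)
    w /= w.sum()
    # Resample particles in proportion to their weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

    The "adaptive" part of AMCL additionally resizes the particle set on the fly (KLD sampling), which this fixed-size sketch omits.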

  4. Neurosurgical robotic arm drilling navigation system.

    PubMed

    Lin, Chung-Chih; Lin, Hsin-Cheng; Lee, Wen-Yo; Lee, Shih-Tseng; Wu, Chieh-Tsai

    2017-09-01

    The aim of this work was to develop a neurosurgical robotic arm drilling navigation system that provides assistance throughout the complete bone drilling process. The system comprised neurosurgical robotic arm navigation combining robotic and surgical navigation, 3D medical imaging based surgical planning that could identify lesion location and plan the surgical path on 3D images, and automatic bone drilling control that would stop drilling when the bone was to be drilled-through. Three kinds of experiment were designed. The average positioning error deduced from 3D images of the robotic arm was 0.502 ± 0.069 mm. The correlation between automatically and manually planned paths was 0.975. The average distance error between automatically planned paths and risky zones was 0.279 ± 0.401 mm. The drilling auto-stopping algorithm had 0.00% unstopped cases (26.32% in control group 1) and 70.53% non-drilled-through cases (8.42% and 4.21% in control groups 1 and 2). The system may be useful for neurosurgical robotic arm drilling navigation. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Capaciflector-guided mechanisms

    NASA Technical Reports Server (NTRS)

    Vranish, John M. (Inventor)

    1996-01-01

    A plurality of capaciflector proximity sensors, one or more of which may be overlaid on each other, and at least one shield are mounted on a device guided by a robot so as to see a designated surface, hole or raised portion of an object, for example, in three dimensions. Individual current-measuring voltage follower circuits interface the sensors and shield to a common AC signal source. As the device approaches the object, the sensors respond by a change in the currents therethrough. The currents are detected by the respective current-measuring voltage follower circuits with the outputs thereof being fed to a robot controller. The device is caused to move under robot control in a predetermined pattern over the object while directly referencing each other without any offsets, whereupon by a process of minimization of the sensed currents, the device is dithered or wiggled into position for a soft touchdown or contact without any prior contact with the object.

  6. Panorama of Phoenix Solar Panel and Robotic Arm

    NASA Image and Video Library

    2008-06-13

    This panorama image shows NASA's Phoenix Mars Lander's solar panel and the lander's Robotic Arm with a sample in the scoop. The image was taken just before the sample was delivered to the Optical Microscope.

  7. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  8. Emergent of Burden Sharing of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Kusano, Takuya; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has advantages over a single-robot system: it can adapt to various circumstances and offers flexibility across a variety of tasks. In a multi-robot system, the robots must build cooperative relations and act as an organization to attain a shared purpose. Insight can be drawn from the group behavior of insects, which lack advanced individual abilities. For example, ants, a social insect, produce organized activity through interaction by very simple means: while ants communicate with chemical signals, humans communicate by words and gestures. In this paper, we focus on interaction from a psychological viewpoint, using a human emotion model as the parameter underlying the robots' motion planning. The robots were made to perform a two-way task in a test field containing an obstacle. As a result, burden sharing, with roles such as guide or carrier, emerged even from this simple setup.

  9. Robot therapy: a new approach for mental healthcare of the elderly - a mini-review.

    PubMed

    Shibata, Takanori; Wada, Kazuyoshi

    2011-01-01

    Mental healthcare of elderly people is a common problem in advanced countries. Recently, high technology has developed robots for use not only in factories but also for our living environment. In particular, human-interactive robots for psychological enrichment, which provide services by interacting with humans while stimulating their minds, are rapidly spreading. Such robots not only simply entertain but also render assistance, guide, provide therapy, educate, enable communication, and so on. Robot therapy, which uses robots as a substitution for animals in animal-assisted therapy and activity, is a new application of robots and is attracting the attention of many researchers and psychologists. The seal robot named Paro was developed especially for robot therapy and was used at hospitals and facilities for elderly people in several countries. Recent research has revealed that robot therapy has the same effects on people as animal therapy. In addition, it is being recognized as a new method of mental healthcare for elderly people. In this mini review, we introduce the merits and demerits of animal therapy. Then we explain the human-interactive robot for psychological enrichment, the required functions for therapeutic robots, and the seal robot. Finally, we provide examples of robot therapy for elderly people, including dementia patients. Copyright © 2010 S. Karger AG, Basel.

  10. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides a new elicitation for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of microphone array. Furthermore, this paper presents a heading direction based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within the distance of 2 m, while two hearing robots can quickly localize and track the olfactory robot in 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
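    The time delay estimation (TDE) used by the hearing robots is commonly done by locating the peak of the cross-correlation between the two microphone signals. The sketch below assumes equal-length, discretely sampled signals and is not the authors' implementation:

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Estimate the time delay (seconds) of sig_b relative to sig_a.

    The delay is the lag at which the full cross-correlation of the two
    signals peaks, converted to seconds by the sampling rate fs.
    A positive result means sig_b lags sig_a.
    """
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs
```

    With a known microphone spacing, the delay converts to an angle of arrival via the speed of sound, which is how a microphone-array geometry yields the source bearing.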

  11. Mapping Robots to Therapy and Educational Objectives for Children with Autism Spectrum Disorder.

    PubMed

    Huijnen, Claire A G J; Lexis, Monique A S; Jansens, Rianne; de Witte, Luc P

    2016-06-01

    The aim of this study was to increase knowledge on therapy and educational objectives professionals work on with children with autism spectrum disorder (ASD) and to identify corresponding state of the art robots. Focus group sessions (n = 9) with ASD professionals (n = 53) from nine organisations were carried out to create an objectives overview, followed by a systematic literature study to identify state of the art robots matching these objectives. Professionals identified many ASD objectives (n = 74) in 9 different domains. State of the art robots addressed 24 of these objectives in 8 domains. Robots can potentially be applied to a large scope of objectives for children with ASD. This objectives overview functions as a base to guide development of robot interventions for these children.

  12. Swarmie User Manual: A Rover Used for Multi-agent Swarm Research

    NASA Technical Reports Server (NTRS)

    Montague, Gilbert

    2014-01-01

    The ability to create multiple functional yet cost effective robots is crucial for conducting swarming robotics research. The Center Innovation Fund (CIF) swarming robotics project is a collaboration among the KSC Granular Mechanics and Regolith Operations (GMRO) group, the University of New Mexico Biological Computation Lab, and the NASA Ames Intelligent Robotics Group (IRG) that uses rovers, dubbed "Swarmies", as test platforms for genetic search algorithms. This fall, I assisted in the development of the software modules used on the Swarmies and created this guide to provide thorough instructions on how to configure your workspace to operate a Swarmie both in simulation and out in the field.

  13. A numerical study of sensory-guided multiple views for improved object identification

    NASA Astrophysics Data System (ADS)

    Blakeslee, B. A.; Zelnio, E. G.; Koditschek, D. E.

    2014-06-01

    We explore the potential on-line adjustment of sensory controls for improved object identification and discrimination in the context of a simulated high resolution camera system carried onboard a maneuverable robotic platform that can actively choose its observational position and pose. Our early numerical studies suggest the significant efficacy and enhanced performance achieved by even very simple feedback-driven iteration of the view, in contrast to identification from a fixed pose uninformed by any active adaptation. Specifically, we contrast the discriminative performance of the same conventional classification system when informed by: a random glance at a vehicle; two random glances at a vehicle; or a random glance followed by a guided second look. After each glance, edge detection algorithms isolate the most salient features of the image and template matching is performed through the use of the Hausdorff distance, comparing the simulated sensed images with reference images of the vehicles. We present initial simulation statistics that overwhelmingly favor the third scenario. We conclude with a sketch of our near-future steps in this study, which will entail: the incorporation of more sophisticated image processing and template matching algorithms; more complex discrimination tasks such as distinguishing between two similar vehicles or vehicles in motion; more realistic models of the observer's mobility, including platform dynamics and eventually environmental constraints; and expanding the sensing task beyond the identification of a specified object selected from a pre-defined library of alternatives.
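    The Hausdorff distance used here for template matching can be computed directly for small point sets. This brute-force sketch assumes 2-D feature points (e.g. detected edge pixels) and is an illustration, not the paper's exact matching pipeline:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets.

    The directed distance h(A, B) is the largest distance from a point in A
    to its nearest neighbor in B; the symmetric distance is the maximum of
    the two directed distances.
    """
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    # Pairwise Euclidean distance matrix, shape (len(A), len(B))
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    h_ab = D.min(axis=1).max()  # directed A -> B
    h_ba = D.min(axis=0).max()  # directed B -> A
    return max(h_ab, h_ba)
```

    In practice, matching implementations often use a partial (rank-based) Hausdorff distance to tolerate outlier edge points; the symmetric form above is the textbook definition.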

  14. High-frequency imaging radar for robotic navigation and situational awareness

    NASA Astrophysics Data System (ADS)

    Thomas, David J.; Luo, Changan; Knox, Robert

    2011-05-01

    With the increasing availability of high-frequency radar components, imaging radar for mobile robotic applications is now practical. Navigation, ODOA, situational awareness and safety applications can be supported in small, lightweight packaging. Radar has the additional advantage of being able to sense through aerosols, smoke and dust that can be difficult for many optical systems. The ability to directly measure the range rate of an object is also an advantage in radar applications. This paper explores the applicability of high-frequency imaging radar for mobile robotics and examines a W-band 360-degree imaging radar prototype. Indoor and outdoor performance data are analyzed and evaluated for applicability to navigation and situational awareness.

  15. Parallel robot for micro assembly with integrated innovative optical 3D-sensor

    NASA Astrophysics Data System (ADS)

    Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer

    2002-10-01

    Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase the accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using one single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.

  16. Autonomous Fault Detection for Performance Bugs in Component Based Robotic Systems

    DTIC Science & Technology

    2016-12-01

    platform performs a modified version of the restaurant task from the RoboCup@Home competition 2015 [20]. Here, an operator first guides the robot around a...

  17. Velocity-curvature patterns limit human-robot physical interaction

    PubMed Central

    Maurice, Pauline; Huber, Meghan E.; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration. PMID:29744380
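    The two-thirds power law mentioned above relates movement speed to path curvature, v = K * kappa^(-1/3) (equivalently, angular velocity proportional to kappa^(2/3)). For the elliptic path used in the study, the resulting speed profile can be sketched as follows, with an assumed gain K; this is an illustration of the law, not the authors' robot controller:

```python
import numpy as np

def two_thirds_speed(a, b, theta, K=1.0):
    """Speed along the ellipse x = a*cos(t), y = b*sin(t) under the 2/3 power law.

    The law prescribes v = K * kappa^(-1/3), where kappa is the local path
    curvature; for this ellipse, kappa = a*b / (a^2 sin^2 t + b^2 cos^2 t)^(3/2).
    """
    kappa = a * b / (a**2 * np.sin(theta) ** 2 + b**2 * np.cos(theta) ** 2) ** 1.5
    return K * kappa ** (-1.0 / 3.0)
```

    On a circle (a = b) the curvature is constant, so the law predicts constant speed; on a flattened ellipse it predicts slowing at the high-curvature ends, which is the biological profile participants found easier to follow.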

  18. Velocity-curvature patterns limit human-robot physical interaction.

    PubMed

    Maurice, Pauline; Huber, Meghan E; Hogan, Neville; Sternad, Dagmar

    2018-01-01

    Physical human-robot collaboration is becoming more common, both in industrial and service robotics. Cooperative execution of a task requires intuitive and efficient interaction between both actors. For humans, this means being able to predict and adapt to robot movements. Given that natural human movement exhibits several robust features, we examined whether human-robot physical interaction is facilitated when these features are considered in robot control. The present study investigated how humans adapt to biological and non-biological velocity patterns in robot movements. Participants held the end-effector of a robot that traced an elliptic path with either biological (two-thirds power law) or non-biological velocity profiles. Participants were instructed to minimize the force applied on the robot end-effector. Results showed that the applied force was significantly lower when the robot moved with a biological velocity pattern. With extensive practice and enhanced feedback, participants were able to decrease their force when following a non-biological velocity pattern, but never reached forces below those obtained with the 2/3 power law profile. These results suggest that some robust features observed in natural human movements are also a strong preference in guided movements. Therefore, such features should be considered in human-robot physical collaboration.

  19. Image Mapping and Visual Attention on the Sensory Ego-Sphere

    NASA Technical Reports Server (NTRS)

    Fleming, Katherine Achim; Peters, Richard Alan, II

    2012-01-01

    The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and that will enable it to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities form a stochastic network. Then given a task, the robot finds a path in the network that leads from its current state to the goal. The SES provides a short-term memory for the cognitive functions of the robot, association of sensory and motor data via spatio-temporal coincidence, direction of the attention of the robot, navigation through spatial localization with respect to known or discovered landmarks, and structured data sharing between the robot and human team members, the individuals in multi-robot teams, or with a C3 center.
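    The core indexing operation of an egocentric tessellated sphere, storing each sensory event at the node nearest its direction, can be sketched as follows. The 12-vertex icosahedron, the `nearest_node` helper, and the event dictionary are illustrative stand-ins; the actual SES uses a finer geodesic tessellation and its own interface:

```python
import itertools
import numpy as np

# Illustrative sketch of the SES indexing idea: sensory events tagged with a
# direction are stored at the nearest node of a tessellated unit sphere.
# The 12 icosahedron vertices stand in for the SES's finer geodesic mesh.
phi = (1 + 5 ** 0.5) / 2
raw = []
for s1, s2 in itertools.product((-1, 1), repeat=2):
    raw += [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s1 * phi, 0, s2)]
verts = np.array(raw, dtype=float)
verts /= np.linalg.norm(verts, axis=1, keepdims=True)

def nearest_node(direction):
    """Index of the tessellation node closest to a sensing direction."""
    d = np.asarray(direction, dtype=float)
    return int(np.argmax(verts @ (d / np.linalg.norm(d))))

# Register a (hypothetical) visual event at the node nearest its direction.
ses = {}  # node index -> list of sensory events
ses.setdefault(nearest_node([0.1, 0.9, 0.2]), []).append("red ball")
```

    Retrieval by direction then reduces to the same nearest-node lookup, which is what lets APs found on the composite be matched quickly to the collocated full-resolution images.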

  20. Evaluation of surgical strategy of conventional vs. percutaneous robot-assisted spinal trans-pedicular instrumentation in spondylodiscitis.

    PubMed

    Keric, Naureen; Eum, David J; Afghanyar, Feroz; Rachwal-Czyzewicz, Izabela; Renovanz, Mirjam; Conrad, Jens; Wesp, Dominik M A; Kantelhardt, Sven R; Giese, Alf

    2017-03-01

    Robot-assisted percutaneous insertion of pedicle screws is a recent technique demonstrating high accuracy. The optimal treatment for spondylodiscitis is still a matter of debate. We performed a retrospective cohort study on surgical patients treated with pedicle screw/rod placement alone without the application of intervertebral cages. In this collective, we compare conventional open to a further minimalized percutaneous robot-assisted spinal instrumentation, avoiding a direct contact of implants and infectious focus. 90 records and CT scans of patients treated by dorsal transpedicular instrumentation of the infected segments with and without decompression and antibiotic therapy were analysed for clinical and radiological outcome parameters. 24 patients were treated by free-hand fluoroscopy-guided surgery (121 screws), and 66 patients were treated by percutaneous robot-assisted spinal instrumentation (341 screws). Accurate screw placement was confirmed in 90 % of robot-assisted and 73.5 % of free-hand placed screws. Implant revision due to misplacement was necessary in 4.95 % of the free-hand group compared to 0.58 % in the robot-assisted group. The average intraoperative X-ray exposure per case was 0.94 ± 1.04 min in the free-hand group vs. 0.4 ± 0.16 min in the percutaneous group (p = 0.000). Intraoperative adverse events were observed in 12.5 % of free-hand placed pedicle screws and 6.1 % of robot-assisted screws. The mean postoperative hospital stay was 18.1 ± 12.9 days in the free-hand group and 13.8 ± 5.6 days in the percutaneous group (p = 0.012). This study demonstrates that the robot-guided insertion of pedicle screws is a safe and effective procedure in lumbar and thoracic spondylodiscitis with higher accuracy of implant placement, lower radiation dose, and decreased complication rates. Percutaneous spinal dorsal instrumentation seems to be sufficient to treat lumbar and thoracic spondylodiscitis.

  1. Spherical Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Developed largely through a Small Business Innovation Research contract through Langley Research Center, Interactive Picture Corporation's IPIX technology provides spherical photography with a panoramic, 360-degree field of view. NASA found the technology appropriate for use in guiding space robots, in the space shuttle and space station programs, as well as research in cryogenic wind tunnels and for remote docking of spacecraft. Images of any location are captured in their entirety in a 360-degree immersive digital representation. The viewer can navigate to any desired direction within the image. Several car manufacturers already use IPIX to give viewers a look at their latest line-up of automobiles. Another application is for non-invasive surgeries. By using OmniScope, surgeons can look more closely at various parts of an organ with medical viewing instruments now in use. Potential applications of IPIX technology include viewing of homes for sale, hotel accommodations, museum sites, news events, and sports stadiums.

  2. Future of operating rooms.

    PubMed

    Reijnen, Michel M P J; Zeebregts, Clark J; Meijerink, Wilhelmus J H J

    2005-01-01

    Operating-room design has not changed significantly since the modern era of surgery began. Minimally invasive endoscopic procedures and the evolution of technology will affect operating-room design in the near future. Poor ergonomics has always been one of the major drawbacks of endoscopic surgery. Use of retractable arms and monitors will improve ergonomics of the operating team. Developments in telecommunication will allow surgeons to communicate with colleagues and experts during the procedure in virtually any location around the world, which increases teaching possibilities and procedural safety. Introduction and further development of intraoperative imaging, including real-time, three-dimensional (3-D) reconstructions of the patient, and computer-aided surgery offer surgeons the opportunity to train the planned surgical procedure. Moreover, they will improve control and supervision of the procedure in learning situations. Over the last decade, robots have made their introduction into operating rooms. They improve control over the operating-room environment and will facilitate the performance of more complex procedures. However, high costs and lack of force feedback remain their major drawbacks. Improvements of robotic techniques and their implementation into the operating rooms will further guide their design into highly specialized operating units.

  3. A Third Arm for the Surgeon

    NASA Technical Reports Server (NTRS)

    1994-01-01

    In laparoscopic surgery, tiny incisions are made in the patient's body and a laparoscope (an optical tube with a camera at the end) is inserted. The camera's image is projected onto two video screens, whose views guide the surgeon through the procedure. AESOP, a medical robot developed by Computer Motion, Inc. with NASA assistance, eliminates the need for a human assistant to operate the camera. The surgeon uses a foot pedal control to move the device, allowing him to use both hands during the surgery. Miscommunication is avoided; AESOP's movement is smooth and steady, and the memory vision is invaluable. Operations can be completed more quickly, and the patient spends less time under anesthesia. AESOP has been approved by the FDA.

  4. Poster - Thurs Eve-12: A needle-positioning robot co-registered with volumetric x-ray micro-computed tomography images for minimally-invasive small-animal interventions.

    PubMed

    Waspe, A C; Holdsworth, D W; Lacefield, J C; Fenster, A

    2008-07-01

    Preclinical research protocols often require the delivery of biological substances to specific targets in small animal disease models. To target biologically relevant locations in mice accurately, the needle positioning error needs to be < 200 μm. If targeting is inaccurate, experimental results can be inconclusive or misleading. We have developed a robotic manipulator that is capable of positioning a needle with a mean error < 100 μm. An apparatus and method were developed for integrating the needle-positioning robot with volumetric micro-computed tomography image guidance for interventions in small animals. Accurate image-to-robot registration is critical for integration as it enables targets identified in the image to be mapped to physical coordinates inside the animal. Registration is accomplished by injecting barium sulphate into needle tracks as the robot withdraws the needle from target points in a tissue-mimicking phantom. Registration accuracy is therefore affected by the positioning error of the robot and is assessed by measuring the point-to-line fiducial and target registration errors (FRE, TRE). Centroid points along cross-sectional slices of the track are determined using region growing segmentation followed by application of a center-of-mass algorithm. The centerline points are registered to needle trajectories in robot coordinates by applying an iterative closest point algorithm between points and lines. Implementing this procedure with four fiducial needle tracks produced a point-to-line FRE and TRE of 246 ± 58 μm and 194 ± 18 μm, respectively. The proposed registration technique produced a TRE < 200 μm, in the presence of robot positioning error, meeting design specification. © 2008 American Association of Physicists in Medicine.
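    The point-to-line error metric behind the reported FRE and TRE can be sketched directly. The track points, line parameters, and `point_to_line_dist` helper below are hypothetical, and the iterative-closest-point step that aligns image and robot coordinates is omitted:

```python
import numpy as np

def point_to_line_dist(p, a, d):
    """Distance from point p to the line through a with unit direction d."""
    d = d / np.linalg.norm(d)
    r = p - a
    return np.linalg.norm(r - (r @ d) * d)

# Hypothetical centreline points from a segmented barium track (already
# mapped into robot coordinates) versus the commanded needle trajectory.
track_pts = np.array([[0.05, 0.02, z] for z in np.linspace(0.0, 10.0, 6)])
line_origin = np.zeros(3)
line_dir = np.array([0.0, 0.0, 1.0])

dists = [point_to_line_dist(p, line_origin, line_dir) for p in track_pts]
fre = float(np.sqrt(np.mean(np.square(dists))))  # point-to-line RMS error
print(fre)  # ≈ 0.0539: every point shares the same radial offset here
```

    In the actual procedure the same distance is what the iterative closest point algorithm minimizes when registering the segmented centerline points to the needle trajectories in robot coordinates.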

  5. Advanced Electronic Systems. Curriculum Guide for Technology Education.

    ERIC Educational Resources Information Center

    Patrick, Dale R.

    This curriculum for a 1-semester or 1-year course in electronics is designed to take students from basic through advanced electronic systems. It covers several electronic areas, such as digital electronics, communication electronics, industrial process control, instrumentation, programmable controllers, and robotics. The guide contains…

  6. Robotics research projects report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsia, T.C.

    The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)

  7. Cellular-level surgery using nano robots.

    PubMed

    Song, Bo; Yang, Ruiguo; Xi, Ning; Patterson, Kevin Charles; Qu, Chengeng; Lai, King Wai Chiu

    2012-12-01

    The atomic force microscope (AFM) is a popular instrument for studying the nano world. AFM is naturally suitable for imaging living samples and measuring mechanical properties. In this article, we propose a new concept of an AFM-based nano robot that can be applied for cellular-level surgery on living samples. The nano robot has multiple functions of imaging, manipulation, characterizing mechanical properties, and tracking. In addition, the technique of tip functionalization gives the nano robot the ability to deliver a drug precisely at a local site. Therefore, the nano robot can be used for conducting complicated nano surgery on living samples, such as cells and bacteria. Moreover, to provide a user-friendly interface, the software in this nano robot provides a "videolized" visual feedback for monitoring the dynamic changes on the sample surface. Both the operation of nano surgery and observation of the surgery results can be simultaneously achieved. This nano robot can be easily integrated with extra modules that have the potential applications of characterizing other properties of samples such as local conductance and capacitance.

  8. The commercialization of robotic surgery: unsubstantiated marketing of gynecologic surgery by hospitals.

    PubMed

    Schiavone, Maria B; Kuo, Eugenia C; Naumann, R Wendel; Burke, William M; Lewin, Sharyn N; Neugut, Alfred I; Hershman, Dawn L; Herzog, Thomas J; Wright, Jason D

    2012-09-01

    We analyzed the content, quality, and accuracy of information provided on hospital web sites about robotic gynecologic surgery. An analysis of hospitals with more than 200 beds from a selection of states was performed. Hospital web sites were analyzed for the content and quality of data regarding robotic-assisted surgery. Among 432 hospitals, the web sites of 192 (44.4%) contained marketing for robotic gynecologic surgery. Stock images (64.1%) and text (24.0%) derived from the robot manufacturer were frequent. Although most sites reported improved perioperative outcomes, limitations of robotics including cost, complications, and operative time were discussed only 3.7%, 1.6%, and 3.7% of the time, respectively. Only 47.9% of the web sites described a comparison group. Marketing of robotic gynecologic surgery is widespread. Much of the content is not based on high-quality data, fails to present alternative procedures, and relies on stock text and images. Copyright © 2012 Mosby, Inc. All rights reserved.

  9. The Evolution of Image-Free Robotic Assistance in Unicompartmental Knee Arthroplasty.

    PubMed

    Lonner, Jess H; Moretti, Vincent M

    2016-01-01

    Semiautonomous robotic technology has been introduced to optimize accuracy of bone preparation, implant positioning, and soft tissue balance in unicompartmental knee arthroplasty (UKA), with the expectation that there will be a resultant improvement in implant durability and survivorship. Currently, roughly one-fifth of UKAs in the US are being performed with robotic assistance, and it is anticipated that there will be substantial growth in market penetration of robotics over the next decade. First-generation robotic technology substantially improved implant position compared to conventional methods; however, high capital costs, uncertainty regarding the value of advanced technologies, and the need for preoperative computed tomography (CT) scans were barriers to broader adoption. Newer image-free semiautonomous robotic technology optimizes both implant position and soft tissue balance, without the need for preoperative CT scans and with pricing and portability that make it suitable for use in an ambulatory surgery center setting, where approximately 40% of these systems are currently being utilized. This article will review the robotic experience for UKA, including rationale, system descriptions, and outcomes.

  10. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions.

    PubMed

    Waspe, Adam C; McErlain, David D; Pitelka, Vasek; Holdsworth, David W; Lacefield, James C; Fenster, Aaron

    2010-04-01

    Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 μm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Registration using four fiducial needle tracks and one target track produced an FRE and a TRE of 96 and 210 μm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 ± 113 μm. The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.

  11. Integration and evaluation of a needle-positioning robot with volumetric microcomputed tomography image guidance for small animal stereotactic interventions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waspe, Adam C.; McErlain, David D.; Pitelka, Vasek

    Purpose: Preclinical research protocols often require insertion of needles to specific targets within small animal brains. To target biologically relevant locations in rodent brains more effectively, a robotic device has been developed that is capable of positioning a needle along oblique trajectories through a single burr hole in the skull under volumetric microcomputed tomography (micro-CT) guidance. Methods: An x-ray compatible stereotactic frame secures the head throughout the procedure using a bite bar, nose clamp, and ear bars. CT-to-robot registration enables structures identified in the image to be mapped to physical coordinates in the brain. Registration is accomplished by injecting a barium sulfate contrast agent as the robot withdraws the needle from predefined points in a phantom. Registration accuracy is affected by the robot-positioning error and is assessed by measuring the surface registration error for the fiducial and target needle tracks (FRE and TRE). This system was demonstrated in situ by injecting 200 μm tungsten beads into rat brains along oblique trajectories through a single burr hole on the top of the skull under micro-CT image guidance. Postintervention micro-CT images of each skull were registered with preintervention high-field magnetic resonance images of the brain to infer the anatomical locations of the beads. Results: Registration using four fiducial needle tracks and one target track produced a FRE and a TRE of 96 and 210 μm, respectively. Evaluation with tissue-mimicking gelatin phantoms showed that locations could be targeted with a mean error of 154 ± 113 μm. Conclusions: The integration of a robotic needle-positioning device with volumetric micro-CT image guidance should increase the accuracy and reduce the invasiveness of stereotactic needle interventions in small animals.

  12. JPL-20170926-TECHf-0001-Robot Descends into Alaska Moulin

    NASA Image and Video Library

    2017-09-26

    JPL engineer Andy Klesh lowers a robotic submersible into a moulin. Klesh and JPL's John Leichty used robots and probes to explore the Matanuska Glacier in Alaska this past July. Image Credit: NASA/JPL-Caltech

  13. A low-cost, high-field-strength magnetic resonance imaging-compatible actuator.

    PubMed

    Secoli, Riccardo; Robinson, Matthew; Brugnoli, Michele; Rodriguez y Baena, Ferdinando

    2015-03-01

    Performing minimally invasive surgical interventions with the aid of robotic systems inside a magnetic resonance imaging scanner offers significant advantages compared to conventional surgery. However, despite the numerous exciting potential applications of this technology, the introduction of magnetic resonance imaging-compatible robotics has been hampered by safety, reliability and cost concerns: the robots should not be attracted by the strong magnetic field of the scanner and should operate reliably in the field without causing distortion to the scan data. Development of non-conventional sensors and/or actuators is thus required to meet these strict operational and safety requirements. These demands commonly result in expensive actuators, which means that cost effectiveness remains a major challenge for such robotic systems. This work presents a low-cost, high-field-strength magnetic resonance imaging-compatible actuator: a pneumatic stepper motor which is controllable in open loop or closed loop, along with a rotary encoder, both fully manufactured in plastic, which are shown to perform reliably via a set of in vitro trials while generating negligible artifacts when imaged within a standard clinical scanner. © IMechE 2015.

  14. Rehabilitation-triggered cortical plasticity after stroke: in vivo imaging at multiple scales (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Allegra Mascaro, Anna Letizia; Conti, Emilia; Lai, Stefano; Spalletti, Cristina; Di Giovanna, Antonino Paolo; Alia, Claudia; Panarese, Alessandro; Sacconi, Leonardo; Micera, Silvestro; Caleo, Matteo; Pavone, Francesco S.

    2017-02-01

    Neurorehabilitation protocols based on the use of robotic devices provide a highly repeatable therapy and have recently shown promising clinical results. Little is known about how rehabilitation molds the brain to promote motor recovery of the affected limb. We used a custom-made robotic platform that provides quantitative assessment of forelimb function in a retraction test. Complementary imaging techniques gave us access to the multiple facets of robotic rehabilitation-induced cortical plasticity after unilateral photothrombotic stroke in the mouse primary motor cortex (caudal forelimb area, CFA). First, we analyzed structural features of vasculature and dendritic reshaping in the peri-infarct area with two-photon fluorescence microscopy. Longitudinal analysis of dendritic branches and spines of pyramidal neurons suggests that robotic rehabilitation promotes the stabilization of peri-infarct cortical excitatory circuits, which is not accompanied by consistent vascular reorganization towards pre-stroke conditions. To investigate if this structural stabilization was linked to functional remapping, we performed mesoscale wide-field imaging on GCaMP6 mice performing the motor task on the robotic platform. We revealed temporal and spatial features of the motor-triggered cortical activation, shining new light on rehabilitation-induced functional remapping of the ipsilesional cortex. Finally, by using an all-optical approach that combines optogenetic activation of the contralesional hemisphere and wide-field functional imaging of the peri-infarct area, we dissected the effect of robotic rehabilitation on inter-hemispheric cortico-cortical connectivity.

  15. Teleoperation of steerable flexible needles by combining kinesthetic and vibratory feedback.

    PubMed

    Pacchierotti, Claudio; Abayazid, Momen; Misra, Sarthak; Prattichizzo, Domenico

    2014-01-01

    Needle insertion in soft-tissue is a minimally invasive surgical procedure that demands high accuracy. In this respect, robotic systems with autonomous control algorithms have been exploited as the main tool to achieve high accuracy and reliability. However, for reasons of safety and responsibility, autonomous robotic control is often not desirable. Therefore, it is necessary to focus also on techniques enabling clinicians to directly control the motion of the surgical tools. In this work, we address that challenge and present a novel teleoperated robotic system able to steer flexible needles. The proposed system tracks the position of the needle using an ultrasound imaging system and computes the needle's ideal position and orientation to reach a given target. The master haptic interface then provides the clinician with mixed kinesthetic-vibratory navigation cues to guide the needle toward the computed ideal position and orientation. Twenty participants carried out an experiment of teleoperated needle insertion into a soft-tissue phantom, considering four different experimental conditions. Participants were provided with either mixed kinesthetic-vibratory feedback or mixed kinesthetic-visual feedback. Moreover, we considered two different ways of computing the ideal position and orientation of the needle: with or without set-points. Vibratory feedback was found more effective than visual feedback in conveying navigation cues, with a mean targeting error of 0.72 mm when using set-points, and of 1.10 mm without set-points.

  16. Attitudes towards health-care robots in a retirement village.

    PubMed

    Broadbent, Elizabeth; Tamagawa, Rie; Patience, Anna; Knock, Brett; Kerse, Ngaire; Day, Karen; MacDonald, Bruce A

    2012-06-01

    This study investigated the attitudes and preferences of staff, residents and relatives of residents in a retirement village towards a health-care robot. Focus groups were conducted with residents, managers and caregivers, and questionnaires were collected from 32 residents, 30 staff and 27 relatives of residents. The most popular robot tasks were detection of falls and calling for help, lifting, and monitoring location. Robot functionality was more important than appearance. Concerns included the loss of jobs and personal care, while perceived benefits included allowing staff to spend quality time with residents, and helping residents with self-care. Residents showed a more positive attitude towards robots than both staff and relatives. These results provide an initial guide for the tasks and appearance appropriate for a robot to provide assistance in aged care facilities and highlight concerns. © 2011 The Authors. Australasian Journal on Ageing © 2011 ACOTA.

  17. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  18. The MVACS Robotic Arm Camera

    NASA Astrophysics Data System (ADS)

    Keller, H. U.; Hartwig, H.; Kramm, R.; Koschny, D.; Markiewicz, W. J.; Thomas, N.; Fernades, M.; Smith, P. H.; Reynolds, R.; Lemmon, M. T.; Weinberg, J.; Marcialis, R.; Tanner, R.; Boss, B. J.; Oquest, C.; Paige, D. A.

    2001-08-01

    The Robotic Arm Camera (RAC) is one of the key instruments newly developed for the Mars Volatiles and Climate Surveyor payload of the Mars Polar Lander. This lightweight instrument employs a front lens with variable focus range and takes images at distances from 11 mm (image scale 1:1) to infinity. Color images with a resolution of better than 50 μm can be obtained to characterize the Martian soil. Spectral information of nearby objects is retrieved through illumination with blue, green, and red lamp sets. The design and performance of the camera are described in relation to the science objectives and operation. The RAC uses the same CCD detector array as the Surface Stereo Imager and shares the readout electronics with this camera. The RAC is mounted at the wrist of the Robotic Arm and can characterize the contents of the scoop, the samples of soil fed to the Thermal Evolved Gas Analyzer, the Martian surface in the vicinity of the lander, and the interior of trenches dug out by the Robotic Arm. It can also be used to take panoramic images and to retrieve stereo information with an effective baseline surpassing that of the Surface Stereo Imager by about a factor of 3.

  19. Biologically inspired robots elicit a robust fear response in zebrafish

    NASA Astrophysics Data System (ADS)

    Ladu, Fabrizio; Bartolini, Tiziana; Panitz, Sarah G.; Butail, Sachit; Macrì, Simone; Porfiri, Maurizio

    2015-03-01

    We investigate the behavioral response of zebrafish to three fear-evoking stimuli. In a binary choice test, zebrafish are exposed to a live allopatric predator, a biologically-inspired robot, and a computer-animated image of the live predator. A target tracking algorithm is developed to score zebrafish behavior. Unlike computer-animated images, the robotic and live predator elicit a robust avoidance response. Importantly, the robotic stimulus elicits more consistent inter-individual responses than the live predator. Results from this effort are expected to aid in hypothesis-driven studies on zebrafish fear response, by offering a valuable approach to maximize data-throughput and minimize animal subjects.
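    Scoring a binary choice test typically reduces to a time-based index over the tracked positions. The `avoidance_index` function and the normalized tank coordinates below are an illustrative sketch, not the paper's tracking algorithm:

```python
import numpy as np

# Hypothetical scoring of the binary choice test: the stimulus occupies one
# side of the tank, and avoidance is the fraction of tracked frames the fish
# spends in the opposite half (tank x-span normalized to [0, 1]).
def avoidance_index(x_track, stimulus_side="right"):
    x = np.asarray(x_track, dtype=float)
    far = x < 0.5 if stimulus_side == "right" else x > 0.5
    return far.mean()

# 100 tracked frames: 80 on the far side, 20 on the stimulus side.
x = np.concatenate([np.full(80, 0.2), np.full(20, 0.8)])
print(avoidance_index(x))  # → 0.8
```

    Comparing such indices across the live, robotic, and computer-animated conditions is what makes the consistency of inter-individual responses quantifiable.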

  20. Miniature in vivo robotics and novel robotic surgical platforms.

    PubMed

    Shah, Bhavin C; Buettner, Shelby L; Lehman, Amy C; Farritor, Shane M; Oleynikov, Dmitry

    2009-05-01

    Robotic surgical systems, such as the da Vinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, California), have revolutionized laparoscopic surgery but are limited by large size, increased costs, and limitations in imaging. Miniature in vivo robots are being developed that are inserted entirely into the peritoneal cavity for laparoscopic and natural orifice transluminal endoscopic surgical (NOTES) procedures. In the future, miniature camera robots and microrobots should be able to provide a mobile viewing platform. This article discusses the current state of miniature robotics and novel robotic surgical platforms and the development of future robotic technology for general surgery and urology.

  1. Stereo Image Ranging For An Autonomous Robot Vision System

    NASA Astrophysics Data System (ADS)

    Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven

    1985-12-01

    The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and the position of each point in three-dimensional space can then be calculated from the offset of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics to relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.

  2. A design of endoscopic imaging system for hyper long pipeline based on wheeled pipe robot

    NASA Astrophysics Data System (ADS)

    Zheng, Dongtian; Tan, Haishu; Zhou, Fuqiang

    2017-03-01

    An endoscopic imaging system for hyper-long pipelines is designed to acquire inner-surface images ahead of subsequent defect detection and measurement. The system consists of a structured-light sensor, a wheeled pipe robot, and a control system. The sensor is mounted at the front of the vehicle body, and the control system, organized as upper and lower computers, is at the tail. The sensor is translated and scanned in three steps: walking, lifting, and scanning, so that inner-surface images can be acquired at multiple positions and from different angles. Imaging experiments show that, compared with traditional imaging systems, this system transmits over longer distances, acquires images from more diverse angles, and produces more comprehensive results, laying an important foundation for later inner-surface vision measurement.

  3. AAPM and GEC-ESTRO guidelines for image-guided robotic brachytherapy: report of Task Group 192.

    PubMed

    Podder, Tarun K; Beaulieu, Luc; Caldwell, Barrett; Cormack, Robert A; Crass, Jostin B; Dicker, Adam P; Fenster, Aaron; Fichtinger, Gabor; Meltsner, Michael A; Moerland, Marinus A; Nath, Ravinder; Rivard, Mark J; Salcudean, Tim; Song, Danny Y; Thomadsen, Bruce R; Yu, Yan

    2014-10-01

    In the last decade, there have been significant developments into integration of robots and automation tools with brachytherapy delivery systems. These systems aim to improve the current paradigm by executing higher precision and accuracy in seed placement, improving calculation of optimal seed locations, minimizing surgical trauma, and reducing radiation exposure to medical staff. Most of the applications of this technology have been in the implantation of seeds in patients with early-stage prostate cancer. Nevertheless, the techniques apply to any clinical site where interstitial brachytherapy is appropriate. In consideration of the rapid developments in this area, the American Association of Physicists in Medicine (AAPM) commissioned Task Group 192 to review the state-of-the-art in the field of robotic interstitial brachytherapy. This is a joint Task Group with the Groupe Européen de Curiethérapie-European Society for Radiotherapy & Oncology (GEC-ESTRO). All developed and reported robotic brachytherapy systems were reviewed. Commissioning and quality assurance procedures for the safe and consistent use of these systems are also provided. Manual seed placement techniques with a rigid template have an estimated in vivo accuracy of 3-6 mm. In addition to the placement accuracy, factors such as tissue deformation, needle deviation, and edema may result in a delivered dose distribution that differs from the preimplant or intraoperative plan. However, real-time needle tracking and seed identification for dynamic updating of dosimetry may improve the quality of seed implantation. The AAPM and GEC-ESTRO recommend that robotic systems should demonstrate a spatial accuracy of seed placement ≤1.0 mm in a phantom. This recommendation is based on the current performance of existing robotic brachytherapy systems and propagation of uncertainties. During clinical commissioning, tests should be conducted to ensure that this level of accuracy is achieved. 
These tests should mimic the real operating procedure as closely as possible. Additional recommendations on robotic brachytherapy systems include display of the operational state; capability of manual override; documented policies for independent check and data verification; intuitive interface displaying the implantation plan and visualization of needle positions and seed locations relative to the target anatomy; needle insertion in a sequential order; robot-clinician and robot-patient interactions robustness, reliability, and safety while delivering the correct dose at the correct site for the correct patient; avoidance of excessive force on radioactive sources; delivery confirmation of the required number or position of seeds; incorporation of a collision avoidance system; system cleaning, decontamination, and sterilization procedures. These recommendations are applicable to end users and manufacturers of robotic brachytherapy systems.
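
    The recommended phantom test can be sketched as a comparison of planned versus measured seed coordinates. In this illustration we assume a simple pass criterion of every seed within tolerance; the Task Group report defines the actual commissioning protocol:

```python
import math

def seed_placement_errors(planned, delivered):
    """Per-seed Euclidean placement error (mm) between planned and
    phantom-measured seed coordinates."""
    return [math.dist(p, d) for p, d in zip(planned, delivered)]

def passes_tg192(planned, delivered, tol_mm=1.0):
    """Check the TG-192 phantom recommendation (spatial accuracy of
    seed placement <= 1.0 mm); requiring every seed within tolerance
    is an assumption made for this sketch."""
    return max(seed_placement_errors(planned, delivered)) <= tol_mm

planned   = [(10.0, 10.0, 10.0), (20.0, 10.0, 15.0)]   # mm, illustrative
delivered = [(10.3, 10.4, 10.0), (20.0, 10.5, 15.7)]
print(passes_tg192(planned, delivered))  # True
```

    In practice the delivered coordinates would come from imaging of the phantom, and the test would mimic the real operating procedure as the report recommends.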

  4. An egocentric vision based assistive co-robot.

    PubMed

    Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang

    2013-06-01

    We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video, associated with a specific set of head movements, are exploited to guide the robot to find the object. This is especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task in a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design in engaging the human in the loop.
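
    The solicit-on-timeout interaction described above can be sketched as a simple control loop. The `robot` and `gesture_reader` interfaces below are hypothetical stand-ins, not the paper's API:

```python
def find_object(robot, gesture_reader, budget_steps=30):
    """Closed-loop object search: the robot searches autonomously; if
    the budget expires without success, it solicits a head-gesture
    heading from the user, reorients, and resumes searching."""
    steps_left = budget_steps
    while not robot.object_found():
        if steps_left > 0:
            robot.search_step()                      # autonomous exploration
            steps_left -= 1
        else:
            heading = gesture_reader.read_heading()  # egocentric head motion
            robot.turn_to(heading)                   # user-guided reorientation
            steps_left = budget_steps                # restart the time window
    robot.navigate_to_object()
```

    Any `robot`/`gesture_reader` objects exposing these methods can drive the loop; the paper's system uses a wall-clock window rather than a step budget.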

  5. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed.

    PubMed

    Reuten, Anne; van Dam, Maureen; Naber, Marnix

    2018-01-01

    Physiological responses during human-robots interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of the existence of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low level image statistics across robot appearances. We therefore recorded pupil size of 40 participants that viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses.
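
    Equalizing low-level image statistics across stimuli, as the study above does, is commonly achieved by matching mean luminance and RMS contrast. A minimal sketch of one such normalization (the target values are assumptions, and clipping to [0, 1] can slightly perturb the match):

```python
import numpy as np

def match_luminance_stats(img, target_mean=0.5, target_std=0.1):
    """Force a grayscale image (values in [0, 1]) to a common mean
    luminance and RMS contrast, so pupil responses cannot be driven
    by brightness differences between robot and human stimuli."""
    img = np.asarray(img, dtype=float)
    z = (img - img.mean()) / img.std()          # zero-mean, unit-variance
    return np.clip(z * target_std + target_mean, 0.0, 1.0)

face = np.linspace(0.3, 0.7, 100)               # stand-in for pixel values
matched = match_luminance_stats(face)
print(round(matched.mean(), 3), round(matched.std(), 3))  # 0.5 0.1
```

    Applying the same transform to every stimulus leaves emotional content intact while removing the confound of differing global brightness and contrast.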

  6. Pupillary Responses to Robotic and Human Emotions: The Uncanny Valley and Media Equation Confirmed

    PubMed Central

    Reuten, Anne; van Dam, Maureen; Naber, Marnix

    2018-01-01

    Physiological responses during human–robots interaction are useful alternatives to subjective measures of uncanny feelings for nearly humanlike robots (uncanny valley) and comparable emotional responses between humans and robots (media equation). However, no studies have employed the easily accessible measure of pupillometry to confirm the uncanny valley and media equation hypotheses, evidence in favor of the existence of these hypotheses in interaction with emotional robots is scarce, and previous studies have not controlled for low level image statistics across robot appearances. We therefore recorded pupil size of 40 participants that viewed and rated pictures of robotic and human faces that expressed a variety of basic emotions. The robotic faces varied along the dimension of human likeness from cartoonish to humanlike. We strictly controlled for confounding factors by removing backgrounds, hair, and color, and by equalizing low level image statistics. After the presentation phase, participants indicated to what extent the robots appeared uncanny and humanlike, and whether they could imagine social interaction with the robots in real life situations. The results show that robots rated as nearly humanlike scored higher on uncanniness, scored lower on imagined social interaction, evoked weaker pupil dilations, and their emotional expressions were more difficult to recognize. Pupils dilated most strongly to negative expressions and the pattern of pupil responses across emotions was highly similar between robot and human stimuli. These results highlight the usefulness of pupillometry in emotion studies and robot design by confirming the uncanny valley and media equation hypotheses. PMID:29875722

  7. Intelligent robot trends for factory automation

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    1997-09-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent economic and technical trends. The robotics industry now has a billion-dollar market in the U.S. and is growing. Feasibility studies are presented which also show unaudited healthy rates of return for a variety of robotic applications. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. However, the road from inspiration to successful application is still long and difficult, often taking decades to achieve a new product. More cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit both industry and society.

  8. Direct interaction with an assistive robot for individuals with chronic stroke.

    PubMed

    Kmetz, Brandon; Markham, Heather; Brewer, Bambi R

    2011-01-01

    Many robotic systems have been developed to provide assistance to individuals with disabilities. Most of these systems require the individual to interact with the robot via a joystick or keypad, though some utilize techniques such as speech recognition or selection of objects with a laser pointer. In this paper, we describe a prototype system using a novel method of interaction with an assistive robot. A touch-sensitive skin enables the user to directly guide a robotic arm to a desired position. When the skin is released, the robot remains fixed in position. The target population for this system is individuals with hemiparesis due to chronic stroke. The system can be used as a substitute for the paretic arm and hand in bimanual tasks such as holding a jar while removing the lid. This paper describes the hardware and software of the prototype system, which includes a robotic arm, the touch-sensitive skin, a hook-style prehensor, and weight compensation and speech recognition software.

  9. Improving semantic scene understanding using prior information

    NASA Astrophysics Data System (ADS)

    Laddha, Ankit; Hebert, Martial

    2016-05-01

    Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state of the art scene understanding approaches.
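
    One simple way to combine a bottom-up classifier with map-derived prior information, not necessarily the paper's formulation, is per-pixel Bayesian fusion of class probabilities:

```python
import numpy as np

def fuse_with_prior(bottom_up_probs, prior_probs):
    """Combine per-pixel class probabilities from a bottom-up classifier
    with a class prior derived from an approximate map:
    posterior proportional to likelihood * prior, renormalized per pixel."""
    fused = np.asarray(bottom_up_probs, dtype=float) * np.asarray(prior_probs, dtype=float)
    return fused / fused.sum(axis=-1, keepdims=True)

# A pixel the classifier weakly calls "road" (class 0) but an urban map
# strongly expects to be road:
print(fuse_with_prior([0.4, 0.6], [0.9, 0.1]))  # ~[0.857, 0.143]
```

    The prior shifts an ambiguous bottom-up label toward the map's expectation, which is the kind of top-down guidance the abstract argues ground-robot perception should exploit.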

  10. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy

    PubMed Central

    Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe

    2018-01-01

    Minimally invasive sublobar anatomical resection is becoming increasingly popular for managing early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the da Vinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the TilePro multi-display input of the da Vinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed and the surgeon now oversees a complex system that improves decision making. PMID:29785294

  11. Development of a precision multimodal surgical navigation system for lung robotic segmentectomy.

    PubMed

    Baste, Jean Marc; Soldea, Valentin; Lachkar, Samy; Rinieri, Philippe; Sarsam, Mathieu; Bottet, Benjamin; Peillon, Christophe

    2018-04-01

    Minimally invasive sublobar anatomical resection is becoming increasingly popular for managing early lung lesions. Robotic-assisted thoracic surgery (RATS) is unique in comparison with other minimally invasive techniques. Indeed, RATS is able to better integrate multiple streams of information, including advanced imaging techniques, in an immersive experience at the level of the robotic console. Our aim was to describe three-dimensional (3D) imaging throughout the surgical procedure, from preoperative planning to intraoperative assistance and complementary investigations such as radial endobronchial ultrasound (R-EBUS) and virtual bronchoscopy for pleural dye marking. All cases were operated using the da Vinci System™. Modelling was provided by Visible Patient™ (Strasbourg, France). Image integration in the operative field was achieved using the TilePro multi-display input of the da Vinci console. Our experience was based on 114 robotic segmentectomies performed between January 2012 and October 2017. The clinical value of 3D imaging integration was evaluated in 2014 in a pilot study. Progressively, we have reached the conclusion that the use of such an anatomic model improves the safety and reliability of procedures. The multimodal system including 3D imaging has been used in more than 40 patients so far and demonstrated perfect operative anatomic accuracy. Currently, we are developing an original virtual reality experience by exploring 3D imaging models at the robotic console level. The act of operating is being transformed and the surgeon now oversees a complex system that improves decision making.

  12. Phoenix Dodo Trench

    NASA Image and Video Library

    2008-06-04

    This image was taken by NASA's Phoenix Mars Lander's Robotic Arm Camera (RAC) on the ninth Martian day of the mission, or Sol 9 (June 3, 2008). The center of the image shows a trench informally called "Dodo" after the second dig. "Dodo" is located within the previously determined digging area, informally called "Knave of Hearts." The light square to the right of the trench is the Robotic Arm's Thermal and Electrical Conductivity Probe (TECP). The Robotic Arm has scraped down to a bright surface, indicating that the arm has reached a solid structure beneath the soil, as seen in other images as well. http://photojournal.jpl.nasa.gov/catalog/PIA10763

  13. Intelligent robot trends for 1998

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    1998-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The use of these machines in factory automation can improve productivity, increase product quality and improve competitiveness. This paper presents a discussion of recent technical and economic trends. Technically, the machines are faster, cheaper, more repeatable, more reliable and safer. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has a 1.1 billion-dollar market in the U.S. and is growing. Feasibility studies results are presented which also show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society.

  14. The use of soft robotics in cardiovascular therapy.

    PubMed

    Wamala, Isaac; Roche, Ellen T; Pigula, Frank A

    2017-10-01

    Robots have been employed in cardiovascular therapy as surgical tools and for automation of hospital systems. Soft robots are a new kind of robot made of soft, deformable materials that are uniquely suited for biomedical applications because they are inherently less likely to injure body tissues and more likely to adapt to biological environments. Awareness of the soft robotic systems under development will help promote clinician involvement in their successful clinical translation. Areas covered: We cover the most advanced soft robotic systems, across the size scale from nano to macro, that have shown the most promise for clinical application in cardiovascular therapy because they offer solutions where a clear therapeutic need still exists. We discuss nano- and micro-scale technology that could help improve targeted therapy for cardiac regeneration in ischemic heart disease, and soft robots for mechanical circulatory support. Additionally, we suggest where the gaps in the technology currently lie. Expert commentary: Soft robotic technology has now matured from the proof-of-concept phase to successful animal testing. With further refinement in materials and clinician-guided application, they will be a useful complement to cardiovascular therapy.

  15. Robotics in general thoracic surgery procedures.

    PubMed

    Latif, M Jawad; Park, Bernard J

    2017-01-01

    The use of robotic technology in general thoracic surgical practice continues to expand across various institutions and at this point many major common thoracic surgical procedures have been successfully performed by general thoracic surgeons using the robotic technology. These procedures include lung resections, excision of mediastinal masses, esophagectomy and reconstruction for malignant and benign esophageal pathologies. The success of robotic technology can be attributed to highly magnified 3-D visualization, dexterity afforded by 7 degrees of freedom that allow difficult dissections in narrow fields and the ease of reproducibility once the initial set up and instruments become familiar to the surgeon. As the application of robotic technology trickles down from major academic centers to community hospitals, it becomes imperative that its role, limitations, learning curve and financial impact are understood by the novice robotic surgeon. In this article, we share our experience as it relates to the setup, common pitfalls and long term results for more commonly performed robotic assisted lung and thymic resections using the 4 arm da Vinci Xi robotic platform (Intuitive Surgical, Inc., Sunnyvale, CA, USA) to help guide those who are interested in adopting this technology.

  16. Highly dexterous 2-module soft robot for intra-organ navigation in minimally invasive surgery.

    PubMed

    Abidi, Haider; Gerboni, Giada; Brancadoro, Margherita; Fras, Jan; Diodato, Alessandro; Cianchetti, Matteo; Wurdemann, Helge; Althoefer, Kaspar; Menciassi, Arianna

    2018-02-01

    For some surgical interventions, like the Total Mesorectal Excision (TME), traditional laparoscopes lack the flexibility to safely maneuver and reach difficult surgical targets. This paper answers this need through designing, fabricating and modelling a highly dexterous 2-module soft robot for minimally invasive surgery (MIS). A soft robotic approach is proposed that uses flexible fluidic actuators (FFAs) allowing highly dexterous and inherently safe navigation. Dexterity is provided by an optimized design of fluid chambers within the robot modules. Safe physical interaction is ensured by fabricating the entire structure by soft and compliant elastomers, resulting in a squeezable 2-module robot. An inner free lumen/chamber along the central axis serves as a guide of flexible endoscopic tools. A constant curvature based inverse kinematics model is also proposed, providing insight into the robot capabilities. Experimental tests in a surgical scenario using a cadaver model are reported, demonstrating the robot advantages over standard systems in a realistic MIS environment. Simulations and experiments show the efficacy of the proposed soft robot. Copyright © 2017 John Wiley & Sons, Ltd.
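
    The constant-curvature model used for the robot's inverse kinematics has a closed-form forward map. A minimal planar sketch (parameter values are illustrative; the paper's model covers the full 3-D, 2-module case):

```python
import math

def cc_tip_position(kappa, length):
    """Planar tip position of a constant-curvature segment of arc length
    `length` (m) and curvature `kappa` (1/m), base at the origin pointing
    along +x: x = sin(kappa*L)/kappa, y = (1 - cos(kappa*L))/kappa."""
    if abs(kappa) < 1e-9:            # straight-segment limit
        return (length, 0.0)
    theta = kappa * length           # total bending angle
    return (math.sin(theta) / kappa, (1.0 - math.cos(theta)) / kappa)

# A 60 mm module bent into a quarter circle (theta = pi/2, kappa = theta/L):
print(cc_tip_position(kappa=math.pi / 2 / 0.06, length=0.06))  # tip at (r, r)
```

    Inverting this map, finding the curvature and length that reach a desired tip pose, is the inverse kinematics problem the paper solves per module.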

  17. Fractals, Fuzzy Sets And Image Representation

    NASA Astrophysics Data System (ADS)

    Dodds, D. R.

    1988-10-01

    This paper addresses some uses of fractals, fuzzy sets, and image representation as they pertain to robotic grip planning and autonomous vehicle navigation (AVN). The robot/vehicle is assumed to be equipped with multimodal sensors, including an ultrashort-pulse imaging laser rangefinder. With a temporal resolution of 50 femtoseconds, a time-of-flight laser rangefinder can resolve distances to within approximately half an inch, or 1.25 centimeters (Fujimoto88).

  18. Overview and Categorization of Robots Supporting Independent Living of Elderly People: What Activities Do They Support and How Far Have They Developed.

    PubMed

    Bedaf, Sandra; Gelderblom, Gert Jan; De Witte, Luc

    2015-01-01

    Over the past decades, many robots for the elderly have been developed, supporting different activities of elderly people. A systematic review in four scientific literature databases and a search in article references and European projects was performed in order to create an overview of robots supporting independent living of elderly people. The robots found were categorized based on their development stage, the activity domains they claim to support, and the type of support provided (i.e., physical, non-physical, and/or non-specified). In total, 107 robots for the elderly were identified. Six robots were still in a concept phase, 95 in a development phase, and six of these robots were commercially available. These robots claimed to provide support related to four activity domains: mobility, self-care, interpersonal interaction & relationships, and other activities. Of the many robots developed, only a small percentage is commercially available. Technical ambitions seem to be guiding robot development. To prolong independent living, the step towards physical support is inevitable and needs to be taken. However, it will be a long time before a robot will be capable of supporting multiple activities in a physical manner in the home of an elderly person in order to enhance their independent living.

  19. Visual servoing for a US-guided therapeutic HIFU system by coagulated lesion tracking: a phantom study.

    PubMed

    Seo, Joonho; Koizumi, Norihiro; Funamoto, Takakazu; Sugita, Naohiko; Yoshinaka, Kiyoshi; Nomiya, Akira; Homma, Yukio; Matsumoto, Yoichiro; Mitsuishi, Mamoru

    2011-06-01

    Applying ultrasound (US)-guided high-intensity focused ultrasound (HIFU) therapy for kidney tumours is currently very difficult, due to the unclearly observed tumour area and renal motion induced by human respiration. In this research, we propose new methods by which to track the indistinct tumour area and to compensate the respiratory tumour motion for US-guided HIFU treatment. For tracking indistinct tumour areas, we detect the US speckle change created by HIFU irradiation. In other words, HIFU thermal ablation can coagulate tissue in the tumour area and an intraoperatively created coagulated lesion (CL) is used as a spatial landmark for US visual tracking. Specifically, the condensation algorithm was applied to robust and real-time CL speckle pattern tracking in the sequence of US images. Moreover, biplanar US imaging was used to locate the three-dimensional position of the CL, and a three-actuator system drives the end-effector to compensate for the motion. Finally, we tested the proposed method by using a newly devised phantom model that enables both visual tracking and a thermal response by HIFU irradiation. In the experiment, after generation of the CL in the phantom kidney, the end-effector successfully synchronized with the phantom motion, which was modelled by the captured motion data for the human kidney. The accuracy of the motion compensation was evaluated by the error between the end-effector and the respiratory motion, the RMS error of which was approximately 2 mm. This research shows that a HIFU-induced CL provides a very good landmark for target motion tracking. By using the CL tracking method, target motion compensation can be realized in the US-guided robotic HIFU system. Copyright © 2011 John Wiley & Sons, Ltd.
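
    The Condensation algorithm at the core of the CL tracker is a particle filter: resample by weight, predict through a stochastic motion model, then reweight by the measurement likelihood. A 1-D toy sketch with illustrative parameters, far simpler than tracking a 2-D speckle pattern in US images:

```python
import math
import random

def condensation_step(particles, weights, motion_std, measure, meas_std):
    """One cycle of the Condensation algorithm (factored sampling):
    resample proportionally to the old weights, diffuse with the motion
    model, and reweight by a Gaussian measurement likelihood."""
    resampled = random.choices(particles, weights=weights, k=len(particles))
    predicted = [p + random.gauss(0.0, motion_std) for p in resampled]
    new_w = [math.exp(-0.5 * ((p - measure) / meas_std) ** 2) for p in predicted]
    total = sum(new_w)
    return predicted, [w / total for w in new_w]

random.seed(0)
ps, ws = [0.0] * 200, [1.0] * 200
for z in [1.0, 2.0, 3.0]:        # target drifting right, one observation per frame
    ps, ws = condensation_step(ps, ws, motion_std=0.5, measure=z, meas_std=0.3)
estimate = sum(p * w for p, w in zip(ps, ws))
print(estimate)                  # tracks toward the target at 3.0, lagging slightly
```

    In the paper's setting, the state is the 2-D position of the coagulated-lesion speckle pattern and the likelihood compares image patches rather than scalar measurements.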

  20. Acceptance of an assistive robot in older adults: a mixed-method study of human-robot interaction over a 1-month period in the Living Lab setting.

    PubMed

    Wu, Ya-Huei; Wrobel, Jérémy; Cornuet, Mélanie; Kerhervé, Hélène; Damnée, Souad; Rigaud, Anne-Sophie

    2014-01-01

    There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot-acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot-acceptance, including older adults' uneasiness with technology, feeling of stigmatization, and ethical/societal issues associated with robot use. It is important to destigmatize images of assistive robots to facilitate their acceptance. Universal design aiming to increase the market for and production of products that are usable by everyone (to the greatest extent possible) might help to destigmatize assistive devices.

  1. Acceptance of an assistive robot in older adults: a mixed-method study of human–robot interaction over a 1-month period in the Living Lab setting

    PubMed Central

    Wu, Ya-Huei; Wrobel, Jérémy; Cornuet, Mélanie; Kerhervé, Hélène; Damnée, Souad; Rigaud, Anne-Sophie

    2014-01-01

    Background There is growing interest in investigating acceptance of robots, which are increasingly being proposed as one form of assistive technology to support older adults, maintain their independence, and enhance their well-being. In the present study, we aimed to observe robot-acceptance in older adults, particularly subsequent to a 1-month direct experience with a robot. Subjects and methods Six older adults with mild cognitive impairment (MCI) and five cognitively intact healthy (CIH) older adults were recruited. Participants interacted with an assistive robot in the Living Lab once a week for 4 weeks. After being shown how to use the robot, participants performed tasks to simulate robot use in everyday life. Mixed methods, comprising a robot-acceptance questionnaire, semistructured interviews, usability-performance measures, and a focus group, were used. Results Both CIH and MCI subjects were able to learn how to use the robot. However, MCI subjects needed more time to perform tasks after a 1-week period of not using the robot. Both groups rated similarly on the robot-acceptance questionnaire. They showed low intention to use the robot, as well as negative attitudes toward and negative images of this device. They did not perceive it as useful in their daily life. However, they found it easy to use, amusing, and not threatening. In addition, social influence was perceived as powerful on robot adoption. Direct experience with the robot did not change the way the participants rated robots in their acceptance questionnaire. We identified several barriers to robot-acceptance, including older adults’ uneasiness with technology, feeling of stigmatization, and ethical/societal issues associated with robot use. Conclusion It is important to destigmatize images of assistive robots to facilitate their acceptance. 
Universal design aiming to increase the market for and production of products that are usable by everyone (to the greatest extent possible) might help to destigmatize assistive devices. PMID:24855349

  2. Mobile robots exploration through cnn-based reinforcement learning.

    PubMed

    Tai, Lei; Liu, Ming

    2016-01-01

    Exploration in an unknown environment is an elemental application for mobile robots. In this paper, we outline a reinforcement learning method aimed at solving the exploration problem in a corridor environment. The learning model takes the depth image from an RGB-D sensor as its only input. The feature representation of the depth image is extracted through a pre-trained convolutional neural network model. Building on the recent success of the deep Q-network in artificial intelligence, the robot controller achieves exploration and obstacle-avoidance abilities in several different simulated environments. This is the first time that reinforcement learning has been used to build an exploration strategy for mobile robots from raw sensor information.
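    The pipeline this abstract describes (frozen CNN features feeding a Q-learning controller over discrete motion commands) can be sketched roughly as follows; the feature extractor, action set, reward, and all constants here are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 8          # stand-in for the CNN feature vector length
ACTIONS = ["forward", "turn_left", "turn_right"]
GAMMA, ALPHA, EPSILON = 0.9, 0.1, 0.2

# Linear Q-function over the (frozen) features: Q(s, a) = W[a] . phi(s)
W = np.zeros((len(ACTIONS), N_FEATURES))

def extract_features(depth_image):
    """Stand-in for the pre-trained CNN: coarse column means of depth."""
    cols = np.array_split(depth_image, N_FEATURES, axis=1)
    return np.array([c.mean() for c in cols])

def select_action(phi):
    if rng.random() < EPSILON:                 # epsilon-greedy exploration
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(W @ phi))

def td_update(phi, action, reward, phi_next):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = reward + GAMMA * np.max(W @ phi_next)
    W[action] += ALPHA * (target - W[action] @ phi) * phi

# Toy interaction: reward open space ahead, penalize an imminent collision.
depth = rng.uniform(0.5, 4.0, size=(32, 32))
phi = extract_features(depth)
for _ in range(100):
    a = select_action(phi)
    depth_next = rng.uniform(0.5, 4.0, size=(32, 32))
    phi_next = extract_features(depth_next)
    reward = 1.0 if depth_next[:, 8:24].mean() > 1.5 else -1.0
    td_update(phi, a, reward, phi_next)
    phi = phi_next
```

    In the actual paper the Q-function is a deep network rather than this linear map, but the TD update has the same shape.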

  3. Flexible robotics in pelvic disease: does the catheter increase applicability of embolic therapy?

    PubMed

    Rueda, Maria A; Riga, Celia; Hamady, Mohamad S

    2018-06-01

    Interventional radiology procedures, equipment, and techniques, as well as image guidance, have developed dramatically over the last few decades. The evidence for minimally invasive interventions in the vascular and oncology fields is rapidly growing, and several procedures are considered first-line management. However, radiation exposure, image guidance, and innovative solutions to known anatomical challenges still lag behind. Robotic technology and its role in surgery have been developing at a steady speed. Endovascular robotics is following suit, with a different set of problems and targets. This article discusses the advances and limitations in one aspect of endovascular robotics, namely pelvic pathology, which includes aneurysms, fibroids, benign prostatic hypertrophy, and vascular malformations.

  4. Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.

    PubMed

    Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam

    2016-01-01

    An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of the Saturation and Value components of the HSV color space was incorporated and enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high-quality images and about 80% for those suffering from poor lighting and/or noise from blood, water, and smoke. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200 ms. It was concluded that, with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
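    A minimal sketch of the segmentation idea, assuming a simple compound of the Saturation and Value channels plus the border-crossing check; the exact measure, thresholds, and synthetic frame below are invented for illustration (plain NumPy, not the authors' implementation, and the Hue refinement is omitted):

```python
import numpy as np

def sv_measure(rgb):
    """HSV Saturation/Value from an RGB float image in [0, 1] (no OpenCV needed)."""
    v = rgb.max(axis=2)
    s = np.where(v > 0, (v - rgb.min(axis=2)) / np.maximum(v, 1e-9), 0.0)
    return s, v

def instrument_mask(rgb, thresh=0.6):
    """Compound S/V measure: metallic tools are bright (high V) and gray (low S)."""
    s, v = sv_measure(rgb)
    return (1.0 - s) * v > thresh

def touches_border(mask):
    """Instruments enter from outside the view, so a valid segment crosses the
    image boundary; blobs that do not are rejected as non-instrument."""
    return bool(mask[0, :].any() or mask[-1, :].any()
                or mask[:, 0].any() or mask[:, -1].any())

# Synthetic frame: reddish tissue background with a gray tool entering from the left.
img = np.zeros((40, 60, 3))
img[..., 0] = 0.55                       # tissue: saturated red
img[18:22, :30, :] = 0.85                # tool: bright, nearly gray shaft
mask = instrument_mask(img)
assert touches_border(mask)
```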

  5. Robotic Arm-Assisted Sonography: Review of Technical Developments and Potential Clinical Applications.

    PubMed

    Swerdlow, Daniel R; Cleary, Kevin; Wilson, Emmanuel; Azizi-Koutenaei, Bamshad; Monfaredi, Reza

    2017-04-01

    Ultrasound imaging requires trained personnel. Advances in robotics and data transmission create the possibility of telesonography. This review introduces clinicians to current technical work in and potential applications of this developing capability. Telesonography offers advantages in hazardous or remote environments. Robotically assisted ultrasound can reduce stress injuries in sonographers and has potential utility during robotic surgery and interventional procedures.

  6. Toward the Design of Personalized Continuum Surgical Robots.

    PubMed

    Morimoto, Tania K; Greer, Joseph D; Hawkes, Elliot W; Hsieh, Michael H; Okamura, Allison M

    2018-05-31

    Robot-assisted minimally invasive surgical systems enable procedures with reduced pain, recovery time, and scarring compared to traditional surgery. While these improvements benefit a large number of patients, safe access to diseased sites is not always possible for specialized patient groups, including pediatric patients, due to their anatomical differences. We propose a patient-specific design paradigm that leverages the surgeon's expertise to design and fabricate robots based on preoperative medical images. The components of the patient-specific robot design process are a virtual reality design interface enabling the surgeon to design patient-specific tools, 3-D printing of these tools with a biodegradable polyester, and an actuation and control system for deployment. The designed robot is a concentric tube robot, a type of continuum robot constructed from precurved, elastic, nesting tubes. We demonstrate the overall patient-specific design workflow, from preoperative images to physical implementation, for an example clinical scenario: nonlinear renal access to a pediatric kidney. We also measure the system's behavior as it is deployed through real and artificial tissue. System integration and successful benchtop experiments in ex vivo liver and in a phantom patient model demonstrate the feasibility of using a patient-specific design workflow to plan, fabricate, and deploy personalized, flexible continuum robots.

  7. Using Visual Odometry to Estimate Position and Attitude

    NASA Technical Reports Server (NTRS)

    Maimone, Mark; Cheng, Yang; Matthies, Larry; Schoppers, Marcel; Olson, Clark

    2007-01-01

    A computer program in the guidance system of a mobile robot generates estimates of the position and attitude of the robot, using features of the terrain on which the robot is moving, by processing digitized images acquired by a stereoscopic pair of electronic cameras mounted rigidly on the robot. Developed for use in localizing the Mars Exploration Rover (MER) vehicles on Martian terrain, the program can also be used for similar purposes on terrestrial robots moving in sufficiently visually textured environments: examples include low-flying robotic aircraft and wheeled robots moving on rocky terrain or inside buildings. In simplified terms, the program automatically detects visual features and tracks them across stereoscopic pairs of images acquired by the cameras. The 3D locations of the tracked features are then robustly processed into an estimate of overall vehicle motion. Testing has shown that by use of this software, the error in the estimate of the position of the robot can be limited to no more than 2 percent of the distance traveled, provided that the terrain is sufficiently rich in features. This software has proven extremely useful on the MER vehicles during driving on sandy and highly sloped terrains on Mars.
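    The core step, turning tracked 3D feature locations into a vehicle motion estimate, is classically a least-squares rigid alignment. A minimal sketch (plain SVD-based Kabsch alignment, without the robust outlier rejection the flight software applies) might look like:

```python
import numpy as np

def estimate_motion(p_prev, p_curr):
    """Least-squares rigid motion (R, t) with p_curr ~ R @ p_prev + t,
    via the SVD-based Kabsch/Procrustes solution on 3-D feature tracks."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Synthetic check: rotate a feature cloud 10 degrees about z and translate it.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(20, 3))
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0], [np.sin(a), np.cos(a), 0], [0, 0, 1]])
t_true = np.array([0.3, -0.1, 0.05])
R_est, t_est = estimate_motion(pts, pts @ R_true.T + t_true)
```

    Accumulating these per-frame transforms over a traverse yields the pose estimate whose drift the 2-percent-of-distance figure describes.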

  8. Design of multifunction anti-terrorism robotic system based on police dog

    NASA Astrophysics Data System (ADS)

    You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie

    2007-11-01

    Aimed at typical limitations of the police dogs and robots currently used for reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on the police dog is introduced. The system is made up of two parts: a portable commanding device and the police-dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module, and commanding module; it implements remote control of the police dogs and monitors video and images in real time. The police-dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data from counter-terrorism sites in real time and mounts attacks on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of a typical anti-terrorism robot's mechanical structure and control algorithms, but it also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.

  9. Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery.

    PubMed

    Woo, Sang-Yoon; Lee, Sang-Jeong; Yoo, Ji-Yong; Han, Jung-Joon; Hwang, Soon-Jung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Yi, Won-Jin

    2017-12-01

    The purpose of this study was to develop a new method for enabling a robot to assist a surgeon in repositioning a bone segment to accurately transfer a preoperative virtual plan into the intraoperative phase in orthognathic surgery. We developed a robot system consisting of an arm with six degrees of freedom, a robot motion-controller, and a PC. An end-effector at the end of the robot arm transferred the movements of the robot arm to the patient's jawbone. The registration between the robot and CT image spaces was performed completely preoperatively, and the intraoperative registration could be finished using only position changes of the tracking tools at the robot end-effector and the patient's splint. The phantom's maxillomandibular complex (MMC) connected to the robot's end-effector was repositioned autonomously by the robot movements around an anatomical landmark of interest based on the tool center point (TCP) principle. The robot repositioned the MMC around the TCP of the incisor of the maxilla and the pogonion of the mandible following plans for real orthognathic patients. The accuracy of the robot's repositioning increased when an anatomical landmark for the TCP was close to the registration fiducials. In spite of this influence, we could increase the repositioning accuracy at the landmark by using the landmark itself as the TCP. With its ability to incorporate virtual planning using a CT image and autonomously execute the plan around an anatomical landmark of interest, the robot could help surgeons reposition bones more accurately and dexterously. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  10. In-line inspection of unpiggable buried live gas pipes using circumferential EMAT guided waves

    NASA Astrophysics Data System (ADS)

    Ren, Baiyang; Xin, Junjun

    2018-04-01

    Unpiggable buried gas pipes need to be inspected to ensure their structural integrity and safe operation. The CIRRIS XI™ robot, developed and operated by ULC Robotics, conducts in-line nondestructive inspection of live gas pipes. With the no-blow launching system, the inspection operation has reduced disruption to the public and, by eliminating the need to dig trenches, has minimized the site footprint. This provides a highly time- and cost-effective solution for gas pipe maintenance. However, the current sensor on the robot performs a point-by-point measurement of the pipe wall thickness, which cannot cover the whole volume of the pipe in a reasonable timeframe. Ultrasonic guided wave techniques are studied here to improve the volume coverage as well as the scanning speed. A circumferential guided wave is employed to perform axial scanning. Mode selection is discussed in terms of sensitivity to different defects and defect characterization capability. To assist with the mode selection, finite element analysis is performed to evaluate the wave-defect interaction and to identify potential defect features. Pulse-echo and through-transmission modes are evaluated and compared for their pros and cons in axial scanning. Experiments are also conducted to verify the mode selection and to detect and characterize artificial defects introduced into pipe samples.
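    In pulse-echo operation, a defect's angular position around the circumference follows directly from the echo delay and the wave speed, since the round-trip arc length is the velocity times the delay. A toy calculation, with an assumed group velocity and pipe radius that are purely illustrative:

```python
import math

def defect_angle_deg(echo_time_s, group_velocity_mps, outer_radius_m):
    """Pulse-echo localization with a circumferential guided wave: the wave
    travels along the pipe wall, reflects at the defect, and returns, so the
    round-trip arc length is v * t and the one-way arc is half of that."""
    arc = group_velocity_mps * echo_time_s / 2.0   # one-way arc length (m)
    return math.degrees(arc / outer_radius_m)      # arc = R * theta

v = 3200.0        # assumed group velocity, m/s (illustrative only)
R = 0.05          # 100 mm diameter pipe -> 0.05 m outer radius
t = 54.5e-6       # measured echo delay, s
theta = defect_angle_deg(t, v, R)   # ~100 degrees around the circumference
```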

  11. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.
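    The registration idea, recovering the end-effector-to-camera rotation from the motion directions the camera observes during commanded translations along two robot axes, can be sketched as follows; the upside-down-camera example mirrors the failure mode described above. This is a simplified illustration (translations only, noise handled by Gram-Schmidt), not the report's actual algorithm:

```python
import numpy as np

def rotation_from_translations(cam_dir_x, cam_dir_y):
    """Recover the rotation from the directions the camera sees when the robot
    translates along its own x and then y axis. Gram-Schmidt re-orthogonalizes
    the measured (possibly noisy) directions; the third axis is their cross."""
    x = cam_dir_x / np.linalg.norm(cam_dir_x)
    y = cam_dir_y - (cam_dir_y @ x) * x
    y /= np.linalg.norm(y)
    z = np.cross(x, y)
    # Columns are the robot axes expressed in camera coordinates.
    return np.column_stack([x, y, z])

# Example: camera mounted upside down (180-degree roll about the optical axis).
R = rotation_from_translations(np.array([-1.0, 0, 0]), np.array([0, -1.0, 0]))
# A commanded +x robot motion then appears as -x in the image, which is exactly
# the feedback-inversion failure mode the report warns about.
assert np.allclose(R @ np.array([1.0, 0, 0]), [-1, 0, 0])
```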

  12. Towards the development of a spring-based continuum robot for neurosurgery

    NASA Astrophysics Data System (ADS)

    Kim, Yeongjin; Cheng, Shing Shin; Desai, Jaydev P.

    2015-03-01

    Brain tumor is usually life threatening due to the uncontrolled growth of abnormal cells native to the brain or the spread of tumor cells from outside the central nervous system to the brain. The risks involved in carrying out surgery within such a complex organ can cause severe anxiety in cancer patients. However, neurosurgery, which remains one of the more effective ways of treating brain tumors focused in a confined volume, can have a tremendously increased success rate if the appropriate imaging modality is used for complete tumor removal. Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast and is the imaging modality of choice for brain tumor imaging. MRI combined with continuum soft robotics has immense potential to be a revolutionary treatment technique in the field of brain cancer. It eliminates the concern of hand tremor and guarantees a more precise procedure. One prototype of the Minimally Invasive Neurosurgical Intracranial Robot (MINIR-II), which can be classified as a continuum soft robot, consists of a snake-like body made of three segments of rapid-prototyped plastic springs. It provides improved dexterity with higher degrees of freedom and independent joint control. It is MRI-compatible, allowing surgeons to track and determine the real-time location of the robot relative to the brain tumor target. The robot was manufactured in a single piece using rapid prototyping technology at a low cost, making it disposable after each use. MINIR-II has two DOFs at each segment, with both joints controlled by two pairs of MRI-compatible SMA spring actuators. Preliminary motion tests were carried out using a vision-tracking method, and the robot was able to move to different positions based on user commands.

  13. Application of unscented Kalman filter for robust pose estimation in image-guided surgery

    NASA Astrophysics Data System (ADS)

    Vaccarella, Alberto; De Momi, Elena; Valenti, Marta; Ferrigno, Giancarlo; Enquobahrie, Andinet

    2012-02-01

    Image-guided surgery (IGS) allows clinicians to view current, intra-operative scenes superimposed on preoperative images (typically MRI or CT scans). IGS systems use localization systems to track and visualize surgical tools overlaid on top of preoperative images of the patient during surgery. The most commonly used localization systems in the Operating Room (OR) are optical tracking systems (OTSs), due to their ease of use and cost effectiveness. However, OTSs suffer from the major drawback of line-of-sight requirements. State-space approaches based on different implementations of the Kalman filter have recently been investigated in order to compensate for short line-of-sight occlusions. However, the proposed parameterizations for the rigid-body orientation suffer from singularities at certain values of the rotation angles. The purpose of this work is to develop a quaternion-based Unscented Kalman Filter (UKF) for robust optical tracking of both the position and orientation of surgical tools, in order to compensate for marker occlusion. This paper presents preliminary results towards a Kalman-based Sensor Management Engine (SME). The engine will filter and fuse multimodal tracking streams of data. This work was motivated by our experience working on robot-based applications for keyhole neurosurgery (the ROBOCAST project). The algorithm was evaluated using real data from an NDI Polaris tracker. The results show that our estimation technique is able to compensate for marker occlusion with a maximum error of 2.5° in orientation and 2.36 mm in position. The proposed approach will be useful in over-crowded state-of-the-art ORs, where achieving continuous visibility of all tracked objects is difficult.
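    A full quaternion UKF is beyond a short example, but the occlusion-bridging behavior it provides can be illustrated with a plain linear Kalman filter on position: while the marker is hidden, the update step is skipped and the state coasts on the motion model. All constants here are illustrative stand-ins:

```python
import numpy as np

# 1-D constant-velocity Kalman filter as a simplified stand-in for the paper's
# quaternion UKF: during marker occlusion the measurement update is skipped
# and the state is propagated by prediction alone, bridging the gap.
dt = 1.0 / 60.0                        # assumed tracker frame rate
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
H = np.array([[1.0, 0.0]])             # we measure position only
Q = np.eye(2) * 1e-4                   # process noise
R_meas = np.array([[1e-2]])            # measurement noise

x = np.zeros(2)
P = np.eye(2)

def step(z):
    """One predict(+update) cycle; pass z=None while the marker is occluded."""
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:
        S = H @ P @ H.T + R_meas
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return x[0]

true = [0.1 * dt * k for k in range(30)]       # tool gliding at 0.1 units/s
for k, z in enumerate(true):
    est = step(None if 10 <= k < 20 else z)    # 10-frame occlusion in the middle
```

    The published filter does the same on a 7-dimensional state, with the orientation part parameterized by a unit quaternion to avoid the angle singularities mentioned above.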

  14. Curiosity's Mars Hand Lens Imager (MAHLI): Initial Observations and Activities

    NASA Technical Reports Server (NTRS)

    Edgett, K. S.; Yingst, R. A.; Minitti, M. E.; Robinson, M. L.; Kennedy, M. R.; Lipkaman, L. J.; Jensen, E. H.; Anderson, R. C.; Bean, K. M.; Beegle, L. W.; hide

    2013-01-01

    MAHLI (Mars Hand Lens Imager) is a 2-megapixel focusable macro lens color camera on the turret on Curiosity's robotic arm. The investigation centers on stratigraphy, grain-scale texture, structure, mineralogy, and morphology of geologic materials at Curiosity's Gale robotic field site. MAHLI acquires focused images at working distances of 2.1 cm to infinity; for reference, at 2.1 cm the scale is 14 microns/pixel; at 6.9 cm it is 31 microns/pixel, like the Spirit and Opportunity Microscopic Imager (MI) cameras.

  15. HOPIS: hybrid omnidirectional and perspective imaging system for mobile robots.

    PubMed

    Lin, Huei-Yung; Wang, Min-Liang

    2014-09-04

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach.

  16. HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots

    PubMed Central

    Lin, Huei-Yung; Wang, Min-Liang

    2014-01-01

    In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317

  17. Advances in Simultaneous Localization and Mapping in Confined Underwater Environments Using Sonar and Optical Imaging

    DTIC Science & Technology

    2016-01-01

    [Abstract not available: the record text consists of thesis front-matter fragments (acknowledgments and table-of-contents entries). Recoverable chapter topics include multi-session and multi-robot SLAM, robust techniques for SLAM back-ends, and the role of SLAM in autonomous mobile robotics.]

  18. Current status of endovascular catheter robotics.

    PubMed

    Lumsden, Alan B; Bismuth, Jean

    2018-06-01

    In this review, we will detail the evolution of endovascular therapy as the basis for the development of catheter-based robotics. In parallel, we will outline the evolution of robotics in the surgical space and how the convergence of technology and the entrepreneurs who push this evolution have led to the development of endovascular robots. The current state of the art, future directions, and potential are summarized for the reader. Information in this review has been drawn primarily from our personal clinical and preclinical experience in the use of catheter robotics, coupled with some ground-breaking work reported from a few other major centers that have embraced the technology's capabilities and opportunities. Several case studies demonstrating the unique capabilities of a precisely controlled catheter are presented. Most of the preclinical work was performed in the advanced imaging and navigation laboratory. In this unique facility, the interface of advanced imaging techniques and robotic guidance is being explored. Although this procedure employs a very high-tech approach to navigation inside the endovascular space, we have conveyed the kind of opportunities that this technology affords to integrate 3D imaging and 3D control. Further, we present the opportunity for semi-autonomous motion of these devices to a target. For the interventionist, enhanced precision can be achieved in a nearly radiation-free environment.

  19. Sensors management in robotic neurosurgery: the ROBOCAST project.

    PubMed

    Vaccarella, Alberto; Comparetti, Mirko Daniele; Enquobahrie, Andinet; Ferrigno, Giancarlo; De Momi, Elena

    2011-01-01

    Robot and computer-aided surgery platforms bring a variety of sensors into the operating room. These sensors generate information to be synchronized and merged to improve the accuracy and the safety of the surgical procedure for both patients and operators. In this paper, we present our work on the development of a sensor management architecture that is used to gather and fuse data from localization systems, such as optical and electromagnetic trackers, and ultrasound imaging devices. The architecture follows a modular client-server approach and was implemented within the EU-funded project ROBOCAST (FP7 ICT 215190). Furthermore, it is based on very well-maintained open-source libraries such as OpenCV and the Image-Guided Surgery Toolkit (IGSTK), which are supported by a worldwide community of developers and allow a significant reduction in software costs. We conducted experiments to evaluate the performance of the sensor manager module. We computed the response time needed for a client to receive tracking data or video images, and the time lag between synchronous acquisitions with an optical tracker and an ultrasound machine. Results showed a median delay of 1.9 ms for a client request of tracking data and about 40 ms for US images; these values are compatible with the data generation rates (20-30 Hz for the tracking system and 25 fps for PAL video). Simultaneous acquisitions were performed with an optical tracking system and a US imaging device: data were aligned according to the timestamp associated with each sample, and the delay was estimated with a cross-correlation study. A median delay of 230 ms was calculated, showing that real-time 3D reconstruction is not feasible (an offline temporal calibration is needed), although a slow exploration is possible. In conclusion, as far as asleep-patient neurosurgery is concerned, the proposed setup is indeed useful for registration error correction, because brain shift occurs with a time constant of a few tens of minutes.
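    The delay estimate between the two streams can be reproduced in miniature with a cross-correlation peak search; the signals and rates below are synthetic stand-ins for the tracker and ultrasound data:

```python
import numpy as np

def estimate_lag(ref, delayed, rate_hz):
    """Estimate the delay of `delayed` relative to `ref` (both sampled at
    rate_hz) from the peak of their full cross-correlation, as in the
    temporal-calibration study described above."""
    ref = ref - ref.mean()
    delayed = delayed - delayed.mean()
    xc = np.correlate(delayed, ref, mode="full")
    lag_samples = int(np.argmax(xc)) - (len(ref) - 1)
    return lag_samples / rate_hz

# Synthetic probe motion sampled at 20 Hz: the "US" stream lags the tracker.
rate = 20.0
t = np.arange(0, 10, 1 / rate)
tracker = np.sin(2 * np.pi * 0.5 * t)
ultrasound = np.roll(tracker, 5)        # 5 samples = 250 ms delay
lag = estimate_lag(tracker, ultrasound, rate)   # recovers 0.25 s
```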

  20. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, appendix B

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  1. Kinematic simulation and analysis of robot based on MATLAB

    NASA Astrophysics Data System (ADS)

    Liao, Shuhua; Li, Jiong

    2018-03-01

    The history of industrial automation is characterized by rapid technological change, and the industrial robot remains a distinctive kind of special equipment within it. With the help of MATLAB's matrix and plotting capabilities, each link coordinate system is established in the MATLAB environment using the Denavit-Hartenberg (D-H) parameter method, and the equations of motion of the structure are derived. The Robotics Toolbox and GUIDE are applied jointly to analyze inverse kinematics and to plan and simulate paths, providing a preliminary solution to the positioning problem of a student-built robotic manipulator arm.
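    The D-H step the abstract refers to, building each link transform from the four parameters and chaining them into the forward kinematics, can be sketched in Python (rather than MATLAB) as:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between consecutive links from the four standard
    Denavit-Hartenberg parameters, as the MATLAB Robotics Toolbox builds it."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_rows, joint_angles):
    """Chain the per-link transforms; dh_rows holds (a, alpha, d) per link."""
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# Planar two-link arm (link lengths 0.3 m and 0.2 m), both joints at 45 degrees.
links = [(0.3, 0.0, 0.0), (0.2, 0.0, 0.0)]
T = forward_kinematics(links, [np.pi / 4, np.pi / 4])
# End-effector position: x = 0.3*cos(45) + 0.2*cos(90), y = 0.3*sin(45) + 0.2*sin(90)
```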

  2. Initial laboratory experience with a novel ultrasound probe for standard and single-port robotic kidney surgery: increasing console surgeon autonomy and minimizing instrument clashing.

    PubMed

    Yakoubi, Rachid; Autorino, Riccardo; Laydner, Humberto; Guillotreau, Julien; White, Michael A; Hillyer, Shahab; Spana, Gregory; Khanna, Rakesh; Isaac, Wahib; Haber, Georges-Pascal; Stein, Robert J; Kaouk, Jihad H

    2012-06-01

    The aim of this study was to evaluate a novel ultrasound probe specifically developed for robotic surgery by determining its efficiency in identifying renal tumors. The study was carried out using the Da Vinci™ surgical system in one female pig. Renal tumor targets were created by percutaneous injection of a tumor mimic mixture. Single-port and standard robotic partial nephrectomy were performed. Intraoperative ultrasound was performed using both a standard laparoscopic probe and the new ProART™ Robotic probe. Probe maneuverability and ease of handling for tumor localization were recorded. The standard laparoscopic probe was guided by the assistant. Significant clashing with the robotic arms was noted during the single-port procedure. The novel robotic probe was easily introduced through the assistant trocar and held by the console surgeon using the robotic Prograsp™, with no clashing registered in the external operative field. The average time for grasping the new robotic probe was less than 10 s. Once inserted and grasped, no limitation was found in terms of instrument clashing during the single-port procedure. This novel ultrasound probe developed for robotic surgery proved user-friendly during standard and especially single-port robotic partial nephrectomy in the porcine model. Copyright © 2011 John Wiley & Sons, Ltd.

  3. Laser electro-optic system for rapid three-dimensional /3-D/ topographic mapping of surfaces

    NASA Technical Reports Server (NTRS)

    Altschuler, M. D.; Altschuler, B. R.; Taboada, J.

    1981-01-01

    It is pointed out that the generic utility of a robot in a factory/assembly environment could be substantially enhanced by providing a vision capability to the robot. A standard videocamera for robot vision provides a two-dimensional image which contains insufficient information for a detailed three-dimensional reconstruction of an object. Approaches which supply the additional information needed for the three-dimensional mapping of objects with complex surface shapes are briefly considered and a description is presented of a laser-based system which can provide three-dimensional vision to a robot. The system consists of a laser beam array generator, an optical image recorder, and software for controlling the required operations. The projection of a laser beam array onto a surface produces a dot pattern image which is viewed from one or more suitable perspectives. Attention is given to the mathematical method employed, the space coding technique, the approaches used for obtaining the transformation parameters, the optics for laser beam array generation, the hardware for beam array coding, and aspects of image acquisition.
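    The depth recovery behind such a system reduces to ray-plane triangulation: the camera ray through each imaged laser dot is intersected with the calibrated laser plane. A minimal sketch with invented calibration values, not the paper's space-coding machinery:

```python
import numpy as np

def triangulate(pixel_dir, plane_normal, plane_d):
    """Intersect the camera ray through a laser-dot pixel with the known laser
    plane n . X = d; the intersection is the dot's 3-D surface position.
    The camera center is taken as the origin, so the ray is X = s * pixel_dir."""
    s = plane_d / (plane_normal @ pixel_dir)
    return s * pixel_dir

# Assumed calibration: a laser plane x = 0.1 m, camera at the origin.
# The dot is imaged along the (un-normalized) ray direction (0.2, 0.1, 1).
ray = np.array([0.2, 0.1, 1.0])
point = triangulate(ray, np.array([1.0, 0.0, 0.0]), 0.1)
# point = (0.1, 0.05, 0.5): the surface lies half a meter in front of the camera
```

    A dot-pattern projector generalizes this to one plane (or ray) per projected beam, which is where the space-coding scheme mentioned above comes in: it identifies which beam produced which dot.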

  4. The RABiT: a rapid automated biodosimetry tool for radiological triage. II. Technological developments.

    PubMed

    Garty, Guy; Chen, Youhua; Turner, Helen C; Zhang, Jian; Lyulko, Oleksandra V; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Lawrence Yao, Y; Brenner, David J

    2011-08-01

    Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. The RABiT analyses fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cut-off dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day.

  5. The RABiT: A Rapid Automated Biodosimetry Tool For Radiological Triage. II. Technological Developments

    PubMed Central

    Garty, Guy; Chen, Youhua; Turner, Helen; Zhang, Jian; Lyulko, Oleksandra; Bertucci, Antonella; Xu, Yanping; Wang, Hongliang; Simaan, Nabil; Randers-Pehrson, Gerhard; Yao, Y. Lawrence; Brenner, David J.

    2011-01-01

    Purpose Over the past five years the Center for Minimally Invasive Radiation Biodosimetry at Columbia University has developed the Rapid Automated Biodosimetry Tool (RABiT), a completely automated, ultra-high throughput biodosimetry workstation. This paper describes recent upgrades and reliability testing of the RABiT. Materials and methods The RABiT analyzes fingerstick-derived blood samples to estimate past radiation exposure or to identify individuals exposed above or below a cutoff dose. Through automated robotics, lymphocytes are extracted from fingerstick blood samples into filter-bottomed multi-well plates. Depending on the time since exposure, the RABiT scores either micronuclei or phosphorylation of the histone H2AX, in an automated robotic system, using filter-bottomed multi-well plates. Following lymphocyte culturing, fixation and staining, the filter bottoms are removed from the multi-well plates and sealed prior to automated high-speed imaging. Image analysis is performed online using dedicated image processing hardware. Both the sealed filters and the images are archived. Results We have developed a new robotic system for lymphocyte processing, making use of an upgraded laser power and parallel processing of four capillaries at once. This system has allowed acceleration of lymphocyte isolation, the main bottleneck of the RABiT operation, from 12 to 2 sec/sample. Reliability tests have been performed on all robotic subsystems. Conclusions Parallel handling of multiple samples through the use of dedicated, purpose-built, robotics and high speed imaging allows analysis of up to 30,000 samples per day. PMID:21557703

  6. Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT

    PubMed Central

    Bowsher, James; Yan, Susu; Roper, Justin; Giles, William; Yin, Fang-Fang

    2014-01-01

    Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. 
The system could provide onboard functional and molecular imaging with 4-min scan times. PMID:24387490
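
The nonprewhitening forced-choice observer mentioned above can be sketched as simple template matching: cross-correlate a known hot-spot template with the image and report the location of the maximum response. The sketch below uses a synthetic 2-D image with invented sizes and noise levels, not reconstructed SPECT data.

```python
import numpy as np

# Minimal nonprewhitening (NPW) observer sketch: the decision statistic at
# each candidate location is the inner product of the signal template with
# the local image patch; the estimated location is the argmax.
rng = np.random.default_rng(1)

size, spot = 64, 5
true_pos = (40, 22)

# Gaussian hot-spot template (11 x 11).
y, x = np.mgrid[-spot:spot + 1, -spot:spot + 1]
template = np.exp(-(x**2 + y**2) / (2.0 * 2.0**2))

# Simulated image: background noise plus the hot spot at true_pos.
img = rng.normal(0.0, 0.2, (size, size))
img[true_pos[0]-spot:true_pos[0]+spot+1,
    true_pos[1]-spot:true_pos[1]+spot+1] += template

# NPW statistic at every interior location.
score = np.full((size, size), -np.inf)
for i in range(spot, size - spot):
    for j in range(spot, size - spot):
        patch = img[i-spot:i+spot+1, j-spot:j+spot+1]
        score[i, j] = float((template * patch).sum())

est = np.unravel_index(np.argmax(score), score.shape)
err = np.hypot(est[0] - true_pos[0], est[1] - true_pos[1])
print(est, err)  # localization error in pixels
```

In the study, average localization error of this kind of observer was the figure of merit comparing the 49-pinhole and broad-FOV systems.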

  7. Onboard functional and molecular imaging: A design investigation for robotic multipinhole SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowsher, James, E-mail: james.bowsher@duke.edu; Giles, William; Yin, Fang-Fang

    2014-01-15

    Purpose: Onboard imaging—currently performed primarily by x-ray transmission modalities—is essential in modern radiation therapy. As radiation therapy moves toward personalized medicine, molecular imaging, which views individual gene expression, may also be important onboard. Nuclear medicine methods, such as single photon emission computed tomography (SPECT), are premier modalities for molecular imaging. The purpose of this study is to investigate a robotic multipinhole approach to onboard SPECT. Methods: Computer-aided design (CAD) studies were performed to assess the feasibility of maneuvering a robotic SPECT system about a patient in position for radiation therapy. In order to obtain fast, high-quality SPECT images, a 49-pinhole SPECT camera was designed which provides high sensitivity to photons emitted from an imaging region of interest. This multipinhole system was investigated by computer-simulation studies. Seventeen hot spots 10 and 7 mm in diameter were placed in the breast region of a supine female phantom. Hot spot activity concentration was six times that of background. For the 49-pinhole camera and a reference, more conventional, broad field-of-view (FOV) SPECT system, projection data were computer simulated for 4-min scans and SPECT images were reconstructed. Hot-spot localization was evaluated using a nonprewhitening forced-choice numerical observer. Results: The CAD simulation studies found that robots could maneuver SPECT cameras about patients in position for radiation therapy. In the imaging studies, most hot spots were apparent in the 49-pinhole images. Average localization errors for 10-mm- and 7-mm-diameter hot spots were 0.4 and 1.7 mm, respectively, for the 49-pinhole system, and 3.1 and 5.7 mm, respectively, for the reference broad-FOV system. Conclusions: A robot could maneuver a multipinhole SPECT system about a patient in position for radiation therapy. 
    The system could provide onboard functional and molecular imaging with 4-min scan times.

  8. The effects of overall robot shape on the emotions invoked in users and the perceived personalities of robot.

    PubMed

    Hwang, Jihong; Park, Taezoon; Hwang, Wonil

    2013-05-01

    The affective interaction between humans and robots can be influenced by various aspects of a robot, such as appearance, countenance, gesture, and voice. Among these, the overall shape of the robot may play a key role in invoking desired emotions in users and bestowing preferred personalities on robots. In this regard, the present study experimentally investigates the effects of overall robot shape on the emotions invoked in users and the perceived personalities of the robot, with the objective of deriving guidelines for the affective design of service robots. To this end, 27 different robot shapes were selected, modeled, and fabricated, comprising combinations of three different shapes of head, trunk, and limb (legs and arms): rectangular-parallelepiped, cylindrical, and human-like. For the experiment, visual images and real prototypes of these robot shapes were presented to participants, and the emotions invoked and the personalities perceived from the presented robots were measured. The results showed that the overall shape of a robot arouses any of three emotions, named 'concerned', 'enjoyable', and 'favorable', among which the 'concerned' emotion is negatively correlated with the 'big five personality factors' while the 'enjoyable' and 'favorable' emotions are positively correlated. It was found that the 'big five personality factors' and the 'enjoyable' and 'favorable' emotions are more strongly perceived through the real prototypes than through the visual images. 
    It was also found that the robot shape consisting of a cylindrical head, human-like trunk, and cylindrical limb is best for the 'conscientious' personality and 'favorable' emotion; the shape consisting of a cylindrical head, human-like trunk, and human-like limb for the 'extroverted' personality; the shape consisting of a cylindrical head, cylindrical trunk, and cylindrical limb for the 'anti-neurotic' personality; and the shape consisting of a rectangular-parallelepiped head, human-like trunk, and human-like limb for the 'enjoyable' emotion. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Promoting Diversity in Undergraduate Research in Robotics-Based Seismic

    NASA Astrophysics Data System (ADS)

    Gifford, C. M.; Arthur, C. L.; Carmichael, B. L.; Webber, G. K.; Agah, A.

    2006-12-01

    The motivation for this research was to investigate forming evenly-spaced grid patterns with a team of mobile robots for future use in seismic imaging in polar environments. A team of robots was incrementally designed and simulated by incorporating sensors and altering each robot's controller. Challenges, design issues, and efficiency were also addressed. This research project incorporated the efforts of two undergraduate REU students from Elizabeth City State University (ECSU) in North Carolina, and the research staff at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. ECSU is a historically black university. Mentoring these two minority students in scientific research, seismology, robotics, and simulation will hopefully encourage them to pursue graduate degrees in science-related or engineering fields. The goals for this 10-week internship during summer 2006 were to educate the students in the fields of seismology, robotics, and virtual prototyping and simulation. Incrementally designing a robot platform for future enhancement and evaluation was central to this research, and involved simulation of several robots working together to change seismic grid shape and spacing. This process gave these undergraduate students experience and knowledge in an actual research project for a real-world application. The two undergraduate students gained valuable research experience and advanced their knowledge of seismic imaging, robotics, sensors, and simulation. They learned that seismic sensors can be used in an array to gather 2D and 3D images of the subsurface. They also learned that robots can support dangerous or difficult human activities, such as those in a harsh polar environment, by increasing automation, robustness, and precision. Simulating robot designs also gave them experience in programming behaviors for mobile robots. Thus far, one academic paper has resulted from their research. 
This paper received third place at the 2006 National Technical Association's (NTA) National Conference in Chicago. CReSIS, in conjunction with ECSU, provided these minority students with a well-rounded educational experience in a real-world research project. Their contributions will be used for future projects.

  10. Investigation of human-robot interface performance in household environments

    NASA Astrophysics Data System (ADS)

    Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.

    2016-05-01

    Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.

  11. Framework for robot skill learning using reinforcement learning

    NASA Astrophysics Data System (ADS)

    Wei, Yingzi; Zhao, Mingyang

    2003-09-01

    Robot skill acquisition is a process similar to human skill learning. Reinforcement learning (RL) is an online actor-critic method through which a robot can develop its skills. The reward function is the critical component, since it evaluates actions and guides the learning process. We present an augmented reward function that provides a new way for the RL controller to incorporate prior knowledge and experience. The difference form of the augmented reward function is also considered carefully. The additional reward, beyond the conventional reward, provides more heuristic information for RL. In this paper, we present a strategy for complex skill learning: an automatic robot-shaping policy decomposes the complex skill into a hierarchical learning process. A new form of value function is introduced to achieve smooth motion switching swiftly. We present a formal but practical framework for robot skill learning, and illustrate with an example the utility of the method for learning skilled robot control online.
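
One standard way to realize an augmented reward of this kind, shown here purely as an illustration (not the paper's exact formulation), is potential-based reward shaping: the environment reward is augmented with F(s, s') = γ·φ(s') − φ(s), where the potential φ encodes prior knowledge. The corridor task, potential, and hyperparameters below are invented for the demo.

```python
import numpy as np

# Q-learning on a 1-D corridor. The environment reward is 1 at the goal;
# the augmented reward adds a potential-based shaping term whose potential
# phi(s) = s encodes the heuristic "progress to the right is good" without
# changing the optimal policy.
N, GOAL, GAMMA, ALPHA, EPS = 6, 5, 0.9, 0.5, 0.2
phi = lambda s: float(s)

rng = np.random.default_rng(0)
Q = np.zeros((N, 2))                       # actions: 0 = left, 1 = right

for _ in range(500):
    s = 0
    while s != GOAL:
        if rng.random() < EPS:             # epsilon-greedy exploration
            a = int(rng.random() < 0.5)
        else:
            a = int(np.argmax(Q[s]))
        s2 = min(N - 1, s + 1) if a == 1 else max(0, s - 1)
        r_env = 1.0 if s2 == GOAL else 0.0
        r_aug = r_env + GAMMA * phi(s2) - phi(s)   # augmented reward
        target = r_aug if s2 == GOAL else r_aug + GAMMA * Q[s2].max()
        Q[s, a] += ALPHA * (target - Q[s, a])
        s = s2

greedy = [int(np.argmax(Q[s])) for s in range(GOAL)]
print(greedy)  # → [1, 1, 1, 1, 1]
```

The shaping term speeds up learning by rewarding progress at every step, while the potential-difference form guarantees the greedy policy still moves toward the goal.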

  12. Novel application of simultaneous multi-image display during complex robotic abdominal procedures

    PubMed Central

    2014-01-01

    Background The surgical robot offers the potential to integrate multiple views into the surgical console screen, and for the assistant's monitors to provide real-time views of both fields of operation. This capability can increase patient safety and surgical efficiency during an operation. Herein, we present a novel application of the multi-image display system for simultaneous visualization of endoscopic views during various complex robotic gastrointestinal operations. All operations were performed using the da Vinci Surgical System (Intuitive Surgical, Sunnyvale, CA, USA) with the assistance of Tilepro, multi-input display software, while the intraoperative scopes were employed. Three robotic operations, left hepatectomy with intraoperative common bile duct exploration, low anterior resection, and radical distal subtotal gastrectomy with intracorporeal gastrojejunostomy, were performed by three different surgeons at a tertiary academic medical center. Results The three complex robotic abdominal operations were successfully completed without difficulty or intraoperative complications. The use of Tilepro to simultaneously visualize the images from the colonoscope, gastroscope, and choledochoscope made it possible to perform additional intraoperative endoscopic procedures without extra monitors or interference with the operations. Conclusion We present a novel use of the multi-input display program on the da Vinci Surgical System to facilitate the performance of intraoperative endoscopies during complex robotic operations. Our study offers another potentially beneficial application of the robotic surgery platform toward integrating and simplifying the combination of additional procedures with complex minimally invasive operations. PMID:24628761

  13. Mobile robot self-localization system using single webcam distance measurement technology in indoor environments.

    PubMed

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-27

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are already available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of the IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common, simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method needs neither expensive high-resolution webcams nor complicated pattern recognition methods, just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.
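
The background subtraction step the abstract mentions can be sketched very simply: subtract a static background frame from the current frame, threshold the difference, and take the centroid of the changed pixels as the robot's image position. The frame sizes and the synthetic "robot" blob below are invented for the demo; a real system would work on live webcam frames.

```python
import numpy as np

# Minimal background-subtraction sketch for locating a robot in a fixed
# webcam view (synthetic data, not the paper's pipeline).
H, W = 48, 64
background = np.zeros((H, W))

# Current frame: background plus a bright "robot" blob.
frame = background.copy()
frame[20:28, 30:40] = 1.0

diff = np.abs(frame - background)          # frame differencing
mask = diff > 0.5                          # fixed threshold

ys, xs = np.nonzero(mask)                  # foreground pixel coordinates
centroid = (ys.mean(), xs.mean())          # robot position in image coords
print(centroid)  # → (23.5, 34.5)
```

The image-coordinate centroid would then be fed into the distance-estimation formulas (via the known-dimension ground-tile pattern) to recover the robot's metric position.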

  14. Absolute High-Precision Localisation of an Unmanned Ground Vehicle by Using Real-Time Aerial Video Imagery for Geo-referenced Orthophoto Registration

    NASA Astrophysics Data System (ADS)

    Kuhnert, Lars; Ax, Markus; Langer, Matthias; Nguyen van, Duong; Kuhnert, Klaus-Dieter

    This paper describes an absolute localisation method for an unmanned ground vehicle (UGV) when GPS is unavailable to the vehicle. The basic idea is to pair an unmanned aerial vehicle (UAV) with the ground vehicle and use it as an external sensor platform to achieve absolute localisation of the robotic team. Besides discussing the rather naive method of directly using the GPS position of the aerial robot to deduce the ground robot's position, the main focus of this paper lies on the indirect use of the aerial robot's telemetry data, combined with live video images from an onboard camera, to register local video images against a priori registered orthophotos. This yields a precise, driftless absolute localisation of the unmanned ground vehicle. Experiments with our robotic team (AMOR and PSYCHE) successfully verify this approach.

  15. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments

    PubMed Central

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-01

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are already available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of the IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common, simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method needs neither expensive high-resolution webcams nor complicated pattern recognition methods, just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282

  16. Estimating Position of Mobile Robots From Omnidirectional Vision Using an Adaptive Algorithm.

    PubMed

    Li, Luyang; Liu, Yun-Hui; Wang, Kai; Fang, Mu

    2015-08-01

    This paper presents a novel and simple adaptive algorithm for estimating the position of a mobile robot with high accuracy in an unknown and unstructured environment by fusing images of an omnidirectional vision system with measurements from odometry and inertial sensors. Based on a new derivation in which the omnidirectional projection can be linearly parameterized by the positions of the robot and of natural feature points, we propose a novel adaptive algorithm, similar to the Slotine-Li algorithm in model-based adaptive control, to estimate the robot's position using the feature points tracked in the image sequence together with the robot's velocity and orientation angles measured by odometry and inertial sensors. It is proved that the adaptive algorithm leads to global exponential convergence of the position estimation errors to zero. Simulations and real-world experiments are performed to demonstrate the performance of the proposed algorithm.
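
The key structural idea, a measurement that is linear in the unknown parameters, can be illustrated with a generic normalized-gradient estimator: when y = φᵀθ, the update θ̂ ← θ̂ + γ·φ·(y − φᵀθ̂) drives the estimation error to zero under persistently exciting regressors. The 2-D "position" parameters and random regressors below are synthetic stand-ins, not the paper's omnidirectional projection model.

```python
import numpy as np

# Generic adaptive parameter estimation for a linearly parameterized
# measurement y = phi^T theta (illustrative only).
rng = np.random.default_rng(2)

theta = np.array([1.5, -0.7])        # unknown parameters (e.g., a position)
theta_hat = np.zeros(2)              # estimate, initialized at zero
gain = 0.5

for _ in range(500):
    phi = rng.normal(size=2)         # regressor; its variation provides
                                     # the persistent excitation needed
    y = phi @ theta                  # noise-free linear measurement
    e = y - phi @ theta_hat          # prediction error
    # Normalizing by (1 + |phi|^2) keeps the update stable for large regressors.
    theta_hat = theta_hat + gain * phi * e / (1.0 + phi @ phi)

print(theta_hat)  # ≈ [1.5, -0.7]
```

The paper's contribution is showing that the omnidirectional projection admits exactly this linear parameterization, so a Slotine-Li-style law with proven exponential convergence applies.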

  17. Estimating the position and orientation of a mobile robot with respect to a trajectory using omnidirectional imaging and global appearance.

    PubMed

    Payá, Luis; Reinoso, Oscar; Jiménez, Luis M; Juliá, Miguel

    2017-01-01

    Over the past years, mobile robots have proliferated in both domestic and industrial environments to solve tasks such as cleaning, assistance, and material transportation. One of their advantages is the ability to operate in wide areas without the need to introduce changes into the existing infrastructure. Thanks to the sensors they can be equipped with and their processing systems, mobile robots constitute a versatile alternative for a wide range of applications. When designing the control system of a mobile robot so that it carries out a task autonomously in an unknown environment, the robot is expected to make decisions about its localization in the environment and about the trajectory it has to follow to arrive at the target points. More precisely, the robot has to find a reasonably good solution to two crucial problems: building a model of the environment, and estimating its position within this model. In this work, we propose a framework to solve these problems using only visual information. The mobile robot is equipped with a catadioptric vision sensor that provides omnidirectional images of the environment. First, the robot travels along the trajectories to be included in the model and uses the captured visual information to build the model. After that, the robot is able to estimate its position and orientation with respect to the trajectory. Among the possible approaches to these problems, global-appearance techniques are used in this work; they have emerged recently as a robust and efficient alternative to landmark-extraction techniques. A global description method based on the Radon transform is used to design the mapping and localization algorithms, and a set of images captured by a mobile robot in a real environment, under realistic operating conditions, is used to test the performance of these algorithms.
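
The global-appearance workflow can be sketched in two phases: during mapping, one global descriptor is stored per position along the trajectory; during localization, the current image's descriptor is compared against the map and the nearest match gives the position. The sketch below uses a crude row-sum "signature" as a stand-in for the paper's Radon-transform descriptor, with invented image sizes and noise.

```python
import numpy as np

def descriptor(img):
    # Global-appearance descriptor: normalized row sums.
    # (A crude stand-in for the Radon-transform descriptor.)
    d = img.sum(axis=1).astype(float)
    return d / (np.linalg.norm(d) + 1e-9)

rng = np.random.default_rng(3)

# Mapping phase: one image captured at each of 10 trajectory positions.
map_images = [rng.random((32, 32)) for _ in range(10)]
map_desc = np.stack([descriptor(im) for im in map_images])

# Localization phase: a slightly noisy re-observation of position 6.
query = map_images[6] + rng.normal(0.0, 0.05, (32, 32))
q = descriptor(query)

dists = np.linalg.norm(map_desc - q, axis=1)
est = int(np.argmin(dists))
print(est)  # → 6
```

The appeal of this family of methods is exactly what the sketch shows: no landmark extraction or feature matching, just whole-image descriptors and nearest-neighbour comparison.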

  18. Interactive-rate Motion Planning for Concentric Tube Robots.

    PubMed

    Torres, Luis G; Baykal, Cenk; Alterovitz, Ron

    2014-05-01

    Concentric tube robots may enable new, safer minimally invasive surgical procedures by moving along curved paths to reach difficult-to-reach sites in a patient's anatomy. Operating these devices is challenging due to their complex, unintuitive kinematics and the need to avoid sensitive structures in the anatomy. In this paper, we present a motion planning method that computes collision-free motion plans for concentric tube robots at interactive rates. Our method's high speed enables a user to continuously and freely move the robot's tip while the motion planner ensures that the robot's shaft does not collide with any anatomical obstacles. Our approach uses a highly accurate mechanical model of tube interactions, which is important since small movements of the tip position may require large changes in the shape of the device's shaft. Our motion planner achieves its high speed and accuracy by combining offline precomputation of a collision-free roadmap with online position control. We demonstrate our interactive planner in a simulated neurosurgical scenario where a user guides the robot's tip through the environment while the robot automatically avoids collisions with the anatomical obstacles.
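
The offline/online split described above can be sketched with a probabilistic-roadmap-style planner: sample collision-free configurations and connect near neighbours offline, then answer online queries by snapping to the nearest roadmap nodes and searching the graph. The 2-D configuration space and disc obstacle below are stand-ins for a real concentric tube model, and the paper additionally couples the roadmap with online position control.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(4)
OBST, R = np.array([0.5, 0.5]), 0.2           # disc obstacle (center, radius)

free = lambda q: np.linalg.norm(q - OBST) > R  # collision check

# --- Offline: sample collision-free nodes and connect near neighbours.
nodes = [q for q in rng.random((200, 2)) if free(q)]
edges = {i: [] for i in range(len(nodes))}
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        if np.linalg.norm(nodes[i] - nodes[j]) < 0.15:
            edges[i].append(j)
            edges[j].append(i)

def plan(start, goal):
    # --- Online: snap to nearest roadmap nodes, then BFS over the graph.
    s = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - start))
    g = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - goal))
    prev, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        if u == g:
            break
        for v in edges[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    if g not in prev:
        return None
    path, u = [], g
    while u is not None:
        path.append(nodes[u])
        u = prev[u]
    return path[::-1]

path = plan(np.array([0.05, 0.05]), np.array([0.95, 0.95]))
print(len(path))
```

Because the expensive sampling and collision checking happen offline, the online query reduces to a nearest-node lookup and a graph search, which is what makes interactive rates achievable.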

  19. Nasa's Ant-Inspired Swarmie Robots

    NASA Technical Reports Server (NTRS)

    Leucht, Kurt W.

    2016-01-01

    As humans push further beyond the grasp of earth, robotic missions in advance of human missions will play an increasingly important role. These robotic systems will find and retrieve valuable resources as part of an in-situ resource utilization (ISRU) strategy. They will need to be highly autonomous while maintaining high task performance levels. NASA Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots to be used as a ground-based research platform for ISRU missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in a previously unmapped environment and return those resources to a central site. This talk will guide the audience through the Swarmie robot project from its conception by students in a New Mexico research lab to its robot trials in an outdoor parking lot at NASA. The software technologies and techniques used on the project will be discussed, as well as various challenges and solutions that were encountered by the development team along the way.

  20. Depth and thermal sensor fusion to enhance 3D thermographic reconstruction.

    PubMed

    Cao, Yanpeng; Xu, Baobei; Ye, Zhangyu; Yang, Jiangxin; Cao, Yanlong; Tisse, Christel-Loic; Li, Xin

    2018-04-02

    Three-dimensional geometrical models with incorporated surface temperature data provide important information for various applications such as medical imaging, energy auditing, and intelligent robots. In this paper, we present a robust method for mobile, real-time 3D thermographic reconstruction through depth and thermal sensor fusion. A multimodal imaging device consisting of a thermal camera and an RGB-D sensor is calibrated geometrically and used for data capturing. Based on the underlying principle that temperature information remains robust against illumination and viewpoint changes, we present a Thermal-guided Iterative Closest Point (T-ICP) methodology to facilitate reliable 3D thermal scanning applications. The pose of the sensing device is initially estimated using correspondences found by maximizing the thermal consistency between consecutive infrared images. The coarse pose estimate is further refined by finding the motion parameters that minimize a combined geometric and thermographic loss function. Experimental results demonstrate that complementary information captured by multimodal sensors can be utilized to improve the performance of 3D thermographic reconstruction. Through effective fusion of thermal and depth data, the proposed approach generates more accurate 3D thermal models using significantly less scanning data.
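
The combined loss described above can be sketched as follows: for a candidate rigid transform, find nearest-neighbour correspondences, then sum squared point distances plus a weighted squared temperature difference, so that temperature consistency helps disambiguate the pose. The 2-D point clouds, the weight `lam`, and the coarse grid search below are illustrative stand-ins for the paper's 3-D formulation and optimizer.

```python
import numpy as np

def transform(points, theta, t):
    # 2-D rigid transform: rotate by theta, then translate by t.
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T + t

def combined_loss(src, src_temp, dst, dst_temp, theta, t, lam=0.1):
    moved = transform(src, theta, t)
    d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    nn = d2.argmin(axis=1)                            # correspondences for this pose
    geo = d2[np.arange(len(src)), nn].sum()           # geometric term
    thermal = ((src_temp - dst_temp[nn]) ** 2).sum()  # thermographic term
    return geo + lam * thermal

rng = np.random.default_rng(5)
src = rng.random((50, 2))
temps = rng.random(50)                  # per-point surface temperatures
true_theta, true_t = 0.3, np.array([0.2, -0.1])
dst = transform(src, true_theta, true_t)

# Coarse 1-D search over the rotation; the true angle minimizes the loss.
cands = np.linspace(0.0, 0.6, 7)
losses = [combined_loss(src, temps, dst, temps, th, true_t) for th in cands]
best = float(cands[int(np.argmin(losses))])
print(best)  # → closest candidate to the true angle 0.3
```

Because the thermal term is evaluated over the pose-dependent correspondences, temperature mismatches penalize wrong poses even when the geometry alone is ambiguous, which is the intuition behind T-ICP.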

  1. MRI-powered biomedical devices.

    PubMed

    Hovet, Sierra; Ren, Hongliang; Xu, Sheng; Wood, Bradford; Tokuda, Junichi; Tse, Zion Tsz Ho

    2017-11-16

    Magnetic resonance imaging (MRI) is beneficial for imaging-guided procedures because it provides higher resolution images and better soft tissue contrast than computed tomography (CT), ultrasound, and X-ray. MRI can be used to streamline diagnostics and treatment because it does not require patients to be repositioned between scans of different areas of the body. It is even possible to use MRI to visualize, power, and control medical devices inside the human body to access remote locations and perform minimally invasive procedures. Therefore, MR conditional medical devices have the potential to improve a wide variety of medical procedures; this potential is explored in terms of practical considerations pertaining to clinical applications and the MRI environment. Recent advancements in this field are introduced with a review of clinically relevant research in the areas of interventional tools, endovascular microbots, and closed-loop controlled MRI robots. Challenges related to technology and clinical feasibility are discussed, including MRI based propulsion and control, navigation of medical devices through the human body, clinical adoptability, and regulatory issues. The development of MRI-powered medical devices is an emerging field, but the potential clinical impact of these devices is promising.

  2. Augmented reality in surgical procedures

    NASA Astrophysics Data System (ADS)

    Samset, E.; Schmalstieg, D.; Vander Sloten, J.; Freudenthal, A.; Declerck, J.; Casciaro, S.; Rideng, Ø.; Gersak, B.

    2008-02-01

    Minimally invasive therapy (MIT) is one of the most important trends in modern medicine. It includes a wide range of therapies in videoscopic surgery and interventional radiology and is performed through small incisions. It reduces hospital stay time by allowing faster recovery and offers substantially improved cost-effectiveness for the hospital and society. However, the introduction of MIT has also led to new problems. The manipulation of structures within the body through small incisions reduces dexterity and tactile feedback. It requires a different approach than conventional surgical procedures, since eye-hand co-ordination is not based on direct vision but predominantly on image guidance via endoscopes or radiological imaging modalities. ARIS*ER is a multidisciplinary consortium developing a new generation of decision-support tools for MIT by augmenting visual and sensorial feedback. We will present tools based on novel concepts in visualization, robotics, and haptics, providing tailored solutions for a range of clinical applications. Examples from radio-frequency ablation of liver tumors, laparoscopic liver surgery, and minimally invasive cardiac surgery will be presented. Demonstrators were developed with the aim of providing a seamless workflow for the clinical user conducting image-guided therapy.

  3. Surgical robot setup simulation with consistent kinematics and haptics for abdominal surgery.

    PubMed

    Hayashibe, Mitsuhiro; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto

    2005-01-01

    Preoperative simulation and planning of the surgical robot setup should accompany advanced robotic surgery if its advantages are to be further pursued. Feedback from the planning system will play an essential role in computer-aided robotic surgery, in addition to preoperative detailed geometric information from patient CT/MRI images. Surgical robot setup simulation systems for appropriate trocar site placement have been developed, especially for abdominal surgery. The motion of the surgical robot can be simulated and rehearsed with kinematic constraints at the trocar site and the inverse kinematics of the robot. Results from simulation using clinical patient data verify the effectiveness of the proposed system.
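
An inverse-kinematics step of the kind such a setup simulator needs can be sketched for the simplest case, a planar 2-link arm: compute joint angles that place the tip at a target point, then verify with forward kinematics. Link lengths and the target below are invented; a real surgical setup simulator would use the robot's full kinematic chain plus the remote-center-of-motion constraint at the trocar site.

```python
import numpy as np

L1, L2 = 1.0, 0.8  # link lengths (illustrative)

def ik(x, y):
    # Closed-form 2-link inverse kinematics (one elbow branch).
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = np.arccos(np.clip(c2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

def fk(q1, q2):
    # Forward kinematics: tip position from joint angles.
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return x, y

target = (1.2, 0.9)            # reachable: 0.2 <= |target| = 1.5 <= 1.8
q1, q2 = ik(*target)
print(fk(q1, q2))  # ≈ (1.2, 0.9)
```

In the simulator described above, solving the inverse kinematics repeatedly, subject to the fixed fulcrum at the trocar, is what lets a planned motion be rehearsed before the patient is on the table.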

  4. Review of emerging surgical robotic technology.

    PubMed

    Peters, Brian S; Armijo, Priscila R; Krause, Crystal; Choudhury, Songita A; Oleynikov, Dmitry

    2018-04-01

    The use of laparoscopic and robotic procedures has increased in general surgery. Minimally invasive robotic surgery has made tremendous progress in a relatively short period of time, realizing improvements for both the patient and surgeon. This has led to an increase in the use and development of robotic devices and platforms for general surgery. The purpose of this review is to explore current and emerging surgical robotic technologies in a growing and dynamic environment of research and development. This review explores medical and surgical robotic endoscopic surgery and peripheral technologies currently available or in development. The devices discussed here are specific to general surgery, including laparoscopy, colonoscopy, esophagogastroduodenoscopy, and thoracoscopy. Benefits and limitations of each technology were identified and applicable future directions were described. A number of FDA-approved devices and platforms for robotic surgery were reviewed, including the da Vinci Surgical System, Sensei X Robotic Catheter System, FreeHand 1.2, invendoscopy E200 system, Flex® Robotic System, Senhance, ARES, the Single-Port Instrument Delivery Extended Research (SPIDER), and the NeoGuide Colonoscope. Additionally, platforms were reviewed which have not yet obtained FDA approval including MiroSurge, ViaCath System, SPORT™ Surgical System, SurgiBot, Versius Robotic System, Master and Slave Transluminal Endoscopic Robot, Verb Surgical, Miniature In Vivo Robot, and the Einstein Surgical Robot. The use and demand for robotic medical and surgical platforms is increasing and new technologies are continually being developed. New technologies are increasingly implemented to improve on the capabilities of previously established systems. Future studies are needed to further evaluate the strengths and weaknesses of each robotic surgical device and platform in the operating suite.

  5. Intelligent robot trends and predictions for the first year of the new millennium

    NASA Astrophysics Data System (ADS)

    Hall, Ernest L.

    2000-10-01

    An intelligent robot is a remarkably useful combination of a manipulator, sensors and controls. The current use of these machines in outer space, medicine, hazardous materials, defense applications and industry is being pursued with vigor. In factory automation, industrial robots can improve productivity, increase product quality and improve competitiveness. The computer and the robot have both been developed during recent times. The intelligent robot combines both technologies and requires a thorough understanding and knowledge of mechatronics. Today's robotic machines are faster, cheaper, more repeatable, more reliable and safer than ever. The knowledge base of inverse kinematic and dynamic solutions and intelligent controls is increasing. More attention is being given by industry to robots, vision and motion controls. New areas of usage are emerging for service robots, remote manipulators and automated guided vehicles. Economically, the robotics industry now has more than a billion-dollar market in the U.S. and is growing. Feasibility studies show decreasing costs for robots and unaudited healthy rates of return for a variety of robotic applications. However, the road from inspiration to successful application can be long and difficult, often taking decades to achieve a new product. A greater emphasis on mechatronics is needed in our universities. Certainly, more cooperation between government, industry and universities is needed to speed the development of intelligent robots that will benefit industry and society. The fearful robot stories may help us prevent future disaster. The inspirational robot ideas may inspire the scientists of tomorrow. However, the intelligent robot ideas, which can be reduced to practice, will change the world.

  6. On the development of a reactive sensor-based robotic system

    NASA Technical Reports Server (NTRS)

    Hexmoor, Henry H.; Underwood, William E., Jr.

    1989-01-01

    Flexible robotic systems for space applications need to use local information to guide their actions in uncertain environments where the state of the environment, and even the goals, may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modifiable while strategic goals are maintained. Furthermore, reactive robotic systems need a broader view of their environments than purely sensor-driven systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for dynamic description of the environment and the determination of purposive and timely action. Applications of this scheme to assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.

  7. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    NASA Astrophysics Data System (ADS)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After more than 30 years of robot use in classic industrial automation applications, service robots now form a constantly growing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The defined goal of the mobile manipulator is to support the off-shift staff in carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement using a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.

  8. "You gotta try it all": Parents' Experiences with Robotic Gait Training for their Children with Cerebral Palsy.

    PubMed

    Beveridge, Briony; Feltracco, Deanna; Struyf, Jillian; Strauss, Emily; Dang, Saniya; Phelan, Shanon; Wright, F Virginia; Gibson, Barbara E

    2015-01-01

    Innovative robotic technologies hold strong promise for improving walking abilities of children with cerebral palsy (CP), but may create expectations for parents pursuing the "newest thing" in treatment. The aim of this qualitative study was to explore parents' values about walking in relation to their experiences with robotic gait training for their children. Semi-structured interviews were conducted with parents of five ambulatory children with CP participating in a randomized trial investigating robotic gait training effectiveness. Parents valued walking, especially "correct" walking, as a key component of their children's present and future well-being. They continually sought the "next best thing" in therapy and viewed the robotic gait trainer as a potentially revolutionary technology despite mixed experiences. The results can help inform rehabilitation therapists' knowledge of parents' values and perspectives, and guide effective collaborations toward meeting the therapeutic needs of children with CP.

  9. Group sessions with Paro in a nursing home: Structure, observations and interviews.

    PubMed

    Robinson, Hayley; Broadbent, Elizabeth; MacDonald, Bruce

    2016-06-01

    We recently reported that a companion robot reduced residents' loneliness in a randomised controlled trial at an aged-care facility. This report aims to provide additional, previously unpublished data about how the sessions were run, residents' interactions with the robot and staff perspectives. Observations were conducted focusing on engagement, how residents treated the robot and if the robot acted as a social catalyst. In addition, 16 residents and 21 staff were asked open-ended questions at the end of the study about the sessions and the robot. Observations indicated that some residents engaged on an emotional level with Paro, and Paro was treated as both an agent and an artificial object. Interviews revealed that residents enjoyed sharing, interacting with and talking about Paro. This study supports other research showing Paro has psychosocial benefits and provides a guide for those wishing to use Paro in a group setting in aged care.

  10. Metaphors to Drive By: Exploring New Ways to Guide Human-Robot Interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David J. Bruemmer; David I. Gertman; Curtis W. Nielsen

    2007-08-01

    Autonomous behaviors created by the research and development community are not being extensively utilized within energy, defense, security, or industrial contexts. This paper provides evidence that the interaction methods used alongside these behaviors may not provide a mental model that can be easily adopted or used by operators. Although autonomy has the potential to reduce overall workload, the use of robot behaviors often increased the complexity of the underlying interaction metaphor. This paper reports our development of new metaphors that support increased robot complexity without passing the complexity of the interaction onto the operator. Furthermore, we illustrate how recognition of problems in human-robot interactions can drive the creation of new metaphors for design and how human factors lessons in usability, human performance, and our social contract with technology have the potential for enormous payoff in terms of establishing effective, user-friendly robot systems when appropriate metaphors are used.

  11. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
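    The abstract above describes recognizing objects by their color coding rather than gray-level features. As a hedged illustration only (not the paper's algebraic polynomial scheme, and with hypothetical object names and reference colors), a controlled-environment classifier can assign each pixel to its nearest reference color and label a region by majority vote:

    ```python
    # Illustrative sketch of color-coded object recognition in a controlled
    # scene. REFERENCE_COLORS and the object names are hypothetical examples,
    # not taken from the cited work.

    REFERENCE_COLORS = {
        "tool_rack": (200, 30, 30),    # red-coded object class
        "sample_tray": (30, 200, 30),  # green-coded object class
        "power_unit": (30, 30, 200),   # blue-coded object class
    }

    def nearest_label(pixel):
        """Assign a pixel to the closest reference color (squared RGB distance)."""
        return min(REFERENCE_COLORS,
                   key=lambda name: sum((p - c) ** 2
                                        for p, c in zip(pixel, REFERENCE_COLORS[name])))

    def recognize(region):
        """Label an image region by the majority color class of its pixels."""
        votes = {}
        for row in region:
            for px in row:
                lbl = nearest_label(px)
                votes[lbl] = votes.get(lbl, 0) + 1
        return max(votes, key=votes.get)

    # A 2x2 region that is mostly red pixels classifies as the red-coded object.
    region = [[(190, 40, 35), (210, 25, 28)],
              [(205, 33, 31), (25, 190, 40)]]
    print(recognize(region))  # -> tool_rack
    ```

    In a controlled setting such as the one the abstract envisions, this kind of nearest-color vote is fast because each pixel requires only a few arithmetic comparisons.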

  12. Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus

    NASA Astrophysics Data System (ADS)

    Baylou, P.; Amor, B. El Hadj; Bousseau, G.

    1983-10-01

    After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria must be modified. A study of the images of moving objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications to object shapes, thresholding levels and decision parameters as a function of the robot speed.

  13. Humanoid Robots: A New Kind of Tool

    DTIC Science & Technology

    2000-01-01

    Breazeal (Ferrell), R. Irie, C. C. Kemp, M. J. Marjanovic, B. Scassellati, M. M. Williamson, Alternate Essences of Intelligence, AAAI 1998. 2 R. A. Brooks, C...Breazeal, M. J. Marjanovic, B. Scassellati, M. M. Williamson, The Cog Project: Building a Humanoid Robot, Computation for Metaphors, Analogy and...Functions, Vol. 608, 1990, New York Academy of Sciences, pp. 637-676. 7 M. J. Marjanovic, B. Scassellati, M. M. Williamson, Self-Taught Visually-Guided

  14. Active MRI tracking for robotic assisted FUS

    NASA Astrophysics Data System (ADS)

    Xiao, Xu; Huang, Zhihong; Melzer, Andreas

    2017-03-01

    MR-guided FUS is a noninvasive method producing thermal necrosis at the position of tumors with high accuracy and temperature control. Because the typical size of the ultrasound focus is smaller than the area of the tissue to be treated, focus repositioning becomes necessary to achieve multiple sonications covering the whole targeted area. Using MR-compatible mechanical actuators can give the ultrasound beam a wider treatment range than electronic beam steering alone, and more flexibility in positioning the transducer. An active MR tracking technique was integrated into the MRgFUS system to help locate the position of the mechanical actuator and the FUS transducer. For this study, a precise agar reference model was designed and fabricated to test the performance of the active tracking technique when used with the MR-compatible robotic system InnoMotion™ (IBSMM, Engineering spol. s r.o. / Ltd, Czech Republic). The precision, tracking range and positioning speed of the combined robotic FUS system were evaluated. Compared to existing MR-guided HIFU systems, the combined robotic system with active tracking offers the potential for FUS treatment over a larger spatial range and at a higher speed, addressing one of the main challenges of organ motion tracking.

  15. Embodied neurofeedback with an anthropomorphic robotic hand

    PubMed Central

    Braun, Niclas; Emkes, Reiner; Thorne, Jeremy D.; Debener, Stefan

    2016-01-01

    Neurofeedback-guided motor imagery training (NF-MIT) has been suggested as a promising therapy for stroke-induced motor impairment. Whereas much NF-MIT research has aimed at signal processing optimization, the type of sensory feedback given to the participant has received less attention. Often the feedback signal is highly abstract and not inherently coupled to the mental act performed. In this study, we asked whether an embodied feedback signal is more efficient for neurofeedback operation than a non-embodiable feedback signal. Inspired by the rubber hand illusion, demonstrating that an artificial hand can be incorporated into one’s own body scheme, we used an anthropomorphic robotic hand to visually guide the participants’ motor imagery act and to deliver neurofeedback. Using two experimental manipulations, we investigated how a participant’s neurofeedback performance and subjective experience were influenced by the embodiability of the robotic hand, and by the neurofeedback signal’s validity. As pertains to embodiment, we found a promoting effect of robotic-hand embodiment in subjective, behavioral, electrophysiological and electrodermal measures. Regarding neurofeedback signal validity, we found some differences between real and sham neurofeedback in terms of subjective and electrodermal measures, but not in terms of behavioral and electrophysiological measures. This study motivates the further development of embodied feedback signals for NF-MIT. PMID:27869190

  16. Developing a Wearable Ankle Rehabilitation Robotic Device for in-Bed Acute Stroke Rehabilitation.

    PubMed

    Ren, Yupeng; Wu, Yi-Ning; Yang, Chung-Yong; Xu, Tao; Harvey, Richard L; Zhang, Li-Qun

    2017-06-01

    Ankle movement training is important in motor recovery post stroke, and early intervention is critical to stroke rehabilitation. However, acute stroke survivors receive motor rehabilitation during only a small fraction of their time, partly due to the lack of effective devices and protocols suitable for early in-bed rehabilitation. Considering that the first few months post stroke are critical in stroke recovery, there is a strong need to start motor rehabilitation early, mobilize the ankle, and conduct movement therapy. This study seeks to address that need and deliver intensive passive and active movement training in acute stroke using a wearable ankle robotic device. An isometric torque generation mode under real-time feedback is used to guide patients in motor relearning. In the passive stretching mode, the wearable robotic device stretches the ankle throughout its range of motion to extreme dorsiflexion, forcefully and safely. In the active movement training mode, the patient is guided and motivated to actively participate in movement training through game playing. Clinical testing of the wearable robotic device on 10 acute stroke survivors over 12 sessions of feedback-facilitated isometric torque generation and passive and active movement training indicated that early in-bed rehabilitation could have facilitated neuroplasticity and helped improve motor control ability.

  17. Navigating the pathway to robotic competency in general thoracic surgery.

    PubMed

    Seder, Christopher W; Cassivi, Stephen D; Wigle, Dennis A

    2013-01-01

    Although robotic technology has addressed many of the limitations of traditional videoscopic surgery, robotic surgery has not gained widespread acceptance in the general thoracic community. We report our initial robotic surgery experience and propose a structured, competency-based pathway for the development of robotic skills. Between December 2008 and February 2012, a total of 79 robot-assisted pulmonary, mediastinal, benign esophageal, or diaphragmatic procedures were performed. Data on patient characteristics and perioperative outcomes were retrospectively collected and analyzed. During the study period, one surgeon and three residents participated in a triphasic, competency-based pathway designed to teach robotic skills. The pathway consisted of individual preclinical learning followed by mentored preclinical exercises and progressive clinical responsibility. The robot-assisted procedures performed included lung resection (n = 38), mediastinal mass resection (n = 19), hiatal or paraesophageal hernia repair (n = 12), and Heller myotomy (n = 7), among others (n = 3). There were no perioperative mortalities, with a 20% complication rate and a 3% readmission rate. Conversion to a thoracoscopic or open approach was required in eight pulmonary resections, to facilitate dissection (six) or to control hemorrhage (two). Fewer major perioperative complications were observed in the latter half of the experience. All residents who participated in the thoracic surgery robotic pathway perform robot-assisted procedures as part of their clinical practice. Robot-assisted thoracic surgery can be safely learned when skill acquisition is guided by a structured, competency-based pathway.

  18. Cooperative crossing of traffic intersections in a distributed robot system

    NASA Astrophysics Data System (ADS)

    Rausch, Alexander; Oswald, Norbert; Levi, Paul

    1995-09-01

    In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, and distributed job scheduling. While travelling along a street segment can be done autonomously by each robot, crossing an intersection, as a shared resource, forces a robot to coordinate its actions with those of other robots, e.g. by means of negotiation. We discuss the influence of cooperation on the design of a robot control architecture. Task- and sensor-specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level, control cycles run in parallel and provide fast reaction to events. Internal cooperation may occur between cycles of the same level. Altogether, the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle, we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach: two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
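    One simple form the negotiation over the shared intersection could take (a hedged sketch only; the paper does not specify this protocol, and the bid format and robot IDs here are hypothetical) is a distributed mutual-exclusion style auction: each robot broadcasts its estimated arrival time, and the lowest bid, with ties broken by robot ID, takes the right of way:

    ```python
    # Minimal sketch of negotiated crossing order for a shared intersection.
    # Each bid is (estimated_arrival_time, robot_id); every robot applies the
    # same deterministic rule to the full set of bids, so all robots agree on
    # the order without a central coordinator.

    def negotiate(bids):
        """Return the agreed crossing order: earliest arrival first,
        ties broken lexicographically by robot id."""
        return [robot for _, robot in sorted(bids)]

    bids = [(4.2, "R2"), (3.7, "R1"), (4.2, "R0")]
    print(negotiate(bids))  # -> ['R1', 'R0', 'R2']
    ```

    Because every participant sorts the same bid set with the same rule, the negotiation needs only one broadcast round, which fits the fast, level-local control cycles the architecture above describes.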

  19. Agile beam laser radar using computational imaging for robotic perception

    NASA Astrophysics Data System (ADS)

    Powers, Michael A.; Stann, Barry L.; Giza, Mark M.

    2015-05-01

    This paper introduces a new concept that applies computational imaging techniques to laser radar for robotic perception. We observe that nearly all contemporary laser radars for robotic (i.e., autonomous) applications use pixel-basis scanning, where there is a one-to-one correspondence between world coordinates and the measurements directly produced by the instrument. In such systems this is accomplished through beam scanning and/or the imaging properties of focal-plane optics. While these pixel-basis measurements yield point clouds suitable for straightforward human interpretation, the purpose of robotic perception is the extraction of meaningful features from a scene, making human interpretability and its attendant constraints mostly unnecessary. The imposing size, weight, power and cost of contemporary systems are problematic, and relief from factors that increase these metrics is important to the practicality of robotic systems. We present a system concept free from pixel-basis sampling constraints that promotes efficient and adaptable sensing modes. The cornerstone of our approach is agile and arbitrary beam formation that, when combined with a generalized mathematical framework for imaging, is suited to the particular challenges and opportunities of robotic perception systems. Our hardware concept looks toward future systems with optical device technology closely resembling modern electronically-scanned-array radar that may be years away from practicality. We present the design concept and results from a prototype system constructed and tested in a laboratory environment using a combination of developed hardware and surrogate devices for beam formation. The technological status and prognosis for key components in the system is discussed.

  20. Visual identification and similarity measures used for on-line motion planning of autonomous robots in unknown environments

    NASA Astrophysics Data System (ADS)

    Martínez, Fredy; Martínez, Fernando; Jacinto, Edwar

    2017-02-01

    In this paper we propose an on-line motion planning strategy for autonomous robots in dynamic and locally observable environments. In this approach, we first visually identify geometric shapes in the environment by filtering images. Then, an ART-2 network is used to establish the similarity between patterns. The proposed algorithm allows a robot to establish its relative location in the environment and define its navigation path based on images of the environment and their similarity to reference images. This is an efficient and minimalist method that uses the similarity of landmark view patterns to navigate to the desired destination. Laboratory tests on real prototypes demonstrate the performance of the algorithm.
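    The core of the approach above is matching a current view against stored landmark patterns. As a hedged stand-in for the ART-2 network the paper uses (the feature vectors, landmark names, and vigilance threshold below are hypothetical), a cosine-similarity match with a rejection threshold captures the same localize-by-similarity idea:

    ```python
    # Sketch of landmark-based localization by pattern similarity. This is a
    # simplified stand-in for an ART-2 network: cosine similarity plays the
    # role of the match function, and `vigilance` the role of ART's vigilance
    # parameter (observations below the threshold match no stored landmark).
    import math

    def similarity(a, b):
        """Cosine similarity between two feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def localize(observation, landmarks, vigilance=0.9):
        """Return the best-matching landmark, or None if no stored pattern
        is similar enough to the observation."""
        best = max(landmarks, key=lambda name: similarity(observation, landmarks[name]))
        return best if similarity(observation, landmarks[best]) >= vigilance else None

    landmarks = {"doorway": [1, 0, 1, 0], "corner": [0, 1, 0, 1]}
    print(localize([0.9, 0.1, 1.0, 0.0], landmarks))  # -> doorway
    ```

    A real ART-2 network additionally adapts its stored prototypes on each match, which is what lets the method cope with the dynamic environments the abstract targets; the fixed-prototype version here shows only the recognition step.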
