Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where the human control of a robot agent can be experimentally validated in a synergistic and complementary way. (Final report covering 17-Sep-2013 through 16-Sep-2014.)
PIR-1 and PIRPL. A Project in Robotics Education. Revised.
ERIC Educational Resources Information Center
Schultz, Charles P.
This paper presents the results of a project in robotics education that included: (1) designing a mobile robot--the Personal Instructional Robot-1 (PIR-1); (2) providing a guide to the purchase and assembly of necessary parts; (3) providing a way to interface the robot with common classroom microcomputers; and (4) providing a language by which the…
Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo
2011-01-01
This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots for motor impaired users to perform complex visuomotor tasks that require a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and regain/restoration of motor function in patients with upper limb sensorimotor impairment through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together for providing force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through the active prediction of intention/action. The system will actually integrate the information about the movement carried out by the user with a prediction of the performed action through an interpretation of current gaze of the user (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infra-red devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
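The RHINO arm described above is driven by serial transmission of ASCII command strings to its interfaced controller. A minimal sketch of that style of interface is shown below; the command mnemonics and framing are illustrative assumptions, not the actual RHINO protocol.

```python
import io

def frame_command(motor: str, steps: int) -> bytes:
    """Build one ASCII command string, e.g. move motor 'B' by +120 steps.

    The <letter><signed steps><CR> framing is a hypothetical example of
    the general ASCII-string style the report describes.
    """
    if not (-9999 <= steps <= 9999):
        raise ValueError("step count out of range")
    return f"{motor}{steps:+d}\r".encode("ascii")

class SerialArm:
    """Thin wrapper around any file-like serial port object."""
    def __init__(self, port):
        # In practice this would be something like serial.Serial('/dev/ttyUSB0')
        self.port = port

    def move(self, motor: str, steps: int) -> None:
        self.port.write(frame_command(motor, steps))

# Demonstration with an in-memory buffer standing in for the serial port.
log = io.BytesIO()
arm = SerialArm(log)
arm.move("B", 120)   # illustrative motor mnemonic
arm.move("C", -45)
print(log.getvalue())  # b'B+120\rC-45\r'
```

Because the wrapper only needs a `write` method, the same code exercises a real port or an in-memory log, which makes the command framing easy to test offline.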
A graphical, rule based robotic interface system
NASA Technical Reports Server (NTRS)
Mckee, James W.; Wolfsberger, John
1988-01-01
The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the object on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of data bases accessible to the program and displayable to the user.
NASA Astrophysics Data System (ADS)
Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.
2016-08-01
This article aims to implement some practical applications using the Socibot Desktop social robot. We realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory so it can be projected onto its face. The first application is created in the Compose submenu, which contains five file categories (audio, eyes, face, head, mood) that help in creating the projected sequence. The second application is more complex, the completed program containing audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as functions of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application aims to change the robot's appearance with a guise created by us. The guise was created in Adobe Photoshop and then loaded into the robot's memory.
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads - the TracLabs Biclops pan-tilt-verge head, and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionalities offered by each of the stereovision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and to evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many more advantages over the Zebra, such as: lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
Automation and robotics for the Space Exploration Initiative: Results from Project Outreach
NASA Technical Reports Server (NTRS)
Gonzales, D.; Criswell, D.; Heer, E.
1991-01-01
A total of 52 submissions were received in the Automation and Robotics (A&R) area during Project Outreach. About half of the submissions (24) contained concepts that were judged to have high utility for the Space Exploration Initiative (SEI) and were analyzed further by the robotics panel. These 24 submissions are analyzed here. Three types of robots were proposed in the high scoring submissions: structured task robots (STRs), teleoperated robots (TORs), and surface exploration robots. Several advanced TOR control interface technologies were proposed in the submissions. Many A&R concepts or potential standards were presented or alluded to by the submitters, but few specific technologies or systems were suggested.
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
Teleoperation of Robonaut Using Finger Tracking
NASA Technical Reports Server (NTRS)
Champoux, Rachel G.; Luo, Victor
2012-01-01
With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is needed because of the level of complexity in a human-like hand, where a joystick isn't accurate enough. Moreover, for some tasks, using one's own hands allows users to communicate their intentions more effectively than other inputs. The purpose of this project was to develop a natural user interface for teleoperating a remote robot. Specifically, the interface was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. It was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time in the user interface; the information is then relayed, at a slight delay, to a simulator that stands in for Robonaut.
Contreras-Vidal, Jose L.; Grossman, Robert G.
2013-01-01
In this communication, a translational clinical brain-machine interface (BMI) roadmap for an EEG-based BMI to a robotic exoskeleton (NeuroRex) is presented. This multi-faceted project addresses important engineering and clinical challenges: It addresses the validation of an intelligent, self-balancing, robotic lower-body and trunk exoskeleton (Rex) augmented with EEG-based BMI capabilities to interpret user intent to assist a mobility-impaired person to walk independently. The goal is to improve the quality of life and health status of wheelchair-bound persons by enabling standing and sitting, walking and backing, turning, ascending and descending stairs/curbs, and navigating sloping surfaces in a variety of conditions without the need for additional support or crutches. PMID:24110003
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-Earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensation for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
INL Multi-Robot Control Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot's condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot, and a multi-robot common window showing information received from each robot.
NASA Astrophysics Data System (ADS)
Tamura, Sho; Maeyama, Shoichi
Rescue robots have been actively developed since the Hanshin-Awaji (Kobe) Earthquake. Recently, rescue robots that reduce the risk of secondary disaster in NBC terror attacks and critical accidents have also been developed. Against this background, a development project for a mobile RT system operating in collapsed structures was started, and this research participates in that project. Image pointing is a useful control interface for a rescue robot because it lets the operator direct the robot with a simple operation; however, the conventional method does not work on rough terrain. In this research, we propose a system that controls the robot to reach a target position on rough terrain. It is constructed from methods that convert the designated destination into a vector and control the 3D-localized robot to follow that vector. Finally, the proposed system is evaluated through remote-control experiments with a mobile robot on a slope, and its feasibility is confirmed.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko
2011-01-01
Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in three years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology.
In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effect of overlays that are either superimposed on or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.
Three Dimensional Measurements And Display Using A Robot Arm
NASA Astrophysics Data System (ADS)
Swift, Thomas E.
1984-02-01
The purpose of this paper is to describe a project which makes three-dimensional measurements of an object using a robot arm. A program was written to determine the X-Y-Z coordinates of the end point of a Minimover-5 robot arm which was interfaced to a TRS-80 Model III microcomputer. This program was used in conjunction with computer graphics subroutines that draw a projected three-dimensional object. The robot arm was directed to touch points on an object, and lines were drawn on the screen of the microcomputer between consecutive points as they were entered. A representation of the entire object is in this way constructed on the screen. The three-dimensional graphics subroutines can rotate the projected object about any of the three axes and scale the object to any size. This project has applications in the computer-aided design and manufacturing fields because it can accurately measure the features of an irregularly shaped object.
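The rotate-scale-project pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration of the general technique (rotation about one axis, uniform scaling, orthographic projection onto the screen plane), not the original TRS-80 program; all function names are assumptions.

```python
import math

def rotate_z(p, theta):
    """Rotate a 3-D point about the Z axis by theta radians."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def scale(p, k):
    """Uniformly scale a point by factor k."""
    return tuple(k * v for v in p)

def project(p):
    """Orthographic projection onto the screen plane: drop the depth coordinate."""
    x, y, _z = p
    return (x, y)

# A point touched by the arm's end effector, rotated 90 degrees and doubled
# before being projected for display.
pt = scale(rotate_z((1.0, 0.0, 0.5), math.pi / 2), 2.0)
print(project(pt))  # approximately (0.0, 2.0)
```

Drawing the measured object then amounts to projecting each touched point and connecting consecutive projections with screen lines, exactly as the abstract outlines.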
A study of space-rated connectors using a robot end-effector
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1995-01-01
The main research activities have been directed toward the study of the Robot Operated Materials Processing System (ROMPS), developed at GSFC under a flight project to investigate commercially promising in-space material processes and to design reflyable robot automated systems to be used in the above processes for low-cost operations. The research activities can be divided into two phases. Phase 1 dealt with testing of ROMPS robot mechanical interfaces and compliant device using a Stewart Platform testbed and Phase 2 with computer simulation study of the ROMPS robot control system. This report provides a summary of the results obtained in Phase 1 and Phase 2.
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lambert, Jason Michel; Mantegh, Iraj; Crymble, Derry; Daly, John; Zhao, Yan
2012-06-01
State-of-the-art explosive ordnance disposal robots have not, in general, adopted recent advances in control technology and man-machine interfaces, and lag many years behind academia. This paper describes the Haptics-based Immersive Telerobotic System project, which investigates an immersive telepresence environment incorporating advanced vehicle control systems, augmented immersive sensory feedback, dynamic 3D visual information, and haptic feedback for explosive ordnance disposal operators. The project aims to provide operators a more sophisticated interface and expanded sensory input so they can successfully perform the complex tasks needed to defeat improvised explosive devices. The introduction of haptics and immersive telepresence has the potential to shift the way telepresence systems work for explosive ordnance disposal tasks, and more widely for first-responder scenarios involving remote unmanned ground vehicles.
Self-organization via active exploration in robotic applications. Phase 2: Hybrid hardware prototype
NASA Technical Reports Server (NTRS)
Oegmen, Haluk
1993-01-01
In many environments human-like intelligent behavior is required from robots to assist and/or replace human operators. The purpose of these robots is to reduce human time and effort in various tasks. Thus the robot should be robust and as autonomous as possible in order to eliminate or to keep to a strict minimum its maintenance and external control. Such requirements lead to the following properties: fault tolerance, self organization, and intelligence. A good insight into implementing these properties in a robot can be gained by considering human behavior. In the first phase of this project, a neural network architecture was developed that captures some fundamental aspects of human categorization, habit, novelty, and reinforcement behavior. The model, called FRONTAL, is a 'cognitive unit' regulating the exploratory behavior of the robot. In the second phase of the project, FRONTAL was interfaced with an off-the-shelf robotic arm and a real-time vision system. The components of this robotic system, a review of FRONTAL, and simulation studies are presented in this report.
Health Care Robotics: A Progress Report
NASA Technical Reports Server (NTRS)
Fiorini, Paolo; Ali, Khaled; Seraji, Homayoun
1997-01-01
This paper describes the approach followed in the design of a service robot for health care applications. Under the auspices of the NASA Technology Transfer program, a partnership was established between JPL and RWI, a manufacturer of mobile robots, to design and evaluate a mobile robot for health care assistance to the elderly and the handicapped. The main emphasis of the first phase of the project is on the development of a multi-modal operator interface and its evaluation by health care professionals and users. This paper describes the architecture of the system, the evaluation method used, and some preliminary results of the user evaluation.
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward goals while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves side by side with a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image processing techniques, converted into voltage levels through a MAX232 level converter, and fed serially to the microcontroller unit, while the ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch for automatic and manual control: in automatic mode the ultrasonic sensor is used to find obstacles, and in manual mode the keypad is used to operate the wheelchair. The microcontroller unit runs predefined C code, according to which the connected robot is controlled. The robot's several motors are activated through the motor drivers, which are switches that turn the motors on and off according to the control signals from the microcontroller unit.
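The automatic/manual mode switching the abstract describes reduces to a small decision function. The sketch below is a hedged illustration of that logic only; the distance threshold and keypad bindings are invented for the example, and the real system runs equivalent C code on the microcontroller.

```python
# Illustrative obstacle threshold in centimeters (an assumption, not from the paper).
OBSTACLE_CM = 40

def automatic_step(ultrasonic_cm: float) -> str:
    """Automatic mode: stop when the ultrasonic sensor reports a close obstacle,
    otherwise keep following the caregiver."""
    return "stop" if ultrasonic_cm < OBSTACLE_CM else "follow_caregiver"

def manual_step(key: str) -> str:
    """Manual mode: map keypad presses to motor commands (hypothetical bindings)."""
    bindings = {"2": "forward", "8": "reverse", "4": "left", "6": "right"}
    return bindings.get(key, "stop")  # unknown keys fail safe

def control_step(mode: str, ultrasonic_cm: float, key: str) -> str:
    """One iteration of the control loop, selected by the mode switch."""
    return automatic_step(ultrasonic_cm) if mode == "auto" else manual_step(key)

print(control_step("auto", 25.0, ""))      # stop
print(control_step("manual", 120.0, "2"))  # forward
```

Failing safe to "stop" on any unrecognized input is the conservative choice for a mobility aid, whichever mode is active.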
Manufacturing Methods and Technology Project Summary Reports
1985-06-01
Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) Process for the Production of Cold Forged Gears; Project 483 6121 - Robotic Welding and ... Caliber Projectile Bodies; Project 682 8370 - Automatic Inspection and Process Control of Weapons Parts Manufacturing; METALS; Project 181 7285 - Cast ... designed for use on each project. Experience suggested that a general-purpose computer interface might be designed that could be used on any project.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm.
The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operator's workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. This HRI research contributes to closure of HRP gaps by providing information on how display and control characteristics - those related to guidance, feedback, and command modalities - affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for designing effective human-robot interfaces.
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruemmer, David J; Walton, Miles C
Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots, with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots, with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.
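The window layout this description implies (one display window and one control window per robot, plus a single common window aggregating status from all robots) can be sketched as a data structure. Class and field names below are assumptions for illustration, not the INL implementation.

```python
from dataclasses import dataclass, field

@dataclass
class RobotDisplayWindow:
    """Shows conditions of one robot, e.g. battery level or pose."""
    robot_id: str
    conditions: dict = field(default_factory=dict)

@dataclass
class RobotControlWindow:
    """Receives commands destined for one robot."""
    robot_id: str
    pending_commands: list = field(default_factory=list)

    def send(self, command: str) -> None:
        self.pending_commands.append(command)

@dataclass
class MultiRobotInterface:
    """One display and one control window per robot, plus a common window
    aggregating information received from every robot."""
    displays: dict = field(default_factory=dict)
    controls: dict = field(default_factory=dict)
    common_window: list = field(default_factory=list)

    def add_robot(self, robot_id: str) -> None:
        self.displays[robot_id] = RobotDisplayWindow(robot_id)
        self.controls[robot_id] = RobotControlWindow(robot_id)

    def report(self, robot_id: str, key: str, value) -> None:
        # Status updates land both in the robot's own display window
        # and in the shared common window.
        self.displays[robot_id].conditions[key] = value
        self.common_window.append((robot_id, key, value))

ui = MultiRobotInterface()
ui.add_robot("r1")
ui.add_robot("r2")
ui.report("r1", "battery", 0.82)
ui.controls["r2"].send("goto waypoint-3")
```

Keeping per-robot and common views as separate structures mirrors the claim's division: the operator drills into one robot through its windows while the common window gives fleet-wide awareness.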
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.
Rutkowski, Tomasz M
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual-reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual-reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed to further support the reviewed robotic and virtual-reality thought-based control paradigms.
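The UDP link between the BCI output and the robot or virtual agent can be illustrated with a loopback sketch. The JSON message schema (an "intent" label plus a confidence score) is an assumption for the example, not the author's actual protocol.

```python
import json
import socket

def encode_intent(intent: str, confidence: float) -> bytes:
    """Serialize a decoded BCI intention into a UDP datagram payload
    (hypothetical schema)."""
    return json.dumps({"intent": intent, "confidence": confidence}).encode()

def decode_intent(datagram: bytes) -> dict:
    """Parse a datagram back into the intent dictionary."""
    return json.loads(datagram.decode())

# Loopback demonstration: one socket stands in for the BCI decoder output,
# another for the robot / virtual-reality agent.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # OS-assigned port
rx.settimeout(5.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(encode_intent("turn_left", 0.91), rx.getsockname())
msg = decode_intent(rx.recv(1024))
tx.close()
rx.close()
print(msg["intent"])  # turn_left
```

UDP suits this role because a stale intention is worthless: dropping a late datagram is preferable to the head-of-line blocking a TCP stream would impose on a real-time control loop.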
Space station automation and robotics study. Operator-systems interface
NASA Technical Reports Server (NTRS)
1984-01-01
This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), establish the technologies needed to meet these requirements, and to forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.
De Momi, E; Ferrigno, G
2010-01-01
The robot and sensors integration for computer-assisted surgery and therapy (ROBOCAST) project (FP7-ICT-2007-215190) is co-funded by the European Union within the Seventh Framework Programme in the field of information and communication technologies. The ROBOCAST project focuses on robot- and artificial-intelligence-assisted keyhole neurosurgery (tumour biopsy and local drug delivery along straight or turning paths). The goal of this project is to assist surgeons with a robotic system controlled by an intelligent high-level controller (HLC) able to gather and integrate information from the surgeon, from diagnostic images, and from an array of on-field sensors. The HLC integrates pre-operative and intra-operative diagnostics data and measurements, intelligence augmentation, multiple-robot dexterity, and multiple sensory inputs in a closed-loop cooperating scheme including a smart interface for improved haptic immersion and integration. This paper, after the overall architecture description, focuses on the intelligent trajectory planner based on risk estimation and human criticism. The current status of development is reported, and first tests on the planner are shown by using a real image stack and risk descriptor phantom. The advantages of using a fuzzy risk description are given by the possibility of upgrading the knowledge on-field without the intervention of a knowledge engineer.
MODULAR MANIPULATOR FOR ROBOTICS APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph W. Geisinger, Ph.D.
ARM Automation, Inc. is developing a framework of modular actuators that can address the DOE's wide range of robotics needs. The objective of this effort is to demonstrate the effectiveness of this technology by constructing a manipulator from these actuators within a glovebox for Automated Plutonium Processing (APP). At the end of the project, the system of actuators was used to construct several different manipulator configurations, which accommodate common glovebox tasks such as repackaging. The modular nature and quick-connects of this system simplify installation into "hot" boxes and any potential modifications or repair therein. This work focused on the development of self-contained robotic actuator modules, including the embedded electronic controls, for the purpose of building a manipulator system. Both of the actuators developed under this project contain the control electronics, sensors, motor, gear train, wiring, system communications, and mechanical interfaces of a complete robotic servo device. Test actuators and accompanying DISC(TM)s underwent validation testing at The University of Texas at Austin and ARM Automation, Inc. following final design and fabrication. The system also included custom links, an umbilical cord, an open-architecture PC-based system controller, and operational software that permitted integration into a completely functional robotic manipulator system. The open architecture on which this system is based avoids proprietary interfaces and communication protocols, which only serve to limit the capabilities and flexibility of automation equipment. The system was integrated and tested in the contractor's facility for intended performance and operations. The manipulator was tested using full-scale equipment and process mock-ups. The project produced a practical and operational system, including a quantitative evaluation of its performance and cost.
Open multi-agent control architecture to support virtual-reality-based man-machine interfaces
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Brasch, Marcel
2001-10-01
Projective Virtual Reality is a new and promising approach to intuitively operable man-machine interfaces for the commanding and supervision of complex automation systems. The user interface part of Projective Virtual Reality builds heavily on the latest Virtual Reality techniques, a task-deduction component, and automatic action-planning capabilities. In order to realize man-machine interfaces for complex applications, not only the Virtual Reality part has to be considered; the capabilities of the underlying robot and automation controller are also of great importance. This paper presents a control architecture that has proved to be an ideal basis for the realization of complex robotic and automation systems controlled by Virtual Reality based man-machine interfaces. The architecture not only provides a well-suited framework for the real-time control of a multi-robot system but also supports Virtual Reality metaphors and augmentations that facilitate the user's job of commanding and supervising a complex system. The developed control architecture has already been used for a number of applications. Its capability to integrate information from sensors at different levels of abstraction in real time helps to make the realized automation system very responsive to real-world changes. In this paper, the architecture is described comprehensively, its main building blocks are discussed, and one realization built on an open-source real-time operating system is presented. The software design and the features of the architecture that make it generally applicable to the distributed control of automation agents in real-world applications are explained. Its application to the commanding and control of experiments in the Columbus space laboratory, the European contribution to the International Space Station (ISS), is described as one example.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues from which operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm.
The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1989-01-01
Control techniques for self-contained, autonomous free-flying space robots are being tested and developed. Free-flying space robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require astronaut extra-vehicular activity (EVA). Use of robots will provide economic savings as well as improved astronaut safety by reducing, and in many cases eliminating, the need for human EVA. The focus of the work is to develop and carry out a set of research projects using laboratory models of satellite robots. These devices use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. Current work is divided into six major projects or research areas. Fixed-base cooperative manipulation work represents our initial entry into multiple-arm cooperation and high-level control with a sophisticated user interface. The floating-base cooperative manipulation project strives to transfer some of the technologies developed in the fixed-base work onto a floating base. The global control and navigation experiment seeks to demonstrate simultaneous control of the robot manipulators and the robot base position so that tasks can be accomplished while the base is undergoing a controlled motion. The multiple-vehicle cooperation project's goal is to demonstrate multiple free-floating robots working in teams to carry out tasks too difficult or complex for a single robot to perform. The Location Enhancement Arm Push-off (LEAP) activity's goal is to provide a viable alternative to expendable-gas thrusters for vehicle propulsion, wherein the robot uses its manipulators to throw itself from place to place. Because the successful execution of the LEAP technique requires an accurate model of the robot and payload mass properties, it was deemed an attractive testbed for adaptive control technology.
Mobile tele-echography: user interface design.
Cañero, Cristina; Thomos, Nikolaos; Triantafyllidis, George A; Litos, George C; Strintzis, Michael Gerassimos
2005-03-01
Ultrasound imaging allows evaluation of the urgency of a patient's condition. However, in some instances, a well-trained sonographer is unavailable to perform such an echography. To cope with this issue, the Mobile Tele-Echography Using an Ultralight Robot (OTELO) project aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight remote-controlled robot for population groups that are not served locally by medical experts. This paper focuses on the user interface of the OTELO system, consisting of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference to communicate with the paramedical assistant and with the patient, and a virtual-reality environment providing visual and haptic feedback to the expert while capturing the expert's hand movements. These movements are reproduced by the robot at the patient site while holding the ultrasound probe against the patient's skin. In addition, the user interface includes an image-processing facility for enhancing the received images and the possibility of including them in a database.
A Biotic Game Design Project for Integrated Life Science and Engineering Education
Denisin, Aleksandra K.; Rensi, Stefano; Sanchez, Gabriel N.; Quake, Stephen R.; Riedel-Kruse, Ingmar H.
2015-01-01
Engaging, hands-on design experiences are key for formal and informal Science, Technology, Engineering, and Mathematics (STEM) education. Robotic and video game design challenges have been particularly effective in stimulating student interest, but equivalent experiences for the life sciences are not as developed. Here we present the concept of a "biotic game design project" to motivate student learning at the interface of life sciences and device engineering (as part of a cornerstone bioengineering devices course). We provide all course material and also present efforts in adapting the project's complexity to serve other time frames, age groups, learning focuses, and budgets. Students self-reported that they found the biotic game project fun and motivating, resulting in increased effort. Hence this type of design project could generate excitement and educational impact similar to robotics and video games. PMID:25807212
NASA Technical Reports Server (NTRS)
Voellmer, George M.
1992-01-01
Mechanism enables robot to change tools on end of arm. Actuated by motion of robot: requires no additional electrical or pneumatic energy to make or break connection between tool and wrist at end of arm. Includes three basic subassemblies: wrist interface plate attached to robot arm at wrist, tool interface plate attached to tool, and holster. Separate tool interface plate and holster provided for each tool robot uses.
(abstract) A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials
NASA Technical Reports Server (NTRS)
Welch, Richard V.
1994-01-01
This paper will report the status of the Emergency Response Robotics project, a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in 1991, has been focused on developing a robotic vehicle that can be quickly deployed by HAZMAT Team personnel for first entry into an incident site. The primary goals of the system are to gain access to the site, locate and identify the hazard, and aid in its mitigation. The involvement of JPL Fire Department/HAZMAT Team personnel has been critical in guiding the design and evaluation of the system. A unique feature of the current robot, called HAZBOT III, is its special design for operation in combustible environments. This includes the use of all-solid-state electronics, brushless motors, and internal pressurization. Demonstration and testing of the system with HAZMAT Team personnel has shown that teleoperated robots, such as HAZBOT III, can successfully gain access to incident sites, locating and identifying hazardous material spills. Work is continuing to enable more complex missions through the addition of appropriate sensor technology and enhancement of the operator interface.
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville
2005-01-01
Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). The JTARS charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established, comprising recognized leaders in the field of space robotics, including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.
Designing speech-based interfaces for telepresence robots for people with disabilities.
Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David
2013-06-01
People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.
Human-robot skills transfer interfaces for a flexible surgical robot.
Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G
2014-09-01
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
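The reward-weighted learning idea above can be illustrated with a toy sketch: sampled policy parameters are averaged with weights proportional to their rewards, so well-scoring samples dominate each refinement step. This is a generic PoWER-style update on synthetic data, not the STIFF-FLOP implementation; the target, noise level, and reward shape are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_weighted_update(thetas: np.ndarray, rewards: np.ndarray) -> np.ndarray:
    """Reward-weighted mean of sampled policy parameters."""
    w = rewards / rewards.sum()
    return w @ thetas  # (n_samples,) @ (n_samples, dim) -> (dim,)

# Toy refinement loop: pull a 2-D parameter vector toward an assumed
# optimum under Gaussian exploration noise.
target = np.array([0.5, -0.2])   # hypothetical optimal parameters
theta = np.zeros(2)
for _ in range(50):
    samples = theta + 0.1 * rng.standard_normal((20, 2))
    # Context-dependent reward stands in here for the learned relevance
    # of candidate objective functions in the current task phase.
    rewards = np.exp(-10.0 * np.linalg.norm(samples - target, axis=1) ** 2)
    theta = reward_weighted_update(samples, rewards)
print(np.round(theta, 2))  # close to [0.5, -0.2]
```

The design choice mirrors the abstract's contrast with inverse reinforcement learning: rather than fitting one global reward, the weights can be recomputed per phase, letting the robot refine its policy parameters with whichever candidate objective currently explains the demonstrations best.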
An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger
Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico
2016-01-01
In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. Supernumerary robotic fingers are a recently introduced class of wearable robotic devices that provide users with additional robotic limbs in order to compensate for or augment the abilities of the natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and act in synergy with the human limbs, the control principles of the extra finger should mirror those of the human fingers, including the ability to regulate compliance. It is therefore important to propose a control interface, and to consider actuators and sensing capabilities for the robotic extra finger, compatible with stiffness-regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of the robotic extra finger in which the adduction/abduction motion is realized through a ball-bearing and spur-gear mechanism. We validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger was used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used.
The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used not only to control the motion of a supernumerary robotic finger but also to regulate its compliance. The proposed approach can also be exploited for the control of other wearable devices that have to actively cooperate with the human limbs. PMID:27891088
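A minimal sketch of the compliance-regulation idea: a normalized EMG envelope from the single surface channel is mapped linearly onto a servo stiffness gain. The rest/contraction thresholds and gain range below are placeholder assumptions, not values from the paper.

```python
def emg_to_stiffness(envelope: float,
                     k_min: float = 0.2, k_max: float = 1.0,
                     e_rest: float = 0.05, e_max: float = 0.6) -> float:
    """Map a normalized EMG envelope to a servo stiffness gain.

    Linear interpolation between assumed rest and max-contraction
    levels; inputs outside that band are clamped.
    """
    x = (envelope - e_rest) / (e_max - e_rest)
    x = min(max(x, 0.0), 1.0)            # clamp to [0, 1]
    return k_min + x * (k_max - k_min)

print(emg_to_stiffness(0.05))  # 0.2  (rest -> most compliant)
print(emg_to_stiffness(0.6))   # 1.0  (strong contraction -> stiffest)
```

In practice the envelope would be a rectified, low-pass-filtered EMG signal; the mapping itself is the part the abstract describes, i.e. muscle activity modulating the extra finger's compliance.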
A Mobile Robot for Remote Response to Incidents Involving Hazardous Materials
NASA Technical Reports Server (NTRS)
Welch, Richard V.
1994-01-01
This paper will describe a teleoperated mobile robot system being developed at JPL for use by the JPL Fire Department/HAZMAT Team. The project, which began in October 1990, is focused on prototyping a robotic vehicle which can be quickly deployed and easily operated by HAZMAT Team personnel allowing remote entry and exploration of a hazardous material incident site. The close involvement of JPL Fire Department personnel has been critical in establishing system requirements as well as evaluating the system. The current robot, called HAZBOT III, has been especially designed for operation in environments that may contain combustible gases. Testing of the system with the Fire Department has shown that teleoperated robots can successfully gain access to incident sites allowing hazardous material spills to be remotely located and identified. Work is continuing to enable more complex missions through enhancement of the operator interface and by allowing tetherless operation.
The robotized workstation "MASTER" for users with tetraplegia: description and evaluation.
Busnel, M; Cammoun, R; Coulon-Lauture, F; Détriché, J M; Le Claire, G; Lesigne, B
1999-07-01
The rehabilitation robotics MASTER program was developed by the French Atomic Energy Commission (CEA) and evaluated by the APPROCHE Rehabilitation centers. The aim of this program is to increase the autonomy and quality of life of persons with tetraplegia in domestic and vocational environments. Taking advantage of its experience in nuclear robotics, the CEA has supported studies dealing with the use of such technical aids in the medical area since 1975 with the SPARTACUS project, followed by MASTER 10 years later, and its European extension in the framework of the TIDE/RAID program. The present system is composed of a fixed robotized workstation that includes a six-axis SCARA robot mounted on a rail to allow horizontal movement and is equipped with tools for various tasks. The Operator Interface (OI) has been carefully adapted to the most severe tetraplegia. Results are given following a 2-year evaluation in real-life situations.
Team Oriented Robotic Exploration Task on Scorpion and K9 Platforms
NASA Technical Reports Server (NTRS)
Kirchner, Frank
2003-01-01
This final report describes the achievements made in the project over the complete period of performance. Technical progress is highlighted across the following areas of work: Mechatronics; Sensor Integration; Software Development; Experimental Results and Basic System Testing; Behavior Development and Advanced System Testing; and User Interfaces and Wireless Communication.
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. To this end, the operator's residual functions must be exploited for the control of the robot's movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. This work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks by a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of a robotic arm's trajectories, and a Brain-Computer Interface (BCI) unit, for control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and the environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
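The EEG-to-stiffness translation described above can be sketched as follows: a unidimensional active-state probability is smoothed and mapped onto a Cartesian stiffness command. The gains, limits, and smoothing factor are illustrative assumptions, not the paper's actual values.

```python
class StiffnessModulator:
    """Translate a BCI 'active-state' probability into a Cartesian
    stiffness command, with exponential smoothing to avoid abrupt
    impedance jumps on the robot side."""

    def __init__(self, k_soft: float = 100.0, k_stiff: float = 800.0,
                 alpha: float = 0.2):
        self.k_soft, self.k_stiff, self.alpha = k_soft, k_stiff, alpha
        self.p = 0.0  # smoothed probability of the active state

    def update(self, p_active: float) -> float:
        """Feed one classifier output (0..1); return stiffness in N/m."""
        self.p += self.alpha * (p_active - self.p)
        return self.k_soft + self.p * (self.k_stiff - self.k_soft)

m = StiffnessModulator()
print(round(m.update(1.0), 1))  # 240.0 (one step toward stiff)
```

A neutral mental state keeps the arm compliant for safe contact, while a sustained active state stiffens it for precise manipulation, which is the impedance-modulation behavior the abstract describes.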
Virtual reality for intelligent and interactive operating, training, and visualization systems
NASA Astrophysics Data System (ADS)
Freund, Eckhard; Rossmann, Juergen; Schluse, Michael
2000-10-01
Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds, which the user can not only look at but also actively interact with using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the Virtual Reality system and, by means of new and intelligent control software, projected onto automation components such as robots, which then perform the necessary actions in reality to execute the user's task. In this operation mode, the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The Virtual Reality system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL, and GETEX, continuing with the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines, and excavators, which are presented in the final paper in addition to the key ideas of this Virtual Reality system.
Comparison of tongue interface with keyboard for control of an assistive robotic arm.
Struijk, Lotte N S Andreasen; Lontis, Romulus
2017-07-01
This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system deployed in this study. The sensors of an inductive tongue-computer interface were mapped to Cartesian control of an assistive robotic arm. The resulting control system was tested in order to compare manual control of the robot using a standard keyboard with control using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a manner similar to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% compared with the standard keyboard.
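The sensor-to-Cartesian mapping described above can be sketched roughly as follows. This is a minimal illustration, not the paper's actual mapping: the sensor names, the number of sensors, and the velocity step size are all assumptions.

```python
# Hypothetical sketch: mapping discrete tongue-interface sensor activations
# to Cartesian velocity commands for a 6-DoF assistive arm. Sensor labels
# and the step size are illustrative placeholders.

STEP = 0.01  # metres per control tick (assumed)

# Each activated sensor contributes a unit direction in Cartesian space.
SENSOR_TO_DIRECTION = {
    "front_left":  (-STEP, 0.0, 0.0),   # move -x
    "front_right": (+STEP, 0.0, 0.0),   # move +x
    "mid_left":    (0.0, -STEP, 0.0),   # move -y
    "mid_right":   (0.0, +STEP, 0.0),   # move +y
    "rear_left":   (0.0, 0.0, -STEP),   # move -z
    "rear_right":  (0.0, 0.0, +STEP),   # move +z
}

def cartesian_command(active_sensors):
    """Sum the direction contributions of all currently activated sensors."""
    dx = dy = dz = 0.0
    for sensor in active_sensors:
        sx, sy, sz = SENSOR_TO_DIRECTION.get(sensor, (0.0, 0.0, 0.0))
        dx += sx
        dy += sy
        dz += sz
    return (dx, dy, dz)

print(cartesian_command({"front_right", "mid_left"}))
```

In a real system the returned tuple would feed the arm's Cartesian velocity controller each control cycle; here it simply shows how discrete activations compose into a continuous-space command.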
A multimodal interface for real-time soldier-robot teaming
NASA Astrophysics Data System (ADS)
Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.
2016-05-01
Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advance as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for creating portable interfaces that incorporate MMC through speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. The device integrated COTS automated speech recognition (ASR), a custom gesture-recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
Cyber integrated MEMS microhand for biological applications
NASA Astrophysics Data System (ADS)
Weissman, Adam; Frazier, Athena; Pepen, Michael; Lu, Yen-Wen; Yang, Shanchieh Jay
2009-05-01
Anthropomorphous robotic hands at microscales have been developed to receive information and perform tasks for biological applications. To emulate a human hand's dexterity, the microhand requires a master-slave interface with a wearable controller, force sensors, and perception displays for tele-manipulation. Recognizing the constraints and complexity imposed on developing a feedback interface during miniaturization, this project addresses the need by creating an integrated cyber environment, incorporating sensors with a microhand, a haptic/visual display, and an object model, to emulate the human hand's psychophysical perception at microscale.
ERIC Educational Resources Information Center
Strawhacker, Amanda; Bers, Marina U.
2015-01-01
In recent years, educational robotics has become an increasingly popular research area. However, limited studies have focused on differentiated learning outcomes based on type of programming interface. This study aims to explore how successfully young children master foundational programming concepts based on the robotics user interface (tangible,…
NASA/ASEE Summer Faculty Fellowship Program
NASA Technical Reports Server (NTRS)
Hosler, E. Ramon (Editor); Armstrong, Dennis W. (Editor)
1989-01-01
The contractor's report contains all sixteen final reports prepared by the participants in the 1989 Summer Faculty Fellowship Program. Reports describe research projects on a number of different topics. Interface software, metal corrosion, rocket triggering lightning, automatic drawing, 60-Hertz power, carotid-cardiac baroreflex, acoustic fields, robotics, AI, CAD/CAE, cryogenics, titanium, and flow measurement are discussed.
SKITTER/implement mechanical interface
NASA Technical Reports Server (NTRS)
Cash, John Wilson, III; Cone, Alan E.; Garolera, Frank J.; German, David; Lindabury, David Peter; Luckado, Marshall Cleveland; Murphey, Craig; Rowell, John Bryan; Wilkinson, Brad
1988-01-01
SKITTER (Spacial Kinematic Inertial Translatory Tripod Extremity Robot) is a three-legged transport vehicle designed to perform under the unique environment of the moon. The objective of this project was to design a mechanical interface for SKITTER. This mechanical latching interface will allow SKITTER to use a series of implements such as drills, cranes, etc., and perform different tasks on the moon. The design emphasized versatility and detachability; that is, the interface design is the same for all implements, and connection and detachment is simple. After consideration of many alternatives, a system of three identical latches at each of the three interface points was chosen. The latching mechanism satisfies the design constraints because it facilitates connection and detachment. Also, the moving parts are protected from the dusty environment by housing plates.
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims at constructing an efficient interface, similar to those widely used in human daily life, to meet the need of many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN)-based mouse-screen interface. The former consists of a force-feedback steering control and a wall of six monitors; it provides manual operation, like driving a car, to navigate a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor; it provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Experimental results show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface offers high navigation speed in open areas, without restriction by the terrain and surface conditions of a disaster site. The mouse-screen interface is good at exact navigation in complex structures, while bringing little tension to operators. The two interfaces are designed to switch into each other at any time to provide a combined, efficient navigation method.
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
Thesis by Eli R. Hooten, submitted to the Faculty of the Graduate School. Topics include interactive modalities, multi-touch interaction, and map-based tasking for human-robot interaction.
A two-class self-paced BCI to control a robot in four directions.
Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro
2011-01-01
In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance. © 2011 IEEE
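The two-class selection scheme described above can be sketched as follows. This is a minimal illustration of the general idea (an audio cue cycles through the command menu and a single binary detection selects the current item); the cue order, timing, and classifier are assumptions, not the authors' pipeline.

```python
# Illustrative sketch: driving four navigation commands with a two-class,
# self-paced BCI. An audio cue cycles through the commands; detection of
# imagined right-hand movement (versus the relaxed state) during a cue
# interval selects that command. Details are placeholders.

COMMANDS = ["turn right", "turn left", "move forward", "move back"]

def select_command(classifier_outputs):
    """classifier_outputs: per-cue-interval booleans, True = motor imagery
    detected. The cue cycles through COMMANDS; the first interval in which
    imagery is detected selects the corresponding command."""
    for i, detected in enumerate(classifier_outputs):
        if detected:
            return COMMANDS[i % len(COMMANDS)]
    return None  # self-paced: no intent detected, the robot stays idle

print(select_command([False, False, True]))   # third cue -> "move forward"
```

The self-paced (asynchronous) property shows up in the `None` branch: if the user never produces the imagery class, no command is issued at all.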
I want what you've got: Cross-platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble, Ph.D.*.; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed; more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configurations, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach the level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to human factors testing of variable-autonomy control architectures across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance, for trust of autonomous robotic systems, and for how humans and robots interact in true interactive teams.
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of a semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure that obstacles are avoided and that operators are free for higher-level tasks. Each robot knows the environment and its obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-fused AR view is displayed, which helps users pinpoint source information and supports the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot controls (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that reduces interaction time and distraction while working with the robot.
Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.
2017-01-01
Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II
2011-09-01
The controller evaluation emphasized ensuring mission success for the Soldier while maximizing survivability and lethality through the synergistic interaction of equipment, and included a touch-based interface designed for gloved-finger interaction, with larger-than-normal touch-screen buttons for commanding the robot. Related report: Hill, S.; Pillalamarri, K.; et al. Extreme Scalability: Designing Interfaces and Algorithms for Soldier-Robotic Swarm Interaction, Year 2; ARL-TR.
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport and exploration in hazardous environments. In this study, we work with heterogeneous semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system that links every real robot to its virtual counterpart. A novel virtual interface, integrated with augmented reality, can monitor the position and sensory information from the video feeds of ground and aerial robots in a 3D virtual environment and improve user situational awareness. An operator can efficiently control multiple real robots using the Drag-to-Move method on their virtual counterparts. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, image guidance and tracking reduce operator workload.
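Guarded teleoperation, as mentioned above, is commonly implemented by scaling the operator's commanded velocity down as range sensors report nearby obstacles. The sketch below shows the general technique with assumed thresholds; it is not the specific guard used in this test-bed.

```python
# Minimal sketch of guarded teleoperation: the commanded forward velocity
# is attenuated as the nearest obstacle gets closer, and blocked entirely
# inside a stop radius. Distance thresholds are assumed values.

STOP_DIST = 0.3   # m: inside this range, motion toward the obstacle is blocked
SLOW_DIST = 1.0   # m: inside this range, speed is scaled down linearly

def guard(velocity, min_range):
    """Scale a commanded velocity (m/s) by obstacle proximity (m)."""
    if min_range <= STOP_DIST:
        return 0.0                          # too close: refuse the command
    if min_range < SLOW_DIST:
        # linear ramp from 0 at STOP_DIST up to full speed at SLOW_DIST
        return velocity * (min_range - STOP_DIST) / (SLOW_DIST - STOP_DIST)
    return velocity                         # open space: pass through

print(guard(0.5, 2.0))   # open space: full commanded speed
print(guard(0.5, 0.65))  # mid-range: attenuated
print(guard(0.5, 0.2))   # inside stop radius: blocked
```

The same guard can run per robot in a multi-robot system, so a single operator command is checked independently against each robot's local sensing.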
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
An EMG-based robot control scheme robust to time-varying EMG signal features.
Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J
2010-05-01
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots into everyday life, especially for providing services to people with special needs (i.e., the elderly and people with impairments or disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, leaving the user's upper limb free of the bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control an anthropomorphic robot arm in 3-D space in real time, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes over time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in 3-D space with variable hand speed profiles.
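One generic way to obtain the robustness to slowly varying EMG properties described above (fatigue, contraction-level drift) is to z-score each feature against exponentially weighted running statistics before decoding. The sketch below shows that generic technique; it is not necessarily the authors' exact method, and the adaptation rate is an assumed value.

```python
# Sketch of online feature normalization for drift-robust EMG decoding:
# each feature sample is z-scored against an exponentially weighted
# running mean and variance, so slow amplitude drift is absorbed before
# the sample reaches the motion decoder. alpha is an assumed rate.

class RunningNormalizer:
    def __init__(self, alpha=0.01):
        self.alpha = alpha      # adaptation rate (0 < alpha << 1)
        self.mean = 0.0
        self.var = 1.0

    def update(self, x):
        """Normalize one feature sample while tracking drifting statistics."""
        self.mean += self.alpha * (x - self.mean)
        self.var += self.alpha * ((x - self.mean) ** 2 - self.var)
        return (x - self.mean) / (self.var ** 0.5 + 1e-9)

norm = RunningNormalizer()
samples = [1.0, 1.2, 0.9, 5.0]            # last sample: sudden amplitude jump
normalized = [norm.update(s) for s in samples]
```

With a small `alpha`, the statistics track slow drift without erasing the fast, intentional variations that carry the motion information.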
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
NASA Astrophysics Data System (ADS)
Paar, G.
2009-04-01
At present, mainly the US has realized planetary space missions with an essential robotics background. Joining institutions, companies and universities from different established groups in Europe with two relevant players from the US, the EC FP7 project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for robotic vision ground processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technological and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG consortium partners will be completed by, and combined with, modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is being developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain.
European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, thus gaining insight into the complexity and methods of the processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to students who are the future providers of European science and technology, inside and outside the space domain.
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via operator interface by pretested complex tasks sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
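The idea of composing complex tasks from parameterized task primitives with run-time binding, as described in the patent abstract above, can be sketched as follows. The primitive names, parameters, and context values are invented purely for illustration.

```python
# Illustrative sketch: a complex task as a sequence of parameterized task
# primitives whose parameters are bound at run time from the task context.
# A sequence can be pretested (e.g., on a simulator) with one context and
# later executed with parameters bound from the live task context.

def move_to(ctx, target):
    return f"move_to({ctx[target]})"

def grasp(ctx, obj):
    return f"grasp({ctx[obj]})"

def insert(ctx, obj, slot):
    return f"insert({ctx[obj]}, {ctx[slot]})"

# A task is a sequence of (primitive, parameter-name) pairs; the names are
# resolved against the context only when the task is executed.
TASK = [
    (move_to, ("part_pose",)),
    (grasp,   ("part",)),
    (insert,  ("part", "slot")),
]

def execute(task, context):
    """Bind each primitive's parameters from the context and run it."""
    return [prim(context, *params) for prim, params in task]

context = {"part_pose": "[0.4, 0.1, 0.2]", "part": "bolt", "slot": "hole_3"}
for step in execute(TASK, context):
    print(step)
```

The same `TASK` sequence can first be executed against a simulator's context and, once validated, against the remote manipulator's context, which is the essence of the pretested-sequence idea.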
NASA Astrophysics Data System (ADS)
Tadokoro, Satoshi; Kitano, Hiroaki; Takahashi, Tomoichi; Noda, Itsuki; Matsubara, Hitoshi; Shinjoh, Atsushi; Koto, Tetsuo; Takeuchi, Ikuo; Takahashi, Hironao; Matsuno, Fumitoshi; Hatayama, Mitsunori; Nobe, Jun; Shimada, Susumu
2000-07-01
This paper introduces the RoboCup-Rescue Simulation Project, a contribution to the disaster mitigation, search and rescue problem. A comprehensive urban disaster simulator is constructed on distributed computers. Heterogeneous intelligent agents such as fire fighters, victims and volunteers conduct search and rescue activities in this virtual disaster world. A real-world interface integrates various sensor systems and infrastructure controllers in real cities with the virtual disaster world. Real-time simulation is synchronized with actual disasters, computing the complex relationships between various damage factors and agent behaviors. A mission-critical man-machine interface provides portability and robustness for disaster mitigation centers, and augmented-reality interfaces for rescue in real disasters. It also provides a virtual-reality training function for the public. This diverse spectrum of RoboCup-Rescue contributes to the creation of a safer social system.
Lunar rover technology demonstrations with Dante and Ratler
NASA Technical Reports Server (NTRS)
Krotkov, Eric; Bares, John; Katragadda, Lalitesh; Simmons, Reid; Whittaker, Red
1994-01-01
Carnegie Mellon University has undertaken a research, development, and demonstration program to enable a robotic lunar mission. The two-year mission scenario is to traverse 1,000 kilometers, revisiting the historic sites of Apollo 11, Surveyor 5, Ranger 8, Apollo 17, and Lunokhod 2, and to return continuous live video amounting to more than 11 terabytes of data. Our vision blends autonomously safeguarded user driving with autonomous operation augmented by rich visual feedback, in order to enable facile interaction and exploration. The resulting experience is intended to attract mass participation and evoke strong public interest in lunar exploration. The encompassing program that forwards this work is the Lunar Rover Initiative (LRI). Two concrete technology demonstration projects currently advancing the Lunar Rover Initiative are: (1) The Dante/Mt. Spurr project, which, at the time of this writing, is sending the walking robot Dante to explore the Mt. Spurr volcano, in rough terrain that is a realistic planetary analogue. This project will generate insights into robot system robustness in harsh environments and into remote operation by novices; and (2) The Lunar Rover Demonstration project, which is developing and evaluating key technologies for navigation, teleoperation, and user interfaces in terrestrial demonstrations. The project timetable calls for a number of terrestrial traverses incorporating teleoperation and autonomy, including natural terrain this year, 10 km in 1995, and 100 km in 1996. This paper will discuss the goals of the Lunar Rover Initiative and then focus on the present state of the Dante/Mt. Spurr and Lunar Rover Demonstration projects.
TARDEC's Intelligent Ground Systems overview
NASA Astrophysics Data System (ADS)
Jaster, Jeffrey F.
2009-05-01
The mission of the Intelligent Ground Systems (IGS) area at the Tank Automotive Research, Development and Engineering Center (TARDEC) is to conduct technology maturation and integration to increase Soldier robot control/interface intuitiveness and robotic ground system robustness, functionality and overall system effectiveness for the Future Combat System Brigade Combat Team and the Robotics Systems Joint Project Office, and to deliver game-changing capabilities to be fielded beyond the current force. This is accomplished through technology component development focused on increasing unmanned ground vehicle autonomy, optimizing crew interfaces and mission planners that capture commanders' intent, integrating payloads that provide 360-degree local situational awareness, and expanding current UGV tactical behavior, learning and adaptation capabilities. The integration of these technology components into ground vehicle demonstrators permits engineering evaluation, User assessment and performance characterization in increasingly complex, dynamic and relevant environments, including high-speed on-road or cross-country operations, all weather/visibility conditions and military operations in urban terrain (MOUT). Focused testing and experimentation is directed at reducing PM risk areas (safe operations, autonomous maneuver, manned-unmanned collaboration) and transitioning technology in the form of hardware, software algorithms, test and performance data, as well as User feedback and lessons learned.
Augmented reality and haptic interfaces for robot-assisted surgery.
Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N
2012-03-01
Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation. Copyright © 2011 John Wiley & Sons, Ltd.
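A forbidden-region virtual fixture of the kind mentioned above can be sketched in one dimension: commanded motion that would cross a vision-detected boundary is clipped, and a spring-like resisting force is returned for haptic display. The stiffness value and 1-D geometry are illustrative assumptions, not this system's parameters.

```python
# Hedged sketch of a forbidden-region virtual fixture along one axis:
# the commanded tool position is clipped at the region boundary, and a
# proportional "virtual wall" force is fed back to the operator's haptic
# device. K_WALL and the 1-D setup are assumed for illustration.

K_WALL = 300.0  # N/m, assumed virtual-wall stiffness

def apply_fixture(commanded_pos, boundary_pos):
    """Return (allowed_pos, feedback_force) for one control cycle.

    Positions are along the approach axis; positions past boundary_pos
    lie inside the forbidden region detected by the vision system.
    """
    if commanded_pos <= boundary_pos:
        return commanded_pos, 0.0               # free motion, no feedback
    penetration = commanded_pos - boundary_pos
    return boundary_pos, K_WALL * penetration   # clipped + resisting force

print(apply_fixture(0.04, 0.05))  # inside the allowed region
print(apply_fixture(0.06, 0.05))  # clipped at the boundary, with pushback
```

In the surgical setting, `boundary_pos` would come from the vision-based surface reconstruction, so the patient-side manipulator is stopped before entering the unwanted region even if the operator keeps commanding motion.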
Integration of task level planning and diagnosis for an intelligent robot
NASA Technical Reports Server (NTRS)
Chan, Amy W.
1992-01-01
A satellite floating in space is diagnosed, with an attached telerobot performing maintenance or replacement tasks. This research included three objectives. The first objective was to generate intelligent path planning for a robot to move around a satellite. The second was to diagnose possible fault scenarios in the satellite. The third comprised two tasks: combining intelligent path planning with diagnosis, and building an interface between the combined intelligent system and Robosim. The ability of a robot to deal with unexpected scenarios is particularly important in space, since the situation can differ from time to time; the telerobot must be capable of detecting that the situation has changed and, if necessary, altering its behavior based on the new situation. The capability of keeping a human in the loop is also very important in space: in some extreme cases the situation is beyond the capability of a robot, so our research project allows the human to override the robot's decision.
The KALI multi-arm robot programming and control environment
NASA Technical Reports Server (NTRS)
Backes, Paul; Hayati, Samad; Hayward, Vincent; Tso, Kam
1989-01-01
The KALI distributed robot programming and control environment is described within the context of its use in the Jet Propulsion Laboratory (JPL) telerobot project. The purpose of KALI is to provide a flexible robot programming and control environment for coordinated multi-arm robots. Flexibility, both in hardware configuration and software, is desired so that it can be easily modified to test various concepts in robot programming and control, e.g., multi-arm control, force control, sensor integration, teleoperation, and shared control. In the programming environment, user programs written in the C programming language describe trajectories for multiple coordinated manipulators with the aid of KALI function libraries. A system of multiple coordinated manipulators is considered within the programming environment as one motion system. The user plans the trajectory of one controlled Cartesian frame associated with a motion system and describes the positions of the manipulators with respect to that frame. Smooth Cartesian trajectories are achieved through a blending of successive path segments. The manipulator and load dynamics are considered during trajectory generation so that given interface force limits are not exceeded.
Distributed cooperating processes in a mobile robot control system
NASA Technical Reports Server (NTRS)
Skillman, Thomas L., Jr.
1988-01-01
A mobile inspection robot has been proposed for the NASA Space Station. It will be a free flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure are discussed.
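The blackboard pattern the abstract refers to can be sketched in a few lines: independent knowledge sources watch a shared data store and fire when their preconditions hold, letting numeric and symbolic modules cooperate. The knowledge sources, keys, and command below are hypothetical, not taken from the paper:

```python
class Blackboard:
    """Shared store that numeric and symbolic modules read and post to."""
    def __init__(self):
        self.entries = {}

    def post(self, key, value):
        self.entries[key] = value

    def read(self, key):
        return self.entries.get(key)


class SpeechKS:
    """Knowledge source: turns a raw utterance into a symbolic command."""
    def relevant(self, bb):
        return bb.read("utterance") is not None and bb.read("command") is None

    def act(self, bb):
        bb.post("command", bb.read("utterance").strip().lower())


class PlannerKS:
    """Knowledge source: expands a recognized command into a motion plan."""
    def relevant(self, bb):
        return bb.read("command") == "go to berth" and bb.read("plan") is None

    def act(self, bb):
        bb.post("plan", ["orient", "translate", "dock"])


def control_cycle(bb, sources):
    """One scheduler pass: fire every knowledge source whose
    precondition is currently satisfied by the blackboard state."""
    for ks in sources:
        if ks.relevant(bb):
            ks.act(bb)
```

Posting an utterance and running a control cycle lets each source contribute opportunistically, which is the property that makes the architecture attractive for mixing speech, planning and sensing modules.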
Key technology issues for space robotic systems
NASA Technical Reports Server (NTRS)
Schappell, Roger T.
1987-01-01
Robotics has become a key technology consideration for the Space Station project to enable enhanced crew productivity and to maximize safety. There are many robotic functions currently being studied, including Space Station assembly, repair, and maintenance as well as satellite refurbishment, repair, and retrieval. Another area of concern is that of providing ground-based experimenters with a natural interface through which they might directly interact with their hardware onboard the Space Station or ancillary spacecraft. The state of the technology is such that the above functions are feasible; however, considerable development work is required for operation in this gravity-free vacuum environment. Furthermore, a program plan is evolving within NASA that will capitalize on recent government, university, and industrial robotics research and development (R and D) accomplishments. A brief summary is presented of the primary technology issues and physical examples are provided of the state of the technology for the initial operational capability (IOC) system as well as for the eventual final operational capability (FOC) Space Station.
Dynamics simulation and controller interfacing for legged robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichler, J.A.; Delcomyn, F.
2000-01-01
Dynamics simulation can play a critical role in the engineering of robotic control code, and there exist a variety of strategies both for building physical models and for interacting with these models. This paper presents an approach to dynamics simulation and controller interfacing for legged robots, and contrasts it to existing approaches. The authors describe dynamics algorithms and contact-resolution strategies for multibody articulated mobile robots based on the decoupled tree-structure approach, and present a novel scripting language that provides a unified framework for control-code interfacing, user-interface design, and data analysis. Special emphasis is placed on facilitating the rapid integration of control algorithms written in a standard object-oriented language (C++), the production of modular, distributed, reusable controllers, and the use of parameterized signal-transmission properties such as delay, sampling rate, and noise.
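The parameterized signal-transmission idea (configurable delay, sampling rate, and noise on each controller-to-robot connection) can be sketched as a per-tick channel wrapper. The class and parameter names are assumptions for illustration, not the paper's scripting-language syntax:

```python
import random
from collections import deque

class SignalChannel:
    """Carries one scalar signal per control tick, applying a transport
    delay (in ticks), sub-sampling with zero-order hold, and additive
    Gaussian noise at each sample instant."""
    def __init__(self, delay_ticks=0, sample_every=1, noise_sd=0.0, seed=0):
        self._queue = deque()
        self.delay_ticks = delay_ticks
        self.sample_every = sample_every
        self.noise_sd = noise_sd
        self._rng = random.Random(seed)
        self._tick = 0
        self._held = 0.0            # last sampled value (zero-order hold)

    def step(self, value):
        """Advance one tick: enqueue `value` and return what the
        receiving end observes (delayed, sub-sampled, noisy)."""
        self._queue.append(value)
        delayed = (self._queue.popleft()
                   if len(self._queue) > self.delay_ticks else 0.0)
        if self._tick % self.sample_every == 0:
            self._held = delayed + self._rng.gauss(0.0, self.noise_sd)
        self._tick += 1
        return self._held
```

With noise disabled, a channel configured with a two-tick delay returns [0.0, 0.0, 1.0, 2.0] for the input sequence 1, 2, 3, 4; swapping parameters lets the same controller be tested under degraded transmission without touching control code.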
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper was to present the study and design ideas for an entertainment robot with an educational purpose (IRFEE). The robot has been designed for home life with dependability and interaction in mind. The developed robot has three objectives: 1. develop an autonomous robot; 2. design the robot for mobility and robustness; 3. develop the robot interface and software for entertainment and education functionalities. Autonomous navigation was implemented using active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed with aesthetic elements in mind while minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection are needed for remote-connection services and for the educational purpose.
Parallel-distributed mobile robot simulator
NASA Astrophysics Data System (ADS)
Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo
1996-06-01
The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. The system should also be able to autonomously acquire knowledge about the context in which jobs take place, and how the jobs are executed. This article describes a parallel distributed mobile robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the mobile robot interacts with the real world, the system compares the virtual-environment simulation with the interaction result in the real world, and then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation be time-realistic. The parallel distributed mobile robot simulator was developed to simulate the space of a mobile robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual mobile robot and the virtual mobile robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.
Weintek interfaces for controlling the position of a robotic arm
NASA Astrophysics Data System (ADS)
Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.
2016-08-01
The paper presents the use of Weintek panels to control the position of a robotic arm, operated step by step on the three motor axes. The PLC control interface is designed with a Weintek touch screen. The Weintek eMT3070a HMI is the user interface in the command process of the PLC: it controls the local PLC by entering the coordinates on the X, Y and Z axes. The system also allows development in a virtual environment for e-learning and for monitoring the robotic arm's actions.
3min. poster presentations of B01
NASA Astrophysics Data System (ADS)
Foing, Bernard H.
We give a report on recommendations from ILEWG International conferences held at Cape Canaveral in 2008 (ICEUM10), and in Beijing in May 2010 with IAF (GLUC-ICEUM11). We discuss the different rationales for Moon exploration. Priorities for scientific investigations include: clues on the formation and evolution of rocky planets, accretion and bombardment in the inner solar system, comparative planetology processes (tectonic, volcanic, impact cratering, volatile delivery), historical records, astrobiology, survival of organics; past, present and future life. The ILEWG technology task group set priorities for the advancement of instrumentation: remote sensing miniaturised instruments; surface geophysical and geochemistry package; instrument deployment and robotic arm, nano-rover, sampling, drilling; sample finder and collector; regional mobility rover; autonomy and navigation; artificially intelligent robots; complex systems. The ILEWG ExogeoLab pilot project was developed as support for instruments, landers, rovers, and preparation for a cooperative robotic village. The ILEWG lunar base task group looked at minimal design concepts and technologies in robotic and human exploration with tele-control, telepresence, virtual reality; man-machine interface and performances. The ILEWG ExoHab pilot project has been started with support from agencies and partners. We discuss ILEWG terrestrial Moon-Mars campaigns for validation of technologies, research and human operations. We indicate how Moon-Mars exploration can inspire solutions to global Earth sustained development: in-situ utilisation of resources; establishment of permanent robotic infrastructures; environmental protection aspects; life sciences laboratories; support to human exploration. Co-Authors: ILEWG Task Groups on: Science, Technology, Robotic Village, Lunar Bases, Commercial and Societal Aspects, Roadmap Synergies with other Programmes, Public Engagement and Outreach, Young Lunar Explorers.
NICA: Natural Interaction with a Caring Agent
NASA Astrophysics Data System (ADS)
de Carolis, Berardina; Mazzotta, Irene; Novielli, Nicole
Ambient Intelligence solutions may provide a great opportunity for elderly people to live longer at home. Assistance and care are delegated to the intelligence embedded in the environment. However, besides considering service-oriented response to the user needs, the assistance has to take into account the establishment of social relations. We propose the use of a robot NICA (as the name of the project Natural Interaction with a Caring Agent) acting as a caring assistant that provides a social interface with the smart home services. In this paper, we introduce the general architecture of the robot's "mind" and then we focus on the need to properly react to affective and socially oriented situations.
Dominici, Nadia; Keller, Urs; Vallery, Heike; Friedli, Lucia; van den Brand, Rubia; Starkey, Michelle L; Musienko, Pavel; Riener, Robert; Courtine, Grégoire
2012-07-01
Central nervous system (CNS) disorders distinctly impair locomotor pattern generation and balance, but technical limitations prevent independent assessment and rehabilitation of these subfunctions. Here we introduce a versatile robotic interface to evaluate, enable and train pattern generation and balance independently during natural walking behaviors in rats. In evaluation mode, the robotic interface affords detailed assessments of pattern generation and dynamic equilibrium after spinal cord injury (SCI) and stroke. In enabling mode, the robot acts as a propulsive or postural neuroprosthesis that instantly promotes unexpected locomotor capacities including overground walking after complete SCI, stair climbing following partial SCI and precise paw placement shortly after stroke. In training mode, robot-enabled rehabilitation, epidural electrical stimulation and monoamine agonists reestablish weight-supported locomotion, coordinated steering and balance in rats with a paralyzing SCI. This new robotic technology and associated concepts have broad implications for both assessing and restoring motor functions after CNS disorders, both in animals and in humans.
NASA Technical Reports Server (NTRS)
Erickson, Jon D.
1994-01-01
This paper presents an overview of the proposed Lyndon B. Johnson Space Center (JSC) precompetitive, dual-use technology investment project in robotics. New robotic technology in advanced robots, which can recognize and respond to their environments and to spoken human supervision so as to perform a variety of combined mobility and manipulation tasks in various sectors, is an objective of this work. In the U.S. economy, such robots offer the benefits of improved global competitiveness in a critical industrial sector; improved productivity by the end users of these robots; a growing robotics industry that produces jobs and profits; lower cost health care delivery with quality improvements; and, as these 'intelligent' robots become acceptable throughout society, an increase in the standard of living for everyone. In space, such robots will provide improved safety, reliability, and productivity as Space Station evolves, and will enable human space exploration (by human/robot teams). The proposed effort consists of partnerships between manufacturers, universities, and JSC to develop working production prototypes of these robots by leveraging current development by both sides. Currently targeted applications are in the manufacturing, health care, services, and construction sectors of the U.S. economy and in the inspection, servicing, maintenance, and repair aspects of space exploration. But the focus is on the generic software architecture and standardized interfaces for custom modules tailored for the various applications allowing end users to customize a robot as PC users customize PC's. Production prototypes would be completed in 5 years under this proposal.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
A Space Station robot walker and its shared control software
NASA Technical Reports Server (NTRS)
Xu, Yangsheng; Brown, Ben; Aoki, Shigeru; Yoshida, Tetsuji
1994-01-01
In this paper, we first briefly overview the update of the self-mobile space manipulator (SMSM) configuration and testbed. The new robot is capable of projecting cameras anywhere interior or exterior of the Space Station Freedom (SSF), and will be an ideal tool for inspecting connectors, structures, and other facilities on SSF. Experiments have been performed under two gravity compensation systems and a full-scale model of a segment of SSF. This paper presents a real-time shared control architecture that enables the robot to coordinate autonomous locomotion and teleoperation input for reliable walking on SSF. Autonomous locomotion can be executed based on a CAD model and off-line trajectory planning, or can be guided by a vision system with neural network identification. Teleoperation control can be specified by a real-time graphical interface and a free-flying hand controller. SMSM will be a valuable assistant for astronauts in inspection and other EVA missions.
Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.
McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A
2017-06-28
To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the 2 major core designs: end-effector and exoskeleton are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.
Innovation in robotic surgery: the Indian scenario.
Deshpande, Suresh V
2015-01-01
Robotics is the science. In scientific words, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then, with a computer interface, by voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), which is the first and only Indian contribution in the field of robotics in laparoscopy as a totally voice controlled camera-holding robotic arm, developed without any support from industry or research institutes.
NASA Astrophysics Data System (ADS)
Dong, Wentao; Zhu, Chen; Hu, Wei; Xiao, Lin; Huang, Yong'an
2018-01-01
Stretchable surface electrodes have recently attracted increasing attention owing to their potential applications in biological signal monitoring, wearable human-machine interfaces (HMIs) and the Internet of Things. This paper proposes a stretchable HMI based on a surface electromyography (sEMG) electrode with a self-similar serpentine configuration. The sEMG electrode is transfer-printed conformally onto the skin surface to monitor biological signals, followed by signal classification and control of a mobile robot. Such electrodes can bear rather large deformation (e.g. >30%) under an appropriate areal coverage. The sEMG electrodes have been used to record electrophysiological signals from parts of the body with sharp curvature, such as the index finger, the back of the neck and the face, and they exhibit great potential for HMIs in robotics and healthcare. Electrodes placed on the two wrists generate two different signals as each fist is clenched or loosened; combining the gestures from the two wrists therefore yields four kinds of signals, i.e. four control modes. Experiments demonstrated that the electrodes were successfully used as an HMI to remotely control the motion of a mobile robot. Project supported by the National Natural Science Foundation of China (Nos. 51635007, 91323303).
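The two-wrist scheme lends itself to a very small decoder: threshold each wrist's sEMG envelope to a binary clenched/loosened state, then map the pair to one of four modes. The threshold value and the mode names below are illustrative assumptions; the paper does not publish its classifier:

```python
import math

def rms(samples):
    """Root-mean-square envelope of a window of sEMG samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def clenched(envelope, threshold=0.5):
    """A clenched fist raises sEMG amplitude; threshold the envelope.
    (The threshold is a hypothetical normalized value.)"""
    return envelope > threshold

def classify_gesture(left_clenched, right_clenched):
    """Map the two binary wrist states to one of four control modes
    for the mobile robot (mode names are illustrative)."""
    modes = {
        (False, False): "stop",
        (False, True):  "turn_right",
        (True, False):  "turn_left",
        (True, True):   "forward",
    }
    return modes[(left_clenched, right_clenched)]
```

In practice the envelope would be computed over a sliding window of the streamed sEMG signal, and each new (left, right) pair would be sent to the robot as a drive command.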
Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies
2006-07-01
and the use of lightweight portable robotic sensor platforms. 5 robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility
Human factors in space telepresence
NASA Technical Reports Server (NTRS)
Akin, D. L.; Howard, R. D.; Oliveria, J. S.
1983-01-01
The problems of interfacing a human with a teleoperation system for work in space are discussed. Much of the information presented here is the result of experience gained by the M.I.T. Space Systems Laboratory during the past two years of work on the ARAMIS (Automation, Robotics, and Machine Intelligence Systems) project. Many factors impact the design of the man-machine interface for a teleoperator. The effects of each are described in turn. An annotated bibliography gives the key references that were used. No conclusions are presented as a best design, since much depends on the particular application desired, and the relevant technology is swiftly changing.
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.
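The interrupt-driven servo control the article describes amounts to generating, once per PWM frame, a pulse whose width encodes the commanded angle. A minimal sketch of that mapping, assuming standard 1-2 ms hobby-servo pulses, a 20 ms frame, and a hypothetical 10 µs timer tick (the article does not give these figures):

```python
def servo_pulse_us(angle_deg, min_us=1000.0, max_us=2000.0, max_deg=180.0):
    """Map a servo angle to a PWM pulse width in microseconds.
    Typical hobby servos expect 1-2 ms pulses; exact limits vary."""
    angle = max(0.0, min(float(angle_deg), max_deg))  # clamp to range
    return min_us + (max_us - min_us) * angle / max_deg

def frame_ticks(angle_deg, tick_us=10, frame_us=20000):
    """Split one PWM frame into (high, low) timer-interrupt counts for
    the given angle, as a timer ISR on the PIC would schedule them."""
    high = round(servo_pulse_us(angle_deg) / tick_us)
    return high, frame_us // tick_us - high
```

For a centered servo, frame_ticks(90) gives (150, 1850): the line is held high for 150 ticks (1.5 ms) and low for the remainder of the 20 ms frame, which is the duty cycle a timer interrupt routine would reproduce on each output pin.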
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks under human operation. This pattern requires robot operators to be highly trained in varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through gestures in a novel and natural interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. In this paper, we therefore present a motion sensing-based framework for robotic manipulation that recognizes gesture commands captured from a motion sensing input device and drives the actions of robots. For compatibility, a general hardware interface layer was also developed in the framework. Simulation and physical experiments have been conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation with motion sensing control.
SOFT ROBOTICS. A 3D-printed, functionally graded soft robot powered by combustion.
Bartlett, Nicholas W; Tolley, Michael T; Overvelde, Johannes T B; Weaver, James C; Mosadegh, Bobak; Bertoldi, Katia; Whitesides, George M; Wood, Robert J
2015-07-10
Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping. Copyright © 2015, American Association for the Advancement of Science.
Tsai, Tzung-Cheng; Hsu, Yeh-Liang; Ma, An-I; King, Trevor; Wu, Chang-Huei
2007-08-01
"Telepresence" is an interesting field that includes virtual reality implementations with human-system interfaces, communication technologies, and robotics. This paper describes the development of a telepresence robot called Telepresence Robot for Interpersonal Communication (TRIC) for the purpose of interpersonal communication with the elderly in a home environment. The main aim behind TRIC's development is to allow elderly populations to remain in their home environments, while loved ones and caregivers are able to maintain a higher level of communication and monitoring than via traditional methods. TRIC aims to be a low-cost, lightweight robot, which can be easily implemented in the home environment. Under this goal, decisions on the design elements included are discussed. In particular, the implementation of key autonomous behaviors in TRIC to increase the user's capability of projection of self and operation of the telepresence robot, in addition to increasing the interactive capability of the participant as a dialogist are emphasized. The technical development and integration of the modules in TRIC, as well as human factors considerations are then described. Preliminary functional tests show that new users were able to effectively navigate TRIC and easily locate visual targets. Finally the future developments of TRIC, especially the possibility of using TRIC for home tele-health monitoring and tele-homecare visits are discussed.
A novel interface for the telementoring of robotic surgery.
Shin, Daniel H; Dalag, Leonard; Azhar, Raed A; Santomauro, Michael; Satkunasivam, Raj; Metcalfe, Charles; Dunn, Matthew; Berger, Andre; Djaladat, Hooman; Nguyen, Mike; Desai, Mihir M; Aron, Monish; Gill, Inderbir S; Hung, Andrew J
2015-08-01
To prospectively evaluate the feasibility and safety of a novel, second-generation telementoring interface (Connect(™) ; Intuitive Surgical Inc., Sunnyvale, CA, USA) for the da Vinci robot. Robotic surgery trainees were mentored during portions of robot-assisted prostatectomy and renal surgery cases. Cases were assigned as traditional in-room mentoring or remote mentoring using Connect. While viewing two-dimensional, real-time video of the surgical field, remote mentors delivered verbal and visual counsel, using two-way audio and telestration (drawing) capabilities. Perioperative and technical data were recorded. Trainee robotic performance was rated using a validated assessment tool by both mentors and trainees. The mentoring interface was rated using a multi-factorial Likert-based survey. The Mann-Whitney and t-tests were used to determine statistical differences. We enrolled 55 mentored surgical cases (29 in-room, 26 remote). Perioperative variables of operative time and blood loss were similar between in-room and remote mentored cases. Robotic skills assessment showed no significant difference (P > 0.05). Mentors preferred remote over in-room telestration (P = 0.05); otherwise no significant difference existed in evaluation of the interfaces. Remote cases using wired (vs wireless) connections had lower latency and better data transfer (P = 0.005). Three of 18 (17%) wireless sessions were disrupted; one was converted to wired, one continued after restarting Connect, and the third was aborted. A bipolar injury to the colon occurred during one (3%) in-room mentored case; no intraoperative injuries were reported during remote sessions. In a tightly controlled environment, the Connect interface allows trainee robotic surgeons to be telementored in a safe and effective manner while performing basic surgical techniques. Significant steps remain prior to widespread use of this technology. 
© 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Knowledge representation system for assembly using robots
NASA Technical Reports Server (NTRS)
Jain, A.; Donath, M.
1987-01-01
Assembly robots combine the benefits of speed and accuracy with the capability of adaptation to changes in the work environment. However, an impediment to the use of robots is the complexity of the man-machine interface. This interface can be improved by providing a means of using a priori knowledge and reasoning capabilities for controlling and monitoring the tasks performed by robots. Robots ought to be able to perform complex assembly tasks with the help of only supervisory guidance from human operators. For such supervisory guidance, it is important to express the commands in terms of the effects desired, rather than in terms of the motion the robot must undertake in order to achieve these effects. A suitable knowledge representation can facilitate the conversion of task-level descriptions into explicit instructions to the robot. Such a system would use symbolic relationships describing the a priori information about the robot, its environment, and the tasks specified by the operator to generate the commands for the robot.
Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting
NASA Astrophysics Data System (ADS)
Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.
2017-12-01
Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation, and efficiency enhancement in its agricultural processes has not yet become extensive. This research proposed a low-cost agricultural robot architecture. The robot could help farmers survey their farm area, treat the tomato plants, and harvest the ripe tomatoes. Communication between farmer and robot was carried over a wireless link using radio waves to reach a wide area (120 m radius). The radio link was combined with Bluetooth to simplify communication between the robot and the farmer's Android smartphone. The robot was equipped with a camera, so the farmers could survey the farm situation in real time through a 7-inch monitor display. The farmers controlled the robot and arm movement through a user interface on an Android smartphone. The user interface contains control icons that allow farmers to control the robot movement (forward, reverse, turn right, and turn left) and to cut the spotty leaves or harvest the ripe tomatoes.
Developments in brain-machine interfaces from the perspective of robotics.
Kim, Hyun K; Park, Shinsuk; Srinivasan, Mandayam A
2009-04-01
Many patients suffer from the loss of motor skills, resulting from traumatic brain and spinal cord injuries, stroke, and many other disabling conditions. Thanks to technological advances in measuring and decoding the electrical activity of cortical neurons, brain-machine interfaces (BMI) have become a promising technology that can aid paralyzed individuals. In recent studies on BMI, robotic manipulators have demonstrated their potential as neuroprostheses. Restoring motor skills through robot manipulators controlled by brain signals may improve the quality of life of people with disability. This article reviews current robotic technologies that are relevant to BMI and suggests strategies that could improve the effectiveness of a brain-operated neuroprosthesis through robotics.
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain-machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain's use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
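A UDP bridge of the kind described above can be sketched in a few lines. This is a minimal illustration only: the packet layout (two little-endian doubles for shoulder and elbow angles) and the loopback demo addresses are our assumptions, not the actual BMM/WAM message protocol.

```python
import socket
import struct

# Hypothetical 2-joint command packet: two little-endian doubles (radians).
def pack_joint_command(shoulder, elbow):
    return struct.pack("<2d", shoulder, elbow)

def unpack_joint_state(payload):
    return struct.unpack("<2d", payload)

def send_command(sock, addr, shoulder, elbow):
    # Fire-and-forget datagram, as is typical for real-time robot streaming.
    sock.sendto(pack_joint_command(shoulder, elbow), addr)

if __name__ == "__main__":
    # Loopback demo: a stand-in "robot" socket receives one joint command.
    robot = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    robot.bind(("127.0.0.1", 0))
    model = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_command(model, robot.getsockname(), 0.50, 1.25)
    data, _ = robot.recvfrom(64)
    print(unpack_joint_state(data))  # (0.5, 1.25)
```

UDP trades delivery guarantees for low latency, which matches the real-time mirroring use case; the return channel for joint positions would be a second datagram in the opposite direction.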
Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study.
Laffont, Isabelle; Biard, Nicolas; Chalubert, Gérard; Delahoche, Laurent; Marhic, Bruno; Boyer, François C; Leroux, Christophe
2009-10-01
Laffont I, Biard N, Chalubert G, Delahoche L, Marhic B, Boyer FC, Leroux C. Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study. Grasping robots are still difficult to use for persons with disabilities because of inadequate human-machine interfaces (HMIs). Our purpose was to evaluate the efficacy of a graphic interface enhanced by a panoramic camera to detect out-of-view objects and control a commercialized robotic grasping arm. Multicenter, open-label trial. Four French departments of physical and rehabilitation medicine. Control subjects (N=24; mean age, 33y) and 20 severely impaired patients (mean age, 44y; 5 with muscular dystrophies, 13 with traumatic tetraplegia, and 2 others) completed the study. None of these patients was able to grasp a 50-cL bottle without the robot. Participants were asked to grasp 6 objects scattered around their wheelchair using the robotic arm. They were able to select the desired object through the graphic interface available on their computer screen. Global success rate, time needed to select the object on the screen of the computer, number of clicks on the HMI, and satisfaction among users. We found a significantly lower success rate in patients (81.1% vs 88.7%; chi-square test, P=.017). The duration of the task was significantly longer in patients (71.6s vs 39.1s; P<.001). We set a cut-off for the maximum duration at 79 seconds, representing twice the amount of time needed by the control subjects to complete the task. In these conditions, the success rate for the impaired participants was 65% versus 85.4% for control subjects. The mean number of clicks necessary to select the object with the HMI was very close in both groups: patients used (mean ± SD) 7.99 ± 6.07 clicks, whereas controls used 7.04 ± 2.87 clicks. Considering the severity of patients' impairment, all these differences were considered small.
Furthermore, a high satisfaction rate was reported for this population concerning the use of the graphic interface. The graphic interface is of interest in controlling robotic arms for disabled people, with numerous potential applications in daily life.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony L. Crawford
MODIFIED PAPER TITLE AND ABSTRACT DUE TO SLIGHTLY MODIFIED SCOPE: TITLE: Nonlinear Force Profile Used to Increase the Performance of a Haptic User Interface for Teleoperating a Robotic Hand. Natural movements and force feedback are important elements in using teleoperated equipment if complex and speedy manipulation tasks are to be accomplished in hazardous environments, such as hot cells, glove boxes, decommissioning, explosives disarmament, and space. The research associated with this paper hypothesizes that a user interface and complementary radiation-compatible robotic hand that integrates the human hand's anthropometric properties, speed capability, nonlinear strength profile, reduction of active degrees of freedom during the transition from manipulation to grasping, and just-noticeable-difference force sensation characteristics will enhance a user's teleoperation performance. The main contribution of this research is that a system that concisely integrates all these factors has yet to be developed and, furthermore, has yet to be applied to hazardous environments such as those referenced above. In fact, the most prominent slave manipulator teleoperation technology in use today is based on a design patented in 1945 (Patent 2632574) [1]. The robotic hand/user interface systems of similar function to the one being developed in this research limit their design input requirements, in the best case, to only complementing the hand's anthropometric properties, speed capability, and linearly scaled force application relationship (e.g. robotic force is a constant 4 times that of the user). In this paper a nonlinear relationship between the force experienced between the user interface and the robotic hand was devised based on property differences of manipulation and grasping activities as they pertain to the human hand.
The results show that such a relationship, when subjected to a manipulation task and grasping task, produces increased performance compared to the traditional linear scaling techniques used by other systems. Key Words: Teleoperation, Robotic Hand, Robotic Force Scaling
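A nonlinear force-scaling profile of the kind the abstract contrasts with constant 4x scaling can be sketched as follows. The transition threshold, the gain values, and the quadratic blend below are invented for illustration; the paper's actual profile is not given in the abstract.

```python
def linear_scale(user_force, gain=4.0):
    # Traditional approach: robot force is a constant multiple of user force.
    return gain * user_force

def nonlinear_scale(user_force, f_transition=10.0, low_gain=1.0, high_gain=4.0):
    # Hypothetical profile: near-unity gain for light (fine manipulation)
    # forces, rising smoothly toward high_gain for strong (grasping) forces.
    blend = min(abs(user_force) / f_transition, 1.0)   # 0..1
    gain = low_gain + (high_gain - low_gain) * blend ** 2
    return gain * user_force
```

The intuition is that a low gain preserves force resolution during delicate manipulation, while the high gain lets the robotic hand apply strong grasp forces without fatiguing the operator.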
He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L
2014-01-01
Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals--the first step towards the development of a brain-machine interface (BMI) system to the X1 exoskeleton--is demonstrated.
ERIC Educational Resources Information Center
Cappelleri, D. J.; Vitoroulis, N.
2013-01-01
This paper presents a series of novel project-based learning labs for an introductory robotics course that are developed into a semester-long Robotic Decathlon. The last three events of the Robotic Decathlon are used as three final one-week-long project tasks; these replace a previous course project that was a semester-long robotics competition.…
A Novel Passive Robotic Tool Interface
NASA Astrophysics Data System (ADS)
Roberts, Paul
2013-09-01
The increased capability of space robotics has seen their uses increase from simple sample gathering and mechanical adjuncts to humans, to sophisticated multi-purpose investigative and maintenance tools that substitute for humans in many external space tasks. As with all space missions, reducing mass and system complexity is critical. A key component of robotic system mass and complexity is the number of motors and actuators needed. MDA has developed a passive tool interface that, like a household power drill, permits a single tool actuator to be interfaced with many Tool Tips without requiring additional actuators to manage the changing and storage of these tools. MDA's Multifunction Tool interface permits a wide range of Tool Tips to be designed to a single interface that can be pre-qualified to torque and strength limits, such that additional Tool Tips can be added to a mission's "tool kit" simply and quickly.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for the assistance of a robotic wheelchair's navigation is presented. The visual interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface performs two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick. The joystick directs the motion of the vehicle within the environment. Autonomous driving is performed when the user of the wheelchair has to turn (90, 90 or 180 degrees) within the environment. The turning strategy is performed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by the SLAM (Simultaneous Localization and Mapping) algorithm. The SLAM algorithm provides the interface with the information concerning the environment disposition and the pose (position and orientation) of the wheelchair within the environment. Experimental and statistical results of the interface are also shown in this work.
Development of wrist rehabilitation robot and interface system.
Yamamoto, Ikuo; Matsui, Miki; Inagawa, Naohiro; Hachisuka, Kenji; Wada, Futoshi; Hachisuka, Akiko; Saeki, Satoru
2015-01-01
The authors have developed a practical wrist rehabilitation robot for hemiplegic patients. It consists of a mechanical rotation unit, sensor, grip, and computer system. A myoelectric sensor is used to monitor the extensor carpi radialis longus/brevis muscle and flexor carpi radialis muscle activity during training. The training robot can initiate training through the myoelectric sensors, using a biological-signal detector and processor, so that patients can undergo effective extension and flexion training while the muscles are active. In addition, a both-wrist system has been developed for mirror-effect training, the most effective function of the system, so that autonomous training using both wrists is possible. Furthermore, a user-friendly screen interface with easily recognizable touch panels has been developed to give effective training for patients. The developed robot is small in size and easy to carry. The developed interface system is effective in motivating patients during training. The effectiveness of the robot system has been verified in hospital trials.
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer's skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
Off-line programming motion and process commands for robotic welding of Space Shuttle main engines
NASA Technical Reports Server (NTRS)
Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.
1987-01-01
The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.
Surgeon Design Interface for Patient-Specific Concentric Tube Robots
Morimoto, Tania K.; Greer, Joseph D.; Hsieh, Michael H.; Okamura, Allison M.
2017-01-01
Concentric tube robots have potential for use in a wide variety of surgical procedures due to their small size, dexterity, and ability to move in highly curved paths. Unlike most existing clinical robots, the design of these robots can be developed and manufactured on a patient- and procedure-specific basis. The design of concentric tube robots typically requires significant computation and optimization, and it remains unclear how the surgeon should be involved. We propose to use a virtual reality-based design environment for surgeons to easily and intuitively visualize and design a set of concentric tube robots for a specific patient and procedure. In this paper, we describe a novel patient-specific design process in the context of the virtual reality interface. We also show a resulting concentric tube robot design, created by a pediatric urologist to access a kidney stone in a pediatric patient. PMID:28656124
Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos
2011-01-01
The graphical user interface for an MR compatible robotic device has the capability of displaying oblique MR slices in 2D and a 3D virtual environment along with the representation of the robotic arm in order to swiftly complete the intervention. Using the advantages of the MR modality the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067
Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy
1999-01-01
communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier...natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human
GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.
Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin
2014-02-01
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves the tongue movement; 2) electrooculogram (EOG), which involves the eye movement; and 3) electromyogram (EMG), which involves teeth clenching. Each potential has been individually used for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With the feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using the eye and tongue movements.
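The covariance-based separation of tongue and eye activity can be illustrated in a highly simplified form. The sketch below merely ranks channels by their variance ratio between the two single-condition recordings; the paper's actual discriminative feature extraction is more involved and is not specified in the abstract.

```python
def covariance(channels):
    # channels: list of equal-length signal lists, one per electrode.
    n_ch, n_s = len(channels), len(channels[0])
    means = [sum(ch) / n_s for ch in channels]
    cov = [[0.0] * n_ch for _ in range(n_ch)]
    for i in range(n_ch):
        for j in range(n_ch):
            cov[i][j] = sum((channels[i][k] - means[i]) * (channels[j][k] - means[j])
                            for k in range(n_s)) / (n_s - 1)
    return cov

def variance_ratio(cov_tongue, cov_eye):
    # Ratio > 1: channel dominated by tongue (GKP) activity;
    # ratio < 1: channel dominated by eye (EOG) activity.
    return [cov_tongue[i][i] / cov_eye[i][i] for i in range(len(cov_tongue))]
```

Projecting onto directions that maximize this kind of variance ratio (rather than picking raw channels) is the usual next step in covariance-based discrimination.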
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is to add multiple cameras and include the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot.
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.
Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho
2016-07-01
We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden cable-driven mechanism to provide the grasp and push-pull sensations while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force sensing/force feedback system and compared the sensed force with the force reproduced by the haptic device. Finally, a needle insertion test was performed to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched the sensed forces of the slave robot. We successfully applied our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than visual observation alone. We have developed a suitable haptic device producing both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
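The "simple analytic geometry" referred to above can be illustrated with a standard pinhole back-projection. Everything in the sketch is an assumption on our part: the flat-ground model, a level camera at a known height, and the example intrinsic parameters; the paper's exact procedure is not reproduced in the abstract.

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project a clicked pixel to a 3D point on a flat ground plane.

    Camera frame: x right, y down, z forward. The intrinsics (fx, fy, cx, cy)
    come from a prior calibration; cam_height is the camera's height above
    the ground in metres.
    """
    dx = (u - cx) / fx           # ray direction, x component
    dy = (v - cy) / fy           # ray direction, y component (downward)
    if dy <= 0:
        raise ValueError("pixel on or above the horizon: no ground intersection")
    t = cam_height / dy          # ray parameter where the ray reaches y = cam_height
    return (t * dx, cam_height, t)   # (lateral, down, forward) in metres
```

For example, with fx = fy = 500, cx = 320, cy = 240, and the camera 0.3 m above the floor, a click at pixel (320, 290) back-projects to a point roughly 3 m straight ahead.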
Tele-rehabilitation using in-house wearable ankle rehabilitation robot.
Jamwal, Prashant K; Hussain, Shahid; Mir-Nasiri, Nazim; Ghayesh, Mergen H; Xie, Sheng Q
2018-01-01
This article explores the wide-ranging potential of the wearable ankle robot for in-house rehabilitation. The presented robot has been conceptualized following a brief analysis of the existing technologies, systems, and solutions for in-house physical ankle rehabilitation. Configuration design analysis and component selection for the ankle robot have been discussed as part of the conceptual design. The complexities of human-robot interaction are closely encountered while maneuvering a rehabilitation robot. We present a fuzzy logic-based controller to perform the required robot-assisted ankle rehabilitation treatment. Designs of visual haptic interfaces have also been discussed, which will make the treatment interesting and motivate the subject to exert more effort and regain lost functions rapidly. The complex nature of web-based communication between the user and remotely located physiotherapy staff has also been discussed. A high-level software architecture appended to the robot ensures user-friendly operations. This software is made up of three important components: a patient-related database, a graphical user interface (GUI), and a library of virtual-reality exercises specifically developed for ankle rehabilitation.
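A fuzzy logic controller of the kind mentioned can be sketched minimally with triangular memberships and singleton outputs. The rule base, the membership ranges, and the torque values below are illustrative inventions, not the controller described in the article.

```python
def tri(x, a, b, c):
    # Triangular membership function with peak at b and support (a, c).
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_torque(angle_error):
    # Hypothetical three-rule Sugeno-style controller for ankle angle error
    # (radians): fuzzify, fire rules, defuzzify by weighted average.
    mu_neg = tri(angle_error, -2.0, -1.0, 0.0)
    mu_zero = tri(angle_error, -1.0, 0.0, 1.0)
    mu_pos = tri(angle_error, 0.0, 1.0, 2.0)
    # Singleton rule outputs (N*m): assist backward, hold, assist forward.
    num = mu_neg * (-5.0) + mu_zero * 0.0 + mu_pos * 5.0
    den = mu_neg + mu_zero + mu_pos
    return num / den if den else 0.0
```

The appeal of the fuzzy approach for rehabilitation is that therapist heuristics ("if the error is small, assist gently") map directly onto rules like these without an explicit plant model.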
Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF
NASA Astrophysics Data System (ADS)
Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)
A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, human-machine interface, and capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that using the proposed novel leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. Position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms are effective in enabling the robot to detect the center of the corridor and the position of its follower correctly.
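Corridor-center detection from a planar LRF scan can be illustrated with a crude stand-in for the paper's algorithm: read the lateral wall distances at plus and minus 90 degrees and steer toward their midpoint. The angle convention and tolerance below are our assumptions.

```python
import math

def corridor_center_offset(scan):
    # scan: list of (angle_rad, range_m) pairs from a planar LRF;
    # angle 0 = straight ahead, +pi/2 = left of the robot.
    left = min(r for a, r in scan if abs(a - math.pi / 2) < 0.05)
    right = min(r for a, r in scan if abs(a + math.pi / 2) < 0.05)
    # Positive result: robot is right of the centerline (closer to the
    # right wall), so it should steer left by this lateral distance.
    return (left - right) / 2.0
```

A real implementation would fit lines to the wall points over a range of angles rather than trusting two single beams, but the midpoint idea is the same.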
Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide
2018-05-01
In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both towards the user and the robotic morphology. The aim is for a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the case where the user has to read and understand an operation manual or learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole. Inherent properties of the architecture are presented and explained. At the same time, emergent properties are presented and investigated. As a whole, this paradigm is found to highlight the potential for a change in how robots are controlled, and a new level in the taxonomy of human-in-the-loop systems.
Review of surgical robotics user interface: what is the best way to control robotic surgery?
Simorov, Anton; Otte, R Stephen; Kopietz, Courtni M; Oleynikov, Dmitry
2012-08-01
As surgical robots begin to occupy a larger place in operating rooms around the world, continued innovation is necessary to improve our outcomes. A comprehensive review of current surgical robotic user interfaces was performed to describe the modern surgical platforms, identify the benefits, and address the issues of feedback and limitations of visualization. Most robots currently used in surgery employ a master/slave relationship, with the surgeon seated at a work-console, manipulating the master system and visualizing the operation on a video screen. Although enormous strides have been made to advance current technology to the point of clinical use, limitations still exist. A lack of haptic feedback to the surgeon and the inability of the surgeon to be stationed at the operating table are the most notable examples. The future of robotic surgery sees a marked increase in the visualization technologies used in the operating room, as well as in the robots' abilities to convey haptic feedback to the surgeon. This will allow unparalleled sensation for the surgeon and almost eliminate inadvertent tissue contact and injury. A novel design for a user interface will allow the surgeon to have access to the patient bedside, remaining sterile throughout the procedure, employ a head-mounted three-dimensional visualization system, and allow the most intuitive master manipulation of the slave robot to date.
Generic command interpreter for robot controllers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, J.
1991-04-09
Generic command interpreter programs have been written for robot controllers at Sandia National Laboratories (SNL). Each interpreter program resides on a robot controller and interfaces the controller with a supervisory program on another (host) computer. We call these interpreter programs monitors because they wait, monitoring a communication line, for commands from the supervisory program. These monitors are designed to interface with the object-oriented software structure of the supervisory programs. The functions of the monitor programs are written in each robot controller's native language but reflect the object-oriented functions of the supervisory programs. These functions and other specifics of the monitor programs written for three different robots at SNL will be discussed. 4 refs., 4 figs.
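The monitor pattern described here (wait on a communication line, dispatch supervisory commands to functions written in the controller's native language) can be sketched roughly as follows. This is an illustrative reconstruction, not SNL's actual code; the command names and the ACK/NAK reply convention are assumptions.

```python
# Illustrative sketch of a "monitor" command interpreter: it receives command
# lines from a supervisory host and dispatches them to registered handlers
# that would wrap the controller's native routines. All names are assumptions.

DISPATCH = {}

def command(name):
    """Register a handler for a supervisory command."""
    def register(fn):
        DISPATCH[name] = fn
        return fn
    return register

@command("MOVE")
def move(args):
    # A real monitor would invoke the controller's native motion routine here.
    return f"ACK MOVE {' '.join(args)}"

@command("STATUS")
def status(args):
    return "ACK STATUS READY"

def handle_line(line):
    """Parse one command line from the host and dispatch to its handler."""
    name, *args = line.split()
    handler = DISPATCH.get(name)
    return handler(args) if handler else f"NAK unknown command {name}"
```

Only the dispatch structure mirrors the object-oriented supervisory interface the abstract describes; in the real systems each handler body is controller-specific.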
Reusable science tools for analog exploration missions: xGDS Web Tools, VERVE, and Gigapan Voyage
NASA Astrophysics Data System (ADS)
Lee, Susan Y.; Lees, David; Cohen, Tamar; Allan, Mark; Deans, Matthew; Morse, Theodore; Park, Eric; Smith, Trey
2013-10-01
The Exploration Ground Data Systems (xGDS) project led by the Intelligent Robotics Group (IRG) at NASA Ames Research Center creates software tools to support multiple NASA-led planetary analog field experiments. The two primary tools that fall under the xGDS umbrella are the xGDS Web Tools (xGDS-WT) and Visual Environment for Remote Virtual Exploration (VERVE). IRG has also developed a hardware and software system that is closely integrated with our xGDS tools and is used in multiple field experiments called Gigapan Voyage. xGDS-WT, VERVE, and Gigapan Voyage are examples of IRG projects that improve the ratio of science return versus development effort by creating generic and reusable tools that leverage existing technologies in both hardware and software. xGDS Web Tools provides software for gathering and organizing mission data for science and engineering operations, including tools for planning traverses, monitoring autonomous or piloted vehicles, visualization, documentation, analysis, and search. VERVE provides high performance three dimensional (3D) user interfaces used by scientists, robot operators, and mission planners to visualize robot data in real time. Gigapan Voyage is a gigapixel image capturing and processing tool that improves situational awareness and scientific exploration in human and robotic analog missions. All of these technologies emphasize software reuse and leverage open source and/or commercial-off-the-shelf tools to greatly improve the utility and reduce the development and operational cost of future similar technologies. Over the past several years these technologies have been used in many NASA-led robotic field campaigns including the Desert Research and Technology Studies (DRATS), the Pavilion Lake Research Project (PLRP), the K10 Robotic Follow-Up tests, and most recently we have become involved in the NASA Extreme Environment Mission Operations (NEEMO) field experiments. 
A major objective of these joint robot and crew experiments is to improve NASA's understanding of how to most effectively execute and increase science return from exploration missions. This paper focuses on an integrated suite of xGDS software and compatible hardware tools: xGDS Web Tools, VERVE, and Gigapan Voyage; how they are used; and the design decisions that were made to allow them to be easily developed, integrated, tested, and reused by multiple NASA field experiments and robotic platforms.
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies in facilitating Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated intelligence, surveillance, and reconnaissance (ISR) mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.
2007-09-01
behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that...Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots...based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic
2011-06-01
effective waypoint navigation algorithm that interfaced with a Java based graphical user interface (GUI), written by Uzun, for a robot named Bender [2...the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into...interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the
The ACE multi-user web-based Robotic Observatory Control System
NASA Astrophysics Data System (ADS)
Mack, P.
2003-05-01
We have developed an observatory control system that can be operated in interactive, remote or robotic modes. In interactive and remote mode the observer typically acquires the first object then creates a script through a window interface to complete observations for the rest of the night. The system closes early in the event of bad weather. In robotic mode observations are submitted ahead of time through a web-based interface. We present observations made with a 1.0-m telescope using these methods.
Development and Deployment of Robonaut 2 to the International Space Station
NASA Technical Reports Server (NTRS)
Ambrose, Robert O.
2011-01-01
The development of the Robonaut 2 (R2) system was a joint endeavor between NASA and General Motors, producing robots strong enough to do work, yet safe enough to be trusted to work near humans. To date two R2 units have been produced, designated as R2A and R2B. This follows more than a decade of work on the Robonaut 1 units that produced advances in dexterity, tele-presence, remote supervision across time delay, combining mobility with manipulation, human-robot interaction, force control and autonomous grasping. Design challenges for the R2 included higher speed, smaller packaging, more dexterous fingers, more sensitive perception, soft drivetrain design, and the overall implementation of a system software approach for human safety. At the time of this writing the R2B unit was poised for launch to the International Space Station (ISS) aboard STS-133. R2 will be the first humanoid robot in space, and is arguably the most sophisticated robot in the world, bringing NASA into the 21st century as the world's leader in this field. Joining the other robots already on ISS, the station is now an exciting lab for robot experiments and utilization. A particular challenge for this project has been the design and certification of the robot and its software for work near humans. The three-layer software system will be described, and the path to ISS certification will be reviewed. R2 will go through a series of ISS checkout tests during 2011. A taskboard was shipped with the robot that will be used to compare R2B's dexterous manipulation in zero gravity with the ground robot's ability to handle similar objects in Earth's gravity. R2's taskboard has panels with increasingly difficult tasks, starting with switches, progressing to connectors and eventually handling softgoods. The taskboard is modular, and new interfaces and experiments will be built up using equipment already on ISS.
Since the objective is to test R2 performing tasks with human interfaces, hardware abounds on ISS and the crew will be involved to help select tasks that are dull, dirty or dangerous. Future plans for R2 include a series of upgrades, evolving from static IVA (Intravehicular Activity) operations, to mobile IVA, then EVA (Extravehicular Activity).
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines both results through a fusion process to obtain the minimum-variance estimator of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. Also, a control algorithm for the assistive technology system is presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
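For independent estimates, the minimum-variance fusion mentioned here reduces, in the scalar case, to inverse-variance weighting. A minimal sketch of that idea follows; the variance values are illustrative, and the paper's actual estimator and sensor models are not reproduced.

```python
# Sketch of the minimum-variance fusion idea: two independent estimates of a
# head angle (inertial and vision-based) are combined, each weighted by the
# inverse of its variance. The variance values used below are illustrative.
def fuse(est_imu, var_imu, est_vision, var_vision):
    """Inverse-variance weighted fusion of two scalar estimates."""
    w_imu = 1.0 / var_imu
    w_vis = 1.0 / var_vision
    fused = (w_imu * est_imu + w_vis * est_vision) / (w_imu + w_vis)
    fused_var = 1.0 / (w_imu + w_vis)
    return fused, fused_var

# Equal variances reduce to a simple average; the fused variance is halved.
angle, var = fuse(10.0, 4.0, 14.0, 4.0)
```

The fused variance, 1/(1/v1 + 1/v2), never exceeds the smaller input variance, which is why combining the inertial and vision estimates can outperform either sensor alone.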
Modular Countermine Payload for Small Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman Herman; Doug Few; Roelof Versteeg
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
Modular countermine payload for small robots
NASA Astrophysics Data System (ADS)
Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform and task specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground-up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with the configurable mission specific threat detection, navigation and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.
Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo
2017-11-06
For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods to allow these individuals to fully control the assistive arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in a three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study by controlling the robot with their tongue using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and of powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within reasonable time.
INL Generic Robot Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites and low-level proprietary control application programming interfaces (e.g. mobility, aria, aware, player, etc.).
Design and development of an upper extremity motion capture system for a rehabilitation robot.
Nanda, Pooja; Smith, Alan; Gebregiorgis, Adey; Brown, Edward E
2009-01-01
Human robot interaction is a new and rapidly growing field and its application in the realm of rehabilitation and physical care is a major focus area of research worldwide. This paper discusses the development and implementation of a wireless motion capture system for the human arm which can be used for physical therapy or real-time control of a robotic arm, among many other potential applications. The system is comprised of a mechanical brace with rotary potentiometers inserted at the different joints to capture position data. It also contains surface electrodes which acquire electromyographic signals through the CleveMed BioRadio device. The brace interfaces with a software subsystem which displays real time data signals. The software includes a 3D arm model which imitates the actual movement of a subject's arm under testing. This project began as part of the Rochester Institute of Technology's Undergraduate Multidisciplinary Senior Design curriculum and has been integrated into the overall research objectives of the Biomechatronic Learning Laboratory.
Robot Teleoperation and Perception Assistance with a Virtual Holographic Display
NASA Technical Reports Server (NTRS)
Goddard, Charles O.
2012-01-01
Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space, with a stylus tracked in position and rotation inside it. Using this system, a possible interface for this sort of robot control is proposed.
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
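At its simplest, the head-orientation camera control compared in conditions 2 and 3 amounts to scaling tracked head angles and clamping them to the camera's mechanical pan/tilt range. The gains and limits below are assumptions for illustration, not the study's parameters:

```python
# Illustrative mapping from tracked head orientation to robot camera pan/tilt.
# Gain and mechanical limits are assumed values, not those of the cited study.
def head_to_camera(yaw_deg, pitch_deg, gain=1.0, pan_limit=90.0, tilt_limit=45.0):
    """Scale head angles and clamp them to the camera's mechanical range."""
    def clamp(value, limit):
        return max(-limit, min(limit, value))
    return clamp(gain * yaw_deg, pan_limit), clamp(gain * pitch_deg, tilt_limit)
```

A real implementation would add dead-zones and smoothing so that small involuntary head motions do not jitter the camera.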
NASA Technical Reports Server (NTRS)
Mckee, James W.
1989-01-01
The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu-driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the work space and to create a near-optimum path. The system will contain the software to generate the required robot instructions.
Experimental setup for evaluating an adaptive user interface for teleoperation control
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.
2017-05-01
A vital part of human interaction with a machine is the control interface, which single-handedly can define user satisfaction and the efficiency of performing a task. This paper elaborates on the implementation of an experimental setup to study an adaptive algorithm that can help the user better teleoperate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and associated results that were used to validate the adaptive interface on a differential drive robot from two different input devices: a joystick, and a Myo gesture control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
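As a rough illustration of the approach, a genetic algorithm can evolve the parameters of an input-to-actuator mapping against recorded user data. The toy two-parameter linear mapping and fitness function below are assumptions; the paper's actual mapping and optimization are more involved.

```python
# Minimal genetic-algorithm sketch: evolve (a, b) so the mapping y = a*x + b
# reproduces observed (input, output) samples. Population size, mutation
# scale, and the linear mapping itself are illustrative assumptions.
import random

def fitness(params, samples):
    """Negative squared error between mapped input and desired output."""
    a, b = params
    return -sum((a * x + b - y) ** 2 for x, y in samples)

def evolve(samples, pop_size=30, generations=100, seed=1):
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, samples), reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        # Refill the population with mutated copies of the elite.
        pop = elite + [(a + rng.gauss(0, 0.1), b + rng.gauss(0, 0.1))
                       for a, b in rng.choices(elite, k=pop_size - len(elite))]
    return max(pop, key=lambda p: fitness(p, samples))

# Recover the mapping y = 2x + 1 from observed (input, output) pairs.
samples = [(x, 2 * x + 1) for x in range(-3, 4)]
best = evolve(samples)
```

In the adaptive-interface setting, the samples would instead come from user control signals paired with the robot motions the user intended.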
Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot
NASA Technical Reports Server (NTRS)
Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory
2002-01-01
Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But, this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration will describe a work in progress towards building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface, in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.
Mobility Systems For Robotic Vehicles
NASA Astrophysics Data System (ADS)
Chun, Wendell
1987-02-01
The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment and second, for the task. The environment can be correlated with known missions. This can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention. For example, Jason Jr. made history by surveying the sunken luxury liner, the Titanic. The next big surge of robotic vehicles will be in space. This will evolve as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as standalone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart with very few differences. Their commonality includes missions and degrees-of-freedom. The issues of stability and communication are inherent in both systems and environment.
Boninger, Michael L; Wechsler, Lawrence R; Stein, Joel
2014-11-01
The aim of this study was to describe the current state and latest advances in robotics, stem cells, and brain-computer interfaces in rehabilitation and recovery for stroke. The authors of this summary recently reviewed this work as part of a national presentation. The article represents the information included in each area. Each area has seen great advances and challenges as products move to market and experiments are ongoing. Robotics, stem cells, and brain-computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial.
Boninger, Michael L; Wechsler, Lawrence R.; Stein, Joel
2014-01-01
Objective To describe the current state and latest advances in robotics, stem cells, and brain computer interfaces in rehabilitation and recovery for stroke. Design The authors of this summary recently reviewed this work as part of a national presentation. The paper represents the information included in each area. Results Each area has seen great advances and challenges as products move to market and experiments are ongoing. Conclusion Robotics, stem cells, and brain computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial PMID:25313662
International Assessment of Research and Development in Micromanufacturing
2005-10-01
83 7.1. Female robot used for robot artificial insemination project...90 7.2. Male robot used for robot artificial insemination project...include building a desktop factory, “robot mating” using artificial insemination (a fish egg was actually fertilized by his students’ robots
The Human-Robot Interaction Operating System
NASA Technical Reports Server (NTRS)
Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda
2006-01-01
In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A
2013-07-01
The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvement, and performance evaluation. This paper reviews studies conducted with clinical trials using activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to give the scope of current research and provide suggestions for future studies. We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System - PITTCat. Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools while prescribing and assessing assistive robotic manipulators.
NASA Technical Reports Server (NTRS)
Dischinger, H. Charles., Jr.; Mullins, Jeffrey B.
2005-01-01
The United States is entering a new period of human exploration of the inner Solar System, and robotic human helpers will be partners in that effort. To support the integration of these new worker robots into existing and new human systems, a new design standard should be developed: the Robot-Systems Integration Standard (RSIS). It will address the requirements for, and constraints upon, robotic collaborators with humans. These workers are subject to the same functional constraints as humans, such as work, reach, and visibility/situational-awareness envelopes, and they will deal with the same maintenance and communication interfaces. Thus, the RSIS will be created by discipline experts who bring to these and other interface concerns the same sort of perspective as human engineers.
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path planning, there are times when the human does the planning and, if satisfied, commits the paths to the robot for execution. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes a single objective. When a human is assigned the task of path planning for the robot, however, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette: a distinct color represents each objective, and tradeoffs among objectives are balanced the way an artist mixes colors to obtain a desired shade. Human intent is analogous to that shade of color. We call the GUI an "Adverb Palette," where "adverb" denotes a specific type of objective for the path, such as "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The interactive interface lets the user evaluate alternatives that trade off different objectives by visualizing the instantaneous outcomes of her actions on the interface. In addition to supporting analysis of solutions produced by an optimization algorithm, the palette lets the user define and visualize her own paths by means of waypoints (guiding locations), broadening the space of candidate plans. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem.
Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
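The palette metaphor above can be sketched as a weighted-sum blend of path objectives, where the user's color mix supplies the weights. This is one illustrative reading, not the paper's actual algorithm: the objective names, weights, obstacles, and candidate paths below are all invented for the example.

```python
import math

def path_length(path):
    """Total Euclidean length of a piecewise-linear path ("quickly")."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def min_clearance(path, obstacles):
    """Smallest waypoint-to-obstacle distance ("safely")."""
    return min(math.dist(p, o) for p in path for o in obstacles)

def blended_cost(path, obstacles, weights):
    """Mix objectives like paint: each adverb contributes by its weight."""
    quickly = path_length(path)               # shorter is better
    safely = -min_clearance(path, obstacles)  # more clearance lowers cost
    return weights["quickly"] * quickly + weights["safely"] * safely

obstacles = [(2.0, 2.0)]
candidates = [
    [(0, 0), (2, 1.5), (4, 0)],   # short path passing near the obstacle
    [(0, 0), (2, -1.0), (4, 0)],  # longer detour with more clearance
]
# A palette mixed mostly toward "safely" prefers the detour.
weights = {"quickly": 0.2, "safely": 0.8}
best = min(candidates, key=lambda p: blended_cost(p, obstacles, weights))
```

Shifting the mix toward "quickly" (e.g. weights of 0.9/0.1) would instead select the shorter path, mirroring how re-mixing the palette re-ranks the robot's alternatives.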
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, even low-performing interfaces are valuable as prosthetic applications. For able-bodied users, on the other hand, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for identifying effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation, and assistive robotics, and their requirements in terms of throughput and latency, are described. Second, HMIs are classified and their performance characterized, again in terms of throughput and latency. Device requirements are then matched against the performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of invasive cortical interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
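The matching step described above can be sketched as a simple feasibility check: an interface can drive a device if its information throughput meets the device's minimum and its latency stays under the device's limit. The numbers below are invented placeholders, not the paper's measured values.

```python
# Hypothetical performance figures for a few interfaces (bits/s, seconds).
interfaces = {
    "P300 BCI":          {"throughput_bps": 0.5, "latency_s": 4.0},
    "motor-imagery BCI": {"throughput_bps": 0.8, "latency_s": 2.0},
    "joystick":          {"throughput_bps": 10.0, "latency_s": 0.1},
}

# Hypothetical requirements for a few target devices.
devices = {
    "domotic light switch": {"min_bps": 0.2, "max_latency_s": 5.0},
    "wheelchair":           {"min_bps": 0.7, "max_latency_s": 2.5},
    "robotic arm":          {"min_bps": 5.0, "max_latency_s": 0.5},
}

def feasible_pairs(interfaces, devices):
    """Return (interface, device) pairs whose performance meets requirements."""
    return sorted(
        (i, d)
        for i, perf in interfaces.items()
        for d, req in devices.items()
        if perf["throughput_bps"] >= req["min_bps"]
        and perf["latency_s"] <= req["max_latency_s"]
    )
```

With these illustrative figures, the domotic switch is reachable by every interface, the wheelchair only by the faster ones, and the robotic arm only by the joystick, echoing the paper's conclusion that arms exceed current non-invasive BMI throughput.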
Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee
2014-01-01
Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and the VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed so that inexperienced users can generate robot commands without damaging the robot or interrupting the production line, and the user is able to make numerous attempts to attain an optimum solution. A case study was conducted in the Robotics Laboratory to assemble an electronics casing, and the output models were found to be compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. Operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. It is therefore concluded that the virtual-reality-based solution approach can be implemented in an industrial robotic work cell. PMID:25360663
Decentralized sensor fusion for Ubiquitous Networking Robotics in Urban Areas.
Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T J
2010-01-01
In this article we explain the architecture for the environment and sensors that has been built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), a project whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to give a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the type of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we also explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally some results of the project related to the sensor network are highlighted.
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Application of Robotics in Decommissioning and Decontamination - 12536
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banford, Anthony; Kuo, Jeffrey A.; Bowen, R.A.
Decommissioning and dismantling of nuclear facilities is a significant challenge worldwide, and one that is growing as more plants reach the end of their operational lives. The strategy chosen for individual projects varies from a hands-on approach with significant manual intervention using traditional demolition equipment at one extreme to bespoke, highly engineered robotic solutions at the other. The degree of manual intervention is limited by the hazards and risks involved, which in some plants are unacceptable. Robotic remote engineering is often viewed as more expensive and less reliable than manual approaches, with significant lead times and capital expenditure. However, advances in robotics and automation in other industries offer potential benefits for future decommissioning activities, with a high probability of reducing worker exposure and other safety risks as well as reducing the schedule and costs required to complete these activities. Some nuclear decommissioning tasks and facility environments are so hazardous that they can only be accomplished by exclusive use of robotic and remote intervention. Less hazardous tasks can be accomplished by manual intervention and the use of PPE; however, PPE greatly decreases worker productivity and still exposes the worker to both risk and dose, making remote operation preferable to achieve ALARP. Before remote operations can be widely accepted and deployed, some economic and technological challenges must be addressed. These challenges will require long-term investment commitments in order for technology to be: - specifically developed for nuclear applications; - at a sufficient TRL for practical deployment; - readily available as COTS. Tremendous opportunities exist to reduce cost and schedule and improve safety in D and D activities through the use of robotic and/or tele-operated systems. - Increasing the level of remote intervention reduces the risk and dose to an operator.
- Better environmental information identifies hazards, which can be assessed, managed and mitigated. - Tele-autonomous control in a congested, unstructured environment is more reliable than a human operator, and advances in human-machine interfaces contribute to reliability and task optimization. - Use of standardized dexterous manipulators and COTS, including standardized communication protocols, reduces project timescales. - The technologies identified, if developed to a sufficient TRL, would all contribute to cost reductions. Additionally, optimizing a project's position on a remote-intervention scale, a bespoke-equipment scale, and a tele-autonomy scale would provide cost reductions from the start of a project. Of the technologies identified, tele-autonomy is arguably the most significant, because it would provide a fundamental positive change for robotic control in the nuclear industry. The challenge for technology developers is to develop versatile robotic technology that can be economically deployed across a wide range of future D and D projects and industrial sectors. The challenge for facility owners and project managers is to partner with developers to provide accurate system requirements and an open, receptive environment for testing and deployment. To facilitate this development and deployment effort, the NNL and DOE have initiated discussions to explore a collaborative R and D program that would accelerate development and support optimum utilization of resources. (authors)
A neurorobotic platform for locomotor prosthetic development in rats and mice
NASA Astrophysics Data System (ADS)
von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire
2016-04-01
Objectives. We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. Approach. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Main Results. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. Significance. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
Extending human proprioception to cyber-physical systems
NASA Astrophysics Data System (ADS)
Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David
2016-04-01
Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control are desirable. One pertinent example is the control of a robot arm, which can be found on both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens offering varying perspectives on the robot and then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with intuitive state feedback, resulting in awkward, slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that tells us how our bodies are configured in space without our having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where the frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by use of this interface.
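A minimal sketch of the vibrotactile mapping described above, assuming a linear map from each joint angle onto one actuator's frequency band. The joint limits and frequency range are assumed values for illustration, not figures taken from the paper.

```python
JOINT_RANGE = (-3.14, 3.14)   # rad: assumed joint limits
FREQ_RANGE = (40.0, 250.0)    # Hz: assumed comfortable vibrotactile band

def angle_to_frequency(angle_rad):
    """Linearly map a (clamped) joint angle onto the actuator frequency band."""
    lo, hi = JOINT_RANGE
    f_lo, f_hi = FREQ_RANGE
    clamped = max(lo, min(hi, angle_rad))
    t = (clamped - lo) / (hi - lo)   # 0..1 position within the joint range
    return f_lo + t * (f_hi - f_lo)

def arm_to_actuators(joint_angles):
    """One frequency command per joint, one actuator per joint."""
    return [angle_to_frequency(a) for a in joint_angles]
```

A mid-range joint thus vibrates its actuator at the middle of the band, so the operator can feel the arm's configuration without looking at it.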
Improved CLARAty Functional-Layer/Decision-Layer Interface
NASA Technical Reports Server (NTRS)
Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nessnas, Issa; Shu, I-Hsiang
2008-01-01
Improved interface software for communication between the CLARAty Decision and Functional layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in Coupled-Layer Robotics Architecture for Autonomy (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, the CLARAty contains two layers: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, and supports complex data structures and commanding at multiple levels of the FL, and relative to the prior software, offers a much wider spectrum of state-update capabilities.
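The kind of strong error checking the improved DL/FL interface provides can be sketched as schema validation of Decision-Layer commands before the Functional Layer executes them. This is a hypothetical illustration, not the actual CLARAty API; the command names and parameter schemas are invented.

```python
# Invented command schemas: each DL->FL command declares its required
# parameters and their types, so malformed commands are rejected early.
COMMAND_SCHEMAS = {
    "drive_arc": {"speed_mps": float, "radius_m": float},
    "capture_image": {"camera_id": str},
}

def validate_command(name, params):
    """Type-check a DL->FL command; return a list of error strings."""
    schema = COMMAND_SCHEMAS.get(name)
    if schema is None:
        return [f"unknown command: {name}"]
    errors = []
    for key, typ in schema.items():
        if key not in params:
            errors.append(f"missing parameter: {key}")
        elif not isinstance(params[key], typ):
            errors.append(f"{key}: expected {typ.__name__}")
    for key in params:
        if key not in schema:
            errors.append(f"unexpected parameter: {key}")
    return errors
```

Compared with string-only parameters, this style lets the interface report precise failures (unknown command, missing or mistyped parameter) instead of passing malformed requests down to the robot.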
Toward a practical mobile robotic aid system for people with severe physical disabilities.
Regalbuto, M A; Krouskop, T A; Cheatham, J B
1992-01-01
A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.
CESAR robotics and intelligent systems research for nuclear environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1992-07-01
The Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) encompasses expertise and facilities to perform basic and applied research in robotics and intelligent systems in order to address a broad spectrum of problems related to nuclear and other environments. For nuclear environments, the research focus is derived from applications in advanced nuclear power stations and in environmental restoration and waste management. Several programs at CESAR emphasize cross-cutting technology issues and are executed in appropriate cooperation with projects that address specific problem areas. Although the main thrust of the long-term CESAR research is on developing highly automated systems that can cooperate and function reliably in complex environments, the development of advanced human-machine interfaces represents a significant part of our research. 11 refs.
Object impedance control for cooperative manipulation - Theory and experimental results
NASA Technical Reports Server (NTRS)
Schneider, Stanley A.; Cannon, Robert H., Jr.
1992-01-01
This paper presents the dynamic control module of the Dynamic and Strategic Control of Cooperating Manipulators (DASCCOM) project at Stanford University's Aerospace Robotics Laboratory. First, the cooperative manipulation problem is analyzed from a systems perspective, and the desirable features of a control system for cooperative manipulation are discussed. Next, a control policy is developed that enforces a controlled impedance not of the individual arm endpoints, but of the manipulated object itself. A parallel implementation for a multiprocessor system is presented. The controller fully compensates for the system dynamics and directly controls the object internal forces. Most importantly, it presents a simple, powerful, intuitive interface to higher level strategic control modules. Experimental results from a dual two-link-arm robotic system are used to compare the object impedance controller with other strategies, both for free-motion slews and environmental contact.
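The core idea of enforcing a controlled impedance of the manipulated object itself can be illustrated in one dimension: the object is made to respond to external force like a virtual mass-spring-damper, M*a + B*v + K*(x - x_d) = F_ext. The gains, time step, and scenario below are illustrative, not the paper's controller.

```python
M, B, K = 2.0, 8.0, 50.0   # virtual inertia (kg), damping, stiffness (assumed)
DT = 0.001                  # integration step, s

def step(x, v, x_des, f_ext):
    """Advance the commanded object impedance dynamics by one time step."""
    a = (f_ext - B * v - K * (x - x_des)) / M   # virtual object acceleration
    v += a * DT                                  # semi-implicit Euler update
    x += v * DT
    return x, v

# Push the object with a constant 5 N external force: it should settle at
# the steady-state offset F/K = 0.1 m from the desired position.
x, v = 0.0, 0.0
for _ in range(20000):          # simulate 20 s
    x, v = step(x, v, 0.0, 5.0)
```

The same idea extends to six degrees of freedom on the object's position and orientation, with the arms' end-effector forces computed to realize the commanded object impedance while separately regulating internal (squeeze) forces.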
Integration of a computerized two-finger gripper for robot workstation safety
NASA Technical Reports Server (NTRS)
Sneckenberger, John E.; Yoshikata, Kazuki
1988-01-01
A microprocessor-based controller has been developed that continuously monitors and adjusts the gripping force applied by a special two-finger gripper. This computerized force-sensing gripper system enables the end-effector gripping action to be independently detected and corrected. The gripping force applied to a manipulated object is monitored in real time for problem situations, which can occur during both planned and errant robot arm manipulation. When unspecified force conditions occur at the gripper, the gripping-force controller initiates specific reactions to make dynamic corrections to the continuously variable gripping action. The force controller for this intelligent gripper has been interfaced to the controller of an industrial robot, and the gripper and robot controllers communicate to handle both the completion of normal gripper operations and unexpected hazardous situations. An example of an unexpected gripping condition is the sudden deformation of the object being manipulated by the robot. The ability of the interfaced gripper-robot system to apply workstation safety measures (e.g., stopping the robot) when these unexpected gripping effects occur has been assessed.
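The monitor-and-correct loop described above might look like the following sketch: compare the sensed grip force against a target band, nudge the commanded force when the object slips or deforms, and halt the robot when a safety limit is exceeded. The thresholds and gain are invented; the abstract does not specify the actual control logic.

```python
TARGET = 10.0      # N: desired grip force (assumed)
BAND = 1.0         # N: tolerated deviation before correcting
GAIN = 0.5         # proportional correction gain (assumed)
MAX_FORCE = 25.0   # N: safety limit; beyond this, stop the robot

def monitor_step(sensed, commanded):
    """One monitoring cycle. Returns (new_command, alarm); alarm stops the robot."""
    if sensed > MAX_FORCE:
        return 0.0, True               # hazardous condition: release and halt
    error = TARGET - sensed
    if abs(error) > BAND:              # e.g. the object suddenly deformed
        commanded += GAIN * error      # dynamic correction toward the target
    return commanded, False
```

Running this at the gripper controller's cycle rate gives the continuous detect-and-correct behavior, with the alarm path corresponding to the workstation safety measure of stopping the robot.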
The NASA automation and robotics technology program
NASA Technical Reports Server (NTRS)
Holcomb, Lee B.; Montemerlo, Melvin D.
1986-01-01
The development and objectives of the NASA automation and robotics technology program are reviewed. The objectives of the program are to utilize AI and robotics to increase the probability of mission success; decrease the cost of ground control; and increase the capability and flexibility of space operations. There is a need for real-time computational capability; an effective man-machine interface; and techniques to validate automated systems. Current programs in the areas of sensing and perception, task planning and reasoning, control execution, operator interface, and system architecture and integration are described. Programs aimed at demonstrating the capabilities of telerobotics and system autonomy are discussed.
Control of the seven-degree-of-freedom upper limb exoskeleton for an improved human-robot interface
NASA Astrophysics Data System (ADS)
Kim, Hyunchul; Kim, Jungsuk
2017-04-01
This study analyzes a practical scheme for controlling an exoskeleton robot with seven degrees of freedom (DOFs) that supports natural movements of the human arm. A redundant upper-limb exoskeleton robot with seven DOFs is mechanically coupled to the human body such that it becomes a natural extension of the body. If the exoskeleton robot follows the movement of the human body synchronously, the energy exchange between the human and the robot is reduced significantly. To achieve this, the redundancy of the human arm, which is represented by the swivel angle, should be resolved using appropriate constraints and applied to the robot. In a redundant 7-DOF upper-limb exoskeleton, the pseudoinverse of the Jacobian with secondary objective functions is widely used to resolve the redundancy that defines the desired joint angles. A secondary objective function requires the desired joint angles for the movement of the human arm, and these are estimated by maximizing the projection of the longest principal axis of the manipulability ellipsoid of the human arm onto the virtual destination toward the head region. They are then fed into a muscle model with relative damping to achieve more realistic robot-arm movements. Various natural arm movements were recorded using a motion capture system, and the actual swivel angle was compared with that estimated by the proposed swivel-angle estimation algorithm. The results indicate that the proposed algorithm provides a precise reference for estimating the desired joint angle, with an error of less than 5°.
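The swivel angle mentioned above is the elbow's rotation about the shoulder-wrist axis, and it can be computed from three arm landmarks as sketched below. The gravity reference direction is an assumption made here for illustration; the paper estimates the desired angle from the manipulability ellipsoid instead.

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def norm(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def swivel_angle(shoulder, elbow, wrist, ref=(0.0, 0.0, -1.0)):
    """Signed elbow rotation about the shoulder-wrist axis, relative to `ref`."""
    axis = norm(sub(wrist, shoulder))
    def project(v):
        # component of v perpendicular to the shoulder-wrist axis, normalized
        return norm(sub(v, tuple(dot(v, axis) * c for c in axis)))
    u = project(ref)                    # reference direction in the swivel plane
    e = project(sub(elbow, shoulder))   # elbow direction in the swivel plane
    return math.atan2(dot(cross(u, e), axis), dot(u, e))
```

With the elbow hanging straight down the angle is zero, and swinging the elbow sideways changes the angle without moving the wrist, which is exactly the redundant degree of freedom the exoskeleton controller must resolve.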
TCS and peripheral robotization and upgrade on the ESO 1-meter telescope at La Silla Observatory
NASA Astrophysics Data System (ADS)
Ropert, S.; Suc, V.; Jordán, A.; Tala, M.; Liedtke, P.; Royo, S.
2016-07-01
In this work we describe the robotization and upgrade of the ESO 1m telescope located at La Silla Observatory. The ESO 1m telescope was the first telescope installed at La Silla, in 1966. It now hosts as its main instrument the FIber Dual EchellE Optical Spectrograph (FIDEOS), a high-resolution spectrograph designed for precise radial velocity (RV) measurements on bright stars. To meet this project's requirements, the Telescope Control System (TCS) and some of its mechanical peripherals needed to be upgraded. The TCS was replaced with modern, robust software running on a group of single-board computers interacting together as a network with the CoolObs TCS developed by ObsTech. One of the particularities of the CoolObs TCS is that it fuses the input signals of two encoders per axis to achieve high tracking precision and resolution with moderate-cost encoders: one encoder is installed on the telescope axis and the other on the motor axis. The TCS was also integrated with the FIDEOS instrument system so that the whole system can be controlled through the same remote user interface. Our modern TCS unit allows the user to run observations remotely through a secured internet web interface, minimizing the need for an on-site observer and opening a new age of robotic astronomy for the ESO 1m telescope.
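The dual-encoder idea can be sketched as a complementary blend: integrate the motor encoder for smooth, high-rate relative motion, and correct slowly toward the axis encoder's reading of the true pointing. The blend constant and update rule below are illustrative assumptions; the CoolObs implementation is not described in detail in this abstract.

```python
ALPHA = 0.98   # per-step trust in the integrated motor-encoder signal (assumed)

def fuse(estimate, motor_delta, axis_reading):
    """One fusion step: propagate motor motion, then correct toward the
    axis encoder so low-frequency drift (e.g. gear backlash) is removed."""
    predicted = estimate + motor_delta
    return ALPHA * predicted + (1.0 - ALPHA) * axis_reading
```

Run at the servo rate, the estimate follows the motor encoder's fine resolution over short timescales while converging to the axis encoder's absolute position over longer ones.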
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist; this interaction should enable useful communication to be exchanged naturally between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, it discusses the elements of the system that enable multiple levels of communication: an intelligent system agent manages the different inputs given to the helicopter, and an advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
78 FR 20359 - NASA Advisory Council; Technology and Innovation Committee; Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-04
... NASA Robotics Technologies project and NASA's work with the National Robotics Initiative; and an annual... Sail project --Update on NASA's Robotic Technologies and the National Robotics Initiative It is...
Decentralized Sensor Fusion for Ubiquitous Networking Robotics in Urban Areas
Sanfeliu, Alberto; Andrade-Cetto, Juan; Barbosa, Marco; Bowden, Richard; Capitán, Jesús; Corominas, Andreu; Gilbert, Andrew; Illingworth, John; Merino, Luis; Mirats, Josep M.; Moreno, Plínio; Ollero, Aníbal; Sequeira, João; Spaan, Matthijs T.J.
2010-01-01
In this article we explain the architecture for the environment and sensors that was built for the European project URUS (Ubiquitous Networking Robotics in Urban Sites), whose objective is to develop an adaptable network robot architecture for cooperation between network robots and human beings and/or the environment in urban areas. The project goal is to deploy a team of robots in an urban area to provide a set of services to a user community. This paper addresses the sensor architecture devised for URUS and the types of robots and sensors used, including environment sensors and sensors onboard the robots. Furthermore, we explain how sensor fusion takes place to achieve urban outdoor execution of robotic services. Finally, some results of the project related to the sensor network are highlighted. PMID:22294927
System for exchanging tools and end effectors on a robot
Burry, David B.; Williams, Paul M.
1991-02-19
A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot is used, which permits the engagement and disengagement of the tool or end effector without the need for rotational orientation of the tool to the end effector/collet interface. Because the tool-changing system is not located on the robot arm, no umbilical cords are carried on the robot.
Graphical interface between the CIRSSE testbed and CimStation software with MCS/CTOS
NASA Technical Reports Server (NTRS)
Hron, Anna B.
1992-01-01
This research is concerned with developing a graphical simulation of the testbed at the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) and the interface that allows for communication between the two. Such an interface is useful in telerobotic operations and as a functional interaction tool for testbed users. Creating a simulated model of a real-world system generates inevitable calibration discrepancies between them. This thesis gives a brief overview of the work done to date in the area of workcell representation and communication, describes the development of the CIRSSE interface, and gives a direction for future work in the area of system calibration. The CimStation software used for development of this interface is a highly versatile robotic workcell simulation package, which has been programmed for this application with a scale graphical model of the testbed and supporting interface menu code. A need for this tool has been identified for the purposes of path previewing, as a window on teleoperation, and for calibration of simulated vs. real-world models. The interface allows information (i.e., joint angles) generated by CimStation to be sent as motion goal positions to the testbed robots. An option of the interface has been established such that joint angle information generated by supporting testbed algorithms (i.e., TG, collision avoidance) can be piped through CimStation as a visual preview of the path.
Control of a 2 DoF robot using a brain-machine interface.
Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M
2014-09-01
In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors made and the time taken during the tests are compared for both control strategies. The advantages and disadvantages of each method are shown after the analysis of the results. Hierarchical control allows an accurate approach to the goals but is slower than directional control, which is in turn less precise. The results show that both strategies are useful for controlling this planar robot. In the future, by adding an extra device such as a gripper, this BMI could be used in assistive applications such as grasping everyday objects in a realistic environment. To compare the behavior of the system taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire is filled out after the two sessions are completed. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
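A two-class BMI like the one described can drive a "hierarchical" menu in which one mental task cycles through the available robot commands and the other confirms the highlighted one. The snippet below is a minimal sketch of that idea assuming a two-class classifier output; it is not the authors' implementation, and the class and option names are invented.

```python
class HierarchicalControl:
    """Hierarchical two-class BMI menu: mental class 0 cycles the
    highlighted option, mental class 1 issues the highlighted command."""

    def __init__(self, options):
        self.options = options
        self.index = 0  # currently highlighted command

    def step(self, mental_class: int):
        if mental_class == 0:
            self.index = (self.index + 1) % len(self.options)
            return None  # still browsing, no command issued
        return self.options[self.index]
```

Starting from "up" in a four-direction menu, two cycle detections followed by a select would issue "left". The trade-off reported in the abstract falls out naturally: selection is accurate but needs several mental-task classifications per command, whereas a direct "directional" mapping is faster but exposed to every misclassification.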
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A.
2013-01-01
Context: The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience, and performance evaluation. This paper reviews clinical-trial studies that used activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to survey the scope of current research and provide suggestions for future studies. Methods: We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System (PITTCat). Results: Twenty relevant studies were identified. Conclusion: Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance, to build comparable measures between research groups; (2) studies relevant to tasks from user priority lists and ICF codes; and (3) appropriate clinical functional assessment tests that consider the constraints of assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators. PMID:23820143
Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S
2013-01-01
Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for the elderly with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could influence task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age, and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks with more errors than the cognitively healthy elderly participants. In contrast, younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggest that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.
Integration of advanced teleoperation technologies for control of space robots
NASA Technical Reports Server (NTRS)
Stagnaro, Michael J.
1993-01-01
Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between human brain and robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots with science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions such as movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, waypoint navigation, and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
User interface for a tele-operated robotic hand system
Crawford, Anthony L
2015-03-24
Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
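Treating a finger as a planar two-link chain makes the "angles of interest" a textbook inverse-kinematics problem. The following is a generic sketch of that computation, assuming link lengths obtained from the calibration step; the function name and the elbow-down convention are ours, not details from the patent.

```python
import math

def two_link_ik(x: float, y: float, l1: float, l2: float):
    """Joint angles (radians) for a planar two-link chain whose tip
    should reach (x, y); l1 and l2 are the calibrated link lengths."""
    # Law of cosines gives the elbow angle from the target distance
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if abs(c2) > 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # elbow angle, elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

The patent's linkage has three degrees of freedom; resolving a third angle (for example at the fingertip joint) would require an additional constraint, which is omitted from this sketch.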
Bouquet de Joliniere, Jean; Librino, Armando; Dubuisson, Jean-Bernard; Khomsi, Fathi; Ben Ali, Nordine; Fadhlaoui, Anis; Ayoubi, J. M.; Feki, Anis
2016-01-01
Minimally invasive surgery (MIS) can be considered the greatest surgical innovation of the past 30 years. It revolutionized surgical practice with well-proven advantages over traditional open surgery: reduced surgical trauma and incision-related complications, such as surgical-site infections, postoperative pain and hernia; reduced hospital stay; and improved cosmetic outcome. Nonetheless, proficiency in MIS can be technically challenging, as conventional laparoscopy is associated with several limitations, such as the two-dimensional (2D) monitor's reduced depth perception, camera instability, limited range of motion, and steep learning curves. The surgeon also receives little force feedback, which limits fine gestures, respect for tissues, and effective treatment of complications. Since the 1980s, several computer science and robotics projects have been set up to overcome the difficulties encountered with conventional laparoscopy, to augment the surgeon's skills, achieve accuracy and high precision during complex surgery, and facilitate the widespread adoption of MIS. Surgical instruments are guided by haptic interfaces that replicate and filter hand movements. Robotically assisted technology offers advantages that include improved three-dimensional stereoscopic vision, wristed instruments that improve dexterity, and tremor-canceling software that improves surgical precision. PMID:27200358
NASA Technical Reports Server (NTRS)
Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph
2000-01-01
For many years, the robotic community sought to develop robots that can eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that humans can perform significantly better but that, due to associated hazards, distance, physical limitations, and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and they usually allow only a limited operator workspace. In this paper a novel haptic interface is presented that enables human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability, and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), which change viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device, in which a change in the system viscosity will occur in proportion to the force to be transmitted.
In this paper, we will present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Benavides, Jose; Provencher, Chris; Bualat, Maria; Smith, Marion F.; Mora Vargas, Andres
2017-01-01
At the end of 2017, Astrobee will launch three free-flying robots that will navigate the entire US segment of the ISS (International Space Station) and serve as a payload facility. These robots will provide guest science payloads with processor resources, space within the robot for physical attachment, power, communication, propulsion, and human interfaces.
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1990-01-01
New control techniques for self-contained, autonomous free-flying space robots were developed and tested experimentally. Free-flying robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require human extravehicular activity (EVA). A set of research projects was developed and carried out using lab models of satellite robots and a flexible manipulator. The second-generation space robot models use air cushion vehicle (ACV) technology to simulate in 2-D the drag-free, zero-g conditions of space. The current work is divided into five major projects: Global Navigation and Control of a Free-Floating Robot, Cooperative Manipulation from a Free-Flying Robot, Multiple Robot Cooperation, Thrusterless Robotic Locomotion, and Dynamic Payload Manipulation. These projects are examined in detail.
Sports Training Support Method by Self-Coaching with Humanoid Robot
NASA Astrophysics Data System (ADS)
Toyama, S.; Ikeda, F.; Yasaka, T.
2016-09-01
This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain a deeper understanding of the skill. Experimental results show the promise of the new training method and identify issues with the self-coaching interface program as future work.
Anthropomorphic Robot Design and User Interaction Associated with Motion
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2016-01-01
Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction. Their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But their anthropomorphic kinematics and dynamics imply that the impact of their design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction.
In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to (1) improve the user's direct manual control over robot limbs and body positions, (2) improve users' ability to detect anomalous robot behavior that could signal malfunction, and (3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
Miniature surgical robot for laparoendoscopic single-incision colectomy.
Wortman, Tyler D; Meyer, Avishai; Dolghi, Oleg; Lehman, Amy C; McCormick, Ryan L; Farritor, Shane M; Oleynikov, Dmitry
2012-03-01
This study aimed to demonstrate the effectiveness of using a multifunctional miniature in vivo robotic platform to perform a single-incision colectomy. Standard laparoscopic techniques require multiple ports. A miniature robotic platform to be inserted completely into the peritoneal cavity through a single incision has been designed and built. The robot can be quickly repositioned, thus enabling multiquadrant access to the abdominal cavity. The miniature in vivo robotic platform used in this study consists of a multifunctional robot and a remote surgeon interface. The robot is composed of two arms with shoulder and elbow joints. Each forearm is equipped with specialized interchangeable end effectors (i.e., graspers and monopolar electrocautery). Five robotic colectomies were performed in a porcine model. For each procedure, the robot was completely inserted into the peritoneal cavity, and the surgeon manipulated the user interface to control the robot to perform the colectomy. The robot mobilized the colon from its lateral retroperitoneal attachments and assisted in the placement of a standard stapler to transect the sigmoid colon. This objective was completed for all five colectomies without any complications. The adoption of both laparoscopic and single-incision colectomies currently is constrained by the inadequacies of existing instruments. The described multifunctional robot provides a platform that overcomes existing limitations by operating completely within one incision in the peritoneal cavity and by improving visualization and dexterity. By repositioning the small robot to the area of the colon to be mobilized, the ability of the surgeon to perform complex surgical tasks is improved. Furthermore, the success of the robot in performing a completely in vivo colectomy suggests the feasibility of using this robotic platform to perform other complex surgeries through a single incision.
Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-01-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom to move and for performing Activities of Daily Living (ADLs). This would result in a substantial improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of patient participation in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, a fundamental aspect for eliciting motor learning. PMID:28484314
Petri net controllers for distributed robotic systems
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, George N.
1992-01-01
Petri nets are a well-established modelling technique for analyzing parallel systems. When coupled with an event-driven operating system, Petri nets can provide an effective means of integrating and controlling the functions of distributed robotic applications. Recent work has shown that Petri net graphs can also serve as remarkably intuitive operator interfaces. In this paper, the advantages of using Petri nets as high-level controllers to coordinate robotic functions are outlined, considerations for designing Petri net controllers are discussed, and simple Petri net structures for implementing an interface for operator supervision are presented. A detailed example illustrates these concepts for a sensor-based assembly application.
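The token-game semantics that makes Petri nets attractive as controllers is small enough to state in code. Below is a minimal place/transition net, written as a generic illustration rather than as the controller described in the paper; the pick-and-place place names used in the example are invented.

```python
class PetriNet:
    """Minimal place/transition net: a transition is enabled when every
    input place holds a token, and firing it moves tokens from input
    places to output places. An event-driven supervisor would fire
    transitions as robot events (e.g. 'grasp done') arrive."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = {}         # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
```

For a toy assembly cell, a "grasp" transition with input places `part_ready` and `gripper_free` and output place `holding` can fire only when both resources are available, which is exactly the mutual-exclusion bookkeeping a distributed robotic controller needs.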
Sutherland, Garnette R; Wolfsberger, Stefan; Lama, Sanju; Zarei-nia, Kourosh
2013-01-01
Intraoperative imaging disrupts the rhythm of surgery despite providing an excellent opportunity for surgical monitoring and assessment. To allow surgery within real-time images, neuroArm, a teleoperated surgical robotic system, was conceptualized. The objective was to design and manufacture a magnetic resonance-compatible robot with a human-machine interface that could reproduce some of the sight, sound, and touch of surgery at a remote workstation. University of Calgary researchers worked with MacDonald, Dettwiler and Associates engineers to produce a requirements document, preliminary design review, and critical design review, followed by the manufacture, preclinical testing, and clinical integration of neuroArm. During the preliminary design review, the scope of the neuroArm project changed to performing microsurgery outside the magnet and stereotaxy inside the bore. neuroArm was successfully manufactured and installed in an intraoperative magnetic resonance imaging operating room. neuroArm was clinically integrated into 35 cases in a graded fashion. As a result of this experience, neuroArm II is in development, and advances in technology will allow microsurgery within the bore of the magnet. neuroArm represents a successful interdisciplinary collaboration. It has positive implications for the future of robotic technology in neurosurgery in that the precision and accuracy of robots will continue to augment human capability.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or in conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans is needed. This requires means for controlling the robot from somewhere else, i.e., teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects with the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput, and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
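The gaze-centering behavior described can be approximated by a proportional controller with a dead zone, so the camera holds still while the gaze stays near the image center. This is a hypothetical sketch, not the Aesop control code; the gain and deadband values are arbitrary.

```python
def camera_velocity(gaze_px, frame_size, deadband=40, gain=0.002):
    """Pan/tilt rate command that drives the user's gaze point toward
    the center of the video monitor. gaze_px is the (x, y) gaze location
    in pixels; a central deadband keeps the camera from chasing every
    small eye movement."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    ex, ey = gaze_px[0] - cx, gaze_px[1] - cy  # pixel error from center
    pan = 0.0 if abs(ex) < deadband else gain * ex
    tilt = 0.0 if abs(ey) < deadband else gain * ey
    return pan, tilt
```

A gaze at the right edge of a 640x480 feed yields a pure rightward pan command, while a gaze already near the center yields no motion at all.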
Robot Control Through Brain Computer Interface For Patterns Generation
NASA Astrophysics Data System (ADS)
Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.
2011-09-01
A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. Such a system can allow people with motor disabilities to control external devices through real-time modulation of their brain waves. In this context, an EEG-based BCI system that allows creative luminous artistic representations is presented here. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
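"Real-time modulation of brain waves" typically reduces to tracking power in an EEG frequency band and thresholding it into a command. The sketch below shows one common choice, alpha-band (8-12 Hz) power via an FFT; the band, threshold, and command names are our assumptions, not details of the BCI2000 configuration used in this work.

```python
import numpy as np

def alpha_band_power(signal, fs):
    """Mean power in the 8-12 Hz (alpha) band of one EEG channel.
    Alpha amplitude rises with relaxation / eyes closed, making it a
    simple voluntarily modulated control signal."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return spectrum[band].sum() / len(signal)

def to_command(power, threshold):
    """Threshold the band power into a device command."""
    return "move" if power > threshold else "stop"
```

In a real pipeline this would run on short sliding windows of the EEG stream, with the threshold calibrated per user.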
Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi
2015-03-01
This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While eye movements and ERPs have each been used separately to implement assistive interfaces that help patients with motor disabilities perform daily tasks, the proposed hybrid interface integrates them so that the two modalities complement each other, providing better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements: blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components, including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out: one controlling a multifunctional humanoid robot, and the other controlling four mobile robots. In both experiments, the subjects complete the tasks effectively using the proposed interface, and the best completion times are relatively short, close to those achieved by hand.
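The threshold idea behind such eye-movement recognition can be sketched with a toy two-channel rule: a blink deflects both EOG channels strongly and symmetrically, while a wink deflects only one side. The channel layout, thresholds, and rules below are illustrative assumptions, not the authors' actual algorithm:

```python
def classify_eog(left_uV, right_uV, blink_thresh=200.0, wink_thresh=120.0):
    """Toy threshold rule over peak amplitudes (microvolts) on two
    EOG channels within a short window. Thresholds would be
    calibrated per user in a real system.
    """
    if left_uV > blink_thresh and right_uV > blink_thresh:
        return "blink"          # large symmetric deflection: both eyes
    if left_uV > wink_thresh and right_uV <= wink_thresh:
        return "left wink"      # one-sided deflection
    if right_uV > wink_thresh and left_uV <= wink_thresh:
        return "right wink"
    return "none"
```

Gaze shifts and frowns would require additional channels (horizontal EOG, frontal activity) and analogous threshold rules.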
Affordance Templates for Shared Robot Control
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kim
2014-01-01
This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
Control Robotics Programming Technology. Technology Learning Activity. Teacher Edition.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This Technology Learning Activity (TLA) for control robotics programming technology in grades 6-10 is designed to teach students to construct and program computer-controlled devices using a LEGO DACTA set and computer interface and to help them understand how control technology and robotics affect them and their lifestyle. The suggested time for…
BioMot exoskeleton - Towards a smart wearable robot for symbiotic human-robot interaction.
Bacek, Tomislav; Moltedo, Marta; Langlois, Kevin; Prieto, Guillermo Asin; Sanchez-Villamanan, Maria Carmen; Gonzalez-Vargas, Jose; Vanderborght, Bram; Lefeber, Dirk; Moreno, Juan C
2017-07-01
This paper presents the design of a novel modular lower-limb gait exoskeleton built within the FP7 BioMot project. The exoskeleton employs a variable stiffness actuator in all six joints, a directional-flexibility structure, and a novel physical human-robot interface, which allow it to deliver the required output while minimally constraining the user's gait by providing passive degrees of freedom. Due to its modularity, the exoskeleton can be used as a full lower-limb orthosis, a single-joint orthosis in any of the three joints, or a two-joint orthosis in any combination of two joints. By employing a simple torque control strategy, the exoskeleton can deliver user-specific assistance, both in gait rehabilitation and in assisting people with musculoskeletal impairments. The result of the presented BioMot efforts is a low-footprint exoskeleton with powerful compliant actuators, a simple yet effective torque controller, and an easily adjustable flexible structure.
Distributed Automated Medical Robotics to Improve Medical Field Operations
2010-04-01
ROBOT PATIENT INTERFACE Robotic trauma diagnosis and intervention is performed using instruments and tools mounted on the end of a robotic manipulator...manipulator to respond quickly enough to accommodate for motion due to high inertia and inaccuracies caused by low stiffness at the tool point. Ultrasonic...program was licensed to Intuitive Surgical, Inc. and subsequently evolved into the da Vinci surgical system. The da Vinci has been widely applied in
The magic glove: a gesture-based remote controller for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark
2012-01-01
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful for moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system consists of a microcontroller and sensors worn by the operator as a glove, and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer detects hand orientation and passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was successfully demonstrated first in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
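The accelerometer-to-command mapping described above can be sketched as a simple tilt classifier. The axis conventions, threshold, and command names below are assumptions for illustration, not the Magic Glove's actual mapping:

```python
def gesture_command(ax, ay, tilt_thresh=0.5):
    """Map the x/y components of a 3-axis accelerometer reading
    (in g) to a drive command. With the hand level, gravity lies on
    the z axis; tilting moves it onto x (pitch) or y (roll), so the
    z component is not needed for these four tilt gestures.
    """
    if ax > tilt_thresh:
        return "forward"
    if ax < -tilt_thresh:
        return "reverse"
    if ay > tilt_thresh:
        return "turn right"
    if ay < -tilt_thresh:
        return "turn left"
    return "stop"
```

On an embedded target the same logic would run in the microcontroller's sampling loop, with the resulting command byte sent over the Bluetooth link.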
Health Care Robotics: A Progress Report
NASA Technical Reports Server (NTRS)
Fiorini, P.; Ali, K.; Seraji, H.
1997-01-01
This paper describes the approach followed in the design of a service robot for health care applications. It describes the architecture of the subsystem, the features of the manipulator arm, and the operator interface.
Software for project-based learning of robot motion planning
NASA Astrophysics Data System (ADS)
Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.
2013-12-01
Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can be explained in a simplified two-dimensional setting, but this masks many of the subtleties and complexities of the underlying problem. We have developed software for project-based learning of motion planning that enables deep learning. The projects that we have developed allow advanced undergraduate students and graduate students to reflect on the performance of existing textbook algorithms and their own variations on such algorithms. Formative assessment has been conducted at three institutions. The core of the software used for this teaching module is also used within the Robot Operating System, a platform widely adopted by the robotics research community. This allows for transfer of knowledge and skills to robotics research projects involving a large variety of robot hardware platforms.
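A representative textbook algorithm students experiment with in such projects is the rapidly-exploring random tree (RRT). The minimal 2-D sketch below is illustrative only: the workspace bounds, goal bias, and step size are arbitrary choices, and edge collision checking is omitted:

```python
import math
import random

def rrt_2d(start, goal, is_free, step=0.5, goal_tol=0.5, iters=2000, seed=0):
    """Grow a tree from start by repeatedly extending the nearest node
    a fixed step toward a random sample (biased toward the goal).
    is_free(p) reports whether point p is collision free.
    Returns a start-to-goal path as a list of points, or None.
    """
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample the goal 10% of the time, otherwise a random point.
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk the parent pointers back to the start.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Varying the step size, goal bias, or sampling distribution is exactly the kind of "own variation" such a project invites students to benchmark.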
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
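The finite-state-machine scripting layer mentioned above can be illustrated with a small transition table. The states and events here are hypothetical, invented to show the pattern, not the robot's actual control script:

```python
class TeleopFSM:
    """Tiny finite-state-machine layer of the kind used to script
    mixed teleoperation/autonomy and self-preservation behaviors.
    """
    TABLE = {
        ("idle", "operator_cmd"): "teleop",
        ("teleop", "operator_idle"): "idle",
        ("teleop", "link_lost"): "autonomous",      # promote survival
        ("autonomous", "link_restored"): "teleop",
        ("teleop", "low_battery"): "safe_stop",
        ("autonomous", "low_battery"): "safe_stop",
    }

    def __init__(self):
        self.state = "idle"

    def dispatch(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TABLE.get((self.state, event), self.state)
        return self.state
```

Because each transition is an explicit table entry, such a layer fits comfortably on a microcontroller and is easy to audit for safety.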
A universal six-joint robot controller
NASA Technical Reports Server (NTRS)
Bihn, D. G.; Hsia, T. C.
1987-01-01
A general purpose six-axis robotic manipulator controller was designed and implemented to serve as a research tool for the investigation of the practical and theoretical aspects of various control strategies in robotics. An 80286-based Intel System 310 running the Xenix operating system hosts the servo software as well as the higher-level software (e.g., kinematics and path planning). A Multibus-compatible interface board was designed and constructed to handle I/O signals from the robot manipulator's joint motors. From the design point of view, the universal controller is capable of driving robot manipulators equipped with D.C. joint motors and optical position encoders. To test its functionality, the controller was connected to the joint motor D.C. power amplifier of a PUMA 560 arm, completely bypassing the manufacturer-supplied Unimation controller. A controller algorithm consisting of local PD control laws was written and installed into the Xenix operating system. Additional software drivers were implemented to allow application programs access to the interface board. All software was written in the C language.
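A local PD control law of the kind installed in the controller computes each joint torque from its own position error and velocity, with no coupling between joints. A sketch (gains are illustrative, not values tuned for a PUMA 560; the original implementation was in C):

```python
def pd_torque(q_des, q, qd, kp=50.0, kd=5.0):
    """Local PD law for one joint: torque proportional to the
    position error (q_des - q), damped by the joint velocity qd.
    """
    return kp * (q_des - q) - kd * qd

def pd_torques(q_des, q, qd):
    """Apply the same local law independently to all six joints."""
    return [pd_torque(d, a, v) for d, a, v in zip(q_des, q, qd)]
```

Each servo cycle reads the optical encoders, evaluates these laws, and writes the resulting torque commands to the D.C. power amplifiers through the interface board.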
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistive technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, and user-preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot's navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn left, turn right, stop, start, and exit. A kinematic controller for the mobile robot was implemented, along with a low-level behaviour strategy to avoid collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally-limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI.
The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with a MCI and the communication between both in real time have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choose robot destinations. Also, the mobile robot shares the same kinematic model of a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation.
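The predict/update cycle at the heart of an EKF-based SLAM system can be shown in its simplest (scalar, linear) form. The real algorithm operates on a joint robot-pose/feature state vector with line and corner features, but the structure is the same; the noise values below are arbitrary:

```python
def kf_step(x, P, u, z, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter, the
    skeleton that feature-based EKF SLAM generalizes to vectors.
    x: state estimate, P: its variance, u: odometry increment,
    z: a (noisy) measurement of the state.
    """
    # Predict: propagate the state with odometry, grow uncertainty.
    x_pred = x + u
    P_pred = P + Q
    # Update: blend in the measurement via the Kalman gain.
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

In the full EKF SLAM formulation, x becomes the stacked robot pose and feature parameters, the scalar divisions become matrix inversions, and nonlinear motion and measurement models are linearized at each step.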
Tactical mobile robots for urban search and rescue
NASA Astrophysics Data System (ADS)
Blitch, John; Sidki, Nahid; Durkin, Tim
2000-07-01
Few disasters can inspire more compassion for victims and families than those involving structural collapse. Video clips of children's bodies pulled from earthquake-stricken cities and bombing sites tend to invoke tremendous grief and sorrow because of the totally unpredictable nature of the crisis and the lack of even the slightest degree of negligence (as opposed to, for example, those who choose to ignore storm warnings). Heartbreaking stories of people buried alive for days provide a visceral and horrific perspective on some of the greatest fears ever to be imagined by human beings. Current trends toward urban sprawl and increasing human discord dictate that structural collapse disasters will continue to present themselves at an alarming rate. The proliferation of domestic terrorism, HAZMAT, and biological contaminants complicates the matter further and presents a daunting problem set for Urban Search and Rescue (USAR) organizations around the world. This paper amplifies the case for robot-assisted search and rescue that was first presented during the KNOBSAR project initiated at the Colorado School of Mines in 1995. It anticipates increasing technical development in mobile robot technologies and promotes their use for a wide variety of humanitarian assistance missions. Focus is placed on the development of advanced robotic systems that are employed in a complementary, tool-like fashion, as opposed to traditional robotic approaches that aim to replace humans in hazardous tasks. Operational challenges for USAR are presented first, followed by a brief history of mobile robot development. The paper then presents conformal robotics as a new design paradigm, with emphasis on variable geometry and volumes. A section on robot perception follows, with an initial attempt to characterize sensing in a volumetric manner. Collaborative rescue is then briefly discussed, with an emphasis on marsupial operations and linked mobility.
The paper concludes with an emphasis on Human Robot Interface (HRI) and a call for additional research in this exciting and all too important field.
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head-mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on the observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the three factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters, following a 2 × 2 × 2 repeated-measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was significant only for the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented. Copyright © 2017 Elsevier Ltd. All rights reserved.
Efforts toward an autonomous wheelchair - biomed 2011.
Barrett, Steven; Streeter, Robert
2011-01-01
An autonomous wheelchair is in development to provide mobility to those with significant physical challenges. The overall goal of the project is to develop a wheelchair that is fully autonomous, with the ability to navigate about an environment and negotiate obstacles. As a starting point for the project, we have reverse engineered the joystick control system of an off-the-shelf, commercially available wheelchair. The joystick control has been replaced with a microcontroller-based system. The microcontroller has the capability to interface with a number of subsystems currently under development, including wheel odometers, obstacle avoidance sensors, and ultrasonic-based wall sensors. This paper discusses the microcontroller-based system and provides a detailed system description. Results of this study may be adapted to commercial or military robot control.
Telerobotic management system: coordinating multiple human operators with multiple robots
NASA Astrophysics Data System (ADS)
King, Jamie W.; Pretty, Raymond; Brothers, Brendan; Gosine, Raymond G.
2003-09-01
This paper describes an application called the Telerobotic Management System (TMS) for coordinating multiple operators with multiple robots in applications such as underground mining. TMS utilizes several graphical interfaces to allow the user to define a partially ordered plan for multiple robots. This plan is then converted to a Petri net for execution and monitoring. TMS uses a distributed framework to allow robots and operators to easily integrate with the application. This framework allows robots and operators to join the network and advertise their capabilities through services. TMS then decides whether tasks should be dispatched to a robot or a remote operator based on the services offered by the robots and operators.
Simulation-based intelligent robotic agent for Space Station Freedom
NASA Technical Reports Server (NTRS)
Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.
1990-01-01
A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.
Predictive Interfaces for Long-Distance Tele-Operations
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas
2005-01-01
We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. First, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Second, we develop compensation for the time-delay that exists when sending telemetry data from a remote operation point to robots located at low earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time-delay is introduced into the system, tele-operation slows down to mimic a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time-delays.
To this end, we are developing an interface which will allow for us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
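Intent prediction of this kind typically rests on the HMM forward (filtering) recursion, which maintains a belief over hidden intent states as observations arrive. The discrete sketch below is a minimal illustration; the intent states, observations, and probabilities are invented and are not the trained models from this work:

```python
def hmm_filter(belief, trans, emit, obs):
    """One step of the HMM forward recursion.

    belief: dict mapping intent state -> current probability.
    trans[s][s2]: probability of moving from state s to s2.
    emit[s][o]: likelihood of observation o in state s.
    Returns the normalized posterior belief after seeing obs.
    """
    # Predict: push the belief through the transition model.
    predicted = {s2: sum(belief[s] * trans[s][s2] for s in belief)
                 for s2 in belief}
    # Update: weight by the observation likelihood and renormalize.
    unnorm = {s: emit[s][obs] * predicted[s] for s in predicted}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}
```

Run over a stream of operator commands and sensor readings, the state with the highest posterior can trigger the corresponding sub-goal autonomy task.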
New generation emerging technologies for neurorehabilitation and motor assistance.
Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-12-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the fields of neurorehabilitation and motor assistance. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and for performing Activities of Daily Living (ADLs). This would result in a marked improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of participation of the patient in robotic training, allowing the system to recognize the user's intention and assist the patient in rehabilitation tasks, which is a fundamental aspect of eliciting motor learning.
CHIMERA II - A real-time multiprocessing environment for sensor-based robot control
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1989-01-01
This paper addresses a multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, the user interface, the extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.
Analysis of human emotion in human-robot interaction
NASA Astrophysics Data System (ADS)
Blar, Noraidah; Jafar, Fairul Azni; Abdullah, Nurhidayu; Muhammad, Mohd Nazrin; Kassim, Anuar Muhamed
2015-05-01
Robots are widely applied in human work settings such as industry and hospitals. It is therefore believed that humans and robots can collaborate well to achieve optimal results. The objectives of this project are to analyze human-robot collaboration and to understand human feelings (kansei factors) when dealing with robots, to which robots should adapt. Researchers are currently exploring the area of human-robot interaction with the intention of reducing problems that persist in today's society. Studies have found that good interaction between human and robot first requires understanding the abilities of each. Kansei Engineering was used to carry out the project. The experiments were conducted by distributing questionnaires to students and technicians, and the results were analyzed using SPSS. The analysis showed that five feelings are significant to humans in human-robot interaction: anxious, fatigued, relaxed, peaceful, and impressed.
The Goddard Space Flight Center (GSFC) robotics technology testbed
NASA Technical Reports Server (NTRS)
Schnurr, Rick; Obrien, Maureen; Cofer, Sue
1989-01-01
Much of the technology planned for use in NASA's Flight Telerobotic Servicer (FTS) and the Demonstration Test Flight (DTF) is relatively new and untested. To provide the answers needed to design safe, reliable, and fully functional robotics for flight, NASA/GSFC is developing a robotics technology testbed for research on issues such as zero-g robot control, dual-arm teleoperation, simulation, and hierarchical control using a high-level programming language. The testbed will be used to investigate these high-risk technologies required for the FTS and DTF projects. The robotics technology testbed is centered around the dual-arm teleoperation of a pair of 7-degree-of-freedom (DOF) manipulators, each with its own 6-DOF mini-master hand controller. Several levels of safety are implemented using the control processor, a separate watchdog computer, and other low-level features. High-speed input/output ports allow the control processor to interface to a simulation workstation: all or part of the testbed hardware can be used in real-time dynamic simulation of testbed operations, allowing a quick and safe means of testing new control strategies. The NASA/National Bureau of Standards Standard Reference Model for Telerobot Control System Architecture (NASREM) hierarchical control scheme is being used as the reference standard for system design. All software developed for the testbed, excluding some of the simulation workstation software, is being developed in Ada. The testbed is being developed in phases. The first phase, which is nearing completion, is described, and future developments are highlighted.
Brain computer interface for operating a robot
NASA Astrophysics Data System (ADS)
Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed
2013-10-01
A Brain-Computer Interface (BCI) is a hardware/software-based system that translates the Electroencephalogram (EEG) signals produced by brain activity into commands for controlling computers and other external devices. In this paper, we present a non-invasive BCI system that reads EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We have used cognitive states like Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
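Once a classifier produces confidence scores for trained cognitive states such as Push and Pull, mapping them to robot commands can be as simple as a thresholded comparison. The threshold and command names below are assumptions for illustration, not the paper's implementation:

```python
def bci_command(push_score, pull_score, threshold=0.6):
    """Map classifier confidences (0..1) for two trained cognitive
    states to robot motion commands. A minimum confidence threshold
    suppresses commands when neither state is clearly detected.
    """
    if push_score >= threshold and push_score > pull_score:
        return "move forward"
    if pull_score >= threshold and pull_score > push_score:
        return "move backward"
    return "hold"
```

Requiring a clear winner above the threshold trades responsiveness for fewer false activations, a typical design choice in online BCI control.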
Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery
Travaglini, T. A.; Swaney, P. J.; Weaver, Kyle D.; Webster, R. J.
2016-01-01
The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap. PMID:26752501
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
The Robotics/Automated Systems Technician (RAST) project developed a robotics technician model curriculum for the use of state directors of vocational education and two-year college vocational/technical educators. A baseline management plan was developed to guide the project. To provide awareness, project staff developed a dissemination plan…
A Contest-Oriented Project for Learning Intelligent Mobile Robots
ERIC Educational Resources Information Center
Huang, Hsin-Hsiung; Su, Juing-Huei; Lee, Chyi-Shyong
2013-01-01
A contest-oriented project for undergraduate students to learn implementation skills and theories related to intelligent mobile robots is presented in this paper. The project, related to Micromouse, Robotrace (Robotrace is the title of Taiwanese and Japanese robot races), and line-maze contests was developed by the embedded control system research…
Telescope networking and user support via Remote Telescope Markup Language
NASA Astrophysics Data System (ADS)
Hessman, Frederic V.; Pennypacker, Carlton R.; Romero-Colmenero, Encarni; Tuparev, Georg
2004-09-01
Remote Telescope Markup Language (RTML) is an XML-based interface/document format designed to facilitate the exchange of astronomical observing requests and results between investigators and observatories as well as within networks of observatories. While originally created to support simple imaging telescope requests (Versions 1.0-2.1), RTML Version 3.0 now supports a wide range of applications, from request preparation, exposure calculation, spectroscopy, and observation reports to remote telescope scheduling, target-of-opportunity observations and telescope network administration. The elegance of RTML is that all of this is made possible using a public XML Schema which provides a general-purpose, easily parsed, and syntax-checked medium for the exchange of astronomical and user information while not restricting or otherwise constraining the use of the information at either end. Thus, RTML can be used to connect heterogeneous systems and their users without requiring major changes in existing local resources and procedures. Projects as different as a number of advanced amateur observatories, the global Hands-On Universe project, the MONET network (robotic imaging), the STELLA consortium (robotic spectroscopy), and the 11-m Southern African Large Telescope are now using or intending to use RTML in various forms and for various purposes.
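As a rough illustration of the kind of document RTML standardizes, the sketch below builds a minimal imaging-request document with Python's standard `xml.etree`. The element names and attributes here are invented for illustration only and do not follow the actual RTML 3.0 schema.

```python
import xml.etree.ElementTree as ET

def build_request(target, ra_deg, dec_deg, exposure_s):
    """Build a minimal RTML-style imaging request (element names are illustrative)."""
    root = ET.Element("RTML", version="3.0", mode="request")
    request = ET.SubElement(root, "Request")
    tgt = ET.SubElement(request, "Target", name=target)
    coords = ET.SubElement(tgt, "Coordinates")
    ET.SubElement(coords, "RightAscension").text = str(ra_deg)
    ET.SubElement(coords, "Declination").text = str(dec_deg)
    schedule = ET.SubElement(request, "Schedule")
    ET.SubElement(schedule, "Exposure", unit="seconds").text = str(exposure_s)
    return ET.tostring(root, encoding="unicode")

# A request for a 120-second exposure of M31 (coordinates in decimal degrees).
xml_doc = build_request("M31", 10.6847, 41.2690, 120)
```

Because the payload is schema-checked XML, either end of the exchange can validate and parse it without agreeing on anything beyond the schema itself, which is the interoperability point the abstract makes.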
NASA Astrophysics Data System (ADS)
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One of the factors of safe EVA is a proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and cosmonaut's poses. When travelling in the "Follow Me" mode (master/slave), a robot uses onboard tools for tracking the cosmonaut's position and movements, and on the basis of these data builds its itinerary. The interaction in the system "cosmonaut-robot" on the lunar surface is significantly different from that on the Earth's surface. For example, a man dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for the cosmonauts, and a tired human performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenient use of the man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication it is necessary to provide options for duplicating commands at each task stage and for gesture recognition. New tools and techniques for space missions must first be examined under laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes the methods of detection and tracking of movements and gesture recognition of the cosmonaut during EVA, which can be used for the design of a human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. Simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
NASA Technical Reports Server (NTRS)
Stecklein, Jonette
2017-01-01
NASA has held an annual robotic mining competition for teams of university/college students since 2010. This competition is yearlong, suitable for a senior university engineering capstone project. It encompasses the full project life cycle from ideation of a robot design, through tele-operation of the robot collecting regolith in simulated Mars conditions, to disposal of the robot systems after the competition. A major required element for this competition is a Systems Engineering Paper in which each team describes the systems engineering approaches used on their project. The score for the Systems Engineering Paper contributes 25% towards the team’s score for the competition’s grand prize. The required use of systems engineering on the project by this competition introduces the students to an intense practical application of systems engineering throughout a full project life cycle.
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from a descriptive research on social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamer’s decision making process can be elicited and speculated during human computer interaction. These are new information that we should consider as they can help us build better human computer interfaces and human robotic interfaces in future.
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand
2009-01-01
The R4SA GUI mentioned in the immediately preceding article is a user-friendly interface for controlling one or more robots. This GUI makes it possible to perform meaningful real-time field experiments and research in robotics at an unmatched level of fidelity, within minutes of setup. It provides powerful graphing modes, including a digitizing-oscilloscope mode that displays up to 250 variables at rates between 1 and 200 Hz. This GUI can be configured as multiple intuitive interfaces for acquisition of data, command, and control to enable rapid testing of subsystems or an entire robot system while simultaneously performing analysis of data. The R4SA software establishes an intuitive component-based design environment that can be easily reconfigured for any robotic platform by creating or editing setup configuration files. The R4SA GUI enables event-driven and conditional sequencing similar to those of Mars Exploration Rover (MER) operations. It has been certified as part of the MER ground support equipment and, therefore, is allowed to be utilized in conjunction with MER flight hardware. The R4SA GUI could also be adapted for use in embedded computing systems, other than that of the MER, for commanding and real-time analysis of data.
The Virtual Tablet: Virtual Reality as a Control System
NASA Technical Reports Server (NTRS)
Chronister, Andrew
2016-01-01
In the field of human-computer interaction, Augmented Reality (AR) and Virtual Reality (VR) have been rapidly growing areas of interest and concerted development effort thanks to both private and public research. At NASA, a number of groups have explored the possibilities afforded by AR and VR technology, among which is the IT Advanced Concepts Lab (ITACL). Within ITACL, the AVR (Augmented/Virtual Reality) Lab focuses on VR technology specifically for its use in command and control. Previous work in the AVR lab includes the Natural User Interface (NUI) project and the Virtual Control Panel (VCP) project, which created virtual three-dimensional interfaces that users could interact with while wearing a VR headset thanks to body- and hand-tracking technology. The Virtual Tablet (VT) project attempts to improve on these previous efforts by incorporating a physical surrogate which is mirrored in the virtual environment, mitigating issues with difficulty of visually determining the interface location and lack of tactile feedback discovered in the development of previous efforts. The physical surrogate takes the form of a handheld sheet of acrylic glass with several infrared-range reflective markers and a sensor package attached. Using the sensor package to track orientation and a motion-capture system to track the marker positions, a model of the surrogate is placed in the virtual environment at a position which corresponds with the real-world location relative to the user's VR Head Mounted Display (HMD). A set of control mechanisms is then projected onto the surface of the surrogate such that to the user, immersed in VR, the control interface appears to be attached to the object they are holding. The VT project was taken from an early stage where the sensor package, motion-capture system, and physical surrogate had been constructed or tested individually but not yet combined or incorporated into the virtual environment. 
My contribution was to combine the pieces of hardware, write software to incorporate each piece of position or orientation data into a coherent description of the object's location in space, place the virtual analogue accordingly, and project the control interface onto it, resulting in a functioning object which has both a physical and a virtual presence. Additionally, the virtual environment was enhanced with two live video feeds from cameras mounted on the robotic device being used as an example target of the virtual interface. The working VT allows users to naturally interact with a control interface with little to no training and without the issues found in previous efforts.
The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.
Arnold, Thomas; Scheutz, Matthias
2017-06-01
Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on the bonding with a tool not with a person, and deflect users from personally and socially destructive behavior the soft bodies and surfaces could normally entice.
NASA Technical Reports Server (NTRS)
Brewer, W. V.; Rasis, E. P.; Shih, H. R.
1993-01-01
Results from NASA/HBCU Grant No. NAG-1-1125 are summarized. Designs developed for model fabrication, exploratory concepts drafted, interface of computer with robot and end-effector, and capability enhancement are discussed.
Automation and Robotics in the Laboratory.
ERIC Educational Resources Information Center
DiCesare, Frank; And Others
1985-01-01
A general laboratory course featuring microcomputer interfacing for data acquisition, process control and automation, and robotics was developed at Rensselaer Polytechnic Institute and is now available to all junior engineering students. The development and features of the course are described. (JN)
NASA-STD-(I)-6016, Standard Materials and Processes Requirements for Spacecraft
NASA Technical Reports Server (NTRS)
Pedley, Michael; Griffin, Dennis
2006-01-01
This document is directed toward Materials and Processes (M&P) used in the design, fabrication, and testing of flight components for all NASA manned, unmanned, robotic, launch vehicle, lander, in-space and surface systems, and spacecraft program/project hardware elements. All flight hardware is covered by the M&P requirements of this document, including vendor designed, off-the-shelf, and vendor furnished items. Materials and processes used in interfacing ground support equipment (GSE); test equipment; hardware processing equipment; hardware packaging; and hardware shipment shall be controlled to prevent damage to or contamination of flight hardware.
Implementation of an i.v.-compounding robot in a hospital-based cancer center pharmacy.
Yaniv, Angela W; Knoer, Scott J
2013-11-15
The implementation of a robotic device for compounding patient-specific chemotherapy doses is described, including a review of data on the robot's performance over a 13-month period. The automated system prepares individualized i.v. chemotherapy doses in a variety of infusion bags and syringes; more than 50 drugs are validated for use in the machine. The robot is programmed to recognize the physical parameters of syringes and vials and uses photographic identification, barcode identification, and gravimetric measurements to ensure that the correct ingredients are compounded and the final dose is accurate. The implementation timeline, including site preparation, logistics planning, installation, calibration, staff training, development of a pharmacy information system (PIS) interface, and validation by the state board of pharmacy, was about 10 months. In its first 13 months of operation, the robot was used to prepare 7384 medication doses; 85 doses (1.2%) found to be outside the desired accuracy range (±4%) were manually modified by pharmacy staff. Ongoing system monitoring has identified mechanical and materials-related problems including vial-recognition failures (in many instances, these issues were resolved by the system operator and robotic compounding proceeded successfully), interface issues affecting robot-PIS communication, and human errors such as the loading of an incorrect vial or bag into the machine. Through staff training, information technology improvements, and workflow adjustments, the robot's throughput has been steadily improved. An i.v.-compounding robot was successfully implemented in a cancer center pharmacy. The robot performs compounding tasks safely and accurately and has been integrated into the pharmacy's workflow.
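The ±4% acceptance band described above lends itself to a one-line gravimetric check. The function below is a hypothetical sketch of that rule, not the vendor's actual verification logic.

```python
def dose_within_tolerance(expected_g, measured_g, tolerance=0.04):
    """True if the measured dose weight falls within +/-4% of the expected weight."""
    return abs(measured_g - expected_g) <= tolerance * expected_g
```

In the review period reported here, doses failing the ±4% check (85 of 7384, or 1.2%) were manually modified by pharmacy staff rather than dispensed as compounded.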
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
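The sense-move-repeat loop described above can be sketched in a few lines. `SimRobot` and its field values below are invented stand-ins for the patented hardware, used only to show the survey pattern.

```python
class SimRobot:
    """Stand-in for the hazard-sensing robot; `field` maps locations to hazard levels."""
    def __init__(self, field):
        self.field = field
        self.pos = None

    def move_to(self, waypoint):
        self.pos = waypoint

    def sense_hazard(self):
        return self.field[self.pos]

def survey(robot, waypoints):
    """Autonomously repeat sense-and-move, recording one hazard level per location."""
    levels = {}
    for wp in waypoints:
        robot.move_to(wp)
        levels[wp] = robot.sense_hazard()
    return levels
```

The resulting `levels` dictionary is exactly what the remote controller would render on the environment map: one hazard indicator per visited position, scaled against the displayed intensity scale.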
Design of the arm-wrestling robot's force acquisition system based on Qt
NASA Astrophysics Data System (ADS)
Huo, Zhixiang; Chen, Feng; Wang, Yongtao
2017-03-01
As a robot that combines entertainment with medical rehabilitation, the arm-wrestling robot is of great research significance. To collect the arm-wrestling robot's force signals, this paper introduces the design and implementation of its force acquisition system. The system is based on an MP4221 data acquisition card and is programmed in Qt. It successfully collects the analog signals on a PC. The system's interface is simple and its real-time performance is good. Test results demonstrate the feasibility of the arm-wrestling robot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torres, P.; Luque de Castro, M.D.
1996-12-31
A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping the leached analytes and their placement in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.
Applications of Brain–Machine Interface Systems in Stroke Recovery and Rehabilitation
Francisco, Gerard E.; Contreras-Vidal, Jose L.
2014-01-01
Stroke is a leading cause of disability, significantly impacting the quality of life (QOL) in survivors, and rehabilitation remains the mainstay of treatment in these patients. Recent engineering and technological advances such as brain-machine interfaces (BMI) and robotic rehabilitative devices are promising to enhance stroke neurorehabilitation, to accelerate functional recovery and improve QOL. This review discusses the recent applications of BMI and robotic-assisted rehabilitation in stroke patients. We present the framework for integrated BMI and robotic-assisted therapies, and discuss their potential therapeutic, assistive and diagnostic functions in stroke rehabilitation. Finally, we conclude with an outlook on the potential challenges and future directions of these neurotechnologies, and their impact on clinical rehabilitation. PMID:25110624
NASA Technical Reports Server (NTRS)
Pisaich, Gregory; Flueckiger, Lorenzo; Neukom, Christian; Wagner, Mike; Buchanan, Eric; Plice, Laura
2007-01-01
The Mission Simulation Toolkit (MST) is a flexible software system for autonomy research. It was developed as part of the Mission Simulation Facility (MSF) project that was started in 2001 to facilitate the development of autonomous planetary robotic missions. Autonomy is a key enabling factor for robotic exploration. There has been a large gap between autonomy software (at the research level), and software that is ready for insertion into near-term space missions. The MST bridges this gap by providing a simulation framework and a suite of tools for supporting research and maturation of autonomy. MST uses a distributed framework based on the High Level Architecture (HLA) standard. A key feature of the MST framework is the ability to plug in new models to replace existing ones with the same services. This enables significant simulation flexibility, particularly the mixing and control of fidelity level. In addition, the MST provides automatic code generation from robot interfaces defined with the Unified Modeling Language (UML), methods for maintaining synchronization across distributed simulation systems, XML-based robot description, and an environment server. Finally, the MSF supports a number of third-party products including dynamic models and terrain databases. Although the communication objects and some of the simulation components that are provided with this toolkit are specifically designed for terrestrial surface rovers, the MST can be applied to any other domain, such as aerial, aquatic, or space.
Telerobot local-remote control architecture for space flight program applications
NASA Technical Reports Server (NTRS)
Zimmerman, Wayne; Backes, Paul; Steele, Robert; Long, Mark; Bon, Bruce; Beahan, John
1993-01-01
The JPL Supervisory Telerobotics (STELER) Laboratory has developed and demonstrated a unique local-remote robot control architecture which enables management of intermittent communication bus latencies and delays such as those expected for ground-remote operation of Space Station robotic systems via the Tracking and Data Relay Satellite System (TDRSS) communication platform. The current work at JPL in this area has focused on enhancing the technologies and transferring the control architecture to hardware and software environments which are more compatible with projected ground and space operational environments. At the local site, the operator updates the remote worksite model using stereo video and a model overlay/fitting algorithm which outputs the location and orientation of the object in free space. That information is relayed to the robot User Macro Interface (UMI) to enable programming of the robot control macros. This capability runs on a single Silicon Graphics Inc. machine. The operator can employ either manual teleoperation, shared control, or supervised autonomous control to manipulate the intended object. The remote site controller, called the Modular Telerobot Task Execution System (MOTES), runs in a multi-processor VME environment and performs the task sequencing, task execution, trajectory generation, closed loop force/torque control, task parameter monitoring, and reflex action. This paper describes the new STELER architecture implementation, and also documents the results of the recent autonomous docking task execution using the local site and MOTES.
Graphical user interface for a robotic workstation in a surgical environment.
Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A
2016-08-01
Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motions endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, providing the ability to change control parameters live in the system as well as the ability to connect additional ancillary systems is necessary. Additionally, the surgeon should always be able to get an overview of the status of all systems at a glance. Therefore a workstation has been built. The contribution of this paper is the design and the implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.
NASA Astrophysics Data System (ADS)
Zhao, Ming-fu; Hu, Xin-Yu; Shao, Yun; Luo, Bin-bin; Wang, Xin
2008-10-01
This article analyses the football robots now in common use in China, with the aim of improving the capability of the football robot hardware platform, and presents the design of a football robot based on a DSP core controller combined with a Fuzzy-PID control algorithm. Experiments showed that, owing to the advantages of the DSP, such as fast operation, a variety of interfaces, and low power dissipation, the robot's movement performance, control precision, and real-time performance were greatly improved.
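The abstract names a combined Fuzzy-PID controller but gives no details. As a loose, hypothetical stand-in (simple gain scheduling on the error magnitude rather than true fuzzy inference), one control step might look like the following; every gain value here is invented.

```python
def pid_step(error, prev_error, integral, dt):
    """One PID control step with gains crudely scheduled on |error| (values invented)."""
    # Large errors get an aggressive proportional gain; small errors get a softer
    # gain with more integral action, mimicking what a fuzzy rule base would select.
    kp, ki, kd = (2.0, 0.1, 0.05) if abs(error) > 1.0 else (1.0, 0.2, 0.02)
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, integral
```

A real Fuzzy-PID design would replace the two-way branch with membership functions and a rule table that blend the gain sets continuously; the fast DSP loop rates cited in the abstract are what make running such a controller at high frequency practical.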
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains
2010-12-01
Chan, Joshua L; Mazilu, Dumitru; Miller, Justin G; Hunt, Timothy; Horvath, Keith A; Li, Ming
2016-10-01
Real-time magnetic resonance imaging (rtMRI) guidance provides significant advantages during transcatheter aortic valve replacement (TAVR) as it provides superior real-time visualization and accurate device delivery tracking. However, performing a TAVR within an MRI scanner remains difficult due to a constrained procedural environment. To address these concerns, a magnetic resonance (MR)-compatible robotic system to assist in TAVR deployments was developed. This study evaluates the technical design and interface considerations of an MR-compatible robotic-assisted TAVR system with the purpose of demonstrating that such a system can be developed and executed safely and precisely in a preclinical model. An MR-compatible robotic surgical assistant system was built for TAVR deployment. This system integrates a 5-degrees of freedom (DoF) robotic arm with a 3-DoF robotic valve delivery module. A user interface system was designed for procedural planning and real-time intraoperative manipulation of the robot. The robotic device was constructed of plastic materials, pneumatic actuators, and fiber-optical encoders. The mechanical profile and MR compatibility of the robotic system were evaluated. The system-level error based on a phantom model was 1.14 ± 0.33 mm. A self-expanding prosthesis was successfully deployed in eight Yorkshire swine under rtMRI guidance. Post-deployment imaging and necropsy confirmed placement of the stent within 3 mm of the aortic valve annulus. These phantom and in vivo studies demonstrate the feasibility and advantages of robotic-assisted TAVR under rtMRI guidance. This robotic system increases the precision of valve deployments, diminishes environmental constraints, and improves the overall success of TAVR.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background The combination of robotic tools with assistance technology opens a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or user's preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low level behaviour-based reactions of the mobile robot are robotic autonomous tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. Results The entire system was tested in a population of seven volunteers: three elderly subjects, two below-elbow amputees, and two young normally-limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI.
The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with a MCI and the communication between both in real time have shown to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choose robot destinations. Also, the mobile robot shares the same kinematic model of a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation. PMID:20163735
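A feature-based EKF-SLAM cycle begins with a kinematic prediction step. The sketch below shows that step for the unicycle model shared by the mobile robot and a motorized wheelchair; the noise covariance `Q` is invented, and the feature-observation update half of the filter is omitted.

```python
import numpy as np

def ekf_predict(pose, P, v, w, dt, Q):
    """Propagate robot pose (x, y, theta) and its covariance with a unicycle model."""
    x, y, th = pose
    # Nonlinear motion model: drive forward at v while turning at rate w.
    pose_new = np.array([x + v * dt * np.cos(th),
                         y + v * dt * np.sin(th),
                         th + w * dt])
    # Jacobian of the motion model with respect to the pose, for covariance update.
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    P_new = F @ P @ F.T + Q
    return pose_new, P_new
```

In the full filter, each observed line or corner feature would then correct `pose_new` and `P_new` through the usual EKF update, which is how the global metric map stays consistent while the MCI supplies only coarse turn/stop/start commands.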
NASA's Evolutionary Xenon Thruster: The NEXT Ion Propulsion System for Solar System Exploration
NASA Technical Reports Server (NTRS)
Pencil, Eric J.; Benson, Scott W.
2008-01-01
This viewgraph presentation reviews NASA's Evolutionary Xenon Thruster (NEXT) Ion Propulsion system. The NEXT project is developing a solar electric ion propulsion system. The NEXT project is advancing the capability of ion propulsion to meet NASA robotic science mission needs. The NEXT system is planned to significantly improve performance over state-of-the-art electric propulsion systems, such as the NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) system. The status of NEXT development is reviewed, including information on the NEXT thruster, the power processing unit, the propellant management system (PMS), the digital control interface unit, and the gimbal. Block diagrams of the NEXT system are presented. Also a review of the lessons learned from the Dawn and NSTAR systems is provided. In summary, the NEXT project activities through 2007 have brought next-generation ion propulsion technology to a sufficient maturity level.
System and method for seamless task-directed autonomy for robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis; Bruemmer, David; Few, Douglas
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
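The control intermediary's decision can be sketched as a simple threshold on the robot-to-task distance. The threshold value and mode names below are invented for illustration and are not taken from the patent.

```python
def choose_autonomy(robot_pos, task_pos, near_m=1.0):
    """Pick a task-oriented autonomy level from the relative robot/task position."""
    dist = ((task_pos[0] - robot_pos[0]) ** 2
            + (task_pos[1] - robot_pos[1]) ** 2) ** 0.5
    # Close targets: low robot initiative, the operator guides directly.
    # Distant targets: high robot initiative, the robot plans its own route.
    return "direct_guidance" if dist <= near_m else "robot_plan"
```

A real intermediary would presumably weigh more than distance (obstacles, task type, operator workload), but the core idea is the same: the relative geometry between designator and robot selects how much initiative the robot is granted.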
Project InterActions: A Multigenerational Robotic Learning Environment
NASA Astrophysics Data System (ADS)
Bers, Marina U.
2007-12-01
This paper presents Project InterActions, a series of 5-week workshops in which very young learners (4- to 7-year-old children) and their parents come together to build and program a personally meaningful robotic project in the context of a multigenerational robotics-based community of practice. The goal of these family workshops is to teach both parents and children about the mechanical and programming aspects involved in robotics, as well as to initiate them in a learning trajectory with and about technology. Results from this project address different ways in which parents and children learn together and provide insights into how to develop educational interventions that would educate parents, as well as children, in new domains of knowledge and skills such as robotics and new technologies.
Bridging the gap between motor imagery and motor execution with a brain-robot interface.
Bauer, Robert; Fels, Meike; Vukelić, Mathias; Ziemann, Ulf; Gharabaghi, Alireza
2015-03-01
According to electrophysiological studies, motor imagery and motor execution are associated with perturbations of brain oscillations over spatially similar cortical areas. By contrast, neuroimaging and lesion studies suggest that at least partially distinct cortical networks are involved in motor imagery and execution. We sought to further disentangle this relationship by studying the role of brain-robot interfaces in the context of motor imagery and motor execution networks. Twenty right-handed subjects performed several behavioral tasks as indicators for imagery and execution of movements of the left hand, i.e. kinesthetic imagery, visual imagery, visuomotor integration and tonic contraction. In addition, subjects performed motor imagery supported by haptic/proprioceptive feedback from a brain-robot interface. Principal component analysis was applied to assess the relationship of these indicators. The respective cortical resting state networks in the α-range were investigated by electroencephalography using the phase slope index. We detected two distinct abilities and cortical networks underlying motor control: a motor imagery network connecting the left parietal and motor areas with the right prefrontal cortex, and a motor execution network characterized by transmission from the left to right motor areas. We found that a brain-robot interface might offer a way to bridge the gap between these networks, thereby opening a backdoor to the motor execution system. This knowledge might promote patient screening and may lead to novel treatment strategies, e.g. for the rehabilitation of hemiparesis after stroke.
Queuing Models of Tertiary Storage
NASA Technical Reports Server (NTRS)
Johnson, Theodore
1996-01-01
Large scale scientific projects generate and use large amounts of data. For example, the NASA Earth Observation System Data and Information System (EOSDIS) project is expected to archive one petabyte per year of raw satellite data. This data is made automatically available for processing into higher level data products and for dissemination to the scientific community. Such large volumes of data can only be stored in robotic storage libraries (RSLs) for near-line access. A characteristic of RSLs is the use of a robot arm that transfers media between a storage rack and the read/write drives, thus multiplying the capacity of the system. The performance of the RSLs can be a critical limiting factor for the performance of the archive system. However, the many interacting components of an RSL make a performance analysis difficult. In addition, different RSL components can have widely varying performance characteristics. This paper describes our work to develop performance models of an RSL in isolation. Next we show how the RSL model can be incorporated into a queuing network model. We use the models to make some example performance studies of archive systems. The models described in this paper, developed for the NASA EOSDIS project, are implemented in C with a well defined interface. The source code, accompanying documentation, and also sample Java applets are available at: http://www.cis.ufl.edu/ted/
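To illustrate the kind of analysis such queuing models support, the robot arm of an RSL can be treated, very roughly, as a single-server queue: media-mount requests arrive at some rate and the arm serves them one at a time. The snippet below uses the textbook M/M/1 formulas with made-up rates; it is a minimal sketch, not the paper's full multi-component model:

```python
# Rough M/M/1 illustration of a robot-arm queue in a robotic storage library:
# mount requests arrive at rate lam (per second), the arm serves at rate mu.
# Standard M/M/1 results; rates below are illustrative placeholders.
def mm1_metrics(lam, mu):
    assert lam < mu, "queue is unstable unless arrival rate < service rate"
    rho = lam / mu                 # arm utilization
    L = rho / (1 - rho)            # mean number of requests in the system
    W = 1 / (mu - lam)             # mean time a request spends in the system
    return rho, L, W

rho, L, W = mm1_metrics(lam=0.05, mu=0.1)   # one request/20 s, 10 s service
print(f"utilization={rho:.2f}, in-system={L:.2f}, time-in-system={W:.1f}s")
```

Even this toy model shows the nonlinearity that makes the RSL a potential bottleneck: as utilization approaches 1, the time in system grows without bound.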
2010-03-01
[Extraction fragment from a surgical robotics report] The primary challenge with the design of a full-mobility manipulator robot is meeting competing design requirements. Video was streamed through an embedded plug-in for VLC player using asf/wmv encoding with 200 ms buffering, and a benchtop test of the remote user interface was conducted. A significant challenge has been to consistently provide high-quality video to the surgeon.
Can Robots and Humans Get Along?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2007-06-01
Now that robots have moved into the mainstream (as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets) it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are very different from human-computer interactions. So while the metrics used to evaluate human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.
Artificial intelligence - New tools for aerospace project managers
NASA Technical Reports Server (NTRS)
Moja, D. C.
1985-01-01
Artificial Intelligence (AI) is currently being used in commercial applications such as medical diagnosis, computer system configuration, and geological exploration. The present paper assesses new AI tools and techniques that will become available to assist aerospace managers in accomplishing their tasks. A study conducted by Brown and Cheeseman (1983) indicates that AI will be employed in all traditional management areas, including goal setting, decision making, policy formulation, evaluation, planning, budgeting, auditing, personnel management, training, legal affairs, and procurement. Artificial intelligence/expert systems are discussed, with attention to three primary areas: intelligent robots, natural language interfaces, and expert systems. Aspects of information retrieval are also considered, along with decision support systems and expert systems for project planning and scheduling.
Building a Relationship between Robot Characteristics and Teleoperation User Interfaces.
Mortimer, Michael; Horan, Ben; Seyedmahmoudian, Mehdi
2017-03-14
The Robot Operating System (ROS) provides roboticists with a standardized and distributed framework for real-time communication between robotic systems using a microkernel environment. This paper looks at how ROS metadata, Unified Robot Description Format (URDF), Semantic Robot Description Format (SRDF), and its message description language, can be used to identify key robot characteristics to inform User Interface (UI) design for the teleoperation of heterogeneous robot teams. Logical relationships between UI components and robot characteristics are defined by a set of relationship rules created using relevant and available information including developer expertise and ROS metadata. This provides a significant opportunity to move towards a rule-driven approach for generating the designs of teleoperation UIs; in particular the reduction of the number of different UI configurations required to teleoperate each individual robot within a heterogeneous robot team. This approach is based on using an underlying rule set identifying robots that can be teleoperated using the same UI configuration due to having the same or similar robot characteristics. Aside from reducing the number of different UI configurations an operator needs to be familiar with, this approach also supports consistency in UI configurations when a teleoperator is periodically switching between different robots. To achieve this aim, a Matlab toolbox is developed providing users with the ability to define rules specifying the relationship between robot characteristics and UI components. Once rules are defined, selections that best describe the characteristics of the robot type within a particular heterogeneous robot team can be made. A main advantage of this approach is that rather than specifying discrete robots comprising the team, the user can specify characteristics of the team more generally allowing the system to deal with slight variations that may occur in the future. 
In fact, by using the defined relationship rules and characteristic selections, the toolbox can automatically identify a reduced set of UI configurations required to control possible robot team configurations, as opposed to the traditional ad-hoc approach to teleoperation UI design. In the results section, three test cases are presented to demonstrate how the selection of different robot characteristics builds a number of robot characteristic combinations, and how the relationship rules are used to determine a reduced set of required UI configurations needed to control each individual robot in the robot team.
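The core of the rule-driven approach described above can be sketched as a mapping from robot characteristics to required UI components: robots whose characteristics resolve, through the relationship rules, to the same component set can share one teleoperation UI configuration. The rules, characteristic names, and component names below are invented for illustration, not taken from the paper's Matlab toolbox:

```python
# Illustrative sketch of the rule-driven idea: relationship rules map robot
# characteristics to UI components; robots with identical component sets
# share a UI configuration. All names here are hypothetical.
RULES = {                      # characteristic -> required UI component
    "differential_drive": "2-axis joystick",
    "pan_tilt_camera":    "camera pad",
    "arm_6dof":           "end-effector widget",
}

def ui_config(characteristics):
    """Resolve a robot's characteristics to a hashable set of UI components."""
    return frozenset(RULES[c] for c in characteristics if c in RULES)

team = {
    "ugv_a": ["differential_drive", "pan_tilt_camera"],
    "ugv_b": ["differential_drive", "pan_tilt_camera"],  # same UI as ugv_a
    "manip": ["differential_drive", "arm_6dof"],
}
configs = {ui_config(chars) for chars in team.values()}
print(len(configs))  # 2 distinct UI configurations cover 3 robots
```

Deduplicating the resolved component sets is what reduces the number of UI configurations an operator must learn, and keeps the UI consistent when switching between similar robots.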
The role of assistive robotics in the lives of persons with disability.
Brose, Steven W; Weber, Douglas J; Salatin, Ben A; Grindle, Garret G; Wang, Hongwu; Vazquez, Juan J; Cooper, Rory A
2010-06-01
Robotic assistive devices are used increasingly to improve the independence and quality of life of persons with disabilities. Devices as varied as robotic feeders, smart-powered wheelchairs, independent mobile robots, and socially assistive robots are becoming more clinically relevant. There is a growing importance for the rehabilitation professional to be aware of available systems and ongoing research efforts. The aim of this article is to describe the advances in assistive robotics that are relevant to professionals serving persons with disabilities. This review breaks down relevant advances into categories of Assistive Robotic Systems, User Interfaces and Control Systems, Sensory and Feedback Systems, and User Perspectives. An understanding of the direction that assistive robotics is taking is important for the clinician and researcher alike; this review is intended to address this need.
NASA Astrophysics Data System (ADS)
Ayres, R.; Miller, S.
1982-06-01
The characteristics, applications, and operational capabilities of currently available robots are examined. Designed to function at tasks of a repetitive, hazardous, or uncreative nature, robot appendages are controlled by microprocessors which permit some simple decision-making on the job, and have served for sample gathering on the Mars Viking lander. Critical developmental areas concern active sensors at the robot grappler-object interface, where sufficient data must be gathered for the central processor to which the robot is attached to assess the state of completion and suitability of the workpiece. Although present robots must be programmed through every step of a particular industrial process, thus limiting each robot to specialized tasks, the potential for closed cells of batch-processing robot-run units is noted to be close to realization. Finally, consideration is given to methods for retraining the human workforce that robots replace.
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or in amputees), from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system lets the end-user retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE error in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
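The calibration idea above, pairing each recorded gaze sample with the robot's known 3D position as it traces the space-filling curve, amounts to fitting a regression over thousands of (gaze feature, 3D target) pairs. The sketch below assumes a simple linear map fit by least squares on synthetic data; the actual GT3D calibration model is not specified here:

```python
# Assumed sketch: fit a linear gaze-feature -> 3D-position map by least
# squares over many calibration pairs, as gathered along a Peano-curve sweep.
# The linear model and synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
A_true = rng.normal(size=(3, 4))             # unknown gaze->3D mapping
G = rng.normal(size=(2000, 4))               # gaze features (incl. bias term)
P = G @ A_true.T + rng.normal(scale=0.01, size=(2000, 3))  # noisy 3D targets

A_hat, *_ = np.linalg.lstsq(G, P, rcond=None)   # fitted mapping, shape (4, 3)
rmse = np.sqrt(np.mean((G @ A_hat - P) ** 2))
print(f"per-axis RMSE ~ {rmse:.3f}")            # residual near the noise level
```

The benefit of the dense, automated sweep is visible even in this toy: with thousands of pairs the fit averages out sensor noise far better than a dozen fixation points could.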
Force-sensed interface for control and training space robot
NASA Astrophysics Data System (ADS)
Moiseev, O. S.; Sarsadskikh, A. S.; Povalyaev, N. D.; Gorbunov, V. I.; Kulakov, F. M.; Vasilev, V. V.
2018-05-01
A method of positional and force-torque control of robots is proposed. Prototypes of the system and of the master handle have been created. Algorithms for bias estimation and gravity compensation of the force-torque sensor, and for force-torque trajectory correction, are described.
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and an HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls, are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
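The matching method above, interface classes suit application classes where their performance regions overlap, can be sketched as a simple interval-intersection test on latency and throughput. The interface and application numbers below are illustrative placeholders, not the paper's measured values:

```python
# Toy version of the overlap test: an HMI class suits an application class
# when their latency and throughput ranges intersect. Values are made up.
def overlaps(a, b):
    """True if closed intervals a=(lo, hi) and b=(lo, hi) intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

def compatible(hmi, app):
    return (overlaps(hmi["latency_s"], app["latency_s"])
            and overlaps(hmi["throughput_bps"], app["throughput_bps"]))

bmi   = {"latency_s": (1.0, 10.0),  "throughput_bps": (0.1, 5.0)}    # slow, low-rate
rover = {"latency_s": (0.5, 30.0),  "throughput_bps": (0.5, 3.0)}    # tolerant
arm   = {"latency_s": (0.01, 0.2),  "throughput_bps": (10.0, 100.0)} # demanding

print(compatible(bmi, rover), compatible(bmi, arm))  # True False
```

With these placeholder regions the rover (tolerant of slow, low-rate control) falls inside the BMI's capability, while the manipulator arm does not, mirroring the paper's conclusion that simpler devices are the natural first targets for BMI control.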
An assembly-type master-slave catheter and guidewire driving system for vascular intervention.
Cha, Hyo-Jeong; Yi, Byung-Ju; Won, Jong Yun
2017-01-01
Current vascular intervention inevitably exposes both the operator and the patient to a large amount of X-ray radiation during the procedure. The purpose of this study is to propose a new catheter driving system which assists the operator in aspects of less X-ray exposure and a convenient user interface. For this, an assembly-type 4-degree-of-freedom master-slave system was designed and tested to verify its efficiency. First, current vascular intervention procedures are analyzed to develop a new robotic procedure that enables us to use conventional vascular intervention devices such as catheters and guidewires which are commercially available. The parts of the slave robot which contact the devices were designed to be easily assembled and disassembled from the main body of the slave robot for sterilization. A master robot is compactly designed to conduct insertion and rotational motion and is able to switch from the guidewire driving mode to the catheter driving mode or vice versa. A phantom resembling the human arteries was developed, and the master-slave robotic system was tested using the phantom. The contact force of the guidewire tip according to the shape of the arteries is measured and reflected to the user through the master robot during the phantom experiment. This system can drastically reduce radiation exposure by replacing human effort with a robotic system for high-radiation-exposure procedures. Further benefits of the proposed robot system are low cost, by employing currently available devices, and an easy human interface.
Stanford Aerospace Research Laboratory research overview
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and human both perform expert tasks. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions.
In this paper, we present the software and hardware framework that enable a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
Robot Tracking of Human Subjects in Field Environments
NASA Technical Reports Server (NTRS)
Graham, Jeffrey; Shillcutt, Kimberly
2003-01-01
Future planetary exploration will involve both humans and robots. Understanding and improving their interaction is a main focus of research in the Intelligent Systems Branch at NASA's Johnson Space Center. By teaming intelligent robots with astronauts on surface extra-vehicular activities (EVAs), safety and productivity can be improved. The EVA Robotic Assistant (ERA) project was established to study the issues of human-robot teams, to develop a testbed robot to assist space-suited humans in exploration tasks, and to experimentally determine the effectiveness of an EVA assistant robot. A companion paper discusses the ERA project in general, its history starting with ASRO (Astronaut-Rover project), and the results of recent field tests in Arizona. This paper focuses on one aspect of the research, robot tracking, in greater detail: the software architecture and algorithms. The ERA robot is capable of moving towards and/or continuously following mobile or stationary targets or sequences of targets. The contributions made by this research include how the low-level pose data is assembled, normalized and communicated, how the tracking algorithm was generalized and implemented, and qualitative performance reports from recent field tests.
Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan
2017-01-01
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure classification accuracy of visual signal interfaces, and provide an integration example including two robotic platforms.
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
A brain-machine interface (BMI) can be used to control a robotic arm to assist people with paralysis in performing activities of daily living. However, it is still a complex task for BMI users to control the process of object grasping and lifting with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we propose a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) based BMI and eye tracking for an intuitive and effective control of the robotic arm. Experiments for object manipulation tasks while avoiding an obstacle in the workspace are designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects with AR feedback was reduced significantly, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to those trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device, enabling a person to control devices using thought alone. It has therefore gained significant interest for the design of assistive devices and systems for people with disabilities. BMIs have also been proposed to replace humans with robots in the performance of dangerous tasks such as explosives handling and defusing, hazardous-materials handling, and firefighting. There are two main types of BMI, distinguished by how brain activity is measured: invasive and non-invasive. Invasive BMIs can provide pristine signals, but they are expensive and the required surgery may lead to undesirable side effects. Recent advances in non-invasive BMIs have opened the possibility of generating robust control signals from noisy brain-activity signals such as EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit; noise filtering and signal processing; identification and extraction of relevant brain-wave features; and, finally, an algorithm to determine control signals from those features. In this work, we developed a wireless brain-machine interface on a small platform and established a BMI that can control the movement of a robot using features extracted from EEG and EOG signals. The system records EEG and classifies it into alpha, beta, delta, and theta waves; the classified brain waves are then used to define the wearer's attention level, which controls the robot's acceleration, deceleration, and stopping. In addition, left and right eyeball movements control the direction of the robot.
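The band classification and attention-based speed control could look roughly like the following sketch. The band edges are the conventional ones, and the beta/alpha attention index is a common heuristic rather than the authors' exact algorithm:

```python
# Conventional EEG band boundaries in Hz, matching the four waves
# classified in the abstract.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(psd):
    """Sum spectral power per band. `psd` is a list of (freq_hz, power)
    pairs, e.g. from an FFT of a short EEG window (sketch only)."""
    totals = {name: 0.0 for name in BANDS}
    for f, p in psd:
        for name, (lo, hi) in BANDS.items():
            if lo <= f < hi:
                totals[name] += p
    return totals

def attention_level(psd):
    """A common heuristic: attention rises with the beta/alpha ratio."""
    bp = band_powers(psd)
    return bp["beta"] / (bp["alpha"] + 1e-9)

def speed_command(psd, accel_threshold=1.0):
    """Accelerate while attention is high, decelerate/stop otherwise."""
    return "accelerate" if attention_level(psd) > accel_threshold else "decelerate"
```

A relaxed wearer (strong alpha) would thus slow the robot, while focused attention (relatively strong beta) speeds it up.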
Crystallization screening test for the whole-cell project on Thermus thermophilus HB8
Iino, Hitoshi; Naitow, Hisashi; Nakamura, Yuki; Nakagawa, Noriko; Agari, Yoshihiro; Kanagawa, Mayumi; Ebihara, Akio; Shinkai, Akeo; Sugahara, Mitsuaki; Miyano, Masashi; Kamiya, Nobuo; Yokoyama, Shigeyuki; Hirotsu, Ken; Kuramitsu, Seiki
2008-01-01
It was essential for the structural genomics of Thermus thermophilus HB8 to efficiently crystallize a number of proteins. To this end, three conventional robots (an HTS-80, sitting-drop vapour diffusion; a Crystal Finder, hanging-drop vapour diffusion; and a TERA, modified microbatch) were subjected to a crystallization condition screening test involving 18 proteins from T. thermophilus HB8. In addition, a TOPAZ (microfluidic free-interface diffusion) robot designed specifically for initial screening was also briefly examined. The number of diffraction-quality crystals and the time of appearance of crystals increased in the order HTS-80, Crystal Finder, TERA. With the HTS-80 and Crystal Finder, the time of appearance was short and the rate of salt crystallization was low. With the TERA, the number of diffraction-quality crystals was high, while the time of appearance was long and the rate of salt crystallization was relatively high. For the protein samples exhibiting low crystallization success rates, there were few crystallization conditions that were common to the robots used. In some cases, the success rate depended greatly on the robot used. The TOPAZ showed the shortest time of appearance and the highest success rate, although the crystals obtained were too small for diffraction studies. These results showed that the combined use of different robots significantly increases the chance of obtaining crystals, especially for proteins exhibiting low crystallization success rates. The structures of 360 of 944 purified proteins have been successfully determined through the combined use of an HTS-80 and a TERA. PMID:18540056
Sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1985-01-01
A basic problem in the application of robots to welding was studied: how to guide a torch along a weld seam using sensory information. The goal was to improve the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are geometrically too complex for conventional automation and are therefore done by hand. The particular problems associated with SSME manufacturing and weld-seam tracking were analyzed, with an emphasis on computer-vision methods. Special interface software was developed for the MINC computer, allowing it to serve both as a test system to check out the robot interface software and, later, as a development tool for further investigation of sensory systems to be incorporated in welding procedures.
Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis
2016-08-01
Reaching and grasping are two of the most affected functions after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation with Robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface to detect the user's movement intentions to trigger the assistance. The platform has been tested in a single session with a stroke patient. The results show how the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.
Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2
2015-03-01
Distribution is unlimited. Supplementary notes: DCS Corporation, Alexandria, VA. Abstract: In the past, robot operation has been a high-cognitive...increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the...subsequent course of action planned by the robot. Eight active-duty, US Army Soldiers completed 16 scenario missions using an operator interface
Development of hardwares and computer interface for a two-degree-of-freedom robot
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Pooran, Farhad J.
1987-01-01
The research results obtained are reviewed. The robot actuator, the selection of the data acquisition system, and the design of the power amplifier are discussed first, followed by the machine design of the robot manipulator and the integration of the developed hardware into the open-loop system. Current and future research work is also addressed.
A self-paced motor imagery based brain-computer interface for robotic wheelchair control.
Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng
2011-10-01
This paper presents a simple self-paced motor-imagery-based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. So that users could train their motor-imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed users to practice motor-imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that used for the simulated robot. Our emphasis is on allowing more potential users to operate the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice in the simulated scenario and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.
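One control tick of such a 2-class self-paced protocol with automatic obstacle avoidance might be sketched as follows; thresholds and command names are illustrative, not the paper's:

```python
def wheelchair_step(bci_output, obstacle_distance_m, safety_m=0.6):
    """One tick of a 2-class self-paced wheelchair protocol (sketch).
    `bci_output` is 'left', 'right', or None (the self-paced idle state,
    when no intentional control is detected); a laser range reading
    overrides the user when an obstacle is too close."""
    if obstacle_distance_m < safety_m:
        return "stop"          # automatic obstacle avoidance takes over
    if bci_output == "left":
        return "turn_left"
    if bci_output == "right":
        return "turn_right"
    return "cruise"            # no command detected: keep current motion
```

The key self-paced property is the `None` branch: the system must behave sensibly during the (frequent) periods when the user is not issuing any mental command.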
Development of a robotic device for facilitating learning by children who have severe disabilities.
Cook, Albert M; Meng, Max Q H; Gu, Jason J; Howery, Kathy
2002-09-01
This paper presents technical aspects of a robot manipulator developed to facilitate learning by young children who are generally unable to grasp objects or speak. The severity of these physical disabilities also limits assessment of their cognitive and language skills and abilities. The CRS robot manipulator was adapted for use by children who have disabilities. Our emphasis is on the technical control aspects of the development of an interface and communication environment between the child and the robot arm. The system is designed so that each child has user control and control procedures that are individually adapted. Control interfaces include large push buttons, keyboards, laser pointer, and head-controlled switches. Preliminary results have shown that young children who have severe disabilities can use the robotic arm system to complete functional play-related tasks. Developed software allows the child to accomplish a series of multistep tasks by activating one or more single switches. Through a single switch press the child can replay a series of preprogrammed movements that have a development sequence. Children using this system engaged in three-step sequential activities and were highly responsive to the robotic tasks. This was in marked contrast to other interventions using toys and computer games.
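The single-switch replay of preprogrammed multistep movement sequences could be sketched as follows; the task and movement names are invented for illustration:

```python
class SwitchReplay:
    """Single-switch access: each press replays the next preprogrammed
    robot movement in a multistep task sequence, so a child can drive a
    three-step activity with one button. Names are illustrative."""

    def __init__(self, sequences):
        self.sequences = sequences             # task name -> list of movements
        self.step = {name: 0 for name in sequences}

    def press(self, name):
        """Advance the named task by one step and return the movement
        to replay; wraps around after the final step."""
        moves = self.sequences[name]
        move = moves[self.step[name] % len(moves)]
        self.step[name] += 1
        return move
```

Adapting the interface per child then amounts to choosing which switch maps to which named sequence.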
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms.
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms, a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation on spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as BCI modality.
First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)
NASA Technical Reports Server (NTRS)
Griffin, Sandy (Editor)
1987-01-01
Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.
A Human Machine Interface for EVA
NASA Astrophysics Data System (ADS)
Hartmann, L.
EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a head-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can respond with speech also in order to confirm selections, provide status and feedback and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information, including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field, including robot joint torques, end effector configuration, procedure checklists and virtual control panels.
With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA, including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command. Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques.
Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Reducing their dependence on such personnel may under many circumstances, however, improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.
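The direct-manipulation mapping described above, where displacements of the user's hand produce corresponding displacements of the robot end effector, can be sketched minimally as below. Coordinate frames are assumed to be already aligned, and all names and the scale factor are illustrative:

```python
def map_hand_to_effector(hand_pos, hand_origin, effector_origin, scale=1.0):
    """Direct-manipulation teleoperation sketch: displace the end
    effector by the (optionally scaled) displacement of the user's
    hand, as measured by a sensor such as Shapetape. Positions are
    (x, y, z) tuples in metres (illustrative)."""
    return tuple(e + scale * (h - h0)
                 for h, h0, e in zip(hand_pos, hand_origin, effector_origin))
```

A scale below 1.0 would give fine manipulation; above 1.0, large workspace moves from small hand motions.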
NASA Technical Reports Server (NTRS)
Stecklein, Jonette
2017-01-01
NASA has held an annual robotic mining competition for teams of university/college students since 2010. The competition is yearlong, making it suitable for a senior university engineering capstone project. It encompasses the full project life cycle, from ideation of a robot design to actual tele-operation of the robot in simulated Mars conditions, mining and collecting simulated regolith. A major required element of the competition is a Systems Engineering Paper in which each team describes the systems engineering approaches used on its project. The score for the Systems Engineering Paper contributes 25% of the team's score for the competition's grand prize. The required use of systems engineering introduces the students to an intense practical application of systems engineering throughout a full project life cycle.
Basic Operational Robotics Instructional System
NASA Technical Reports Server (NTRS)
Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John
2013-01-01
The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
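The joint-to-end-effector mapping that BORIS simulates can be illustrated with a minimal 2-link planar analogue (BORIS itself is a six-degree-of-freedom manipulator; the link lengths here are arbitrary):

```python
import math

def fk_2link(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a 2-link planar arm: joint angles (rad)
    to end-effector position (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Closed-form inverse kinematics (elbow-down solution): recover
    the joint angles that place the end effector at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))      # clamp for safety
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A full trainer like BORIS chains six such joint transforms (with the offsets the abstract mentions) instead of two, but the forward/inverse round trip is the same idea.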
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the feasibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that a simple image processing technique, combined with BCI, can further help make these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze and recognizes whether an encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore, and through a P300-based BCI to let the surrogate robot recognize his or her favorite objects. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. An important implication for future work is that hybridizing simple BCI protocols provides extended controllability for carrying out complicated tasks even with a low-cost system. PMID:24023953
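The division of labor among the three BCI protocols might be sketched as a mode-based dispatcher; the event and command names here are invented for illustration:

```python
def dispatch(mode, event):
    """Route detected BCI events by operating mode, mirroring the
    abstract's split: SSVEP/ERD drive navigation, P300 picks objects.
    Event and command strings are illustrative."""
    if mode == "navigate":
        return {"ssvep_left": "turn_left",
                "ssvep_right": "turn_right",
                "erd": "walk_forward"}.get(event, "idle")
    if mode == "recognize":
        # P300 events carry the flashed candidate's label.
        return f"select:{event}" if event.startswith("p300_") else "idle"
    return "idle"
```

Keeping the protocols in separate modes is what lets each simple BCI stay simple while the combination handles a complex task.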
Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf
2012-01-01
The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was observed not only when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality.
We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142
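The crossmodal congruency effect (CCE) used as the dependent measure in such studies is simply the mean reaction-time difference between incongruent and congruent trials. A minimal sketch, with a made-up trial format:

```python
def crossmodal_congruency_effect(trials):
    """CCE = mean incongruent RT minus mean congruent RT (ms); a larger
    CCE means the visual distractor interferes more with the tactile
    judgment. `trials` is a list of (condition, rt_ms) pairs, where
    condition is 'congruent' or 'incongruent' (illustrative format)."""
    def mean_rt(condition):
        rts = [rt for c, rt in trials if c == condition]
        return sum(rts) / len(rts)
    return mean_rt("incongruent") - mean_rt("congruent")
```

Comparing CCEs across hand/tool postures is then what reveals the remapping of peripersonal space described above.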
RoboJockey: Designing an Entertainment Experience with Robots.
Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo
2016-01-01
The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.
Mouraviev, Vladimir; Klein, Martina; Schommer, Eric; Thiel, David D; Samavedi, Srinivas; Kumar, Anup; Leveillee, Raymond J; Thomas, Raju; Pow-Sang, Julio M; Su, Li-Ming; Mui, Engy; Smith, Roger; Patel, Vipul
2016-03-01
In pursuit of improving the quality of residents' education, the Southeastern Section of the American Urological Association (SES AUA) hosts an annual robotic training course for its residents. The workshop involves performing a robotic live porcine nephrectomy as well as virtual reality robotic training modules. The aim of this study was to evaluate the workload levels of urology residents when performing a live porcine nephrectomy and the virtual reality robotic surgery training modules employed during this workshop. Twenty-one residents from 14 SES AUA programs participated in 2015. On the first day, residents were taught with didactic lectures by faculty. On the second day, trainees were divided into two groups. Half were asked to perform training modules on the Mimic da Vinci-Trainer (MdVT, Mimic Technologies, Inc., Seattle, WA, USA) for 4 h, while the other half performed nephrectomy procedures on a live porcine model using the da Vinci Si robot (Intuitive Surgical Inc., Sunnyvale, CA, USA). After the first 4 h the groups changed places for another 4-h session. All trainees were asked to complete the 1-page NASA-TLX questionnaire following both the MdVT simulation and live animal model sessions. A significant interface-by-TLX interaction was observed and was further analyzed to determine whether the scores of each of the six TLX scales varied across the two interfaces. The means of the TLX scores observed at the two interfaces were similar. The only significant difference was observed for frustration, which was significantly higher for the simulation than the animal model, t(20) = 4.12, p = 0.001. This could be due to trainees' familiarity with live anatomical structures compared with skill-set simulations, which remain a real challenge for novice surgeons. Another reason might be that the simulator provides performance metrics for specific performance traits as well as composite scores for entire exercises.
Novice trainees experienced substantial mental workload while performing tasks on both the simulator and the live animal model during the robotics course. The NASA-TLX results demonstrated that the live animal model and the MdVT were similar in difficulty, imposing comparable workload profiles.
DOT National Transportation Integrated Search
2005-01-01
This report presents the results of a project to finalize and apply a crawling robotic system for the remote visual inspection of high-mast light poles. The first part of the project focused on finalizing the prototype crawler robot hardware and cont...
Analysis on the workspace of palletizing robot based on AutoCAD
NASA Astrophysics Data System (ADS)
Li, Jin-quan; Zhang, Rui; Guan, Qi; Cui, Fang; Chen, Kuan
2017-10-01
In this paper, a four-degree-of-freedom articulated palletizing robot is taken as the object of research. Based on an analysis of the robot's overall configuration, a kinematic mathematical model is established by the D-H method to determine the workspace of the robot. To meet the needs of design and analysis, AutoCAD secondary-development technology and the AutoLisp language are used to develop an AutoCAD-based 2D and 3D workspace simulation interface program for the palletizing robot. Finally, using the AutoCAD plugin, the influence of the structural parameters on the shape and position of the workspace is analyzed as each parameter of the robot is varied separately. This study lays a foundation for the design, control and planning of palletizing robots.
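The D-H forward-kinematics step described above can be sketched as follows. The link parameters, joint limits, and Monte-Carlo sampling here are illustrative placeholders, not the paper's actual robot dimensions or its AutoCAD implementation:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def end_effector_position(joint_angles, dh_params):
    """Chain the per-link transforms and return the tool-tip position."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T[:3, 3]

# Hypothetical (d, a, alpha) link parameters for a 4-DOF arm.
DH_PARAMS = [(0.5, 0.2, np.pi / 2), (0.0, 0.8, 0.0),
             (0.0, 0.6, 0.0), (0.0, 0.1, 0.0)]

def sample_workspace(n=1000, seed=0):
    """Monte-Carlo sample of reachable end-effector points."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(-np.pi / 2, np.pi / 2, size=(n, 4))
    return np.array([end_effector_position(q, DH_PARAMS) for q in angles])

points = sample_workspace()
print(points.shape)  # (1000, 3)
```

Plotting such a point cloud (or its planar sections) gives the 2D and 3D workspace views that the paper generates inside AutoCAD.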
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly-evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position / orientation sensors to produce a range of interface modalities from the flat panel (windowed or stereoscopic) screen displays to head mounted/head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
Castellini, Claudio; Artemiadis, Panagiotis; Wininger, Michael; Ajoudani, Arash; Alimusaj, Merkur; Bicchi, Antonio; Caputo, Barbara; Craelius, William; Dosen, Strahinja; Englehart, Kevin; Farina, Dario; Gijsberts, Arjan; Godfrey, Sasha B.; Hargrove, Levi; Ison, Mark; Kuiken, Todd; Marković, Marko; Pilarski, Patrick M.; Rupp, Rüdiger; Scheme, Erik
2014-01-01
One of the hottest topics in rehabilitation robotics is proper control of prosthetic devices. Despite decades of research, the state of the art is dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on the Present and Future of Non-invasive Peripheral Nervous System (PNS)–Machine Interfaces (PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI was selected to denote human–machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved. PMID:25177292
Lyons, Kenneth R; Joshi, Sanjay S
2013-06-01
Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wi-Fi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
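The core idea, two band powers extracted from one sEMG window driving a 2D command, can be sketched as below. The band edges, sampling rate, and linear mapping are assumptions for illustration; the paper's actual bands and decoder are not specified here:

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power of `signal` within [f_lo, f_hi) Hz via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

def cursor_command(window, fs=1000.0):
    """Map two band powers of a single sEMG window to a 2D command.
    Band edges (60-120 Hz, 120-250 Hz) are illustrative choices."""
    vx = band_power(window, fs, 60.0, 120.0)
    vy = band_power(window, fs, 120.0, 250.0)
    return vx, vy

fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
window = np.sin(2 * np.pi * 80 * t)  # synthetic burst at 80 Hz
vx, vy = cursor_command(window, fs)
print(vx > vy)  # True: the energy sits in the lower band
```

In the actual interface, the user learns to shift muscle activity between such bands to steer the cursor toward on-screen targets, each of which maps to a fixed robot motion command.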
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron
2001-01-01
NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0-g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather, to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, instead Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time, and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) in mapping the monkey's neural states to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
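A toy actor-critic loop on simulated "neural" features illustrates the incremental, feedback-only learning the abstract describes. The feature model, network sizes, and learning rates are illustrative assumptions, not the study's decoder:

```python
import numpy as np

rng = np.random.default_rng(42)

N_FEATURES, N_ACTIONS = 8, 2
actor_w = np.zeros((N_ACTIONS, N_FEATURES))   # policy weights
critic_w = np.zeros(N_FEATURES)               # value-estimate weights
ALPHA_A, ALPHA_C = 0.1, 0.1                   # learning rates (assumed)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def trial(target):
    """One two-target trial: choose an action from simulated neural
    features, receive a binary reward, update actor and critic."""
    global actor_w, critic_w
    # Hypothetical neural state: target identity plus noise.
    s = rng.normal(0.0, 0.3, N_FEATURES)
    s[target] += 1.0
    probs = softmax(actor_w @ s)
    a = rng.choice(N_ACTIONS, p=probs)
    r = 1.0 if a == target else 0.0           # very basic feedback signal
    delta = r - critic_w @ s                  # reward-prediction error
    critic_w += ALPHA_C * delta * s           # incremental critic update
    grad = -probs[:, None] * s[None, :]       # softmax policy gradient
    grad[a] += s
    actor_w += ALPHA_A * delta * grad         # incremental actor update
    return r

rewards = [trial(rng.integers(2)) for _ in range(2000)]
print(round(np.mean(rewards[-200:]), 2))
```

Note that no labeled training set exists anywhere: the model is shaped trial by trial from the scalar reward alone, which is the property that distinguishes this decoder from supervised approaches.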
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drink cans. A further objective is to design a mobile robot able to estimate the position of rubbish relative to the robot, and then to approach the rubbish based on that estimated position. This paper explains the types of image processing, detection and recognition methods, and image filters considered. The project implements the RGB subtraction method as its primary detection system, together with an algorithm for distance measurement based on the image plane. The project is limited to using a computer webcam as the sensor; the robot can only approach the nearest rubbish within the camera's field of view, and only rubbish whose body contains the targeted RGB colour components.
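The RGB subtraction detector and the image-plane distance estimate might look like the following sketch. The colour threshold, camera height, and focal length are assumed values, not the project's calibration:

```python
import numpy as np

def detect_red_object(img, threshold=60):
    """Locate a predominantly red blob (e.g. a drink can) by RGB
    subtraction: redness = R - max(G, B), thresholded to a mask."""
    img = img.astype(np.int16)
    redness = img[..., 0] - np.maximum(img[..., 1], img[..., 2])
    mask = redness > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())     # blob centroid (x, y)

def estimate_distance(y_centroid, img_height,
                      cam_height_m=0.3, focal_px=500.0):
    """Rough ground-plane distance from the blob's image row.
    Camera height and focal length are illustrative assumptions."""
    dy = y_centroid - img_height / 2.0
    if dy <= 0:
        return float('inf')                   # at or above the horizon row
    return cam_height_m * focal_px / dy

frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[150:180, 100:130] = (200, 30, 30)       # synthetic red can
cx, cy = detect_red_object(frame)
print(cx, cy)  # 114 164
```

Steering toward the nearest detection then reduces to turning until the centroid's x coordinate sits at the image centre and driving forward while the estimated distance shrinks.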
ARK: Autonomous mobile robot in an industrial environment
NASA Technical Reports Server (NTRS)
Nickerson, S. B.; Jasiobedzki, P.; Jenkin, M.; Jepson, A.; Milios, E.; Down, B.; Service, J. R. R.; Terzopoulos, D.; Tsotsos, J.; Wilkes, D.
1994-01-01
This paper describes research on the ARK (Autonomous Mobile Robot in a Known Environment) project. The technical objective of the project is to build a robot that can navigate in a complex industrial environment using maps with permanent structures. The environment is not altered in any way by adding easily identifiable beacons; the robot relies on naturally occurring objects to use as visual landmarks for navigation. The robot is equipped with various sensors that can detect unmapped obstacles, landmarks and objects. In this paper we describe the robot's industrial environment, its architecture, and a novel combined range and vision sensor, along with our recent results in controlling the robot, in real-time detection of objects by their color, and in processing the robot's range and vision sensor data for navigation.
Interface colloidal robotic manipulator
Aronson, Igor; Snezhko, Oleksiy
2015-08-04
A magnetic colloidal system confined at the interface between two immiscible liquids and energized by an alternating magnetic field dynamically self-assembles into localized asters and arrays of asters. The colloidal system exhibits locomotion and shape change. By controlling a small external magnetic field applied parallel to the interface, structures can capture, transport, and position target particles.
Advanced Space Surface Systems Operations
NASA Technical Reports Server (NTRS)
Huffaker, Zachary Lynn; Mueller, Robert P.
2014-01-01
The importance of advanced surface systems is becoming increasingly relevant in the modern age of space technology. Specifically, projects pursued by the Granular Mechanics and Regolith Operations (GMRO) Lab are unparalleled in the field of planetary resource utilization. This internship opportunity involved projects that support properly utilizing natural resources from other celestial bodies. Beginning with the tele-robotic workstation, mechanical upgrades to specific portions of the workstation consoles were identified and successfully designed in concept, providing more scope for innovation and creativity in advanced robotic operations. Project RASSOR is a regolith excavator robot whose primary objective is to mine, store, and dump regolith efficiently on other planetary surfaces. Mechanical adjustments were made to improve this robot's functionality, although some minor system changes remained to be performed before the opportunity ended. On the topic of excavator robots, the notes taken by the GMRO staff during the 2013 and 2014 Robotic Mining Competitions were organized and analyzed for logistical purposes. Lessons learned from these annual competitions at Kennedy Space Center greatly influence the GMRO engineers and roboticists. Another project that GMRO staff support is Project Morpheus; support for this project included producing mathematical models of the eroded landing pad surface for the vertical testbed vehicle to predict a timeline for pad repair. Finally, this opportunity contributed to Project Neo, an effort outside the GMRO Lab that focuses on rocket propulsion systems. Additions were successfully installed on the support structure of an original vertical testbed rocket engine, making progress toward future test firings whose data will be analyzed by students affiliated with Rocket University.
Each project will be explained in further detail, as well as the full scope of the contributions made during this opportunity.
EXOS research on master controllers for robotic devices
NASA Technical Reports Server (NTRS)
Marcus, Beth A.; An, Ben; Eberman, Brian
1992-01-01
Two projects are currently being conducted by EXOS under the Small Business Innovation Research (SBIR) program with NASA. One project will develop a force feedback device for controlling robot hands, the other will develop an elbow and shoulder exoskeleton which can be integrated with other EXOS devices to provide whole robot arm and hand control. Aspects covered are the project objectives, important research issues which have arisen during the developments, and interim results of the projects. The Phase 1 projects currently underway will result in hardware prototypes and identification of research issues required for complete system development and/or integration.
2016-11-14
...necessary capability to build a high-density communication highway between 86 billion brain neurons and intelligent vehicles or robots. The final outcome of the INI using the TDT system...will be beneficial to wounded warriors suffering from loss of limb function, so that, using sophisticated bidirectional robotic limbs, these...
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
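The flavor of such a protocol, driving the error probability down despite noisy binary inputs, can be illustrated with a toy Bayesian scheme over a small symbol set. The query design and parameters below are illustrative, not the paper's information-theoretic construction:

```python
import numpy as np

def select_symbol(true_idx, n_symbols=8, p_correct=0.85,
                  n_queries=60, seed=1):
    """Select one of n_symbols from noisy binary answers.
    Each query asks whether the target lies in a random half of the
    symbols; the user's answer flips with probability 1 - p_correct.
    A Bayesian posterior over symbols absorbs the noise."""
    rng = np.random.default_rng(seed)
    post = np.full(n_symbols, 1.0 / n_symbols)
    for _ in range(n_queries):
        half = rng.choice(n_symbols, n_symbols // 2, replace=False)
        in_set = np.isin(np.arange(n_symbols), half)
        truth = bool(in_set[true_idx])
        # Simulated noisy EEG decoding of the user's yes/no answer.
        answer = truth if rng.random() < p_correct else not truth
        p_yes = np.where(in_set, p_correct, 1.0 - p_correct)
        post *= p_yes if answer else 1.0 - p_yes
        post /= post.sum()
    return int(np.argmax(post))

print(select_symbol(5))
```

With enough repeated queries the posterior concentrates on the intended symbol, which is the sense in which the user can specify a string in the path language "with vanishing error probability" even at low, noisy bit-rates.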
Rover Wheel-Actuated Tool Interface
NASA Technical Reports Server (NTRS)
Matthews, Janet; Ahmad, Norman; Wilcox, Brian
2007-01-01
A report describes an interface for utilizing some of the mobility features of a mobile robot for general-purpose manipulation of tools and other objects. The robot in question, now undergoing conceptual development for use on the Moon, is the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) rover, which is designed to roll over gentle terrain or walk over rough or steep terrain. Each leg of the robot is a six-degree-of-freedom general purpose manipulator tipped by a wheel with a motor drive. The tool interface includes a square cross-section peg, equivalent to a conventional socket-wrench drive, that rotates with the wheel. The tool interface also includes a clamp that holds a tool on the peg, and a pair of fold-out cameras that provides close-up stereoscopic images of the tool and its vicinity. The field of view of the imagers is actuated by the clamp mechanism and is specific to each tool. The motor drive can power any of a variety of tools, including rotating tools for helical fasteners, drills, and such clamping tools as pliers. With the addition of a flexible coupling, it could also power another tool or remote manipulator at a short distance. The socket drive can provide very high torque and power because it is driven by the wheel motor.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into the actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems to plan the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed so as to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
NASA Astrophysics Data System (ADS)
Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory
2005-05-01
Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.
Yandell, Matthew B; Quinlivan, Brendan T; Popov, Dmitry; Walsh, Conor; Zelik, Karl E
2017-05-18
Wearable assistive devices have demonstrated the potential to improve mobility outcomes for individuals with disabilities, and to augment healthy human performance; however, these benefits depend on how effectively power is transmitted from the device to the human user. Quantifying and understanding this power transmission is challenging due to complex human-device interface dynamics that occur as biological tissues and physical interface materials deform and displace under load, absorbing and returning power. Here we introduce a new methodology for quickly estimating interface power dynamics during movement tasks using common motion capture and force measurements, and then apply this method to quantify how a soft robotic ankle exosuit interacts with and transfers power to the human body during walking. We partition exosuit end-effector power (i.e., power output from the device) into power that augments ankle plantarflexion (termed augmentation power) vs. power that goes into deformation and motion of interface materials and underlying soft tissues (termed interface power). We provide empirical evidence of how human-exosuit interfaces absorb and return energy, reshaping exosuit-to-human power flow and resulting in three key consequences: (i) During exosuit loading (as applied forces increased), about 55% of exosuit end-effector power was absorbed into the interfaces. (ii) However, during subsequent exosuit unloading (as applied forces decreased) most of the absorbed interface power was returned viscoelastically. Consequently, the majority (about 75%) of exosuit end-effector work over each stride contributed to augmenting ankle plantarflexion. (iii) Ankle augmentation power (and work) was delayed relative to exosuit end-effector power, due to these interface energy absorption and return dynamics. 
Our findings elucidate the complexities of human-exosuit interface dynamics during transmission of power from assistive devices to the human body, and provide insight into improving the design and control of wearable robots. We conclude that in order to optimize the performance of wearable assistive devices it is important, throughout design and evaluation phases, to account for human-device interface dynamics that affect power transmission and thus human augmentation benefits.
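The power partition described above can be expressed numerically. The stride profiles below are synthetic shapes chosen to echo the reported behaviour (delayed augmentation power, with roughly three quarters of end-effector work reaching the ankle); they are not measured data:

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral (avoids NumPy-version API differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 1.0, 101)                 # one stride, time (s)
p_end = 40.0 * np.sin(np.pi * t) ** 2          # end-effector power (W)
# Augmentation power: delayed and reduced, mimicking interface
# energy absorption and its viscoelastic return (synthetic shape).
p_aug = 30.0 * np.sin(np.pi * np.clip(t - 0.1, 0.0, 1.0)) ** 2
p_interface = p_end - p_aug                    # into interface/soft tissue (W)

work_end = trapz(p_end, t)                     # end-effector work (J)
work_aug = trapz(p_aug, t)                     # augmentation work (J)
print(round(work_aug / work_end, 2))           # fraction of work that augments
```

The point of the partition is that instantaneous interface power can be large during loading yet mostly cancel over the stride, so per-stride work ratios, not peak powers, are the fair measure of transmission efficiency.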
Human-machine interfaces based on EMG and EEG applied to robotic systems.
Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo
2008-03-26
Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other is based on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only modest hardware and software resources, so that they are, computationally and financially speaking, low cost solutions. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on a mobile robot and a robotic manipulator as well. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to accomplish ten eye blinks with each eye, in order to test the eye blink detection algorithm. An average success rate of about 95%, reached by individuals able to blink each eye independently, supported the conclusion that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered from meningitis or epilepsy) to test the system. All of them managed to operate the HMI in only one training session. Most of them learnt how to use the HMI in less than 15 minutes; the minimum and maximum training times observed were 3 and 50 minutes, respectively. This work is the initial part of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the autonomous wheelchair thus obtained to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, aiming at using it to help individuals with severe motor dysfunctions.
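A threshold-based blink detector on a rectified EMG trace, offered as a hedged stand-in for the paper's (unspecified) detection algorithm, could be as simple as:

```python
import numpy as np

def detect_blinks(emg, fs, threshold, refractory_s=0.3):
    """Count blink events: a sample above threshold starts an event,
    and a refractory window suppresses re-triggering on the same blink.
    Threshold and refractory duration are illustrative assumptions."""
    envelope = np.abs(emg)                     # rectified signal
    refractory = int(refractory_s * fs)
    blinks, last = [], -refractory
    for i, v in enumerate(envelope):
        if v > threshold and i - last >= refractory:
            blinks.append(i)
            last = i
    return blinks

fs = 500.0
t = np.arange(0, 5.0, 1.0 / fs)
emg = 0.05 * np.sin(2 * np.pi * 50 * t)        # baseline muscle activity
for onset in (1.0, 2.5, 4.0):                  # three synthetic blinks
    idx = (t >= onset) & (t < onset + 0.1)
    emg[idx] += 1.0
print(len(detect_blinks(emg, fs, threshold=0.5)))  # 3
```

Distinguishing left-eye from right-eye blinks, as in the experiments, would require one such detector per electrode site.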
Internet Based Robot Control Using CORBA Based Communications
2009-12-01
Proceedings of the IADIS International Conference WWW/Internet, ICWI 2002, pp. 485–490. [5] Flanagan, David, Farley, Jim, Crawford, William, and...Conference on Robotics and Automation, ICRA'00, pp. 2019–2024. [7] Schulz, D., Burgard, W., Cremers, A., Fox, D., and Thrun, S. (2000), Web interfaces
SUBTLE: Situation Understanding Bot through Language and Environment
2016-01-06
a 4-day “hackathon” by Stuart Young’s small robots group, which successfully ported the SUBTLE MURI NLP robot interface to the Packbot platform they...null element restoration, a step typically ignored in NLP systems, allows for correct parsing of imperatives and questions, critical structures
A Project-Based Biologically-Inspired Robotics Module
ERIC Educational Resources Information Center
Crowder, R. M.; Zauner, K.-P.
2013-01-01
The design of any robotic system requires input from engineers from a variety of technical fields. This paper describes a project-based module, "Biologically-Inspired Robotics," that is offered to Electronics and Computer Science students at the University of Southampton, U.K. The overall objective of the module is for student groups to…
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
Preparing project managers for faster-better-cheaper robotic planetary missions
NASA Technical Reports Server (NTRS)
Gowler, P.; Atkins, K.
2003-01-01
The authors have developed and implemented a week-long workshop for Jet Propulsion Laboratory Project Managers, designed around the development phases of the JPL Project Life Cycle. The workshop emphasizes the specific activities and deliverables that pertain to JPL managers of NASA robotic space exploration and instrument development projects.
Bio-robots automatic navigation with electrical reward stimulation.
Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2012-01-01
Bio-robots controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, with the animals' own intelligence ignored. This paper proposes a new method to realize automatic navigation for bio-robots, with electrical micro-stimulation as real-time rewards. Due to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and will correct its direction spontaneously when rewards are withheld. In navigation experiments, the rat-robots learned the control regime in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.
LEGO Mindstorms NXT for elderly and visually impaired people in need: A platform.
Al-Halhouli, Ala'aldeen; Qitouqa, Hala; Malkosh, Nancy; Shubbak, Alaa; Al-Gharabli, Samer; Hamad, Eyad
2016-07-27
This paper presents the employment of LEGO Mindstorms NXT robotics as the core component of a low cost multidisciplinary platform for assisting elderly and visually impaired people. The LEGO Mindstorms system offers a plug-and-play programmable robotics toolkit, incorporating construction guides, microcontrollers and sensors, all connected via a comprehensive programming language. It facilitates, without special training and at low cost, the use of such a device for interpersonal communication and for handling multiple tasks required by elderly and visually impaired people in need. The research project provides a model for larger-scale implementation, tackling the issues of creating additional functions in order to assist people in need. The new functions were built and programmed using MATLAB through a user friendly Graphical User Interface (GUI). The power consumption problem was resolved and Wi-Fi connectivity was integrated, while incorporating a GPS application on smartphones enhanced the guiding and tracking functions. We believe the system can be developed and expanded to encompass a range of applications beyond the initial design, which supports only a limited number of pre-described protocols. However, the beneficiaries of the proposed research would be limited to elderly people who require assistance within their household, with the assistive robot offering a low-cost solution for a highly demanding health circumstance.
Analysis and prediction of meal motion by EMG signals
NASA Astrophysics Data System (ADS)
Horihata, S.; Iwahara, H.; Yano, K.
2007-12-01
The lack of carers for senior citizens and physically handicapped persons in our country has become a huge issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks for their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted for a human-machine interface that will enable intuitive operation. At this point, a method is being tested that seeks to prevent erroneous operations by interpreting the user's signals; this method matches intended motions with EMG signals.
ANSO study: evaluation in an indoor environment of a mobile assistance robotic grasping arm.
Coignard, P; Departe, J P; Remy Neris, O; Baillet, A; Bar, A; Drean, D; Verier, A; Leroux, C; Belletante, P; Le Guiet, J L
2013-12-01
To evaluate the reliability and functional acceptability of the “Synthetic Autonomous Majordomo” (SAM) robotic aid system (a mobile Neobotix base equipped with a semi-automatic vision interface and a Manus robotic arm). An open, multicentre, controlled study. We included 29 tetraplegic patients (23 patients with spinal cord injuries, 3 with locked-in syndrome and 4 with other disorders; mean ± SD age: 37.83 ± 13.3) and 34 control participants (mean ± SD age: 32.44 ± 11.2). The reliability of the user interface was evaluated in three multi-step scenarios: selection of the room in which the object to be retrieved was located (in the presence or absence of visual control by the user), selection of the object to be retrieved, the grasping of the object itself and the robot’s return to the user with the object. A questionnaire was used to assess the robot’s user acceptability. The SAM system was stable and reliable: both patients and control participants experienced few failures when completing the various stages of the scenarios. The graphic interface was effective for selecting and grasping the object, even in the absence of visual control. Users and carers were generally satisfied with SAM, although only a quarter of patients said that they would consider using the robot in their activities of daily living. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Interface evaluation for soft robotic manipulators
NASA Astrophysics Data System (ADS)
Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.
2006-05-01
The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Because of the many degrees of freedom (DOF) for the operator to control, such 'continuum' robotic limbs pose unique challenges for human operators: they do not map intuitively. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementing an error recovery function. In Experiment 2, only the CV mode was tested, with improved training of participants and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first usability test, confirming that the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and will improve future usability tests.
Considerations for human-machine interfaces in tele-operations
NASA Technical Reports Server (NTRS)
Newport, Curt
1991-01-01
Numerous factors impact on the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how he interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.
Application of industrial robots in automatic disassembly line of waste LCD displays
NASA Astrophysics Data System (ADS)
Wang, Sujuan
2017-11-01
In the automatic disassembly line for waste LCD displays, LCD displays are disassembled into plastic shells, metal shields, circuit boards, and LCD panels. Two industrial robots are used to cut the metal shields and remove the circuit boards in this automatic disassembly line. The functions of these two industrial robots, and the solutions to the critical issues of model selection, the interfaces with PLCs and the workflows, are described in detail in this paper.
ERIC Educational Resources Information Center
Faria, Carlos; Vale, Carolina; Machado, Toni; Erlhagen, Wolfram; Rito, Manuel; Monteiro, Sérgio; Bicho, Estela
2016-01-01
Robotics has been playing an important role in modern surgery, especially in procedures that require extreme precision, such as neurosurgery. This paper addresses the challenge of teaching robotics to undergraduate engineering students, through an experiential learning project of robotics fundamentals based on a case study of robot-assisted…
Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Bio-robots based on brain-computer interfaces (BCI) suffer from a lack of consideration of the animal's own characteristics in navigation. This paper proposes a new method for automatic bio-robot navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal itself. Given a graded electrical reward, the animal (e.g., a rat) seeks to maximize reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route through co-learning. This work provides significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
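The abstract does not detail the reward-generating algorithm, but the co-learning idea it describes (a graded reward driving route search) can be illustrated with a minimal tabular Q-learning sketch on a toy grid. All parameters, grid sizes, and reward values here are illustrative assumptions, not taken from the paper.

```python
import random

def train_reward_policy(grid_w=5, grid_h=5, goal=(4, 4), episodes=500,
                        alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Toy sketch: a graded reward (strong at the goal, mild step cost)
    is enough for an epsilon-greedy learner to converge on a route."""
    rng = random.Random(seed)
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    q = {}  # (state, action) -> estimated value

    def best(s):
        return max(range(4), key=lambda a: q.get((s, a), 0.0))

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(100):
            a = rng.randrange(4) if rng.random() < epsilon else best(s)
            dx, dy = actions[a]
            ns = (min(max(s[0] + dx, 0), grid_w - 1),
                  min(max(s[1] + dy, 0), grid_h - 1))
            # graded "electrical" reward: large at the goal, small penalty per step
            r = 10.0 if ns == goal else -0.1
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * max(q.get((ns, b), 0.0) for b in range(4))
                - q.get((s, a), 0.0))
            s = ns
            if s == goal:
                break

    # greedy rollout of the learned route
    s, path = (0, 0), [(0, 0)]
    for _ in range(30):
        dx, dy = actions[best(s)]
        s = (min(max(s[0] + dx, 0), grid_w - 1),
             min(max(s[1] + dy, 0), grid_h - 1))
        path.append(s)
        if s == goal:
            break
    return path
```

In the actual system the "agent" is the rat-robot and the reward is delivered as graded electrical stimulation; the sketch only shows why a graded reward suffices for convergence.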
A prototype home robot with an ambient facial interface to improve drug compliance.
Takacs, Barnabas; Hanak, David
2008-01-01
We have developed a prototype home robot to improve drug compliance. The robot is a small mobile device, capable of autonomous behaviour, as well as remotely controlled operation via a wireless datalink. The robot is capable of face detection and also has a display screen to provide facial feedback to help motivate patients and thus increase their level of compliance. An RFID reader can identify tags attached to different objects, such as bottles, for fluid intake monitoring. A tablet dispenser allows drug compliance monitoring. Despite some limitations, experience with the prototype suggests that simple and low-cost robots may soon become feasible for care of people living alone or in isolation.
Cloud-based robot remote control system for smart factory
NASA Astrophysics Data System (ADS)
Wu, Zhiming; Li, Lianzhong; Xu, Yang; Zhai, Jingmei
2015-12-01
With the development of internet technologies and the wide application of robots, there is a clear trend toward the integration of networks and robots. A cloud-based robot remote control system over networks for the smart factory is proposed, which enables remote users to control robots and thereby realize intelligent production. To achieve this, a three-layer system architecture is designed, comprising a user layer, a service layer and a physical layer. The remote control application running on the cloud server is developed on Microsoft Azure. Moreover, DIV+CSS technologies are used to design the human-machine interface, lowering maintenance cost and improving development efficiency. Finally, an experiment is implemented to verify the feasibility of the approach.
Development of a telepresence robot for medical consultation
NASA Astrophysics Data System (ADS)
Bugtai, Nilo T.; Ong, Aira Patrice R.; Angeles, Patrick Bryan C.; Cervera, John Keen P.; Ganzon, Rachel Ann E.; Villanueva, Carlos A. G.; Maniquis, Samuel Nazirite F.
2017-02-01
There are numerous efforts to add value to telehealth applications in the country. In this study, the design of a telepresence robot to facilitate remote medical consultations in the wards of the Philippine General Hospital is proposed. This includes the design of a robot capable of supporting a medical consultation with clear audio and video at both ends. It also provides the operating doctor full control of the telepresence robot through a user-friendly interface. The results show that the system provides a stable and reliable mobile medical service through the use of the telepresence robot.
A CLIPS-based expert system for the evaluation and selection of robots
NASA Technical Reports Server (NTRS)
Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.
1994-01-01
This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots are used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.
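The abstract does not reproduce the paper's CLIPS rules, but the filter-then-rank selection it describes can be sketched with plain Python predicates standing in for CLIPS rules. The attribute names, thresholds, and robot data below are hypothetical, purely for illustration.

```python
def select_robots(robots, task):
    """Rule-based selection sketch: stage 1 screens hard constraints
    (predicates stand in for CLIPS rules), stage 2 ranks the survivors
    by cost and repeatability. Attribute names are invented examples."""
    rules = [
        lambda r: r["payload_kg"] >= task["payload_kg"],
        lambda r: r["reach_mm"] >= task["reach_mm"],
        lambda r: r["repeatability_mm"] <= task["repeatability_mm"],
    ]
    feasible = [r for r in robots if all(rule(r) for rule in rules)]
    # cheaper robots with better repeatability rank first
    return sorted(feasible, key=lambda r: (r["cost"], r["repeatability_mm"]))

robots = [
    {"name": "A", "payload_kg": 5, "reach_mm": 700,
     "repeatability_mm": 0.05, "cost": 30000},
    {"name": "B", "payload_kg": 2, "reach_mm": 600,
     "repeatability_mm": 0.02, "cost": 20000},
]
task = {"payload_kg": 3, "reach_mm": 650, "repeatability_mm": 0.1}
ranked = select_robots(robots, task)  # robot B fails the payload rule
```

In CLIPS itself, each predicate would be a `defrule` asserting or retracting candidate facts; the Python form only mirrors the staged structure.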
Robotics Projects and Learning Concepts in Science, Technology and Problem Solving
ERIC Educational Resources Information Center
Barak, Moshe; Zadok, Yair
2009-01-01
This paper presents a study about learning and the problem solving process identified among junior high school pupils participating in robotics projects in the Lego Mindstorm environment. The research was guided by the following questions: (1) How do pupils come up with inventive solutions to problems in the context of robotics activities? (2)…
Pyro: A Python-Based Versatile Programming Environment for Teaching Robotics
ERIC Educational Resources Information Center
Blank, Douglas; Kumar, Deepak; Meeden, Lisa; Yanco, Holly
2004-01-01
In this article we describe a programming framework called Pyro, which provides a set of abstractions that allows students to write platform-independent robot programs. This project is unique because of its focus on the pedagogical implications of teaching mobile robotics via a top-down approach. We describe the background of the project, its…
An Intelligent Agent Approach for Teaching Neural Networks Using LEGO[R] Handy Board Robots
ERIC Educational Resources Information Center
Imberman, Susan P.
2004-01-01
In this article we describe a project for an undergraduate artificial intelligence class. The project teaches neural networks using LEGO[R] handy board robots. Students construct robots with two motors and two photosensors. Photosensors provide readings that act as inputs for the neural network. Output values power the motors and maintain the…
NASA Technical Reports Server (NTRS)
Welch, Richard V.; Edmonds, Gary O.
1994-01-01
The use of robotics in situations involving hazardous materials can significantly reduce the risk of human injuries. The Emergency Response Robotics Project, which began in October 1990 at the Jet Propulsion Laboratory, is developing a teleoperated mobile robot allowing HAZMAT (hazardous materials) teams to respond remotely to incidents involving hazardous materials. The current robot, called HAZBOT III, can assist in locating, characterizing, identifying, and mitigating hazardous material incidents without risking entry team personnel. The active involvement of the JPL Fire Department HAZMAT team has been vital in developing a robotic system that enables them to perform remote reconnaissance of a HAZMAT incident site. This paper provides a brief review of the history of the project, discusses the current system in detail, and presents other areas in which robotics can be applied to remove people from hazardous environments and operations.
Roberts, Luke; Park, Hae Won; Howard, Ayanna M
2012-01-01
Rehabilitation robots in home environments have the potential to dramatically improve quality of life for individuals who experience disabling circumstances due to injury or chronic health conditions. Unfortunately, although classes of robotic systems for rehabilitation exist, these devices are typically not designed for children. Since over 150 million children in the world live with a disability, this poses a unique challenge for deploying such robots for this target demographic. To overcome this barrier, we discuss a system that uses a wireless arm-glove input device to enable interaction with a robotic playmate during various play scenarios. Results from testing the system with 20 human subjects show that the system has potential, but certain aspects need to be improved before deployment with children.
Event-Based Control Strategy for Mobile Robots in Wireless Environments.
Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto
2015-12-02
In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to exchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a simple way. The solution has been checked on classical navigation algorithms, such as wall following and obstacle avoidance, in scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses communication resources more efficiently than the classical discrete-time strategy while achieving the same accuracy.
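The core efficiency claim (fewer transmissions than periodic sampling at comparable accuracy) can be illustrated with a minimal send-on-delta trigger, a common event-based sampling scheme. The threshold and signal values below are invented for illustration and are not the paper's controller.

```python
def event_triggered_samples(signal, threshold):
    """Return the indices at which an event-based scheme would transmit:
    only when the value drifts more than `threshold` from the last sent
    value. A periodic (discrete-time) scheme would send every sample."""
    sent, last = [], None
    for i, x in enumerate(signal):
        if last is None or abs(x - last) > threshold:
            sent.append(i)
            last = x
    return sent

# A slowly drifting pose estimate: mostly small changes, a few jumps.
signal = [0.0, 0.01, 0.02, 0.5, 0.51, 0.52, 1.1, 1.12]
events = event_triggered_samples(signal, threshold=0.2)
# 3 transmissions instead of 8 periodic ones for this trace
```

Between events the controller holds (or predicts) the last received state, which is why accuracy degrades gracefully as the threshold grows.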
Robot and Human Surface Operations on Solar System Bodies
NASA Technical Reports Server (NTRS)
Weisbin, C. R.; Easter, R.; Rodriguez, G.
2001-01-01
This paper presents a comparison of robot and human surface operations on solar system bodies. The topics include: 1) Long Range Vision of Surface Scenarios; 2) Human and Robots Complement Each Other; 3) Respective Human and Robot Strengths; 4) Need More In-Depth Quantitative Analysis; 5) Projected Study Objectives; 6) Analysis Process Summary; 7) Mission Scenarios Decompose into Primitive Tasks; 8) Features of the Projected Analysis Approach; and 9) The "Getting There Effect" is a Major Consideration. This paper is in viewgraph form.
Laboratory testing of candidate robotic applications for space
NASA Technical Reports Server (NTRS)
Purves, R. B.
1987-01-01
Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, the robot does all necessary task sequencing and planning to satisfy the command. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.
Mindstorms Robots and the Application of Cognitive Load Theory in Introductory Programming
ERIC Educational Resources Information Center
Mason, Raina; Cooper, Graham
2013-01-01
This paper reports on a series of introductory programming workshops, initially targeting female high school students, which utilised Lego Mindstorms robots. Cognitive load theory (CLT) was applied to the instructional design of the workshops, and a controlled experiment was also conducted investigating aspects of the interface. Results indicated…
ERIC Educational Resources Information Center
Burleson, Winslow S.; Harlow, Danielle B.; Nilsen, Katherine J.; Perlin, Ken; Freed, Natalie; Jensen, Camilla Nørgaard; Lahey, Byron; Lu, Patrick; Muldner, Kasia
2018-01-01
As computational thinking becomes increasingly important for children to learn, we must develop interfaces that leverage the ways that young children learn to provide opportunities for them to develop these skills. Active Learning Environments with Robotic Tangibles (ALERT) and Robopad, an analogous on-screen virtual spatial programming…
Hiding the system from the user: Moving from complex mental models to elegant metaphors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David J. Bruemmer
2007-08-01
In previous work, increased complexity of robot behaviors and the accompanying interface design often led to operator confusion and/or a fight for control between the robot and operator. We believe the reason for the conflict was that the design of the interface and interactions presented too much of the underlying robot design model to the operator. Since the design model includes the implementation of sensors, behaviors, and sophisticated algorithms, the result was that the operator's cognitive efforts were focused on understanding the design of the robot system as opposed to focusing on the task at hand. This paper illustrates how this very problem emerged at the INL and how the implementation of new metaphors for interaction has allowed us to hide the design model from the user and allow the user to focus more on the task at hand. Supporting the user's focus on the task rather than on the design model allows increased use of the system and significant performance improvement in a search task with novice users.
Six axis force feedback input device
NASA Technical Reports Server (NTRS)
Ohm, Timothy (Inventor)
1998-01-01
The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
RADIOLOGICAL SURVEY STATION DEVELOPMENT FOR THE PIT DISASSEMBLY AND CONVERSION PROJECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalmaso, M.; Gibbs, K.; Gregory, D.
2011-05-22
The Savannah River National Laboratory (SRNL) has developed prototype equipment to demonstrate remote surveying of Inner and Outer DOE Standard 3013 containers for fixed and transferable contamination in accordance with DOE Standard 3013 and 10 CFR 835 Appendix B. When fully developed the equipment will be part of a larger suite of equipment used to package material in accordance with DOE Standard 3013 at the Pit Disassembly and Conversion Project slated for installation at the Savannah River Site. The prototype system consists of a small six-axis industrial robot with an end effector consisting of a force sensor, vacuum gripper and a three-fingered pneumatic gripper. The work cell also contains two alpha survey instruments, swipes, a swipe dispenser, and other ancillary equipment. An external controller interfaces with the robot controller, survey instruments and other ancillary equipment to control the overall process. SRNL is developing automated equipment for the Pit Disassembly and Conversion (PDC) Project that is slated for the Savannah River Site (SRS). The equipment being developed is automated packaging equipment for packaging plutonium-bearing materials in accordance with DOE-STD-3013-2004. The subject of this paper is the development of a prototype Radiological Survey Station (RSS). Other automated equipment being developed for the PDC includes the Bagless Transfer System, Outer Can Welder, Gantry Robot System (GRS) and Leak Test Station. The purpose of the RSS is to perform a frisk and swipe of the DOE Standard 3013 Container (either inner can or outer can) to check for fixed and transferable contamination. This is required to verify that the contamination levels are within the limits specified in DOE-STD-3013-2004 and 10 CFR 835, Appendix D. The surface contamination limit for the 3013 Outer Can (OC) is 500 dpm/100 cm² (total) and 20 dpm/100 cm² (transferable). This paper will concentrate on the RSS developments for the 3013 OC, but the system for the 3013 Inner Can (IC) is nearly identical.
DEMONSTRATION OF AUTONOMOUS AIR MONITORING THROUGH ROBOTICS
This project included modifying an existing teleoperated robot to include autonomous navigation, large object avoidance, and air monitoring and demonstrating that prototype robot system in indoor and outdoor environments. An existing teleoperated "Surveyor" robot developed by ARD...
Robotic Lunar Lander Development Project Status
NASA Technical Reports Server (NTRS)
Hammond, Monica; Bassler, Julie; Morse, Brian
2010-01-01
This slide presentation reviews the status of the development of a robotic lunar lander. The goal of the project is to perform engineering tests and risk reduction activities to support the development of a small lunar lander for lunar surface science. This includes: (1) risk reduction for the flight of the robotic lander (i.e., testing and analyzing the various phases of the project); (2) incremental development of the robotic lander design, to demonstrate autonomous, controlled descent and landing on airless bodies, and design of the thruster configuration for 1/6th of Earth's gravity; (3) cold gas test article in flight demonstration testing; (4) warm gas testing of the robotic lander design; (5) development and testing of landing algorithms; (6) validation of the algorithms through analysis and test; and (7) tests of the flight propulsion system.
Projective invariant biplanar registration of a compact modular orthopaedic robot.
Luan, Sheng; Sun, Lei; Hu, Lei; Hao, Aimin; Li, Changsheng; Tang, Peifu; Zhang, Lihai; Du, Hailong
2014-01-01
This paper presents a compact orthopedic robot designed with a modular concept. The layout of the modular configuration is adaptive to various conditions such as surgical workspace and targeting path. A biplanar algorithm is adopted for the mapping from the fluoroscopic image to the robot, whereas the previous affine-based method is satisfactory only when the projection rays are roughly perpendicular to the reference coordinate planes. This paper introduces the area cross-ratio as a projective invariant to improve the registration accuracy for non-orthogonal orientations, so that the robotic system can be applied to more orthopedic procedures under various C-Arm orientation conditions. The system configurations for femoral neck screw and sacroiliac screw fixation are presented. The accuracy of the robotic system and its efficacy for the two typical applications are validated by experiments.
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
The six new robotics and automated systems specialty courses developed by the Robotics/Automated Systems Technician (RAST) project are described in this publication. Course titles are Fundamentals of Robotics and Automated Systems, Automated Systems and Support Components, Controllers for Robots and Automated Systems, Robotics and Automated…
NASA Astrophysics Data System (ADS)
Rembala, Richard; Ower, Cameron
2009-10-01
MDA has provided 25 years of real-time engineering support to Shuttle (Canadarm) and ISS (Canadarm2) robotic operations, beginning with the second shuttle flight, STS-2, in 1981. In this capacity, our engineering support teams have become familiar with the evolution of mission planning and flight support practices for robotic assembly and support operations at mission control. This paper presents observations on existing practices and ideas for reducing operational overhead in present programs. It also identifies areas where robotic assembly and maintenance of future space stations and space-based facilities could be accomplished more effectively and efficiently. Specifically, our experience shows that past and current Space Shuttle and ISS assembly and maintenance operations have relied on extensive preflight mission planning and training to prepare the flight crews for the entire mission. This has been driven by the communication latency between Earth and the remote space station/vehicle, as well as by the lack of consistent robotic and interface standards. While the early Shuttle and ISS architectures included robotics, their eventual benefits to the overall assembly and maintenance operations could have been greater had robotics been incorporated as a major design driver from the beginning of the system design. Lessons learned from the ISS highlight the potential benefits of real-time health monitoring systems, consistent standards for robotic interfaces and procedures, and automated script-driven ground control in future space station assembly and logistics architectures.
In addition, advances in computer vision systems and remote operation, supervised autonomous command and control systems offer the potential to adjust the balance between assembly and maintenance tasks performed using extra vehicular activity (EVA), extra vehicular robotics (EVR) and EVR controlled from the ground, offloading the EVA astronaut and even the robotic operator on-orbit of some of the more routine tasks. Overall these proposed approaches when used effectively offer the potential to drive down operations overhead and allow more efficient and productive robotic operations.
The Rise of Robots and the Implications for Military Organizations
2013-09-01
assesses the impact of robots on military organizations and suggests the way forward for military organizations to facilitate the adoption of robots...organizational processes in the long term. Military organizations will benefit from a better understanding of the impact of robots and the resulting...organizations, projects the adoption timeframe for robots in military organizations, proposes how robots might evolve, assesses the impact of robots
Kampmann, Peter; Kirchner, Frank
2014-01-01
With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has mostly been limited to sensing a single modality of forces in the robotic end-effector. This motivates a multi-modal tactile sensory system that combines static and dynamic force sensor arrays with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to have bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of the approach. PMID:24743158
Towards a new modality-independent interface for a robotic wheelchair.
Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo
2014-05-01
This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic unsupervised navigation system. It provides the flexibility to choose different modalities to command the wheelchair, in addition to being suitable for people with different levels of disability. Users can command the wheelchair with their eye blinks, eye movements, head movements, by sip-and-puff, and through brain signals. The wheelchair can also operate like an auto-guided vehicle, following metallic tapes, or in a fully autonomous way. The system is provided with an easy-to-use and flexible graphical user interface onboard a personal digital assistant, which allows users to choose the commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.
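The abstract does not specify the command vocabulary, but a modality-independent interface of this kind typically maps each modality's raw events onto one small canonical command set with a fail-safe default. The sketch below illustrates that layering; every event and command name is hypothetical.

```python
# Canonical commands the wheelchair understands, regardless of input modality.
COMMANDS = {"forward", "back", "left", "right", "stop"}

def make_dispatcher(bindings):
    """Build a dispatcher for one modality.

    bindings: modality-specific event name -> canonical command.
    Unknown or unmapped events fail safe to "stop"."""
    def dispatch(event):
        cmd = bindings.get(event)
        return cmd if cmd in COMMANDS else "stop"
    return dispatch

# Two hypothetical modalities sharing the same command vocabulary:
blink = make_dispatcher({"double_blink": "forward", "long_blink": "stop"})
sip_puff = make_dispatcher({"puff": "forward", "sip": "back"})
```

Adding a new modality (head movements, a BCI classifier output) then only requires a new bindings table, not changes to the navigation layer.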
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
A scanning laser rangefinder for a robotic vehicle
NASA Technical Reports Server (NTRS)
Lewis, R. A.; Johnston, A. R.
1977-01-01
A scanning Laser Rangefinder (LRF) which operates in conjunction with a minicomputer as part of a robotic vehicle is described. The description, in sufficient detail for replication, modification, and maintenance, includes both hardware and software. Also included is a discussion of functional requirements relative to a detailing of the instrument and its performance, a summary of the robot system in which the LRF functions, the software organization, interfaces and description, and the applications to which the LRF has been put.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beugelsdijk, T.J.
1990-11-01
This paper reports on robotics applications at the Los Alamos National Laboratory. The topics of the paper include the ROBOCAL project to assay all nuclear materials entering and leaving the process floor at the Los Alamos Plutonium Facility, the isotope detector fabrication project, a plutonium dissolution robotic system, a safeguards waste automated measurement instrument, and DNA filter array construction. This report consists of overheads only.
2012-04-17
Enabling Soldiers with Robots. Strategy Research Project, 17 April 2012. Key terms: Ethics, Doctrine. "...difficult to imagine that lethal robots would find themselves among the list of particularly inhumane weapons..."
Three Years of Using Robots in an Artificial Intelligence Course: Lessons Learned
ERIC Educational Resources Information Center
Kumar, Amruth N.
2004-01-01
We have been using robots in our artificial intelligence course since fall 2000. We have been using the robots for open-laboratory projects. The projects are designed to emphasize high-level knowledge-based AI algorithms. After three offerings of the course, we paused to analyze the collected data and to see if we could answer the following…
Dynamic Routing and Coordination in Multi-Agent Networks
2016-06-10
Supported by this project, we designed innovative routing, planning, and coordination strategies for robotic networks: how tasks are partitioned among robots, in what order they are to be performed, and along which deterministic routes or according to which stochastic rules the individual robots move. The fundamental novelties and recent breakthroughs supported by this project are manifold...
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, a fast-recognition mode (FM) and an accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control it with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, in 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints. PMID:27579033
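The time-shift correlation idea above can be sketched as follows: rather than committing to a single P300 peak latency, the epoch is correlated with a response template at a range of lags, and the whole correlation series becomes the classifier input, so latency jitter is absorbed by the shift dimension. A minimal sketch (the template shape, lag range, and noise level are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def time_shift_correlation(epoch, template, shifts):
    """Correlate an EEG epoch with a P300 template at several lags.

    Returns one Pearson correlation per shift; the series (not a single
    peak value) would feed the ANN classifier described above.
    """
    n = len(template)
    series = []
    for s in shifts:
        seg = epoch[s:s + n]
        series.append(float(np.corrcoef(seg, template)[0, 1]))
    return np.array(series)

# Toy demonstration: a template embedded in an epoch at a 5-sample delay
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, np.pi, 50))   # idealized P300 bump
epoch = 0.02 * rng.standard_normal(120)        # background noise
epoch[5:55] += template                        # response arrives 5 samples late
series = time_shift_correlation(epoch, template, shifts=range(0, 11))
best = int(np.argmax(series))                  # argmax locates the latency
```

In the paper's setting the series itself, not `best`, is passed on, which is what makes the approach robust to peak-time uncertainty.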
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
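Motor-imagery neurofeedback of this kind typically rewards event-related desynchronization (ERD), a drop in sensorimotor mu-band (8-12 Hz) power during imagery. The abstract does not specify the exact feature, so the following is a generic sketch of how such a feedback signal could be computed from a single EEG channel:

```python
import numpy as np

def band_power(signal, fs, band):
    """Average periodogram power of `signal` within a frequency band.

    The mu band (8-12 Hz) and single-channel setup are assumptions for
    illustration; real MI-BCI pipelines use spatially filtered signals.
    """
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

fs = 250.0
t = np.arange(0, 2, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz mu rhythm at rest
imagery = 0.3 * np.sin(2 * np.pi * 10 * t)   # mu suppressed during imagery
# ERD: relative power drop from rest to imagery; this would drive feedback
erd = 1 - band_power(imagery, fs, (8, 12)) / band_power(rest, fs, (8, 12))
```

A training session would map `erd` (or a smoothed version of it) onto cursor speed or robot motion, which is the feedback loop the study manipulates.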
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which displayed physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time.
Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
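One ingredient named above, population coding of muscle lengths in the proprioceptive population, can be illustrated with a toy encoder/decoder. The Gaussian tuning curves and their widths and centers are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

def population_code(length, centers, width=0.1):
    """Encode a muscle length as firing rates of a proprioceptive
    population with Gaussian tuning curves (hypothetical parameters)."""
    return np.exp(-((length - centers) ** 2) / (2 * width ** 2))

def decode(rates, centers):
    """Population-vector readout: rate-weighted average of the
    neurons' preferred lengths."""
    return float(np.sum(rates * centers) / np.sum(rates))

centers = np.linspace(0.0, 1.0, 21)   # preferred lengths of 21 neurons
rates = population_code(0.37, centers)
estimate = decode(rates, centers)     # recovers a value near 0.37
```

In the closed loop described above, the encoding direction carries arm state back into the spiking network, and the spiking rates of the motor population drive the muscles in the other direction.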
NASA Astrophysics Data System (ADS)
Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo
2017-06-01
The authors previously developed an industrial robotic machining system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source (CLS) data from stereolithography (STL) data is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data, without any commercially provided CAM system. The STL format represents curved surface geometry as a triangular mesh. The preprocessor allows machining robots to be driven along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
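The core of such a preprocessor, deriving a zigzag cutter path from STL triangle data, can be sketched as follows. This simplified version plans planar passes over the mesh's XY bounding box at its maximum Z rather than following the triangulated surface, so it only illustrates the idea, not the paper's actual CLS generation:

```python
import numpy as np

def zigzag_path(vertices, step):
    """Generate zigzag cutter waypoints over the XY bounding box of an
    STL triangle mesh (vertices: (N, 3) array of triangle corners).

    Each pass is planar at the mesh's top Z; a real preprocessor would
    project the path onto the triangulated surface.
    """
    v = np.asarray(vertices, dtype=float)
    (xmin, ymin), (xmax, ymax) = v[:, :2].min(0), v[:, :2].max(0)
    z = v[:, 2].max()
    path, flip = [], False
    y = ymin
    while y <= ymax + 1e-9:
        xs = (xmax, xmin) if flip else (xmin, xmax)  # alternate direction
        path.append((xs[0], y, z))
        path.append((xs[1], y, z))
        flip = not flip
        y += step
    return path

# Unit square at height z = 1, split into two triangles as in STL
tris = [(0, 0, 1), (1, 0, 1), (0, 1, 1),
        (1, 0, 1), (1, 1, 1), (0, 1, 1)]
path = zigzag_path(tris, step=0.5)
```

The resulting waypoint list plays the role of coarse CLS data; the spline interpolation step described above would then smooth it before it is sent to the robot.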
The coming revolution in personal care robotics: what does it mean for nurses?
Sharts-Hopko, Nancy C
2014-01-01
The business sector provides regular reportage on the development of personal care robots that enable elders and people with disabilities to remain in their homes. Technology in this area is advancing rapidly in Asia, Europe, and North America. To date, the nursing literature has not addressed how nurses will assist these vulnerable populations in the selection and use of robotic technology, or how robotics could affect nursing care and patient outcomes. This article provides an overview of developments in personal care robotics that address societal needs reflecting demographic trends. Selected issues related to the human-robot interface, including ethical concerns, are identified, as are implications for nursing education and the delivery of nursing services. Collaboration with engineers in the development of personal care robotic technology has the potential to contribute to the creation of products that optimally address the needs of elders and people with disabilities.
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics, dealing with the problems that arise in real-world experiments. The laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to build a map of its environment. In addition, the application allows interaction with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), through the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing. Practical examples of the laboratory's use in the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to the remote control of a mobile robot via Google Glass, a compact, multi-function wearable device. Google Glass provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because its micro-projector can display live video of the robot's surroundings. To do so, we first develop a protocol to establish a Wi-Fi connection between Google Glass and a robot, and then implement five types of robot behavior: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are commanded by sliding and clicking the touchpad located on the right side of the temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
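The gesture-to-behavior mapping described above can be sketched as a small command encoder. The gesture names and the newline-terminated ASCII wire format below are illustrative assumptions, since the abstract does not specify the actual Wi-Fi protocol:

```python
# Hypothetical mapping from Glass touchpad gestures to the five robot
# behaviors named in the abstract.
COMMANDS = {
    "swipe_forward": "MOVE_FORWARD",
    "swipe_left":    "TURN_LEFT",
    "swipe_right":   "TURN_RIGHT",
    "tap":           "PAUSE",
    "swipe_down":    "MOVE_BACKWARD",
}

def encode(gesture):
    """Translate a touchpad gesture into an ASCII command frame that
    could be written to a TCP socket connected to the robot."""
    try:
        return (COMMANDS[gesture] + "\n").encode("ascii")
    except KeyError:
        raise ValueError(f"unknown gesture: {gesture}")

frame = encode("tap")  # b"PAUSE\n"
```

On the robot side, a matching loop would read one frame per line and dispatch to the corresponding motion routine, with the live video streamed back to the Glass display.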
Robotic Mining Competition - Activities
2018-05-17
Team members from Case Western Reserve University pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from The University of Utah pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
First-time participants from Saginaw Valley State University pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from Temple University work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from The University of Alabama pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from the South Dakota School of Mines & Technology pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
Team members from Iowa State University prepare their robot miner on the second day of NASA's 9th Robotic Mining Competition, May 15, in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, college team members work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from New York University work on their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from York College CUNY are with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from the University of Arkansas pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
The problem with multiple robots
NASA Technical Reports Server (NTRS)
Huber, Marcus J.; Kenny, Patrick G.
1994-01-01
The issues that can arise in research associated with multiple, robotic agents are discussed. Two particular multi-robot projects are presented as examples. This paper was written in the hope that it might ease the transition from single to multiple robot research.
2017 Robotic Mining Competition
2017-05-24
A robotic miner digs in the mining arena during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
NASA Technical Reports Server (NTRS)
Jacobus, Heidi; Riggs, Alan J.; Jacobus, Charles; Weinstein, Yechiel
1991-01-01
Teleoperated control requires a master human interface device that can provide haptic input and output which reflect the responses of a slave robotic system. The effort reported in this paper addresses the design and prototyping of a six degree-of-freedom (DOF) Cartesian coordinate hand controller for this purpose. The device design recommended is an XYZ stage attached to a three-roll wrist which positions a flight-type handgrip. Six degrees of freedom are transduced and control brushless DC motor servo electronics similar in design to those used in computer controlled robotic manipulators. This general approach supports scaled force, velocity, and position feedback to aid an operator in achieving telepresence. The generality of the device and control system characteristics allow the use of inverse dynamics robotic control methodology to project slave robot system forces and inertias to the operator (in scaled form) and at the same time to reduce the apparent inertia of the robotic handcontroller itself. The current control design, which is not multiple fault tolerant, can be extended to make flight control or space use possible. The proposed handcontroller will have advantages in space-based applications where an operator must control several robot arms in a simultaneous and coordinated fashion. It will also have applications in intravehicular activities (within the Space Station) such as microgravity experiments in metallurgy and biological experiments that require isolation from the astronauts' environment. For ground applications, the handcontroller will be useful in underwater activities where the generality of the proposed handcontroller becomes an asset for operation of many different manipulator types. 
Applications will also emerge in the military, construction, and maintenance/manufacturing areas, including ordnance handling, mine removal, NBC (nuclear, biological, chemical) operations, control of vehicles, and operating strength- and agility-enhanced machines. Future avionics applications, including advanced helicopter and aircraft control, may also become important.
An Interactive Astronaut-Robot System with Gesture Control
Liu, Jinguo; Luo, Yifan; Ju, Zhaojie
2016-01-01
Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants through speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit, allowing the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system. PMID:27190503
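The PSO step, searching a hyperparameter space to minimize the SVM's cross-validation error, can be sketched with a minimal particle swarm optimizer. To keep the example self-contained, a quadratic bowl with a known minimum stands in for the SVM error surface, and the swarm constants are conventional textbook choices, not the paper's:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, iters=60, seed=0):
    """Minimal particle swarm optimizer. In the paper's setting the
    objective would be SVM cross-validation error over (C, gamma)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Stand-in objective with its minimum at (3, -2)
best = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2,
           bounds=[(-10, 10), (-10, 10)])
```

Swapping the stand-in objective for a k-fold SVM error estimate over (C, gamma) gives the PSO-tuned SVM arrangement the abstract describes.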
Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos
2007-01-01
The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D)-MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research focuses on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system provides capabilities for multisensor simulation; kinematics and locomotion animation; dynamic motion and manipulation animation; transformation between real and virtual modes within the same graphics system; easy exchange of software modules and hardware devices between real- and virtual-world operations; and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Satellite Docking Simulator with Generic Contact Dynamics Capabilities
NASA Astrophysics Data System (ADS)
Ma, O.; Crabtree, D.; Carr, R.; Gonthier, Y.; Martin, E.; Piedboeuf, J.-C.
2002-01-01
Satellite docking (and capture) systems are critical for the servicing or salvage of satellites. Satellite servicing has comparatively recently become a realistic and promising space operation/mission. Satellite servicing includes several of the following operations: rendezvous; docking (capturing); inspection; towing (transporting); refueling; refurbishing (replacement of faulty or "used-up" modules/boxes); and un-docking (releasing). Because spacecraft servicing has been, until recently, infeasible or uneconomical, spacecraft servicing technology has been neglected. Accordingly, spacecraft designs have featured self-contained systems without consideration for operational servicing. Consistent with this view, most spacecraft were designed and built without docking interfaces. If, through some mishap, a spacecraft was rendered non-operational, it was simply considered expendable. Several feasibility studies are in progress on salvaging stranded satellites (which, in fact, led to this project). The task of the designer of a docking system for a salvaging mission is difficult: he or she has to work with whatever is on orbit, which excludes any special docking interfaces that might have made the task easier. As satellite servicing becomes an accepted design requirement, many future satellites will be equipped with appropriate docking interfaces. Designers of docking systems will then face slightly different challenges: reliable, cost-effective docking (and re-supply) systems. Thus, the role of designers of docking systems will expand from one-of-a-kind, ad-hoc interfaces intended for salvaging operations to docking systems for satellites and "caretaker" spacecraft that are meant for servicing and produced in larger numbers.
As in any space system (for which full and representative ground hardware test-beds are very expensive and often impossible to develop), simulations are mandatory for the development of systems and operations for satellite servicing. Simulations are also instrumental in concept studies during proposals and early development stages. Finally, simulations are useful during the operational phase of satellite servicing: improving the operational procedures; training ground operators; command and control, etc. Hence the need exists for a Satellite Servicing Simulator, which will support a project throughout its lifecycle. The paper addresses a project to develop a Simulink-based Satellite Docking Simulator (SDS) with generic Contact Dynamics (CD) capabilities. The simulator is intended to meet immediate practical demands for development of complex docking systems and operations at MD Robotics. The docking phase is the most critical and complex phase of the entire servicing sequence, and without docking there is no servicing. Docking mechanisms are often quite complex, especially when built to dock with a satellite manufactured without special docking interfaces. For successful docking operations, the design of a docking system must take into consideration: complexity of 3D geometric shapes defining the contact interfaces; sophistication of the docking mechanism; friction and stiction at the contacting surfaces; compliance (stiffness) and damping, in all axes; positional (translation and rotation) misalignments and relative velocities, in all axes; inertial properties of the docking satellites (including their distribution); complexity of the drive mechanisms and control sub-systems for the overall docking system; fully autonomous or tele-operated docking from the ground; etc. The docking simulator, which makes use of the proven Contact Dynamics Toolkit (CDT) developed by MD Robotics, is thus practically indispensable for the docking system designer. 
The use of the simulator could greatly reduce the prototyping and development time of a docking interface. A special feature of the simulator, which required an update of CDT, is variable step-size integration. This new capability increases simulation speed across all simulation tasks.
ROBOSIM: An intelligent simulator for robotic systems
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Biegl, Csaba; Springfield, James F.
1993-01-01
The purpose of this paper is to present an update of an intelligent robotics simulator package, ROBOSIM, first introduced at Technology 2000 in 1990. ROBOSIM is used for three-dimensional geometric modeling of robot manipulators and various objects in their workspace, and for the simulation of action sequences performed by the manipulators. Geometric modeling of robot manipulators is an area of expanding interest because it can aid the design and use of robots in a number of ways, including: design and testing of manipulators, robot action planning, on-line control of robot manipulators, telerobotic user interfaces, and training and education. NASA developed ROBOSIM between 1985 and 1988 to facilitate the development of robotics, and used the package to develop robotics for welding, coating, and space operations. ROBOSIM has been further developed for academic use by its co-developer, Vanderbilt University, and has been used in both classroom and laboratory environments for teaching complex robotic concepts. Plans are being formulated to make ROBOSIM available to all U.S. engineering/engineering technology schools (over three hundred total, with an estimated 10,000+ users per year).
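Geometric modeling of a manipulator reduces to chaining per-link homogeneous transforms. A minimal planar sketch of that core idea (illustrative only; ROBOSIM itself is a full 3D package, and none of these function names come from it):

```python
import math

# Planar forward kinematics with 3x3 homogeneous transforms: each link
# contributes a rotation by its joint angle followed by a translation
# along its length. This is the kernel of geometric manipulator modeling.

def rot_trans(theta, length):
    """Homogeneous transform: rotate by theta, then translate along local x."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, length * c],
            [s,  c, length * s],
            [0,  0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def end_effector(joint_angles, link_lengths):
    """Chain the per-link transforms; return (x, y) of the end effector."""
    t = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # identity: base frame
    for theta, length in zip(joint_angles, link_lengths):
        t = matmul(t, rot_trans(theta, length))
    return t[0][2], t[1][2]
```

The 3D case swaps in 4x4 transforms (e.g. from Denavit-Hartenberg parameters) but keeps the same chaining structure.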
Creating the brain and interacting with the brain: an integrated approach to understanding the brain
Morimoto, Jun; Kawato, Mitsuo
2015-01-01
In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568
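The closed loop the review proposes, decoding information from the brain, controlling the robot from the decoded information, and feeding the result back, can be caricatured in a few lines. Every name below is a hypothetical placeholder, not an interface from the paper:

```python
# Schematic of a closed-loop brain-machine interface cycle: decode,
# control, feed back. The linear decoder and scalar robot state are
# deliberately toy-sized; real systems decode high-dimensional signals.

def decode(brain_signal, weights):
    """Toy linear decoder: weighted sum of channel activity -> one command."""
    return sum(w * s for w, s in zip(weights, brain_signal))

def closed_loop_step(brain_signal, robot_state, weights, gain=0.5):
    command = decode(brain_signal, weights)      # decoding from the brain
    robot_state = robot_state + gain * command   # robot control
    feedback = -robot_state                      # feedback to the user (stub)
    return robot_state, feedback

# Run a few iterations of the loop with made-up two-channel signals.
state, fb = 0.0, 0.0
for signal in ([1.0, 0.0], [0.0, 1.0], [0.5, 0.5]):
    state, fb = closed_loop_step(signal, state, weights=[1.0, -1.0])
```

The point of the schematic is only the cycle: in the real-time systems the review envisions, all three stages run concurrently at sensor rates.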
ISS Robotic Student Programming
NASA Technical Reports Server (NTRS)
Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.
2016-01-01
The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.
Meal assistance robot with ultrasonic motor
NASA Astrophysics Data System (ADS)
Kodani, Yasuhiro; Tanaka, Kanya; Wakasa, Yuji; Akashi, Takuya; Oka, Masato
2007-12-01
In this paper, we construct a robot that helps people with disabilities of the upper extremities, and patients with advanced-stage amyotrophic lateral sclerosis (ALS), to eat using their residual abilities. In particular, many people suffering from advanced-stage ALS use a pacemaker and need to avoid electromagnetic waves. We therefore adopt ultrasonic motors, which do not generate electromagnetic waves, as the driving sources. Additionally, we address the problems of conventional meal assistance robots. Moreover, we introduce an eye-movement interface so that users who cannot use their extremities can also operate our system: the user operates the robot not with the hands or feet but with eye movements.
Perspectives on mobile robots as tools for child development and pediatric rehabilitation.
Michaud, François; Salter, Tamie; Duquette, Audrey; Laplante, Jean-François
2007-01-01
Mobile robots (i.e., robots capable of translational movements) can be designed to become interesting tools for child development studies and pediatric rehabilitation. In this article, the authors present two of their projects that involve mobile robots interacting with children: One is a spherical robot deployed in a variety of contexts, and the other is mobile robots used as pedagogical tools for children with pervasive developmental disorders. Locomotion capability appears to be key in creating meaningful and sustained interactions with children: Intentional and purposeful motion is an implicit appealing factor in obtaining children's attention and engaging them in interaction and learning. Both of these projects started with robotic objectives but are revealed to be rich sources of interdisciplinary collaborations in the field of assistive technology. This article presents perspectives on how mobile robots can be designed to address the requirements of child-robot interactions and studies. The authors also argue that mobile robot technology can be a useful tool in rehabilitation engineering, reaching its full potential through strong collaborations between roboticists and pediatric specialists.
Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks
Wang, Zhijun; Mirdamadi, Reza; Wang, Qing
2016-01-01
Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster-relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexities. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could provide a software simulation platform for testing the individual and group capabilities of robots before they are designed and manufactured. The results of the project therefore have the potential to reduce the cost and improve the efficiency of robot design and building. PMID:28540284
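The unsupervised-learning core of such a simulator, a Kohonen self-organizing map, amounts to a best-matching-unit search plus a neighbourhood-weighted update. A minimal 1-D sketch (illustrative, not the project's code; node count, learning rate, and radius are assumed values):

```python
import math
import random

# Minimal 1-D Kohonen self-organizing map over 2-D inputs: find the
# best-matching unit (BMU) for each input, then pull the BMU and its
# neighbours toward the input, with nearby nodes moving more.

def best_matching_unit(weights, x):
    """Index of the node whose weight vector is closest to input x."""
    return min(range(len(weights)),
               key=lambda i: sum((wi - xi) ** 2
                                 for wi, xi in zip(weights[i], x)))

def train_som(data, n_nodes=5, epochs=50, lr=0.3, radius=1.0, seed=0):
    rng = random.Random(seed)
    weights = [[rng.random(), rng.random()] for _ in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            b = best_matching_unit(weights, x)
            for i, w in enumerate(weights):
                # Gaussian neighbourhood: nodes near the BMU move more
                h = math.exp(-((i - b) ** 2) / (2 * radius ** 2))
                weights[i] = [wi + lr * h * (xi - wi)
                              for wi, xi in zip(w, x)]
    return weights
```

After training on sensor readings, the ordered map gives each robot a compact, unsupervised classification of the terrain states it encounters.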
Ground Fluidization Promotes Rapid Running of a Lightweight Robot
2013-01-01
SCMs) (Wood et al., 2008) have enabled the development of small, lightweight robots (∼ 10 cm, ∼ 20 g) (Hoover et al., 2010; Birkmeyer et al., 2009) such...communicated to the controller through a Bluetooth wireless interface. 2.1.2. Model granular media We used 3.0±0.2 mm diameter glass particles (density
Compendium of Abstracts. Volume 2
2010-08-01
researched for various applications such as self-healing and fluid transport. One method of creating these vascular systems is through a process called...Daniel J. Dexterous robotic manipulators that rely on joystick-type interfaces for teleoperation require considerable time and effort to master...and lack an intuitive basis for human-robot interaction. This hampers operator performance, increases cognitive workload, and limits overall
An Embedded Systems Laboratory to Support Rapid Prototyping of Robotics and the Internet of Things
ERIC Educational Resources Information Center
Hamblen, J. O.; van Bekkum, G. M. E.
2013-01-01
This paper describes a new approach for a course and laboratory designed to allow students to develop low-cost prototypes of robotic and other embedded devices that feature Internet connectivity, I/O, networking, a real-time operating system (RTOS), and object-oriented C/C++. The application programming interface (API) libraries provided permit…
NASA Systems Autonomy Demonstration Project - Development of Space Station automation technology
NASA Technical Reports Server (NTRS)
Bull, John S.; Brown, Richard; Friedland, Peter; Wong, Carla M.; Bates, William
1987-01-01
A 1984 Congressional expansion of the 1958 National Aeronautics and Space Act mandated that NASA conduct programs, as part of the Space Station program, which will yield the U.S. material benefits, particularly in the areas of advanced automation and robotics systems. Demonstration programs are scheduled for automated systems such as the thermal control, expert system coordination of Station subsystems, and automation of multiple subsystems. The programs focus the R&D efforts and provide a gateway for transfer of technology to industry. The NASA Office of Aeronautics and Space Technology is responsible for directing, funding and evaluating the Systems Autonomy Demonstration Project, which will include simulated interactions between novice personnel and astronauts and several automated, expert subsystems to explore the effectiveness of the man-machine interface being developed. Features and progress on the TEXSYS prototype thermal control system expert system are outlined.
Glyco-Immune Diagnostic Signatures and Therapeutic Targets of Mesothelioma
2015-09-01
Mesothelioma; Glycan Array; Immunoprofiles; Robotic Arrayer...PROJECT SUMMARY: General Comments: This project involved novel technology in which biochemically synthesized glycans were robotically printed on glass...include 386 glycans and the platform was known as the PGA-400. (Figure 1) A standard robotic technology for printing a large range of
A Mobile Service Robot for Life Science Laboratories
NASA Astrophysics Data System (ADS)
Schulenburg, Erik; Elkmann, Norbert; Fritzsche, Markus; Teutsch, Christian
In this paper we present a project that is developing a mobile service robot to assist users in biological and pharmaceutical laboratories by executing routine jobs such as filling and transporting microplates. A preliminary overview of the design of the mobile platform with a robotic arm is provided. Safety aspects are one focus of the project, since the robot and humans will share a common environment. Hence, several safety sensors, such as laser scanners, thermographic components and artificial skin, are employed. These are described along with the approaches to object recognition.
Development of cable drive systems for an automated assembly project
NASA Technical Reports Server (NTRS)
Monroe, Charles A., Jr.
1990-01-01
In a robotic assembly project, a method was needed to accurately position a robot and a structure which the robot was to assemble. The requirements for high precision and relatively long travel distances dictated the use of cable drive systems. The design of the mechanisms used in translating the robot and in rotating the assembly under construction is discussed. The design criteria are discussed, and the effect of particular requirements on the design is noted. Finally, the measured performance of the completed mechanism is compared with design requirements.
Assistant Personal Robot (APR): Conception and Application of a Tele-Operated Assisted Living Robot.
Clotet, Eduard; Martínez, Dani; Moreno, Javier; Tresanchez, Marcel; Palacín, Jordi
2016-04-28
This paper presents the technical description, mechanical design, electronic components, software implementation and possible applications of a tele-operated mobile robot designed as an assisted living tool. This robotic concept has been named the Assistant Personal Robot (or APR for short) and has been designed as a remotely telecontrolled robotic platform built to provide social and assistive services to elderly people and those with impaired mobility. The APR features a fast high-mobility motion system adapted for tele-operation in plain indoor areas, which incorporates a high-priority collision avoidance procedure. This paper presents the mechanical architecture, electrical fundamentals and software implementation required in order to develop the main functionalities of an assistive robot. The APR uses a tablet in order to implement the basic peer-to-peer videoconference and tele-operation control combined with a tactile graphic user interface. The paper also presents the development of some applications proposed in the framework of an assisted living robot.
Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D
2009-01-01
Robotic surgery has gradually gained acceptance due to its numerous advantages, such as tremor filtration, increased dexterity and motion scaling. There remains, however, significant scope for improvement, especially in the areas of the surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement-capture platform using microcontroller-based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was then applied to automated movement of the robotic instruments: by emulating control signals, the system replayed recorded surgical movements through the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
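The record-and-replay idea, capturing time-stamped poses and then re-emitting them as control signals, can be sketched independently of the microcontroller hardware. All names below are placeholders, not the study's software:

```python
# Illustrative record-and-replay buffer for instrument movements. The
# paper's capture platform is hardware-specific; this sketches only the
# software side: store (timestamp, pose) samples, then re-emit them in
# order through a command callback, as the robot's controller would.

class MovementRecorder:
    def __init__(self):
        self.samples = []                    # list of (timestamp, pose)

    def record(self, timestamp, joint_positions):
        """Capture one time-stamped pose sample."""
        self.samples.append((timestamp, list(joint_positions)))

    def replay(self, send_command):
        """Re-emit every recorded pose, in timestamp order."""
        for t, pose in sorted(self.samples):
            send_command(t, pose)

# Usage: record two poses, then replay them into a hypothetical sink.
rec = MovementRecorder()
rec.record(0.0, [0.0, 0.0])
rec.record(0.1, [0.1, 0.05])
replayed = []
rec.replay(lambda t, pose: replayed.append(pose))
```

A real replay loop would also pace the commands by the recorded timestamps rather than emitting them as fast as possible.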
SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert
"SMART Ver. 0.8 Beta" provides a system developer with software tools to create a telerobotic control system, i.e., a system whereby an end-user can interact with mechatronic equipment. It consists of three main components: the SMART Editor (tsmed), the SMART Real-time kernel (rtos), and the SMART Supervisor (gui). The SMART Editor is a graphical icon-based code generation tool for creating end-user systems, given descriptions of SMART modules. The SMART real-time kernel implements behaviors that combine modules representing input devices, sensors, constraints, filters, and robotic devices. Included with this software release is a number of core modules, which can be combinedmore » with additional project and device specific modules to create a telerobotic controller. The SMART Supervisor is a graphical front-end for running a SMART system. It is an optional component of the SMART Environment and utilizes the TeVTk windowing and scripting environment. Although the code contained within this release is complete, and can be utilized for defining, running, and interfacing to a sample end-user SMART system, most systems will include additional project and hardware specific modules developed either by the system developer or obtained independently from a SMART module developer. SMART is a software system designed to integrate the different robots, input devices, sensors and dynamic elements required for advanced modes of telerobotic control. "SMART Ver. 0.8 Beta" defines and implements a telerobotic controller. A telerobotic system consists of combinations of modules that implement behaviors. Each real-time module represents an input device, robot device, sensor, constraint, connection or filter. The underlying theory utilizes non-linear discretized multidimensional network elements to model each individual module, and guarantees that upon a valid connection, the resulting system will perform in a stable fashion. 
Different combinations of modules implement different behaviors. Each module must have at a minimum an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator-directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCP/IP ASCII commands. It utilizes a scripting language (Tcl) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.
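The module contract described above (an initialization routine, a parameter-adjustment routine, and an update routine driven at a fixed rate by the kernel) can be mirrored in a few lines. This is a structural sketch only: the real SMART kernel is a real-time embedded system, and every class and name here is invented for illustration:

```python
# Sketch of a SMART-style module contract and fixed-rate kernel loop.
# Each module exposes initialize / set_parameter / update, and the kernel
# polls an input-device module and drives a robot-device module.

class Module:
    def initialize(self): pass
    def set_parameter(self, name, value): setattr(self, name, value)
    def update(self, dt): raise NotImplementedError

class InputDevice(Module):
    def initialize(self): self.command = 0.0
    def update(self, dt): return self.command          # e.g. joystick reading

class RobotDevice(Module):
    def initialize(self): self.position, self.gain = 0.0, 1.0
    def update(self, dt, command=0.0):
        self.position += self.gain * command * dt      # integrate the command
        return self.position

def run_kernel(source, robot, params=(), dt=0.01, steps=100):
    """Set up each module, apply operator parameters, then update at a
    fixed rate: poll the input module and drive the robot module."""
    for module in (source, robot):
        module.initialize()
    for module, name, value in params:                 # operator-directed change
        module.set_parameter(name, value)
    for _ in range(steps):
        cmd = source.update(dt)
        robot.update(dt, command=cmd)
    return robot.position

joystick, arm = InputDevice(), RobotDevice()
final = run_kernel(joystick, arm, params=[(joystick, "command", 0.5)])
```

The uniform contract is what lets the editor wire arbitrary combinations of input devices, sensors, filters, and robot devices into one behavior.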
2017-02-01
DARPA ROBOTICS CHALLENGE (DRC): USING HUMAN-MACHINE TEAMWORK TO PERFORM DISASTER RESPONSE WITH A HUMANOID ROBOT. FLORIDA INSTITUTE FOR HUMAN AND...Human and Machine Cognition (IHMC) from 2012-2016 through three phases of the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge
Particle protection capability of SEMI-compliant EUV-pod carriers
NASA Astrophysics Data System (ADS)
Huang, George; He, Long; Lystad, John; Kielbaso, Tom; Montgomery, Cecilia; Goodwin, Frank
2010-04-01
With the projected rollout of pre-production extreme ultraviolet lithography (EUVL) scanners in 2010, EUVL pilot line production will become a reality in wafer fabrication companies. Among EUVL infrastructure items that must be ready, EUV mask carriers remain critical. To keep non-pellicle EUV masks free from particle contamination, an EUV pod concept has been extensively studied. Early prototypes demonstrated nearly particle-free results at a 53 nm PSL equivalent inspection sensitivity during EUVL mask robotic handling, shipment, vacuum pump-purge, and storage. After the passage of SEMI E152, which specifies the EUV pod mechanical interfaces, standards-compliant EUV pod prototypes, including a production version inner pod and prototype outer pod, were built and tested. Their particle protection capability results are reported in this paper. A state-of-the-art blank defect inspection tool was used to quantify their defect protection capability during mask robotic handling, shipment, and storage tests. To ensure the availability of an EUV pod for 2010 pilot production, the progress and preliminary test results of pre-production EUV outer pods are reported as well.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the University of Minnesota-Twin Cities work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from the University of Tulsa work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the South Dakota School of Mines & Technology work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from Montana Tech of the University of Montana work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from York College CUNY make adjustments to their robot miner for its turn in the mining arena on the fourth day of NASA's 9th Robotic Mining Competition, May 17, inside the RobotPits at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the Illinois Institute of Technology work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from the University of North Carolina at Charlotte work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Setup
2018-05-14
On the first day of NASA's 9th Robotic Mining Competition, set-up day on May 14, team members from Temple University work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members and their faculty advisor, far left, from The University of North Carolina at Charlotte pause with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from the University of Colorado Boulder work on their robot miner in the RobotPits in the Educator Resource Center on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
First-time participants from the University of Maine, along with their faculty advisor, at far right, are with their robot miner in the RobotPits on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from Saginaw Valley State University in Michigan work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
Team members and their advisor, far right, from Montana Tech of the University of Montana, prepare their robot miner on the second day of NASA's 9th Robotic Mining Competition, May 15, in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Agarwal, Rahul; Levinson, Adam W; Allaf, Mohamad; Makarov, Danil; Nason, Alex; Su, Li-Ming
2007-11-01
Remote presence is the ability of an individual to project himself from one location to another to see, hear, roam, talk, and interact just as if that individual were actually there. The objective of this study was to evaluate the efficacy and functionality of a novel mobile robotic telementoring system controlled by a portable laptop control station linked via broadband Internet connection. RoboConsultant (RemotePresence-7; InTouch Health, Sunnyvale, CA) was employed for the purpose of intraoperative telementoring and consultation during five laparoscopic and endoscopic urologic procedures. Robot functionality including navigation, zoom capability, examination of external and internal endoscopic camera views, and telestration were evaluated. The robot was controlled by a senior surgeon from various locations ranging from an adjacent operating room to an affiliated hospital 5 miles away. The RoboConsultant performed without connection failure or interruption in each case, allowing the consulting surgeon to immerse himself and navigate within the operating room environment and provide effective communication, mentoring, telestration, and consultation. RoboConsultant provided clear, real-time, and effective telementoring and telestration and allowed the operator to experience remote presence in the operating room environment as a surgical consultant. The portable laptop control station and wireless connectivity allowed the consultant to be mobile and interact with the operating room team from virtually any location. In the future, the remote presence provided by the RoboConsultant may provide useful and effective intraoperative consultation by expert surgeons located in remote sites.
Robotics in Orthopedics: A Brave New World.
Parsley, Brian S
2018-02-16
Health-care projections anticipate significant population growth by 2020. Health care has seen exponential growth in technology to serve this growing population with a decreasing number of physicians and health-care workers. Robotics has been introduced into health care to address this growing need. Early adoption of robotics was limited because of the narrow application of the technology, the cumbersome nature of the equipment, and technical complications. Continued improvement in efficacy, adaptability, and cost reduction has stimulated increased interest in robotic-assisted surgery. The evolution in orthopedic surgery has allowed for advanced surgical planning, precision robotic machining of bone, improved implant-bone contact, optimization of implant placement, and optimization of mechanical alignment. The potential benefits of robotic surgery include improved surgical workflow, improved efficacy, and reduced surgical time. Robotic-assisted surgery will continue to evolve in the orthopedic field. Copyright © 2018 Elsevier Inc. All rights reserved.
Robotic Mining Competition - Awards Ceremony
2018-05-18
NASA's 9th Annual Robotic Mining Competition concludes with an awards ceremony May 18, 2018, at the Apollo/Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. The team from Iowa State University received second place in the Outreach Project category. At left is retired NASA astronaut Jerry Ross. At right is Bethanne Hull, NASA Education specialist and lead Outreach Project judge. More than 40 student teams from colleges and universities around the U.S. participated in the competition, May 14-18, by using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Awards Ceremony
2018-05-18
NASA's 9th Annual Robotic Mining Competition concludes with an awards ceremony May 18, 2018, at the Apollo/Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. The University of Alabama Team Astrobotics received first place in the Outreach Project category. At left is retired NASA astronaut Jerry Ross. At right is Bethanne Hull, NASA Education specialist and lead Outreach Project judge. More than 40 student teams from colleges and universities around the U.S. participated in the competition, May 14-18, by using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Awards Ceremony
2018-05-18
NASA's 9th Annual Robotic Mining Competition concludes with an awards ceremony May 18, 2018, at the Apollo/Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. The team from The University of Akron received third place in the Outreach Project category. At left is retired NASA astronaut Jerry Ross. At right is Bethanne Hull, NASA Education specialist and lead Outreach Project judge. More than 40 student teams from colleges and universities around the U.S. participated in the competition, May 14-18, by using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
An adaptable product for material processing and life science missions
NASA Technical Reports Server (NTRS)
Wassick, Gregory; Dobbs, Michael
1995-01-01
The Experiment Control System II (ECS-II) is designed to make available to the microgravity research community the same tools and mode of automated experimentation that their ground-based counterparts have enjoyed for the last two decades. The design goal was accomplished by combining commercial automation tools familiar to the experimenter community with system control components that interface with the on-orbit platform in a distributed architecture. The architecture insulates the experimenters' tools from the details of managing a payload. By using commercial software and hardware components whenever possible, development costs were greatly reduced compared to traditional space development projects. Using commercial-off-the-shelf (COTS) components also improved the system's usability by providing familiar user interfaces and a wealth of readily available documentation, and reduced the need for training on system-specific details. The modularity of the distributed architecture makes it very amenable to modification for different on-orbit experiments requiring robotics-based automation.
2017 Robotic Mining Competition
2017-05-23
College team members watch a live display of their mining robots during test runs in the mining arena at NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
2017 Robotic Mining Competition
2017-05-24
Team members from the New York University Tandon School of Engineering transport their robot to the mining arena during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
High level functions for the intuitive use of an assistive robot.
Lebec, Olivier; Ben Ghezala, Mohamed Walid; Leynart, Violaine; Laffont, Isabelle; Fattal, Charles; Devilliers, Laurence; Chastagnol, Clement; Martin, Jean-Claude; Mezouar, Youcef; Korrapatti, Hermanth; Dupourqué, Vincent; Leroux, Christophe
2013-06-01
This document presents the research project ARMEN (Assistive Robotics to Maintain Elderly People in a Natural environment), aimed at the development of a user friendly robot with advanced functions for assistance to elderly or disabled persons at home. Focus is given to the robot SAM (Smart Autonomous Majordomo) and its new features of navigation, manipulation, object recognition, and knowledge representation developed for the intuitive supervision of the robot. The results of the technical evaluations show the value and potential of these functions for practical applications. The paper also documents the details of the clinical evaluations carried out with elderly and disabled persons in a therapeutic setting to validate the project.
2017 Robotic Mining Competition
2017-05-23
College team members prepare to enter the robotic mining arena for a test run during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Robotic Mining Competition - Activities
2018-05-16
Team members cheer during their robot miner's turn in the mining arena on the third day of NASA's 9th Robotic Mining Competition, May 16, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
2017 Robotic Mining Competition
2017-05-24
The robotic miner from Mississippi State University digs in the mining arena during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
2017 Robotic Mining Competition
2017-05-23
Team members from Purdue University prepare their uniquely-designed robot miner in the RoboPit at NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, two robot miners dig in the dirt in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Integrating Telepresence Robots Into Nursing Simulation.
Rudolph, Alexandra; Vaughn, Jacqueline; Crego, Nancy; Hueckel, Remi; Kuszajewski, Michele; Molloy, Margory; Brisson, Raymond; Shaw, Ryan J
This article provides an overview of the use of telepresence robots in clinical practice and describes an evaluation of an educational project in which distance-based nurse practitioner students used telepresence robots in clinical simulations with on-campus Accelerated Bachelor of Science in Nursing students. The results of this project suggest that the incorporation of telepresence in simulation is an effective method to promote engagement, satisfaction, and self-confidence in learning.
Promoting Diversity in Undergraduate Research in Robotics-Based Seismic
NASA Astrophysics Data System (ADS)
Gifford, C. M.; Arthur, C. L.; Carmichael, B. L.; Webber, G. K.; Agah, A.
2006-12-01
The motivation for this research was to investigate forming evenly-spaced grid patterns with a team of mobile robots for future use in seismic imaging in polar environments. A team of robots was incrementally designed and simulated by incorporating sensors and altering each robot's controller. Challenges, design issues, and efficiency were also addressed. This research project incorporated the efforts of two undergraduate REU students from Elizabeth City State University (ECSU) in North Carolina, and the research staff at the Center for Remote Sensing of Ice Sheets (CReSIS) at the University of Kansas. ECSU is a historically black university. Mentoring these two minority students in scientific research, seismology, robotics, and simulation will hopefully encourage them to pursue graduate degrees in science-related or engineering fields. The goals for this 10-week internship during summer 2006 were to educate the students in the fields of seismology, robotics, and virtual prototyping and simulation. Incrementally designing a robot platform for future enhancement and evaluation was central to this research, and involved simulation of several robots working together to change seismic grid shape and spacing. This process gave these undergraduate students experience and knowledge in an actual research project for a real-world application. The two undergraduate students gained valuable research experience and advanced their knowledge of seismic imaging, robotics, sensors, and simulation. They learned that seismic sensors can be used in an array to gather 2D and 3D images of the subsurface. They also learned that robotics can support dangerous or difficult human activities, such as those in a harsh polar environment, by increasing automation, robustness, and precision. Simulating robot designs also gave them experience in programming behaviors for mobile robots. Thus far, one academic paper has resulted from their research.
This paper received third place at the 2006 National Technical Association's (NTA) National Conference in Chicago. CReSIS, in conjunction with ECSU, provided these minority students with a well-rounded educational experience in a real-world research project. Their contributions will be used for future projects.
Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot.
Greer, Joseph D; Morimoto, Tania K; Okamura, Allison M; Hawkes, Elliot W
2017-01-01
We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm for predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds.
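The constant-curvature bending mentioned in the abstract has a standard kinematic form: a segment of given arc length and curvature traces a circular arc whose tip position follows directly from the bend radius. The sketch below illustrates the planar version of that model under assumed parameters; it is not the paper's exact formulation, which covers the full three-sPAM spatial case.

```python
import math

def constant_curvature_tip(length, kappa):
    """Planar constant-curvature arc: tip (x, y) of a segment with
    arc length `length` (m) and curvature `kappa` (1/m).
    A near-zero curvature is treated as a straight segment."""
    if abs(kappa) < 1e-9:
        return (0.0, length)
    theta = kappa * length   # total bend angle of the arc
    r = 1.0 / kappa          # bend radius
    x = r * (1.0 - math.cos(theta))
    y = r * math.sin(theta)
    return (x, y)

# A 0.42 m segment (the robot length reported above) bent to a quarter circle:
kappa = (math.pi / 2) / 0.42
print(constant_curvature_tip(0.42, kappa))
```

For a quarter-circle bend the tip lands at (r, r) with r = length / (π/2), which is a quick sanity check on the model.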
An Ultralightweight and Living Legged Robot.
Vo Doan, Tat Thang; Tan, Melvin Y W; Bui, Xuan Hien; Sato, Hirotaka
2018-02-01
In this study, we describe the lightest living legged robot to date, which makes it a strong candidate for search and rescue missions. The robot is a living beetle with a wireless electronic backpack stimulator mounted on its thorax. Inheriting from the living insect, the robot employs a compliant body made of soft actuators, rigid exoskeletons, and flexure hinges. This structure allows the robot to easily adapt to complex terrain thanks to its soft interfaces and the insect's own self-balance and self-adaptation, without any complex controller. Antenna stimulation enables the robot to perform not only left/right turning but also backward walking and even cessation of walking. We were also able to grade the turning and backward walking speeds by changing the stimulation frequency. The power required to drive the robot is low, as the power consumption of the antenna stimulation is on the order of hundreds of microwatts. In contrast to traditional legged robots, this robot is low cost, easy to construct, simple to control, and has ultralow power consumption.
Real time AI expert system for robotic applications
NASA Technical Reports Server (NTRS)
Follin, John F.
1987-01-01
A computer controlled multi-robot process cell was developed to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. Also discussed are the mechanisms that the expert system uses to solve problems (goals), the different rule databases, and the methods for adapting this control system to any device that can be controlled or programmed through a high-level computer interface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Sternberg, Alex
The contact control code is a generalized force control scheme meant to interface with a robotic arm being controlled using the Robot Operating System (ROS). The code allows the user to specify a control scheme for each control dimension in a way that many different control task controllers could be built from the same generalized controller. The input to the code includes maximum velocity, maximum force, maximum displacement, and a control law assigned to each direction and the output is a 6 degree of freedom velocity command that is sent to the robot controller.
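The per-dimension scheme described above, where each Cartesian direction gets its own control law with velocity, force, and displacement limits and the output is a 6-degree-of-freedom velocity command, can be sketched as follows. All class and parameter names here (`AxisLaw`, `ContactController`, a simple proportional force-to-velocity law) are illustrative assumptions, not the actual ROS package's API.

```python
# Sketch of a generalized per-axis force-to-velocity control scheme.
# Each axis clips the measured force, applies a proportional law, and
# saturates the resulting velocity; six axes yield one 6-DOF command.

class AxisLaw:
    """Proportional admittance law for one Cartesian dimension (assumed form)."""
    def __init__(self, gain, max_velocity, max_force):
        self.gain = gain                  # (m/s) commanded per N measured
        self.max_velocity = max_velocity  # saturation on the velocity output
        self.max_force = max_force        # forces beyond this are clipped

    def command(self, measured_force):
        f = max(-self.max_force, min(self.max_force, measured_force))
        v = self.gain * f
        return max(-self.max_velocity, min(self.max_velocity, v))

class ContactController:
    """Combines six axis laws into a single 6-DOF velocity command."""
    def __init__(self, laws):
        assert len(laws) == 6  # x, y, z, roll, pitch, yaw
        self.laws = laws

    def update(self, wrench):
        # wrench: six measured force/torque values, one per axis
        return [law.command(f) for law, f in zip(self.laws, wrench)]

laws = [AxisLaw(gain=0.01, max_velocity=0.05, max_force=20.0) for _ in range(6)]
ctrl = ContactController(laws)
print(ctrl.update([10.0, -30.0, 0.0, 1.0, 0.0, 100.0]))
```

In a ROS setting the returned list would typically be packed into a `geometry_msgs/Twist` message for the robot controller; the control law assigned to each direction could equally be position- or displacement-based, as the description above allows.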
An operator interface design for a telerobotic inspection system
NASA Technical Reports Server (NTRS)
Kim, Won S.; Tso, Kam S.; Hayati, Samad
1993-01-01
The operator interface has recently emerged as an important element for efficient and safe interactions between human operators and telerobotic systems. Advances in graphical user interface and graphics technologies enable us to produce very efficient operator interface designs. This paper describes an efficient graphical operator interface design newly developed for remote surface inspection at NASA-JPL. The interface is designed so that remote surface inspection can be performed by a single operator with integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.
ERIC Educational Resources Information Center
Ensign, Todd I.
2017-01-01
Educational robotics (ER) combines accessible and age-appropriate building materials, programmable interfaces, and computer coding to teach science and mathematics using the engineering design process. ER has been shown to increase K-12 students' understanding of STEM concepts, and can develop students' self-confidence and interest in STEM. As…
Universal computing by DNA origami robots in a living animal
Levner, Daniel; Ittah, Shmulik; Abu-Horowitz, Almogit; Bachelet, Ido
2014-01-01
Biological systems are collections of discrete molecular objects that move around and collide with each other. Cells carry out elaborate processes by precisely controlling these collisions, but developing artificial machines that can interface with and control such interactions remains a significant challenge. DNA is a natural substrate for computing and has been used to implement a diverse set of mathematical problems [1-3], logic circuits [4-6] and robotics [7-9]. The molecule also naturally interfaces with living systems, and different forms of DNA-based biocomputing have previously been demonstrated [10-13]. Here we show that DNA origami [14-16] can be used to fabricate nanoscale robots that are capable of dynamically interacting with each other [17,18] in a living animal. The interactions generate logical outputs, which are relayed to switch molecular payloads on or off. As a proof-of-principle, we use the system to create architectures that emulate various logic gates (AND, OR, XOR, NAND, NOT, CNOT, and a half adder). Following an ex vivo prototyping phase, we successfully employed the DNA origami robots in living cockroaches (Blaberus discoidalis) to control a molecule that targets the cells of the animal. PMID:24705510
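The gate set the abstract lists can be emulated in a few lines of code; this sketch only shows the Boolean behavior the origami robots reproduce, not anything about their molecular mechanism. The half adder, in particular, is just an XOR (sum bit) paired with an AND (carry bit).

```python
# Boolean emulation of the logic gates named in the abstract.
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def XOR(a, b):  return a ^ b
def NAND(a, b): return 1 - (a & b)
def NOT(a):     return 1 - a
def CNOT(control, target):
    # Flips the target bit iff the control bit is 1; control passes through.
    return (control, target ^ control)

def half_adder(a, b):
    """Sum and carry of two one-bit inputs, built from XOR and AND."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```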
Interfacing insect brain for space applications.
Di Pino, Giovanni; Seidl, Tobias; Benvenuto, Antonella; Sergi, Fabrizio; Campolo, Domenico; Accoto, Dino; Maria Rossini, Paolo; Guglielmelli, Eugenio
2009-01-01
Insects exhibit remarkable navigation capabilities that current control architectures are still far from successfully mimicking and reproducing. In this chapter, we present the results of a study on conceptualizing insect/machine hybrid controllers for improving the autonomy of exploratory vehicles. First, the different principally possible levels of interfacing between insect and machine are examined, followed by a review of current approaches towards hybridity and enabling technologies. Based on the insights of this activity, we propose a double hybrid control architecture which hinges around the concept of "insect-in-a-cockpit." It integrates both biological/artificial (insect/robot) modules and deliberative/reactive behavior. The basic assumption is that "low-level" tasks are managed by the robot, while the "insect intelligence" is exploited whenever high-level problem solving and decision making is required. Both neural and natural interfacing have been considered to achieve robustness and redundancy of exchanged information.
FOCU:S--future operator control unit: soldier
NASA Astrophysics Data System (ADS)
O'Brien, Barry J.; Karan, Cem; Young, Stuart H.
2009-05-01
The U.S. Army Research Laboratory's (ARL) Computational and Information Sciences Directorate (CISD) has long been involved in autonomous asset control, specifically as it relates to small robots. Over the past year, CISD has been making strides in the implementation of three areas of small robot autonomy, namely platform autonomy, Soldier-robot interface, and tactical behaviors. It is CISD's belief that these three areas must be considered as a whole in order to provide Soldiers with useful capabilities. In addressing the Soldier-robot interface aspect, CISD has begun development on a unique dismounted controller called the Future Operator Control Unit: Soldier (FOCU:S) that is based on an Apple iPod Touch. The iPod Touch's small form factor, unique touch-screen input device, and the presence of general purpose computing applications such as a web browser combine to give this device the potential to be a disruptive technology. Setting CISD's implementation apart from other similar iPod or iPhone-based devices is the ARL software that allows multiple robotic platforms to be controlled from a single OCU. The FOCU:S uses the same Agile Computing Infrastructure (ACI) that all other assets in the ARL robotic control system use, enabling automated asset discovery on any type of network. Further, a custom ad hoc routing implementation allows the FOCU:S to communicate with the ARL ad hoc communications system and enables it to extend the range of the network. This paper will briefly describe the current robotic control architecture employed by ARL and provide short descriptions of existing capabilities. Further, the paper will discuss FOCU:S specific software developed for the iPod Touch, including unique capabilities enabled by the device's unique hardware.
NASA Astrophysics Data System (ADS)
Nyein, Aung Kyaw; Thu, Theint Theint
2008-10-01
In this paper, an articulated industrial robot is discussed. The robot is mainly intended for pick-and-place operation: it senses an object at a specified place and moves it to a desired location. A peripheral interface controller (PIC16F84A) is used as the main controller of the robot. An infrared LED and IR receiver unit is used for object detection, and 4-bit bidirectional universal shift registers (74LS194) together with high-current, high-voltage Darlington transistor arrays (ULN2003) drive the arm motors. The amount of rotation for each arm is regulated by limit switches. The operation of the robot is very simple, but it has the ability to recover its position after a power failure: it can continue its work from the last position before power was lost, without needing to return to the home position.
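The power-failure recovery described above amounts to checkpointing: persist the last commanded position in nonvolatile storage after every motion, and read it back on boot instead of homing. The sketch below illustrates the idea; on the actual PIC the checkpoint would live in on-chip EEPROM, and the file name and step count here are purely illustrative stand-ins.

```python
# Checkpoint-and-resume sketch of the power-failure recovery scheme.
# A JSON file stands in for the PIC's nonvolatile (EEPROM) storage.
import json
import os

STATE_FILE = "arm_state.json"  # hypothetical storage location

def save_position(step):
    """Checkpoint the last completed motion step."""
    with open(STATE_FILE, "w") as f:
        json.dump({"step": step}, f)

def load_position():
    """On boot, resume from the checkpoint; home (step 0) on first run."""
    if not os.path.exists(STATE_FILE):
        return 0
    with open(STATE_FILE) as f:
        return json.load(f)["step"]

def run_cycle(total_steps=8):
    step = load_position()      # resume where the last run left off
    while step < total_steps:
        # ... drive the shift registers / arm motors for this step ...
        step += 1
        save_position(step)     # checkpoint after each completed motion
    return step

print(run_cycle())
```

If power fails mid-cycle, the next call to `run_cycle` picks up at the last checkpointed step rather than restarting from the home position, which is the behavior the abstract highlights.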
SMARBot: a modular miniature mobile robot platform
NASA Astrophysics Data System (ADS)
Meng, Yan; Johnson, Kerry; Simms, Brian; Conforth, Matthew
2008-04-01
Miniature robots have many advantages over their larger counterparts, such as low cost, low power, and the ease of building a large-scale team for complex tasks. Heterogeneous teams of miniature robots could provide powerful situational awareness capability through their different locomotion capabilities and sensor information. However, it would be expensive and time consuming to develop a specific embedded system for each type of robot. In this paper, we propose a generic modular embedded system architecture called SMARbot (Stevens Modular Autonomous Robot), which consists of a set of hardware and software modules that can be configured to construct various types of robot systems. These modules include a high performance microprocessor, a reconfigurable hardware component, wireless communication, and diverse sensor and actuator interfaces. The design of all modules in the electrical subsystem, the selection criteria for module components, and the real-time operating system are described. Some proof-of-concept experimental results are also presented.
Interactive robot control system and method of use
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E. (Inventor); Sanders, Adam M. (Inventor); Platt, Robert (Inventor); Reiland, Matthew J. (Inventor); Linn, Douglas Martin (Inventor)
2012-01-01
A robotic system includes a robot having joints, actuators, and sensors, and a distributed controller. The controller includes a command-level controller, embedded joint-level controllers each controlling a respective joint, and a joint coordination-level controller coordinating motion of the joints. A central data library (CDL) centralizes all control and feedback data, and a user interface displays the status of each joint, actuator, and sensor using the CDL. A parameterized action sequence has a hierarchy of linked events and allows the control data to be modified in real time. A method of controlling the robot includes transmitting control data through the various levels of the controller, routing all control and feedback data to the CDL, and displaying the status and operation of the robot using the CDL. Parameterized action sequences are generated for execution by the robot, and a hierarchy of linked events is created within each sequence.
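The two central ideas in the summary above, a central data library that funnels all control and feedback data through one place, and a parameterized action sequence structured as a hierarchy of linked events, can be sketched as plain data structures. The class names and fields here are illustrative assumptions drawn from the summary, not the patented implementation.

```python
# Sketch: a central data library (CDL) plus a parameterized action
# sequence modeled as a hierarchy of linked events.

class CDL:
    """Central data library: single home for control and feedback data."""
    def __init__(self):
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

class Event:
    """One node in the action hierarchy; children run after their parent."""
    def __init__(self, name, params=None, children=()):
        self.name = name
        self.params = dict(params or {})   # parameters, modifiable at runtime
        self.children = list(children)

    def execute(self, cdl):
        cdl.write(self.name, self.params)  # route status through the CDL
        for child in self.children:
            child.execute(cdl)

# A tiny "pick" sequence: reach, then grasp, with per-event parameters.
grasp = Event("grasp", {"force": 5.0})
reach = Event("reach", {"speed": 0.2}, children=[grasp])
task = Event("pick", children=[reach])

cdl = CDL()
task.execute(cdl)
print(sorted(cdl.data))  # every event's status lands in the CDL
```

Because every event reports through the CDL, a user interface only needs to read one store to display the status of the whole sequence, which is the display mechanism the summary describes.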
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from the South Dakota School of Mines & Engineering work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. Second from right is Kennedy Space Center Director Bob Cabana. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from Mississippi State University work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. At far right is Kennedy Space Center Director Bob Cabana. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-17
Team members from the University of Arkansas make adjustments to their robot miner for its turn in the mining arena on the fourth day of NASA's 9th Robotic Mining Competition, May 17, at NASA's Kennedy Space Center Visitor Complex in Florida. They are in the RobotPits inside the Educator Resource Center. More than 40 student teams from colleges and universities around the U.S. are using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
A Multidisciplinary PBL Robot Control Project in Automation and Electronic Engineering
ERIC Educational Resources Information Center
Hassan, Houcine; Domínguez, Carlos; Martínez, Juan-Miguel; Perles, Angel; Capella, Juan-Vicente; Albaladejo, José
2015-01-01
This paper presents a multidisciplinary problem-based learning (PBL) project consisting of the development of a robot arm prototype and the implementation of its control system. The project is carried out as part of Industrial Informatics (II), a compulsory third-year course in the Automation and Electronic Engineering (AEE) degree program at the…
Motion Imagery and Robotics Application Project (MIRA)
NASA Technical Reports Server (NTRS)
Grubbs, Rodney P.
2010-01-01
This viewgraph presentation describes the Motion Imagery and Robotics Application (MIRA) Project. A detailed description of the MIRA camera service software architecture, encoder features, and on-board communications are presented. A description of a candidate camera under development is also shown.
Stereo optical guidance system for control of industrial robots
NASA Technical Reports Server (NTRS)
Powell, Bradley W. (Inventor); Rodgers, Mike H. (Inventor)
1992-01-01
A device for the generation of basic electrical signals which are supplied to a computerized processing complex for the operation of industrial robots. The system includes a stereo mirror arrangement for the projection of views from opposite sides of a visible indicia formed on a workpiece. The views are projected onto independent halves of the retina of a single camera. The camera retina is of the CCD (charge-coupled-device) type and is therefore capable of providing signals in response to the image projected thereupon. These signals are then processed for control of industrial robots or similar devices.
Herrero, Héctor; Outón, Jose Luis; Puerto, Mildred; Sallé, Damien; López de Ipiña, Karmele
2017-01-01
This paper presents a state machine-based architecture that enhances the flexibility and reusability of industrial robots, specifically dual-arm multisensor robots. In addition to allowing full control of execution, the proposed architecture eases the programming of new applications by increasing the reusability of the developed modules. Through an easy-to-use graphical user interface, operators are able to create, modify, reuse and maintain industrial processes, increasing the flexibility of the cell. Moreover, the proposed approach is applied in a real use case in order to demonstrate its capabilities and feasibility in industrial environments. A comparative analysis is presented evaluating the approach against traditional robot programming techniques. PMID:28561750
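A state-machine architecture of the kind this abstract describes can be sketched minimally: named states, event-driven transitions, and reusable actions attached to each transition. The states, events, and actions below are invented examples, not taken from the paper.

```python
class StateMachine:
    """Minimal state-machine executor in the spirit of the architecture
    above; a real cell would attach reusable robot modules as actions."""

    def __init__(self, initial):
        self.state = initial
        self.transitions = {}  # (state, event) -> (next_state, action)

    def add(self, state, event, next_state, action=None):
        self.transitions[(state, event)] = (next_state, action)

    def fire(self, event):
        next_state, action = self.transitions[(self.state, event)]
        if action:
            action()            # reusable module executed on transition
        self.state = next_state


log = []
sm = StateMachine("idle")
sm.add("idle", "start", "picking", lambda: log.append("approach part"))
sm.add("picking", "grasped", "placing", lambda: log.append("move to fixture"))
sm.add("placing", "released", "idle", lambda: log.append("retract"))

for ev in ["start", "grasped", "released"]:
    sm.fire(ev)
print(sm.state, log)  # machine returns to "idle" after one cycle
```

Because each transition's action is an independent callable, the same modules can be rewired into new processes from a GUI without reprogramming the robot, which is the reusability claim the architecture makes.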
Robotics virtual rail system and method
Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID
2011-07-05
A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
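The waypoint-file generation described above can be sketched as interpolating positions along each segment of the desired path and attaching that segment's velocity to every waypoint, from start point to end point. The `(x, y, v)` record format and the spacing parameter are assumptions for illustration, not the patented file format.

```python
def make_waypoints(segments, spacing=0.5):
    """Generate waypoints along a virtual track: each segment is
    ((x0, y0), (x1, y1), velocity); positions are interpolated at
    roughly `spacing` intervals and carry the segment's velocity."""
    waypoints = []
    for (x0, y0), (x1, y1), v in segments:
        length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        steps = max(1, int(length / spacing))
        for i in range(steps + 1):
            t = i / steps
            waypoints.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
    return waypoints


# Two segments: a fast straightaway, then a slow approach leg.
path = [((0.0, 0.0), (2.0, 0.0), 1.0), ((2.0, 0.0), (2.0, 1.0), 0.2)]
wps = make_waypoints(path)
print(wps[0], wps[-1])  # start of the track ... end at approach speed
```

A file written from this list gives the robot everything it needs to traverse the virtual rail: positions in order and the commanded velocity at each one.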
A hybrid BCI for enhanced control of a telepresence robot.
Carlson, Tom; Tonin, Luca; Perdikis, Serafeim; Leeb, Robert; del R Millán, José
2013-01-01
Motor-disabled end users have successfully driven a telepresence robot in a complex environment using a Brain-Computer Interface (BCI). However, to facilitate the interaction aspect that underpins the notion of telepresence, users must be able to voluntarily and reliably stop the robot at any moment, not just drive from point to point. In this work, we propose to exploit the user's residual muscular activity to provide a fast and reliable control channel, which can start/stop the telepresence robot at any moment. Our preliminary results show that not only does this hybrid approach increase the accuracy, but it also helps to reduce the workload and was the preferred control paradigm of all the participants.
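The hybrid fusion described above can be sketched as a priority rule: the fast, reliable residual-muscle channel overrides the slower BCI drive commands, so the user can stop at any moment. The class labels and confidence threshold below are illustrative assumptions, not values from the study.

```python
def hybrid_command(bci_probs, emg_active, threshold=0.7):
    """Fuse BCI classifier probabilities with a residual-muscle switch:
    the muscular channel always wins, giving a reliable stop; the BCI
    only steers when its output is confident enough."""
    if emg_active:
        return "stop"          # fast, reliable muscular channel overrides
    best = max(bci_probs, key=bci_probs.get)
    if bci_probs[best] < threshold:
        return "keep_course"   # low BCI confidence: do not change heading
    return best


print(hybrid_command({"left": 0.8, "right": 0.2}, emg_active=False))  # left
print(hybrid_command({"left": 0.8, "right": 0.2}, emg_active=True))   # stop
```

Keeping the stop decision out of the noisy BCI channel is what makes the hybrid design both more accurate and less demanding for the user.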
ERIC Educational Resources Information Center
Kitts, Christopher; Quinn, Neil
2004-01-01
Santa Clara University's Robotic Systems Laboratory conducts an aggressive robotic development and operations program in which interdisciplinary teams of undergraduate students build and deploy a wide range of robotic systems, ranging from underwater vehicles to spacecraft. These year-long projects expose students to the breadth of and…
Robotics in Industrial Arts. Final Narrative Report for the Exemplary Project.
ERIC Educational Resources Information Center
Ascension Parish School Board, Donaldsonville, LA.
To introduce students to the world of robotics and industrial automation, robotics was introduced to students enrolled in electronics classes in the industrial arts program at St. Amant High School (Louisiana). Three robots, three host microcomputers, and necessary software were purchased. The electronics instructor installed the three robots…
Hand-in-hand advances in biomedical engineering and sensorimotor restoration.
Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio
2015-05-15
Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. Recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of the so-called brain-machine interfaces. By combining rehabilitation robotics and experimental neuroscience, the introduction of brain-machine interfaces into clinical protocols has provided the technological means to bypass the neural disconnection and restore sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures provide symptom relief, but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to the translation of the signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.
On-Line Point Positioning with Single Frame Camera Data
1992-03-15
tion algorithms and methods will be found in robotics and industrial quality control. 1. Project data: The project has been defined as "On-line point...development and use of the OLT algorithms and methods for applications in robotics, industrial quality control and autonomous vehicle navigation...Of particular interest in robotics and autonomous vehicle navigation is, for example, the task of determining the position and orientation of a mobile
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Method and apparatus for automatic control of a humanoid robot
NASA Technical Reports Server (NTRS)
Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Sanders, Adam M (Inventor); Reiland, Matthew J (Inventor)
2013-01-01
A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows the function-based GUI to simplify implementation of a myriad of operating modes.
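One common form of impedance-based control, which this patent abstract invokes, commands a force from position and velocity errors, F = K(x_d − x) + D(ẋ_d − ẋ). The sketch below illustrates that law along one axis; the gain values are made up for illustration and are not from the patent.

```python
def impedance_force(x_des, x, v_des, v, k=200.0, d=30.0):
    """One-axis impedance law: commanded force from position error
    (stiffness k, N/m) and velocity error (damping d, N*s/m).
    Gains here are illustrative, not tuned values."""
    return k * (x_des - x) + d * (v_des - v)


# End effector 1 cm short of the target, approaching at 0.05 m/s:
# stiffness pulls it forward while damping resists the closing speed.
f = impedance_force(x_des=0.30, x=0.29, v_des=0.0, v=0.05)
print(round(f, 2))  # small net restoring force
```

The same error-to-force mapping can be posed at the object, end-effector, or joint level, which is how a single framework covers all three control spaces the abstract lists.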
Cellular-level surgery using nano robots.
Song, Bo; Yang, Ruiguo; Xi, Ning; Patterson, Kevin Charles; Qu, Chengeng; Lai, King Wai Chiu
2012-12-01
The atomic force microscope (AFM) is a popular instrument for studying the nano world. AFM is naturally suitable for imaging living samples and measuring mechanical properties. In this article, we propose a new concept of an AFM-based nano robot that can be applied for cellular-level surgery on living samples. The nano robot has multiple functions of imaging, manipulation, characterizing mechanical properties, and tracking. In addition, the technique of tip functionalization allows the nano robot the ability for precisely delivering a drug locally. Therefore, the nano robot can be used for conducting complicated nano surgery on living samples, such as cells and bacteria. Moreover, to provide a user-friendly interface, the software in this nano robot provides a "videolized" visual feedback for monitoring the dynamic changes on the sample surface. Both the operation of nano surgery and observation of the surgery results can be simultaneously achieved. This nano robot can be easily integrated with extra modules that have the potential applications of characterizing other properties of samples such as local conductance and capacitance.
NASA Astrophysics Data System (ADS)
Uehara, Hideyuki; Higa, Hiroki; Soken, Takashi; Namihira, Yoshinori
This paper describes a mobile assistive feeding robotic arm for people with physical disabilities of the extremities. The system is composed of a robotic arm, a microcontroller, and its interface. The main unit of the robotic arm can be contained in a laptop computer's briefcase. It weighs 5 kg, including two 12-V lead acid rechargeable batteries, and can also be mounted on a wheelchair. To verify the performance of the mobile robotic arm system, a tea-drinking task was experimentally performed by two able-bodied subjects as well as three persons suffering from muscular dystrophy. The experimental results made clear that they could smoothly carry out the drinking task and that the robotic arm could firmly grasp a commercially available 500-ml plastic bottle. An eating task was also performed by the two able-bodied subjects; the results showed that they could eat porridge using a spoon without any difficulty.
Iosa, Marco; Morone, Giovanni; Cherubini, Andrea; Paolucci, Stefano
Most studies and reviews on robots for neurorehabilitation focus on their effectiveness. These studies often report inconsistent results. This and many other reasons limit the credit given to these robots by therapists and patients. Further, neurorehabilitation is often still based on therapists' expertise, with competition among different schools of thought, generating substantial uncertainty about what exactly a neurorehabilitation robot should do. Little attention has been given to ethics. This review adopts a new approach, inspired by Asimov's three laws of robotics and based on the most recent studies in neurorobotics, for proposing new guidelines for designing and using robots for neurorehabilitation. We propose three laws of neurorobotics based on the ethical need for safe and effective robots, the redefinition of their role as therapist helpers, and the need for clear and transparent human-machine interfaces. These laws may allow engineers and clinicians to work closely together on a new generation of neurorobots.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neely, Jason C.; Sturgis, Beverly Rainwater; Byrne, Raymond Harry
This report contains the results of a research effort on advanced robot locomotion. The majority of this work focuses on walking robots. Walking robot applications include delivery of special payloads to unique locations that require human locomotion to exo-skeleton human assistance applications. A walking robot could step over obstacles and move through narrow openings that a wheeled or tracked vehicle could not overcome. It could pick up and manipulate objects in ways that a standard robot gripper could not. Most importantly, a walking robot would be able to rapidly perform these tasks through an intuitive user interface that mimics natural human motion. The largest obstacle arises in emulating stability and balance control naturally present in humans but needed for bipedal locomotion in a robot. A tracked robot is bulky and limited, but a wide wheel base assures passive stability. Human bipedal motion is so common that it is taken for granted, but bipedal motion requires active balance and stability control for which the analysis is non-trivial. This report contains an extensive literature study on the state-of-the-art of legged robotics, and it additionally provides the analysis, simulation, and hardware verification of two variants of a proto-type leg design.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they serve as a source of visual input to request the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video associated with a specific set of head movements are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who do not have the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object-finding task in a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design to engage the human in the loop.
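The shared-autonomy loop this abstract describes can be sketched as: search autonomously; when the time window runs out, solicit a user gesture to re-orient, then continue autonomously. Everything below (the step budget standing in for the time window, the `FakeSearch` stand-in for the robot's search behavior) is an illustrative assumption.

```python
class FakeSearch:
    """Test double for the robot's object-finding behavior: the object
    is only found after the user has oriented the robot once."""
    def __init__(self, found_after_orient):
        self.steps = 0
        self.oriented = False
        self.found_after_orient = found_after_orient
    def found(self):
        return self.oriented and self.steps >= self.found_after_orient
    def step(self):
        self.steps += 1
    def orient(self, direction):
        self.oriented = True


def find_object(search, user_gesture, step_budget=5):
    """Shared-autonomy loop: autonomous search until the budget (a stand-in
    for the pre-specified time window) expires, then solicit a gesture from
    the user to re-orient, and resume autonomous navigation."""
    steps_left = step_budget
    while not search.found():
        if steps_left == 0:
            search.orient(user_gesture())  # human guidance back in the loop
            steps_left = step_budget
        search.step()
        steps_left -= 1
    return search.steps


taken = find_object(FakeSearch(found_after_orient=7), lambda: "left")
print(taken)
```

The key design point is that the human is consulted only on failure of the autonomous behavior, which keeps the interaction burden low for users controlling the robot with head gestures alone.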
Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.
Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong
2014-09-01
Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Modeling Leadership Styles in Human-Robot Team Dynamics
NASA Technical Reports Server (NTRS)
Cruz, Gerardo E.
2005-01-01
The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to what situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in teleoperation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.
2017 Robotic Mining Competition
2017-05-23
Team Raptor members from the University of North Dakota College of Engineering and Mines check their robot, named "Marsbot," in the RoboPit at NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
2017 Robotic Mining Competition
2017-05-24
Team members from West Virginia University prepare their mining robot for a test run in a giant sandbox before their scheduled mining run in the arena during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
2017 Robotic Mining Competition
2017-05-24
Twin mining robots from the University of Iowa dig in a supersized sandbox filled with BP-1, or simulated Martian soil, during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Robotic Mining Competition - Activities
2018-05-16
Team members from the University of Colorado at Boulder pause with their robot miner outside of the mining arena on the third day of NASA's 9th Robotic Mining Competition, May 16, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, team members from Temple University prepare their robot miner for its turn in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, team members prepare their robot miner for its turn in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, judges watch as a robot miner digs in the dirt in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, team members from the University of Portland prepare their robot miner for its turn in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
Members of a college team watch on the monitor as their robot miner digs in the mining arena on the third day of NASA's 9th Robotic Mining Competition, May 16, at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, a university team cleans their robot miner after its turn in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
On the third day of NASA's 9th Robotic Mining Competition, May 16, team members from the University of Portland pause with their robot miner before its turn in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Activities
2018-05-16
Team members from New York University prepare their robot miner for its turn in the mining arena on the third day of NASA's 9th Robotic Mining Competition, May 16, at NASA's Kennedy Space Center Visitor Complex in Florida.
Personnel occupied woven envelope robot power
NASA Technical Reports Server (NTRS)
Wessling, F. C.
1988-01-01
The Personnel Occupied Woven Envelope Robot (POWER) concept has evolved over the course of the study. The goal of the project was the development of methods and algorithms for solid modeling of the flexible robot arm.
Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resseguie, David R
There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large-scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large-scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large-scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks; Sensorpedia (an ad hoc Internet-scale sensor network); our framework for integrating robots with Sensorpedia; two applications which illustrate our framework's ability to support general web-based robotic control; and experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
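The web-enabling idea in this record (sensor observations exposed as web resources, actuators driven through a web interface) can be sketched as a small dispatch layer. The robot class, endpoint paths, and JSON shapes below are illustrative assumptions, not Sensorpedia's or Robopedia's actual API.

```python
# Minimal sketch of a web-enabled robot: a uniform (method, path) dispatch
# table maps web requests onto one sensor and one actuator of a stand-in
# surveillance robot. All names and routes are hypothetical.
import json

class SurveillanceRobot:
    """Stand-in robot with one sensor (camera) and one actuator (base)."""
    def __init__(self):
        self.heading_deg = 0.0

    def read_camera(self):
        return {"sensor": "camera", "observation": "frame-0001"}

    def turn(self, degrees):
        self.heading_deg = (self.heading_deg + degrees) % 360
        return {"actuator": "base", "heading_deg": self.heading_deg}

def handle_request(robot, method, path, body=None):
    """Dispatch a web request to a robot capability; return a JSON string."""
    routes = {
        ("GET", "/sensors/camera"): lambda: robot.read_camera(),
        ("POST", "/actuators/turn"): lambda: robot.turn(body["degrees"]),
    }
    handler = routes.get((method, path))
    if handler is None:
        return json.dumps({"error": "unknown endpoint"})
    return json.dumps(handler())

robot = SurveillanceRobot()
print(handle_request(robot, "GET", "/sensors/camera"))
print(handle_request(robot, "POST", "/actuators/turn", {"degrees": 90}))
```

Because every capability sits behind the same (method, path) interface, a web application needs no robotics knowledge to use it, which is the point the abstract makes.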
Optimized Algorithms for Prediction Within Robotic Tele-Operative Interfaces
NASA Technical Reports Server (NTRS)
Martin, Rodney A.; Wheeler, Kevin R.; Allan, Mark B.; SunSpiral, Vytas
2010-01-01
Robonaut, the humanoid robot developed at the Dexterous Robotics Laboratory at NASA Johnson Space Center, serves as a testbed for human-robot collaboration research and development efforts. One of the recent efforts investigates how adjustable autonomy can provide for a safe and more effective completion of manipulation-based tasks. A predictive algorithm developed in previous work was deployed as part of a software interface that can be used for long-distance tele-operation. In this work, Hidden Markov Models (HMMs) were trained on data recorded during tele-operation of basic tasks. In this paper we provide the details of this algorithm, how to improve upon the methods via optimization, and also present viable alternatives to the original algorithmic approach. We show that all of the algorithms presented can be optimized to meet the specifications of the metrics shown as being useful for measuring the performance of the predictive methods.
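The predictive scheme this abstract describes (HMMs trained on tele-operation data, used to anticipate which task the operator is performing) can be sketched with the scaled forward algorithm: score the observed command sequence under each task's HMM and predict the best-scoring task. The two toy models, the three discrete "commands", and the task names below are invented for illustration, not Robonaut's trained models.

```python
# Toy HMM-based task prediction: per-task models score a discrete command
# sequence via the scaled forward algorithm; the highest-likelihood task wins.
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Scaled forward algorithm: log P(obs | model) for a discrete HMM.

    pi: (S,) initial state probabilities
    A:  (S, S) transitions, A[i, j] = P(state j | state i)
    B:  (S, O) emissions,   B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()                    # rescale each step to avoid underflow
    loglik = np.log(c)
    alpha = alpha / c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Two invented task models over 3 discrete tele-operation "commands" (0, 1, 2).
models = {
    "reach": (np.array([1.0, 0.0]),
              np.array([[0.7, 0.3], [0.0, 1.0]]),
              np.array([[0.85, 0.10, 0.05], [0.10, 0.80, 0.10]])),
    "grasp": (np.array([0.5, 0.5]),
              np.array([[0.5, 0.5], [0.5, 0.5]]),
              np.array([[0.05, 0.05, 0.90], [0.05, 0.05, 0.90]])),
}

def predict_task(obs):
    """Predict the task whose HMM best explains the observed commands."""
    return max(models, key=lambda m: forward_loglik(*models[m], obs))

print(predict_task([0, 0, 1, 1]))   # a reach-like command sequence
```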
NASA Astrophysics Data System (ADS)
Zhou, Ying; Wang, Youhua; Liu, Runfeng; Xiao, Lin; Zhang, Qin; Huang, YongAn
2018-01-01
Epidermal electronics (e-skin), emerging in recent years, offer the opportunity to noninvasively and wearably extract biosignals from human bodies. The conventional fabrication of e-skin, based on standard microelectronic processes and a variety of transfer printing methods, nevertheless constrains the size of the devices, posing a serious challenge to collecting signals via the skin, the largest organ of the human body. Here we propose a multichannel noninvasive human-machine interface (HMI) using stretchable surface electromyography (sEMG) patches to realize a robot hand mimicking human gestures. Time-efficient processes are first developed to manufacture large-scale, µm-thick stretchable devices. With micron thickness, the stretchable sEMG patches show excellent conformability with human skin and consequently electrical performance comparable to conventional gel electrodes. Combined with their large-scale size, the multichannel noninvasive HMI based on stretchable µm-thick sEMG patches successfully manipulates the robot hand with eight different gestures, with precision as high as a conventional gel electrode array.
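The signal path this abstract describes (multichannel sEMG in, gesture command out) can be sketched minimally as per-channel RMS features plus a nearest-centroid classifier. The channel count, calibration centroids, window data, and gesture names below are made up for illustration; a real system would use more channels and a trained classifier.

```python
# Toy sEMG gesture pipeline: per-channel RMS features -> nearest centroid.
import math

def rms_features(window):
    """window: list of per-channel sample lists -> one RMS value per channel."""
    return [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in window]

def classify(features, centroids):
    """Return the gesture whose calibration centroid is nearest in feature space."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda g: dist(features, centroids[g]))

# Invented calibration: "fist" activates channel 0, "open" activates channel 1.
centroids = {"fist": [1.0, 0.1], "open": [0.1, 1.0]}
window = [[0.9, -1.1, 1.0], [0.1, -0.1, 0.1]]   # channel 0 strongly active
print(classify(rms_features(window), centroids))  # expected: fist
```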
Building an environment model using depth information
NASA Technical Reports Server (NTRS)
Roth-Tabak, Y.; Jain, Ramesh
1989-01-01
Modeling the environment is one of the most crucial issues for the research and development of autonomous robots and tele-perception. Though the physical robot operates (navigates and performs various tasks) in the real world, any type of reasoning, such as situation assessment, planning, or reasoning about action, is performed based on information in its internal world. Hence, the robot's intentional actions are inherently constrained by the models it has. These models may serve as interfaces between sensing modules and reasoning modules, or, in the case of telerobots, as interfaces between the human operator and the distant robot. A robot operating in a known, restricted environment may have a priori knowledge of its whole possible work domain, which will be assimilated in its World Model. As the information in the World Model is relatively fixed, an Environment Model must be introduced to cope with changes in the environment and to allow exploring entirely new domains. Introduced here is an algorithm that uses dense range data collected at various positions in the environment to refine, update, or generate a 3-D volumetric model of an environment. The model, which is intended for autonomous robot navigation and tele-perception, consists of cubic voxels with the possible attributes Void, Full, and Unknown. Experimental results from simulations of range data in synthetic environments are given. The quality of the results shows great promise for dealing with noisy input data. The performance measures for the algorithm are defined, and quantitative results for noisy data and positional uncertainty are presented.
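The voxel Environment Model in this abstract (cubic voxels carrying the attributes Void, Full, and Unknown, updated from range readings) can be sketched as follows. The grid size and the axis-aligned ray march are deliberate simplifications of a real range-data integration, which would march arbitrary rays and handle sensor noise.

```python
# Toy voxel Environment Model: all cells start Unknown; a range reading marks
# cells along the ray Void (free space) and the hit cell Full (occupied).
UNKNOWN, VOID, FULL = 0, 1, 2

def make_grid(n):
    """n x n x n grid of Unknown cells, indexed grid[x][y][z]."""
    return [[[UNKNOWN] * n for _ in range(n)] for _ in range(n)]

def integrate_reading(grid, sensor, hit):
    """Update the grid for one range reading from `sensor` to `hit`.

    For brevity this sketch only marches rays along the x axis: cells from
    the sensor up to (not including) the hit become VOID; the hit cell FULL.
    """
    (sx, sy, sz), (hx, hy, hz) = sensor, hit
    if sy == hy and sz == hz:                 # x-axis-aligned ray
        step = 1 if hx > sx else -1
        for x in range(sx, hx, step):
            grid[x][sy][sz] = VOID
    grid[hx][hy][hz] = FULL

grid = make_grid(8)
integrate_reading(grid, (0, 3, 3), (5, 3, 3))
print(grid[2][3][3], grid[5][3][3], grid[6][3][3])  # 1 2 0 (VOID, FULL, UNKNOWN)
```

Cells the ray never touches keep the Unknown attribute, which is exactly what lets the model distinguish unexplored space from observed free space.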
Performance Evaluation Methods for Assistive Robotic Technology
NASA Astrophysics Data System (ADS)
Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.
Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.
Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot
Greer, Joseph D.; Morimoto, Tania K.; Okamura, Allison M.; Hawkes, Elliot W.
2017-01-01
We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm for predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds. PMID:29379672
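The approximately-constant-curvature bending noted in this abstract has a standard closed-form tip position, sketched below. The parameterization (arc length L, curvature kappa, bending-plane azimuth phi) is the usual continuum-robot convention, not necessarily the paper's exact notation.

```python
# Constant-curvature continuum kinematics: the backbone bends into a circular
# arc of curvature kappa in a plane at azimuth phi; the tip follows in closed
# form. The kappa -> 0 limit recovers a straight backbone.
import math

def tip_position(L, kappa, phi):
    """Tip of a constant-curvature arc: length L (m), curvature kappa (1/m),
    bending-plane azimuth phi (rad). Returns (x, y, z) in metres."""
    if abs(kappa) < 1e-9:                        # straight backbone limit
        return (0.0, 0.0, L)
    r = 1.0 / kappa                              # arc radius
    x_plane = r * (1.0 - math.cos(kappa * L))    # in-plane deflection
    z = r * math.sin(kappa * L)                  # height along the base axis
    return (x_plane * math.cos(phi), x_plane * math.sin(phi), z)

# A 42 cm backbone (the paper's robot length) bent into a quarter circle:
L = 0.42
kappa = (math.pi / 2) / L                        # so kappa * L = pi / 2
print(tip_position(L, kappa, 0.0))
```

For a quarter circle the x and z tip coordinates are equal (both equal the arc radius), a quick sanity check on the formula.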
Evaluation of Telerobotic Interface Components for Teaching Robot Operation
ERIC Educational Resources Information Center
Goldstain, Ofir H.; Ben-Gal, Irad; Bukchin, Yossi
2011-01-01
Remote learning has grown steadily over the last two decades. The development of the Internet, together with increases in PC capabilities and bandwidth capacity, has made remote learning through the Internet a convenient learning preference, leading to a variety of new interfaces and methods. In this work, we consider a remote…
Robotic Form-Finding and Construction Based on the Architectural Projection Logic
NASA Astrophysics Data System (ADS)
Zexin, Sun; Mei, Hongyuan
2017-06-01
In this article we analyze the relationship between architectural drawings and form-finding, arguing that architects should reuse and redefine traditional architectural drawings as a form-finding tool. We explain the projection systems and analyze how these systems have affected architectural design. We then use a robotic arm to conduct the experiment and establish a cylindrical projection form-finding system.
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Antrazi, Sami S.
1992-01-01
This report deals with testing of a pair of robot fingers designed for the Flight Telerobotic Servicer (FTS) to grasp a cylinder type of Orbital Replaceable Unit (ORU) interface. The report first describes the objectives of the study and then the testbed consisting of a Stewart Platform-based manipulator equipped with a passive compliant platform which also serves as a force/torque sensor. Kinematic analysis is then performed to provide a closed-form solution for the force inverse kinematics and an iterative solution for the force forward kinematics using the Newton-Raphson method. Mathematical expressions are then derived to compute force/torques applied to the FTS fingers during the mating/demating with the interface. The report then presents the three parts of the experimental study on the feasibility and characteristics of the fingers. The first part obtains data of forces applied by the fingers to the interface under various misalignments, the second part determines the maximum allowable capture angles for mating, and the third part processes and interprets the obtained force/torque data.
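The iterative force forward kinematics mentioned in this record relies on Newton-Raphson iteration, which can be sketched generically: repeatedly solve the linearized system and step toward the root. The 2-D system solved below is a toy stand-in, not the Stewart Platform equations.

```python
# Generic 2-D Newton-Raphson iteration: x <- x - J^{-1} f(x), using the
# closed-form inverse of the 2x2 Jacobian.
def newton_raphson(f, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve f(x, y) = (0, 0) by Newton-Raphson with an analytic Jacobian."""
    x, y = x0
    for _ in range(max_iter):
        fx, fy = f(x, y)
        if abs(fx) < tol and abs(fy) < tol:
            return x, y
        (a, b), (c, d) = jacobian(x, y)          # J = [[a, b], [c, d]]
        det = a * d - b * c
        # Newton step via the 2x2 inverse: J^{-1} = [[d, -b], [-c, a]] / det
        x -= (d * fx - b * fy) / det
        y -= (-c * fx + a * fy) / det
    return x, y

# Toy system: x^2 + y^2 = 4 and x = y, whose positive root is x = y = sqrt(2).
f = lambda x, y: (x * x + y * y - 4.0, x - y)
J = lambda x, y: ((2 * x, 2 * y), (1.0, -1.0))
print(newton_raphson(f, J, (1.0, 0.5)))
```

As in the report's force forward kinematics, the method needs only the residual function and its Jacobian, converging quadratically from a reasonable initial guess.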
ROMPS critical design review. Volume 1: Hardware
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1992-01-01
Topics concerning the Robot-Operated Material Processing in Space (ROMPS) Program are presented in viewgraph form and include the following: a systems overview; servocontrol and servomechanisms; testbed and simulation results; system V controller; robot module; furnace module; SCL experiment supervisor; SCL script sample processing control; SCL experiment supervisor fault handling; block diagrams; hitchhiker interfaces; battery systems; watchdog timers; mechanical/thermal systems; and fault conditions and recovery.
Integrated mobile robot control
NASA Technical Reports Server (NTRS)
Amidi, Omead; Thorpe, Charles
1991-01-01
This paper describes the structure, implementation, and operation of a real-time mobile robot controller which integrates capabilities such as position estimation, path specification and tracking, human interfaces, fast communication, and multiple client support. The benefits of such high-level capabilities in a low-level controller were shown by its implementation for the Navlab autonomous vehicle. In addition, performance results from positioning and tracking systems are reported and analyzed.
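One of the capabilities listed in this abstract, position estimation, can be sketched as dead reckoning from odometry increments: integrate the vehicle pose from distance and heading-change steps. The increment samples below are invented, and a real controller would fuse this estimate with other sensing.

```python
# Dead-reckoning position estimation: accumulate (distance, heading-change)
# odometry increments into an (x, y, theta) pose.
import math

def dead_reckon(pose, increments):
    """pose: (x, y, theta); increments: list of (distance, dtheta) steps."""
    x, y, theta = pose
    for d, dtheta in increments:
        theta += dtheta                 # apply the turn, then move forward
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y, theta

# Drive 1 m forward, turn 90 degrees left, drive 1 m again.
pose = dead_reckon((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
print(pose)   # roughly (1.0, 1.0, pi/2)
```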
NASA Technical Reports Server (NTRS)
Whittaker, William; Dowling, Kevin
1994-01-01
Carnegie Mellon University's Autonomous Planetary Exploration Program (APEX) is currently building the Daedalus robot, a system capable of performing extended autonomous planetary exploration missions. Extended autonomy is an important capability because the continued exploration of the Moon, Mars and other solid bodies within the solar system will probably be carried out by autonomous robotic systems. There are a number of reasons for this - the most important of which are the high cost of placing a man in space, the high risk associated with human exploration and communication delays that make teleoperation infeasible. The Daedalus robot represents an evolutionary approach to robot mechanism design and software system architecture. Daedalus incorporates key features from a number of predecessor systems. Using previously proven technologies, the APEX project endeavors to encompass all of the capabilities necessary for robust planetary exploration. The Ambler, a six-legged walking machine, was developed by CMU for demonstration of technologies required for planetary exploration. In its five years of life, the Ambler project brought major breakthroughs in various areas of robotic technology. Significant progress was made in: mechanism and control, by introducing a novel gait pattern (circulating gait) and use of orthogonal legs; perception, by developing sophisticated algorithms for map building; and planning, by developing and implementing the Task Control Architecture to coordinate tasks and control complex system functions. The APEX project is the successor of the Ambler project.
ERIC Educational Resources Information Center
Illi, M.; And Others
This collection includes five papers assessing current and projected developments in the field of robotics and the implications of these developments for vocational-technical education. The first paper, "New Applications for Industrial Robots--Perspectives for the Next Five Years" (M. Illi) compares advances in robotics in Japan and the…
Robotics: Instructional Manual. The North Dakota High Technology Mobile Laboratory Project.
ERIC Educational Resources Information Center
Auer, Herbert J.
This instructional manual contains 20 learning activity packets for use in a workshop on robotics. The lessons cover the following topics: safety considerations in robotics; introduction to technology-level and coordinate-systems categories; the teach pendant (a hand-held computer, usually attached to the robot controller, with which the operator…
Robotics. Guidance for Further Education. FEU/PICKUP Project Report.
ERIC Educational Resources Information Center
Further Education Unit, London (England).
This report contains materials to assist teachers and others in designing curricula in robotics. The first section includes the results of a survey of technicians and supervisors in nine companies involved with robots that was designed to gather information concerning the education and training needed to prepare for a career in robotics. The…
Software for Project-Based Learning of Robot Motion Planning
ERIC Educational Resources Information Center
Moll, Mark; Bordeaux, Janice; Kavraki, Lydia E.
2013-01-01
Motion planning is a core problem in robotics concerned with finding feasible paths for a given robot. Motion planning algorithms perform a search in the high-dimensional continuous space of robot configurations and exemplify many of the core algorithmic concepts of search algorithms and associated data structures. Motion planning algorithms can…
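The configuration-space search this record describes can be sketched with a minimal 2-D RRT: repeatedly sample a random configuration, extend the tree one step from the nearest node toward it, and stop near the goal. The obstacle-free unit-square world, fixed step size, and goal tolerance are simplifications for illustration.

```python
# Minimal 2-D RRT in an obstacle-free unit square: sample, extend from the
# nearest node, stop when a new node lands within goal_tol of the goal.
import math
import random

def rrt(start, goal, step=0.1, goal_tol=0.15, max_iter=5000, seed=0):
    rng = random.Random(seed)             # fixed seed for reproducibility
    nodes = [start]                       # tree nodes (parent links omitted)
    for _ in range(max_iter):
        sample = (rng.random(), rng.random())        # random configuration
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d < 1e-9:
            continue
        # Extend one fixed-length step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            return nodes                  # goal region reached
    return None                           # no path found within the budget

tree = rrt((0.05, 0.05), (0.9, 0.9))
print(tree is not None, len(tree))
```

A full planner would also keep parent links to recover the path and reject extensions that collide with obstacles; both are omitted here to keep the search idea visible.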