Off-line programming motion and process commands for robotic welding of Space Shuttle main engines
NASA Technical Reports Server (NTRS)
Ruokangas, C. C.; Guthmiller, W. A.; Pierson, B. L.; Sliwinski, K. E.; Lee, J. M. F.
1987-01-01
The off-line-programming software and hardware being developed for robotic welding of the Space Shuttle main engine are described and illustrated with diagrams, drawings, graphs, and photographs. The menu-driven workstation-based interactive programming system is designed to permit generation of both motion and process commands for the robotic workcell by weld engineers (with only limited knowledge of programming or CAD systems) on the production floor. Consideration is given to the user interface, geometric-sources interfaces, overall menu structure, weld-parameter data base, and displays of run time and archived data. Ongoing efforts to address limitations related to automatic-downhand-configuration coordinated motion, a lack of source codes for the motion-control software, CAD data incompatibility, interfacing with the robotic workcell, and definition of the welding data base are discussed.
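The abstract does not reproduce the workcell's command language, so the following is a minimal Python sketch, with all command names and parameters hypothetical, of the core idea it describes: interleaving menu-selected process (weld) parameters with motion commands into a single program for the robotic workcell.

```python
from dataclasses import dataclass

@dataclass
class WeldSegment:
    """One leg of a weld path: where to move and how to weld along the way."""
    x: float
    y: float
    z: float              # target point in workcell coordinates (mm)
    travel_speed: float   # torch travel speed (mm/s)
    current: float        # weld current (A)
    voltage: float        # arc voltage (V)

def emit_commands(segments):
    """Interleave process and motion commands (hypothetical command set)."""
    program = ["ARC_ON"]
    for s in segments:
        program.append(f"SET_WELD current={s.current} voltage={s.voltage}")
        program.append(f"MOVE_LIN x={s.x} y={s.y} z={s.z} speed={s.travel_speed}")
    program.append("ARC_OFF")
    return program

if __name__ == "__main__":
    seam = [WeldSegment(0, 0, 5, 4.0, 180, 11.5),
            WeldSegment(50, 0, 5, 4.0, 180, 11.5)]
    print("\n".join(emit_commands(seam)))
```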
Dynamics simulation and controller interfacing for legged robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reichler, J.A.; Delcomyn, F.
2000-01-01
Dynamics simulation can play a critical role in the engineering of robotic control code, and there exist a variety of strategies both for building physical models and for interacting with these models. This paper presents an approach to dynamics simulation and controller interfacing for legged robots, and contrasts it with existing approaches. The authors describe dynamics algorithms and contact-resolution strategies for multibody articulated mobile robots based on the decoupled tree-structure approach, and present a novel scripting language that provides a unified framework for control-code interfacing, user-interface design, and data analysis. Special emphasis is placed on facilitating the rapid integration of control algorithms written in a standard object-oriented language (C++), the production of modular, distributed, reusable controllers, and the use of parameterized signal-transmission properties such as delay, sampling rate, and noise.
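The authors' scripting layer wraps C++ controllers; purely as an illustration of the parameterized signal-transmission idea (delay, sampling rate, noise), here is a minimal Python sketch with hypothetical names and units:

```python
import random
from collections import deque

class SignalChannel:
    """Transmission link between a controller and a simulated sensor/actuator,
    parameterized by transport delay (in update ticks), sampling period, and
    additive Gaussian noise."""
    def __init__(self, delay_ticks=0, sample_every=1, noise_std=0.0):
        self.delay = deque([0.0] * delay_ticks)  # FIFO implements the delay
        self.sample_every = sample_every
        self.noise_std = noise_std
        self._tick = 0
        self._held = 0.0  # zero-order hold between samples

    def transmit(self, value):
        if self._tick % self.sample_every == 0:  # subsample the input
            self._held = value + random.gauss(0.0, self.noise_std)
        self._tick += 1
        self.delay.append(self._held)            # deliver after the delay
        return self.delay.popleft()

# Example: 3-tick delay, sampling every 2 ticks, small Gaussian noise.
ch = SignalChannel(delay_ticks=3, sample_every=2, noise_std=0.01)
readings = [ch.transmit(t * 0.1) for t in range(10)]
print(readings)
```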
Innovation in robotic surgery: the Indian scenario.
Deshpande, Suresh V
2015-01-01
Robotics is the science. In scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then with a computer interface using voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.
Automation and Robotics in the Laboratory.
ERIC Educational Resources Information Center
DiCesare, Frank; And Others
1985-01-01
A general laboratory course featuring microcomputer interfacing for data acquisition, process control and automation, and robotics was developed at Rensselaer Polytechnic Institute and is now available to all junior engineering students. The development and features of the course are described. (JN)
I want what you've got: Cross-platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems for how humans and robots interact in true interactive teams.
First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)
NASA Technical Reports Server (NTRS)
Griffin, Sandy (Editor)
1987-01-01
Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.
Contreras-Vidal, Jose L.; Grossman, Robert G.
2013-01-01
In this communication, a translational clinical brain-machine interface (BMI) roadmap for an EEG-based BMI to a robotic exoskeleton (NeuroRex) is presented. This multi-faceted project addresses important engineering and clinical challenges: it addresses the validation of an intelligent, self-balancing, robotic lower-body and trunk exoskeleton (Rex) augmented with EEG-based BMI capabilities to interpret user intent in order to assist a mobility-impaired person to walk independently. The goal is to improve the quality of life and health status of wheelchair-bound persons by enabling standing and sitting, walking and backing, turning, ascending and descending stairs/curbs, and navigating sloping surfaces in a variety of conditions without the need for additional support or crutches. PMID:24110003
NASA Technical Reports Server (NTRS)
Dischinger, H. Charles, Jr.; Mullins, Jeffrey B.
2005-01-01
The United States is entering a new period of human exploration of the inner Solar System, and robotic human helpers will be partners in that effort. In order to support integration of these new worker robots into existing and new human systems, a new design standard should be developed, to be called the Robot-Systems Integration Standard (RSIS). It will address the requirements for and constraints upon robotic collaborators with humans. These workers are subject to the same functional constraints as humans, including work, reach, and visibility/situational awareness envelopes, and they will deal with the same maintenance and communication interfaces. Thus, the RSIS will be created by discipline experts with the same sort of perspective on these and other interface concerns as human engineers.
Hand-in-hand advances in biomedical engineering and sensorimotor restoration.
Pisotta, Iolanda; Perruchoud, David; Ionta, Silvio
2015-05-15
Living in a multisensory world entails the continuous sensory processing of environmental information in order to enact appropriate motor routines. The interaction between our body and our brain is the crucial factor for achieving such sensorimotor integration ability. Several clinical conditions dramatically affect the constant body-brain exchange, but the latest developments in biomedical engineering provide promising solutions for overcoming this communication breakdown. Recent technological developments have succeeded in transforming neuronal electrical activity into computational input for robotic devices, giving birth to the era of the so-called brain-machine interfaces. By combining rehabilitation robotics and experimental neuroscience, the rise of brain-machine interfaces in clinical protocols has provided the technological solution for bypassing the neural disconnection and restoring sensorimotor function. Based on these advances, the recovery of sensorimotor functionality is progressively becoming a concrete reality. However, despite the success of several recent techniques, some open issues still need to be addressed. Typical interventions for sensorimotor deficits include pharmaceutical treatments and manual/robotic assistance in passive movements. These procedures achieve symptom relief, but their applicability to more severe disconnection pathologies is limited (e.g. spinal cord injury or amputation). Here we review how state-of-the-art solutions in biomedical engineering are continuously raising expectations in sensorimotor rehabilitation, as well as the current challenges, especially with regard to the translation of the signals from brain-machine interfaces into sensory feedback and the incorporation of brain-machine interfaces into daily activities. Copyright © 2015 Elsevier B.V. All rights reserved.
Modular Countermine Payload for Small Robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman Herman; Doug Few; Roelof Versteeg
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform- and task-specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with configurable mission-specific threat detection, navigation, and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two-axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection, and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform-agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
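The PCU's actual electronics and protocol are not described in this record; the following Python sketch, with all class and method names hypothetical, only illustrates the platform-agnostic pattern the abstract describes: payloads talk to an abstract platform adapter, never to a specific robot.

```python
from abc import ABC, abstractmethod

class RobotPlatform(ABC):
    """Adapter each platform (e.g., a Talon- or PackBot-style robot) must
    implement so payload code never touches platform-specific details."""
    @abstractmethod
    def drive(self, linear: float, angular: float): ...
    @abstractmethod
    def battery_voltage(self) -> float: ...

class PayloadControllerUnit:
    """Multi-mission controller: payloads on one side, any adapter on the other."""
    def __init__(self, platform: RobotPlatform, payloads):
        self.platform = platform
        self.payloads = payloads  # e.g., detection arm, perception, marker

    def step(self):
        for p in self.payloads:
            p.update(self.platform)  # payloads see only the abstract interface

class DummyPlatform(RobotPlatform):
    def drive(self, linear, angular): print(f"drive {linear=} {angular=}")
    def battery_voltage(self): return 24.0

class MarkingPayload:
    def update(self, platform):
        if platform.battery_voltage() > 22.0:
            print("paint marker armed")

pcu = PayloadControllerUnit(DummyPlatform(), [MarkingPayload()])
pcu.step()
```

Deploying on a new platform then means writing one new adapter rather than re-engineering the payloads, which is the portability claim the abstract makes.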
Modular countermine payload for small robots
NASA Astrophysics Data System (ADS)
Herman, Herman; Few, Doug; Versteeg, Roelof; Valois, Jean-Sebastien; McMahill, Jeff; Licitra, Michael; Henciak, Edward
2010-04-01
Payloads for small robotic platforms have historically been designed and implemented as platform- and task-specific solutions. A consequence of this approach is that payloads cannot be deployed on different robotic platforms without substantial re-engineering efforts. To address this issue, we developed a modular countermine payload that is designed from the ground up to be platform agnostic. The payload consists of the multi-mission payload controller unit (PCU) coupled with configurable mission-specific threat detection, navigation, and marking payloads. The multi-mission PCU has all the common electronics to control and interface to all the payloads. It also contains the embedded processor that can be used to run the navigational and control software. The PCU has a very flexible robot interface which can be configured to interface to various robot platforms. The threat detection payload consists of a two-axis sweeping arm and the detector. The navigation payload consists of several perception sensors that are used for terrain mapping, obstacle detection, and navigation. Finally, the marking payload consists of a dual-color paint marking system. Through the multi-mission PCU, all these payloads are packaged in a platform-agnostic way to allow deployment on multiple robotic platforms, including Talon and Packbot.
Virtual Reality System Offers a Wide Perspective
NASA Technical Reports Server (NTRS)
2008-01-01
Robot Systems Technology Branch engineers at Johnson Space Center created the remotely controlled Robonaut for use as an additional "set of hands" in extravehicular activities (EVAs) and to allow exploration of environments that would be too dangerous or difficult for humans. One of the problems Robonaut developers encountered was that the robot's interface offered an extremely limited field of vision. Johnson robotics engineer Darby Magruder explained that the 40-degree field-of-view (FOV) in initial robotic prototypes provided very narrow tunnel vision, which posed difficulties for Robonaut operators trying to see the robot's surroundings. Because of the narrow FOV, NASA decided to reach out to the private sector for assistance. In addition to a wider FOV, NASA also desired higher resolution in a head-mounted display (HMD) with the added ability to capture and display video.
ROBOSIM: An intelligent simulator for robotic systems
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.; Cook, George E.; Biegl, Csaba; Springfield, James F.
1993-01-01
The purpose of this paper is to present an update of an intelligent robotics simulator package, ROBOSIM, first introduced at Technology 2000 in 1990. ROBOSIM is used for three-dimensional geometrical modeling of robot manipulators and various objects in their workspace, and for the simulation of action sequences performed by the manipulators. Geometric modeling of robot manipulators is an expanding area of interest because it can aid the design and usage of robots in a number of ways, including: design and testing of manipulators, robot action planning, on-line control of robot manipulators, telerobotic user interfaces, and training and education. NASA developed ROBOSIM between 1985 and 1988 to facilitate the development of robotics, and used the package to develop robotics for welding, coating, and space operations. ROBOSIM has been further developed for academic use by its co-developer Vanderbilt University, and has been used in both classroom and laboratory environments for teaching complex robotic concepts. Plans are being formulated to make ROBOSIM available to all U.S. engineering/engineering technology schools (over three hundred total, with an estimated 10,000+ users per year).
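ROBOSIM's internal representation is not given in this record, but geometric modeling of manipulators is conventionally built on composed homogeneous link transforms. The following is a self-contained sketch of textbook Denavit-Hartenberg forward kinematics (a generic method, not ROBOSIM's own code):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg link transform as a 4x4 nested list."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [[ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params, joint_angles):
    """Compose link transforms to get the end-effector pose in the base frame."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for (d, a, alpha), theta in zip(dh_params, joint_angles):
        T = matmul(T, dh_matrix(theta, d, a, alpha))
    return T

# Two-link planar arm: link lengths 1.0 and 0.5, both joints at 30 degrees.
params = [(0.0, 1.0, 0.0), (0.0, 0.5, 0.0)]
pose = forward_kinematics(params, [math.radians(30)] * 2)
print(f"end effector at x={pose[0][3]:.3f}, y={pose[1][3]:.3f}")
```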
Applications of Brain–Machine Interface Systems in Stroke Recovery and Rehabilitation
Francisco, Gerard E.; Contreras-Vidal, Jose L.
2014-01-01
Stroke is a leading cause of disability, significantly impacting the quality of life (QOL) in survivors, and rehabilitation remains the mainstay of treatment in these patients. Recent engineering and technological advances such as brain-machine interfaces (BMI) and robotic rehabilitative devices are promising to enhance stroke neurorehabilitation, to accelerate functional recovery, and to improve QOL. This review discusses the recent applications of BMI and robotic-assisted rehabilitation in stroke patients. We present the framework for integrated BMI and robotic-assisted therapies, and discuss their potential therapeutic, assistive, and diagnostic functions in stroke rehabilitation. Finally, we conclude with an outlook on the potential challenges and future directions of these neurotechnologies, and their impact on clinical rehabilitation. PMID:25110624
ERIC Educational Resources Information Center
Ensign, Todd I.
2017-01-01
Educational robotics (ER) combines accessible and age-appropriate building materials, programmable interfaces, and computer coding to teach science and mathematics using the engineering design process. ER has been shown to increase K-12 students' understanding of STEM concepts, and can develop students' self-confidence and interest in STEM. As…
INL Multi-Robot Control Interface
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Multi-Robot Control Interface controls many robots through a single user interface. The interface includes a robot display window for each robot showing the robot's condition. More than one window can be used depending on the number of robots. The user interface also includes a robot control window configured to receive commands for sending to the respective robot and a multi-robot common window showing information received from each robot.
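The INL implementation details are not public in this record; as a rough structural sketch only, with all names hypothetical, the three window types it describes could be modeled like this:

```python
class RobotDisplayWindow:
    """Shows one robot's condition (pose, battery, mode, ...)."""
    def __init__(self, robot_id):
        self.robot_id = robot_id
    def render(self, status: dict):
        print(f"[{self.robot_id}] " + ", ".join(f"{k}={v}" for k, v in status.items()))

class RobotControlWindow:
    """Accepts commands destined for one specific robot."""
    def __init__(self, robot_id, send):
        self.robot_id, self.send = robot_id, send
    def command(self, cmd: str):
        self.send(self.robot_id, cmd)

class MultiRobotCommonWindow:
    """Aggregates information received from every robot."""
    def __init__(self):
        self.feed = []
    def post(self, robot_id, info):
        self.feed.append((robot_id, info))

# Wiring for two robots through one interface.
outbox = []
common = MultiRobotCommonWindow()
displays = {r: RobotDisplayWindow(r) for r in ("r1", "r2")}
controls = {r: RobotControlWindow(r, lambda rid, c: outbox.append((rid, c)))
            for r in ("r1", "r2")}
controls["r1"].command("goto waypoint A")
common.post("r2", "obstacle detected")
displays["r1"].render({"battery": "92%", "mode": "auto"})
```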
Sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Sias, F. R., Jr.
1985-01-01
A basic problem in the application of robots for welding, namely how to guide a torch along a weld seam using sensory information, was studied. Improvement of the quality and consistency of certain Gas Tungsten Arc welds on the Space Shuttle Main Engine (SSME) that are too complex geometrically for conventional automation, and therefore are done by hand, was examined. The particular problems associated with SSME manufacturing and weld-seam tracking, with an emphasis on computer vision methods, were analyzed. Special interface software for the MINC computer was developed which will allow it to be used both as a test system to check out the robot interface software and later as a development tool for further investigation of sensory systems to be incorporated in welding procedures.
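The report's actual seam-tracking algorithm is not reproduced here; as a generic illustration of vision-guided torch correction, a minimal proportional-correction sketch (gain and limits hypothetical) follows. Each cycle, the vision system reports the torch's lateral offset from the seam centerline, and the controller commands a bounded step back toward the seam:

```python
def track_seam(offsets_mm, gain=0.5, max_step_mm=1.0):
    """Proportional cross-seam correction with a per-cycle step limit."""
    corrections = []
    for e in offsets_mm:
        step = max(-max_step_mm, min(max_step_mm, gain * e))
        corrections.append(-step)  # move opposite to the measured offset
    return corrections

# Offsets measured by the vision system over four cycles (mm).
print(track_seam([0.4, 1.2, -0.3, 3.0]))  # large errors are clipped to 1.0 mm
```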
Demonstration of a Spoken Dialogue Interface for Planning Activities of a Semi-autonomous Robot
NASA Technical Reports Server (NTRS)
Dowding, John; Frank, Jeremy; Hockey, Beth Ann; Jonsson, Ari; Aist, Gregory
2002-01-01
Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure, so having a dialogue interface to the plan construction tools can aid rapid completion of the process. But this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans. This demonstration describes work in progress towards building a spoken dialogue interface to the EUROPA planner for the purposes of planning and scheduling the activities of a semi-autonomous robot. A prototype interface has been built for planning the schedule of the Personal Satellite Assistant (PSA), a mobile robot designed for micro-gravity environments that is intended for use on the Space Shuttle and International Space Station. The spoken dialogue interface gives the user the capability to ask for a description of the plan, ask specific questions about the plan, and update or modify the plan. We anticipate that a spoken dialogue interface to the planner will provide a natural augmentation or alternative to the visualization interface in situations in which the user needs very targeted information about the plan, in situations where natural language can express complex ideas more concisely than GUI actions, or in situations in which a graphical user interface is not appropriate.
ISS Robotic Student Programming
NASA Technical Reports Server (NTRS)
Barlow, J.; Benavides, J.; Hanson, R.; Cortez, J.; Le Vasseur, D.; Soloway, D.; Oyadomari, K.
2016-01-01
The SPHERES facility is a set of three free-flying satellites launched in 2006. In addition to scientists and engineers, middle- and high-school students program the SPHERES during the annual Zero Robotics programming competition. Zero Robotics conducts virtual competitions via simulator and on SPHERES aboard the ISS, with students doing the programming. A web interface allows teams to submit code, receive results, collaborate, and compete in simulator-based initial rounds and semi-final rounds. The final round of each competition is conducted with SPHERES aboard the ISS. At the end of 2017 a new robotic platform called Astrobee will launch, providing new game elements and new ground support for even more student interaction.
NASA Astrophysics Data System (ADS)
Rembala, Richard; Ower, Cameron
2009-10-01
MDA has provided 25 years of real-time engineering support to Shuttle (Canadarm) and ISS (Canadarm2) robotic operations, beginning with the second shuttle flight, STS-2, in 1981. In this capacity, our engineering support teams have become familiar with the evolution of mission planning and flight support practices for robotic assembly and support operations at mission control. This paper presents observations on existing practices and ideas for reducing the operational overhead of present programs. It also identifies areas where robotic assembly and maintenance of future space stations and space-based facilities could be accomplished more effectively and efficiently. Specifically, our experience shows that past and current Space Shuttle and ISS assembly and maintenance operations have used the approach of extensive preflight mission planning and training to prepare the flight crews for the entire mission. This has been driven by the overall communication latency between the earth and the remote location of the space station/vehicle, as well as the lack of consistent robotic and interface standards. While the early Shuttle and ISS architectures included robotics, their eventual benefits to overall assembly and maintenance operations could have been greater had robotics been a major design driver from the beginning of the system design. Lessons learned from the ISS highlight the potential benefits of real-time health monitoring systems, consistent standards for robotic interfaces and procedures, and automated script-driven ground control in future space station assembly and logistics architectures. In addition, advances in computer vision systems, remote operation, and supervised autonomous command and control systems offer the potential to adjust the balance between assembly and maintenance tasks performed using extravehicular activity (EVA), extravehicular robotics (EVR), and EVR controlled from the ground, relieving the EVA astronaut, and even the robotic operator on orbit, of some of the more routine tasks. Overall, these proposed approaches, when used effectively, offer the potential to drive down operations overhead and allow more efficient and productive robotic operations.
TARDEC's Intelligent Ground Systems overview
NASA Astrophysics Data System (ADS)
Jaster, Jeffrey F.
2009-05-01
The mission of the Intelligent Ground Systems (IGS) Area at the Tank Automotive Research, Development and Engineering Center (TARDEC) is to conduct technology maturation and integration to increase Soldier robot control/interface intuitiveness and robotic ground system robustness, functionality, and overall system effectiveness for the Future Combat System Brigade Combat Team, the Robotics Systems Joint Project Office, and game-changing capabilities to be fielded beyond the current force. This is accomplished through technology component development focused on increasing unmanned ground vehicle autonomy, optimizing crew interfaces and mission planners that capture commanders' intent, integrating payloads that provide 360-degree local situational awareness, and expanding current UGV tactical behavior, learning, and adaptation capabilities. The integration of these technology components into ground vehicle demonstrators permits engineering evaluation, User assessment, and performance characterization in increasingly complex, dynamic, and relevant environments, including high-speed on-road or cross-country operations, all weather/visibility conditions, and military operations in urban terrain (MOUT). Focused testing and experimentation is directed at reducing PM risk areas (safe operations, autonomous maneuver, manned-unmanned collaboration) and transitioning technology in the form of hardware, software algorithms, test and performance data, as well as User feedback and lessons learned.
The coming revolution in personal care robotics: what does it mean for nurses?
Sharts-Hopko, Nancy C
2014-01-01
The business sector provides regular reportage on the development of personal care robots to enable elders and people with disabilities to remain in their homes. Technology in this area is advancing rapidly in Asia, Europe, and North America. To date, the nursing literature has not addressed how nurses will assist these vulnerable populations in the selection and use of robotic technology or how robotics could affect nursing care and patient outcomes. This article provides an overview of development in the area of personal care robotics to address societal needs reflecting demographic trends. Selected relevant issues related to the human-robotic interface, including ethical concerns, are identified. Implications for nursing education and the delivery of nursing services are identified. Collaboration with engineers in the development of personal care robotic technology has the potential to contribute to the creation of products that optimally address the needs of elders and people with disabilities.
Virtual and remote robotic laboratory using EJS, MATLAB and LabVIEW.
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-02-21
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented.
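The EJS/LabVIEW code itself is not shown in the abstract, but the simulate-then-deploy workflow it describes rests on giving both back ends one common interface. A minimal Python sketch of that pattern (all names and dynamics hypothetical) follows:

```python
from abc import ABC, abstractmethod
import random

class RangeSensorRobot(ABC):
    """Common interface so the same student algorithm runs in the virtual
    laboratory (simulation) and the remote laboratory (real robot)."""
    @abstractmethod
    def read_range(self) -> float: ...
    @abstractmethod
    def move(self, speed: float): ...

class SimulatedRobot(RangeSensorRobot):
    """Toy simulation back end: a robot approaching a wall, noisy sensor."""
    def __init__(self, wall_at=2.0):
        self.pos, self.wall = 0.0, wall_at
    def read_range(self):
        return self.wall - self.pos + random.gauss(0, 0.01)
    def move(self, speed):
        self.pos += speed * 0.1  # 100 ms time step

def stop_before_wall(robot: RangeSensorRobot, margin=0.2):
    """Student algorithm: advance until the sensed range hits the margin.
    Written only against the interface, so it deploys unchanged on a real
    robot back end implementing RangeSensorRobot."""
    while robot.read_range() > margin:
        robot.move(0.5)

stop_before_wall(SimulatedRobot())
```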
Virtual and Remote Robotic Laboratory Using EJS, MATLAB and Lab VIEW
Chaos, Dictino; Chacón, Jesús; Lopez-Orozco, Jose Antonio; Dormido, Sebastián
2013-01-01
This paper describes the design and implementation of a virtual and remote laboratory based on Easy Java Simulations (EJS) and LabVIEW. The main application of this laboratory is to improve the study of sensors in mobile robotics, dealing with the problems that arise in real-world experiments. This laboratory allows users to work from home, tele-operating a real robot that takes measurements from its sensors in order to obtain a map of its environment. In addition, the application allows interacting with a robot simulation (virtual laboratory) or with a real robot (remote laboratory), with the same simple and intuitive graphical user interface in EJS. Thus, students can develop signal processing and control algorithms for the robot in simulation and then deploy them on the real robot for testing purposes. Practical examples of application of the laboratory on the inter-University Master of Systems Engineering and Automatic Control are presented. PMID:23429578
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruemmer, David J; Walton, Miles C
Methods and systems for controlling a plurality of robots through a single user interface include at least one robot display window for each of the plurality of robots, with the at least one robot display window illustrating one or more conditions of a respective one of the plurality of robots. The user interface further includes at least one robot control window for each of the plurality of robots, with the at least one robot control window configured to receive one or more commands for sending to the respective one of the plurality of robots. The user interface further includes a multi-robot common window comprised of information received from each of the plurality of robots.
Human-Robot Interface Controller Usability for Mission Planning on the Move
2012-11-01
Only report front matter survives in this record: figure captions referencing the Microsoft Xbox 360 Controller for Windows and the Microsoft Trackball Explorer, and a section 3.2.1 fragment noting that the HMMWV test vehicle was equipped with a diesel engine.
A Biotic Game Design Project for Integrated Life Science and Engineering Education
Denisin, Aleksandra K.; Rensi, Stefano; Sanchez, Gabriel N.; Quake, Stephen R.; Riedel-Kruse, Ingmar H.
2015-01-01
Engaging, hands-on design experiences are key for formal and informal Science, Technology, Engineering, and Mathematics (STEM) education. Robotic and video game design challenges have been particularly effective in stimulating student interest, but equivalent experiences for the life sciences are not as developed. Here we present the concept of a "biotic game design project" to motivate student learning at the interface of life sciences and device engineering (as part of a cornerstone bioengineering devices course). We provide all course material and also present efforts in adapting the project's complexity to serve other time frames, age groups, learning focuses, and budgets. Students self-reported that they found the biotic game project fun and motivating, resulting in increased effort. Hence this type of design project could generate excitement and educational impact similar to robotics and video games. PMID:25807212
A biotic game design project for integrated life science and engineering education.
Cira, Nate J; Chung, Alice M; Denisin, Aleksandra K; Rensi, Stefano; Sanchez, Gabriel N; Quake, Stephen R; Riedel-Kruse, Ingmar H
2015-03-01
Engaging, hands-on design experiences are key for formal and informal Science, Technology, Engineering, and Mathematics (STEM) education. Robotic and video game design challenges have been particularly effective in stimulating student interest, but equivalent experiences for the life sciences are not as developed. Here we present the concept of a "biotic game design project" to motivate student learning at the interface of life sciences and device engineering (as part of a cornerstone bioengineering devices course). We provide all course material and also present efforts in adapting the project's complexity to serve other time frames, age groups, learning focuses, and budgets. Students self-reported that they found the biotic game project fun and motivating, resulting in increased effort. Hence this type of design project could generate excitement and educational impact similar to robotics and video games.
Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manges, W.W.; Hamel, W.R.; Weisbin, C.R.
1988-01-01
The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger-than-human-size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform, including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 VAX 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer, while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.
Iosa, Marco; Morone, Giovanni; Cherubini, Andrea; Paolucci, Stefano
Most studies and reviews on robots for neurorehabilitation focus on their effectiveness. These studies often report inconsistent results, which, among other factors, limits the credit given to these robots by therapists and patients. Further, neurorehabilitation is often still based on therapists' expertise, with competition among different schools of thought, generating substantial uncertainty about what exactly a neurorehabilitation robot should do. Little attention has been given to ethics. This review adopts a new approach, inspired by Asimov's three laws of robotics and based on the most recent studies in neurorobotics, for proposing new guidelines for designing and using robots for neurorehabilitation. We propose three laws of neurorobotics based on the ethical need for safe and effective robots, the redefinition of their role as therapist helpers, and the need for clear and transparent human-machine interfaces. These laws may allow engineers and clinicians to work closely together on a new generation of neurorobots.
Kranzfelder, Michael; Schneider, Armin; Fiolka, Adam; Koller, Sebastian; Wilhelm, Dirk; Reiser, Silvano; Meining, Alexander; Feussner, Hubertus
2015-08-01
To investigate why natural orifice translumenal endoscopic surgery (NOTES) has not yet become widely accepted and to determine whether the main reason is still the lack of appropriate platforms due to the deficiency of applicable interfaces. To assess expectations of a suitable interface design, we performed a survey on human-machine interfaces for NOTES mechatronic support systems among surgeons, gastroenterologists, and medical engineers. Of 120 distributed questionnaires, each consisting of 14 distinct questions, 100 (83%) were eligible for analysis. A mechatronic platform for NOTES was considered "important" by 71% of surgeons, 83% of gastroenterologists, and 56% of medical engineers. "Intuitivity" and "simple to use" were the most favored aspects (33% to 51%). Haptic feedback was considered "important" by 70% of participants. In all, 53% of surgeons, 50% of gastroenterologists, and 33% of medical engineers already had experience with NOTES platforms or other surgical robots; however, current interfaces only met expectations in just over 50% of cases. Whereas surgeons did not favor a certain working posture, gastroenterologists and medical engineers preferred a sitting position. Three-dimensional visualization was generally considered "nice to have" (67% to 72%); however, for 26% of surgeons, 17% of gastroenterologists, and 7% of medical engineers it did not matter (P = 0.018). Requests and expectations of human-machine interfaces for NOTES seem to be generally similar for surgeons, gastroenterologists, and medical engineers. Consensus exists on the importance of developing interfaces that are both intuitive and simple to use, are similar to preexisting familiar instruments, and exceed currently available systems. © The Author(s) 2014.
Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)
NASA Technical Reports Server (NTRS)
Savely, Robert T. (Editor)
1991-01-01
The papers from the symposium are presented. Emphasis is placed on human factors engineering and space environment interactions. The technical areas covered in the human factors section include: satellite monitoring and control, man-computer interfaces, expert systems, AI/robotics interfaces, crew system dynamics, and display devices. The space environment interactions section presents the following topics: space plasma interaction, spacecraft contamination, space debris, and atomic oxygen interaction with materials. Some of the above topics are discussed in relation to the space station and space shuttle.
Proceedings of the 1986 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-01-01
This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.
A Matlab/Simulink-Based Interactive Module for Servo Systems Learning
ERIC Educational Resources Information Center
Aliane, N.
2010-01-01
This paper presents an interactive module for learning both the fundamental and practical issues of servo systems. This module, developed using Simulink in conjunction with the Matlab graphical user interface (Matlab-GUI) tool, is used to supplement conventional lectures in control engineering and robotics subjects. First, the paper introduces the…
NASA Technical Reports Server (NTRS)
Voellmer, George M.
1992-01-01
Mechanism enables robot to change tools on end of arm. Actuated by motion of robot: requires no additional electrical or pneumatic energy to make or break connection between tool and wrist at end of arm. Includes three basic subassemblies: wrist interface plate attached to robot arm at wrist, tool interface plate attached to tool, and holster. Separate tool interface plate and holster provided for each tool robot uses.
Designing speech-based interfaces for telepresence robots for people with disabilities.
Tsui, Katherine M; Flynn, Kelsey; McHugh, Amelia; Yanco, Holly A; Kontak, David
2013-06-01
People with cognitive and/or motor impairments may benefit from using telepresence robots to engage in social activities. To date, these robots, their user interfaces, and their navigation behaviors have not been designed for operation by people with disabilities. We conducted an experiment in which participants (n=12) used a telepresence robot in a scavenger hunt task to determine how they would use speech to command the robot. Based upon the results, we present design guidelines for speech-based interfaces for telepresence robots.
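The paper's empirically derived vocabulary is not reproduced in this record; purely as an illustration, a keyword-spotting command parser of the kind such a speech interface needs might look like the following (all keywords and magnitudes hypothetical):

```python
# Hypothetical vocabulary; the study derived its guidelines empirically,
# so the mappings below are illustrative only.
COMMANDS = {
    ("forward", "ahead", "go"): ("drive", +0.3),   # m/s
    ("back", "reverse"):        ("drive", -0.3),
    ("left",):                  ("turn",  +30),    # degrees
    ("right",):                 ("turn",  -30),
    ("stop", "halt"):           ("stop",    0),
}

def parse_utterance(text: str):
    """Map a recognized utterance to an (action, magnitude) robot command."""
    words = text.lower().split()
    for keywords, command in COMMANDS.items():
        if any(k in words for k in keywords):
            return command
    return ("clarify", 0)  # unrecognized: ask the user to rephrase

print(parse_utterance("please go forward a little"))  # -> ('drive', 0.3)
```

The fallback "clarify" action reflects a common design guideline for users with disabilities: never silently ignore an utterance, always give feedback.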
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmons, R.; Thrun, S.; Armstrong, G.
1996-12-31
Amelia was built by Real World Interface (RWI) using Xavier, a mobile robot platform developed at CMU on a B24 base from RWI, as a prototype. Amelia has substantial engineering improvements over Xavier. Amelia is built on a B21 base. It has a top speed of 32 inches per second, while improved integral dead-reckoning ensures extremely accurate drive and position controls.
A hardware/software environment to support R&D in intelligent machines and mobile robotic systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1990-01-01
The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e., incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g., code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.
An EMG Interface for the Control of Motion and Compliance of a Supernumerary Robotic Finger
Hussain, Irfan; Spagnoletti, Giovanni; Salvietti, Gionata; Prattichizzo, Domenico
2016-01-01
In this paper, we propose a novel electromyographic (EMG) control interface to control the motion and joint compliance of a supernumerary robotic finger. The supernumerary robotic fingers are a recently introduced class of wearable robotics that provides users with additional robotic limbs in order to compensate for or augment the existing abilities of natural limbs without substituting them. Since supernumerary robotic fingers are supposed to closely interact and perform actions in synergy with the human limbs, the control principles of the extra finger should mirror those of the human fingers, including the ability to regulate compliance. It is therefore important to propose a control interface, and to consider actuators and sensing capabilities for the robotic extra finger, that are compatible with implementing stiffness regulation control techniques. We propose an EMG interface and a control approach to regulate the compliance of the device through servo actuators. In particular, we use a commercial EMG armband for gesture recognition, associated with the motion control of the robotic device, and a single-channel surface EMG electrode interface to regulate the compliance of the robotic device. We also present an updated version of a robotic extra finger in which the adduction/abduction motion is realized through a ball-bearing and spur-gear mechanism. We have validated the proposed interface with two sets of experiments related to compensation and augmentation. In the first set of experiments, different bimanual tasks were performed with the help of the robotic device while simulating a paretic hand, since this novel wearable system can be used to compensate for the missing grasping abilities in chronic stroke patients. In the second set, the robotic extra finger is used to enlarge the workspace and manipulation capability of healthy hands. In both sets, the same EMG control interface was used. The obtained results demonstrate that the proposed control interface is intuitive and can successfully be used, not only to control the motion of a supernumerary robotic finger, but also to regulate its compliance. The proposed approach can be exploited also for the control of different wearable devices that have to actively cooperate with the human limbs. PMID:27891088
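The paper's exact EMG-to-stiffness mapping is not given in this abstract; a common approach is to map the windowed RMS amplitude of the EMG signal to a stiffness command, sketched below with hypothetical calibration constants:

```python
import math

def emg_rms(window):
    """Root-mean-square amplitude of one EMG window (assumed band-passed)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def stiffness_from_emg(window, rms_rest=0.02, rms_max=0.60,
                       k_min=0.1, k_max=1.0):
    """Map muscle activation level to a normalized joint stiffness command:
    a relaxed muscle yields a compliant finger, strong activation a stiff one.
    Calibration constants here are illustrative, not the paper's values."""
    a = (emg_rms(window) - rms_rest) / (rms_max - rms_rest)
    a = max(0.0, min(1.0, a))  # clamp activation to [0, 1]
    return k_min + a * (k_max - k_min)

samples = [0.05, -0.20, 0.31, -0.12, 0.26]  # one window of EMG samples
print(f"stiffness command: {stiffness_from_emg(samples):.2f}")
```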
Soft brain-machine interfaces for assistive robotics: A novel control approach.
Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash
2017-07-01
Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. In this direction, the operator's residual functions must be exploited for the control of the robot movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. Towards this end, this work aims at exploring the potential of a novel Soft Brain-Machine Interface (BMI), suitable for dynamic execution of remote manipulation tasks for a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of a robotic arm system's trajectories, and a Brain-Computer Interface (BCI) unit, for the control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing a reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of the effective execution of tasks with dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
Touchdown to take-off: at the interface of flight and surface locomotion
2017-01-01
Small aerial robots are limited to short mission times because aerodynamic and energy conversion efficiency diminish with scale. One way to extend mission times is to perch, as biological flyers do. Beyond perching, small robot flyers benefit from manoeuvring on surfaces for a diverse set of tasks, including exploration, inspection and collection of samples. These opportunities have prompted an interest in bimodal aerial and surface locomotion on both engineered and natural surfaces. To accomplish such novel robot behaviours, recent efforts have included advancing our understanding of the aerodynamics of surface approach and take-off, the contact dynamics of perching and attachment and making surface locomotion more efficient and robust. While current aerial robots show promise, flying animals, including insects, bats and birds, far surpass them in versatility, reliability and robustness. The maximal size of both perching animals and robots is limited by scaling laws for both adhesion and claw-based surface attachment. Biomechanists can use the current variety of specialized robots as inspiration for probing unknown aspects of bimodal animal locomotion. Similarly, the pitch-up landing manoeuvres and surface attachment techniques of animals can offer an evolutionary design guide for developing robots that perch on more diverse and complex surfaces. PMID:28163884
Dickstein-Fischer, Laurie; Fischer, Gregory S
2014-01-01
It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable, so that it can be used in various settings including schools, clinics, and the home, enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, in which the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.
Comparison of tongue interface with keyboard for control of an assistive robotic arm.
Struijk, Lotte N S Andreasen; Lontis, Romulus
2017-07-01
This paper demonstrates how an assistive 6-DoF robotic arm with a gripper can be controlled manually using a tongue interface. The proposed method suggests that it is possible for a user to manipulate the surroundings with his or her tongue using the inductive tongue control system as deployed in this study. The sensors of an inductive tongue-computer interface were mapped to the Cartesian control of an assistive robotic arm. The resulting control system was tested manually in order to compare control of the robot using a standard keyboard with control using the tongue interface. Two healthy subjects controlled the robotic arm to precisely move a bottle of water from one location to another. The results show that the tongue interface was able to fully control the robotic arm in a manner similar to the standard keyboard, resulting in the same number of successful manipulations and an average increase in task duration of up to 30% compared with the standard keyboard.
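The study's actual sensor-to-axis mapping is not detailed in this record; the following sketch (layout and step size entirely hypothetical) illustrates the kind of mapping it describes, from activated palatal sensors to incremental Cartesian commands:

```python
# Hypothetical layout: each inductive sensor on the palatal plate is bound
# to one Cartesian degree of freedom of the arm (the paper's exact mapping
# is not reproduced here).
SENSOR_MAP = {
    "s1": ("x", +1), "s2": ("x", -1),
    "s3": ("y", +1), "s4": ("y", -1),
    "s5": ("z", +1), "s6": ("z", -1),
    "s7": ("grip", +1), "s8": ("grip", -1),
}

def cartesian_command(active_sensor, step_mm=5.0):
    """Translate one tongue-activated sensor into an incremental arm command."""
    axis, sign = SENSOR_MAP[active_sensor]
    return {axis: sign * step_mm}

print(cartesian_command("s3"))  # -> {'y': 5.0}
```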
A multimodal interface for real-time soldier-robot teaming
NASA Astrophysics Data System (ADS)
Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.
2016-05-01
Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smartphones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.
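The prototype's fusion logic is not specified in the abstract; a minimal late-fusion sketch, with hypothetical labels and thresholds, illustrates how redundant modalities can make command classification more robust than single-mode interaction:

```python
def fuse_modalities(speech, gesture, agree_bonus=0.15, threshold=0.6):
    """Late fusion of two (label, confidence) classifier outputs: agreement
    boosts confidence (redundancy); on disagreement, fall back to the more
    confident modality, rejecting if it is still below threshold."""
    s_label, s_conf = speech
    g_label, g_conf = gesture
    if s_label == g_label:
        return s_label, min(1.0, max(s_conf, g_conf) + agree_bonus)
    label, conf = max((speech, gesture), key=lambda m: m[1])
    return (label, conf) if conf >= threshold else ("reject", conf)

# Agreement between ASR and the gesture glove raises confidence.
print(fuse_modalities(("move_to_cover", 0.72), ("move_to_cover", 0.55)))
# Disagreement with low confidence is rejected rather than guessed.
print(fuse_modalities(("halt", 0.58), ("rally", 0.41)))
```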
Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents
2016-07-27
…in a synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces, providing a test environment where human control of a robot agent can be experimentally validated. (Final report, covering 17-Sep-2013 to 16-Sep-2014.)
A PC-Based Controller for Dextrous Arms
NASA Technical Reports Server (NTRS)
Fiorini, Paolo; Seraji, Homayoun; Long, Mark
1996-01-01
This paper describes the architecture and performance of a PC-based controller for 7-DOF dextrous manipulators. The computing platform is a 486-based personal computer equipped with a bus extender to access the robot Multibus controller, together with a single board computer as the graphical engine, and with a parallel I/O board to interface with a force-torque sensor mounted on the manipulator wrist.
ERIC Educational Resources Information Center
Strawhacker, Amanda; Bers, Marina U.
2015-01-01
In recent years, educational robotics has become an increasingly popular research area. However, limited studies have focused on differentiated learning outcomes based on type of programming interface. This study aims to explore how successfully young children master foundational programming concepts based on the robotics user interface (tangible,…
A graphical, rule based robotic interface system
NASA Technical Reports Server (NTRS)
Mckee, James W.; Wolfsberger, John
1988-01-01
The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the objects on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform user selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.
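As a toy illustration of the described two-stage flow (feasibility rules first, then movement generation), with all rules and command strings hypothetical:

```python
def task_feasible(robot, obj):
    """Rule-based pre-check mirroring the described flow: first decide
    whether the task is possible, only then plan the movements."""
    rules = [
        (obj["weight"] <= robot["payload"], "object too heavy"),
        (obj["distance"] <= robot["reach"], "object out of reach"),
        (not obj.get("fixed", False),       "object is fixed in place"),
    ]
    failures = [msg for ok, msg in rules if not ok]
    return (len(failures) == 0, failures)

def plan_task(robot, obj):
    ok, why = task_feasible(robot, obj)
    if not ok:
        return ["REPORT " + "; ".join(why)]  # explain refusal to the user
    return ["MOVE_TO object", "GRASP", "MOVE_TO goal", "RELEASE"]

arm = {"payload": 5.0, "reach": 1.2}  # kg, m (illustrative robot database)
print(plan_task(arm, {"weight": 2.0, "distance": 0.8}))
print(plan_task(arm, {"weight": 9.0, "distance": 0.8}))
```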
NASA Astrophysics Data System (ADS)
Yang, Zhixiao; Ito, Kazuyuki; Saijo, Kazuhiko; Hirotsune, Kazuyuki; Gofuku, Akio; Matsuno, Fumitoshi
This paper aims at constructing an efficient interface, similar to those widely used in daily life, to meet the needs of the many volunteer rescuers operating rescue robots at large-scale disaster sites. The developed system includes a force-feedback steering wheel interface and an artificial neural network (ANN) based mouse-screen interface. The former consists of a force-feedback steering control and a wall of six monitors. It provides manual operation, like driving a car, to navigate a rescue robot. The latter consists of a mouse and a camera view displayed on a monitor. It provides semi-autonomous operation by mouse clicking to navigate a rescue robot. Results of experiments show that a novice volunteer can skillfully navigate a tank rescue robot through either interface after 20 to 30 minutes of learning its operation. The steering wheel interface achieves high navigation speed in open areas, regardless of the terrain and surface conditions of a disaster site. The mouse-screen interface is good at exact navigation in complex structures, while imposing little stress on operators. The two interfaces are designed so that the operator can switch between them at any time, providing a combined, efficient navigation method.
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
Thesis by Eli R. Hooten, submitted to the Faculty of the Graduate School; only front matter survives in this record, including a chapter II outline covering interactive modalities and multi-touch interaction.
A two-class self-paced BCI to control a robot in four directions.
Ron-Angevin, Ricardo; Velasco-Alvarez, Francisco; Sancha-Ros, Salvador; da Silva-Sauer, Leandro
2011-01-01
In this work, an electroencephalographic analysis-based, self-paced (asynchronous) brain-computer interface (BCI) is proposed to control a mobile robot using four different navigation commands: turn right, turn left, move forward and move back. In order to reduce the probability of misclassification, the BCI is to be controlled with only two mental tasks (relaxed state versus imagination of right hand movements), using an audio-cued interface. Four healthy subjects participated in the experiment. After two sessions controlling a simulated robot in a virtual environment (which allowed the user to become familiar with the interface), three subjects successfully moved the robot in a real environment. The obtained results show that the proposed interface enables control over the robot, even for subjects with low BCI performance.
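One common way to get four commands from a two-class, self-paced BCI is a cued scan: the interface cycles audio cues over the command set, and detecting the "imagined movement" class during a cue selects that command, while the "relaxed" class lets the scan continue. The sketch below assumes that scheme and a classifier supplied from outside; the abstract does not specify the exact selection logic.

```python
import itertools

COMMANDS = ["turn right", "turn left", "move forward", "move back"]

def run_selector(classify, cue_seconds=2.0):
    """Cycle cues over the four commands; detecting 'movement' during a
    cue selects that command, 'relaxed' lets the scan move on."""
    for command in itertools.cycle(COMMANDS):
        print(f"cue: {command}")          # stands in for the audio cue
        if classify(cue_seconds) == "movement":
            return command

# usage with a stub classifier: relax through two cues, then select
labels = iter(["relaxed", "relaxed", "movement"])
print(run_selector(lambda _t: next(labels)))   # -> "move forward"
```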
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of a semiautonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps users pinpoint source information and supports the goals of the mission. The paper reports a preliminary human factors evaluation of this system in which several interface conditions were tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.
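The "automatically generate a collision-free path to any user-selected target" idea can be illustrated with a plain breadth-first search over an occupancy grid; the actual system's planner is not specified, so this is only a stand-in for the concept.

```python
from collections import deque

def collision_free_path(grid, start, goal):
    """BFS on an occupancy grid (grid[r][c] == 1 marks an obstacle).
    Returns a list of cells from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    came_from = {start: None}
    frontier = deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # walk parents back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(collision_free_path(grid, (0, 0), (2, 0)))  # routes around the wall
```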
Human guidance of mobile robots in complex 3D environments using smart glasses
NASA Astrophysics Data System (ADS)
Kopinsky, Ryan; Sharma, Aneesh; Gupta, Nikhil; Ordonez, Camilo; Collins, Emmanuel; Barber, Daniel
2016-05-01
In order for humans to safely work alongside robots in the field, the human-robot (HR) interface, which enables bi-directional communication between human and robot, should be able to quickly and concisely express the robot's intentions and needs. While the robot operates mostly in autonomous mode, the human should be able to intervene to effectively guide the robot in complex, risky and/or highly uncertain scenarios. Using smart glasses such as Google Glass, we seek to develop an HR interface that reduces interaction time and distractions while working with the robot.
A Sample Return Container with Hermetic Seal
NASA Technical Reports Server (NTRS)
Kong, Kin Yuen; Rafeek, Shaheed; Sadick, Shazad; Porter, Christopher C.
2000-01-01
A sample return container is being developed by Honeybee Robotics to receive samples from a derivative of the Champollion/ST4 Sample Acquisition and Transfer Mechanism or other samplers and then hermetically seal the samples for a sample return mission. The container is enclosed in a phase change material (PCM) chamber to prevent phase change during return and reentry to Earth. This container is designed to operate passively, with no motors or actuators. Using the sampler's featured drill tip for interfacing, transferring and sealing samples, the container consumes no electrical power and therefore minimizes sample temperature change. The circular container houses a few isolated canisters, which are sealed individually for samples acquired from different sites or depths. The drill-based sampler indexes each canister to the sample transfer position, below the index interface, for sample transfer. After sample transfer is completed, the sampler indexes a seal carrier, which lines up seals with the openings of the canisters. The sampler then moves to the sealing interface and seals the sample canisters one by one. The sealing interface can be designed to work with C-seals, knife-edge seals and cup seals; again, the sampler provides all sealing actuation. This sample return container and the co-engineered sample acquisition system are being developed by Honeybee Robotics in collaboration with the JPL Exploration Technology program.
Peña-Tapia, Elena; Martín-Barrio, Andrés; Olivares-Méndez, Miguel A.
2017-01-01
Multi-robot missions are a challenge for operators in terms of workload and situational awareness. These operators have to receive data from the robots, extract information, understand the situation properly, make decisions, generate the adequate commands, and send them to the robots. The consequences of excessive workload and lack of awareness can vary from inefficiencies to accidents. This work focuses on the study of future operator interfaces of multi-robot systems, taking into account relevant issues such as multimodal interactions, immersive devices, predictive capabilities and adaptive displays. Specifically, four interfaces have been designed and developed: a conventional, a predictive conventional, a virtual reality and a predictive virtual reality interface. The four interfaces have been validated by the performance of twenty-four operators that supervised eight multi-robot missions of fire surveillance and extinguishing. The results of the workload and situational awareness tests show that virtual reality improves the situational awareness without increasing the workload of operators, whereas the effects of predictive components are not significant and depend on their implementation. PMID:28749407
Scalability of Robotic Controllers: An Evaluation of Controller Options-Experiment II
2011-09-01
Fragmentary excerpts describe a touch-based interface designed for gloved-finger interaction, with larger-than-normal touch-screen buttons for commanding the robot, and cite related ARL work on designing interfaces and algorithms for soldier-robotic swarm interaction.
Combined virtual and real robotic test-bed for single operator control of multiple robots
NASA Astrophysics Data System (ADS)
Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash
2010-04-01
Teams of heterogeneous robots with different dynamics or capabilities can perform a variety of tasks such as multipoint surveillance, cooperative transport and exploration in hazardous environments. In this study, we work with heterogeneous teams of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which links every real robot to its virtual counterpart. A novel virtual interface has been integrated with augmented reality that can monitor the position and sensory information from the video feeds of ground and aerial robots in the 3D virtual environment, improving user situational awareness. An operator can efficiently control the real multi-robot team using the Drag-to-Move method on the virtual robots. This enables an operator to control groups of heterogeneous robots in a collaborative way, allowing more contaminant sources to be pursued simultaneously. An advanced feature of the virtual interface system is guarded teleoperation, which can be used to prevent operators from accidentally driving multiple robots into walls and other objects. Moreover, the image guidance and tracking features reduce operator workload.
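Guarded teleoperation of the kind mentioned above is often implemented as proximity-based velocity scaling; the thresholds below are assumed values for illustration, not the paper's parameters.

```python
def guard_velocity(v_cmd, min_obstacle_dist, stop_dist=0.3, slow_dist=1.0):
    """Scale the operator's commanded velocity by the distance (in meters)
    to the nearest obstacle: full speed beyond slow_dist, a linear ramp in
    between, and a hard stop inside stop_dist."""
    if min_obstacle_dist <= stop_dist:
        return 0.0
    if min_obstacle_dist >= slow_dist:
        return v_cmd
    return v_cmd * (min_obstacle_dist - stop_dist) / (slow_dist - stop_dist)

print(guard_velocity(0.5, 2.0))    # open space -> 0.5
print(guard_velocity(0.5, 0.65))   # near a wall -> 0.25
print(guard_velocity(0.5, 0.2))    # too close   -> 0.0
```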
NASA Technical Reports Server (NTRS)
Maxwell, Scott A.; Cooper, Brian; Hartman, Frank; Wright, John; Yen, Jeng; Leger, Chris
2005-01-01
A Mars rover is a complex system, and driving one is a complex endeavor. Rover drivers must be intimately familiar with the hardware and software of the mobility system and of the robotic arm. They must rapidly assess threats in the terrain, then creatively combine their knowledge of the vehicle and its environment to achieve each day's science and engineering objectives.
My thoughts through a robot's eyes: an augmented reality-brain-machine interface.
Kansaku, Kenji; Hata, Naoki; Takano, Kouji
2010-02-01
A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.
An EMG-based robot control scheme robust to time-varying EMG signal features.
Artemiadis, Panagiotis K; Kyriakopoulos, Kostas J
2010-05-01
Human-robot control interfaces have received increased attention during the past decades. With the introduction of robots in everyday life, especially in providing services to people with special needs (i.e., elderly, people with impairments, or people with disabilities), there is a strong necessity for simple and natural control interfaces. In this paper, electromyographic (EMG) signals from muscles of the human upper limb are used as the control interface between the user and a robot arm. EMG signals are recorded using surface EMG electrodes placed on the user's skin, making the user's upper limb free of bulky interface sensors or machinery usually found in conventional human-controlled systems. The proposed interface allows the user to control in real time an anthropomorphic robot arm in 3-D space, using upper limb motion estimates based only on EMG recordings. Moreover, the proposed interface is robust to EMG changes with respect to time, mainly caused by muscle fatigue or adjustments of contraction level. The efficiency of the method is assessed through real-time experiments, including random arm motions in the 3-D space with variable hand speed profiles.
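The paper's robustness to slowly drifting EMG features (fatigue, changing contraction levels) can be illustrated by an envelope extractor whose normalizer adapts over time; this is a simplified stand-in, not the authors' actual motion-estimation model.

```python
class AdaptiveEmgChannel:
    """Rectify-and-smooth envelope with a slowly adapting peak tracker,
    keeping the output near [0, 1] as the signal's amplitude drifts."""
    def __init__(self, smooth=0.05, adapt=0.001):
        self.envelope = 0.0
        self.peak = 1e-6
        self.smooth = smooth    # fast envelope tracking
        self.adapt = adapt      # slow normalizer adaptation

    def update(self, sample):
        self.envelope += self.smooth * (abs(sample) - self.envelope)
        if self.envelope > self.peak:
            self.peak = self.envelope                              # rise quickly
        else:
            self.peak += self.adapt * (self.envelope - self.peak)  # decay slowly
        return self.envelope / self.peak

channel = AdaptiveEmgChannel()
for v in [0.0, 0.4, -0.8, 0.6, -0.2]:
    print(round(channel.update(v), 3))
```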
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
La Vida Robot - High School Engineering Program Combats Engineering Brain Drain
Cameron, Allan; Lajvardi, Fredi
2018-05-04
Carl Hayden High School has built an impressive reputation with its robotics club. At a time when interest in science, math and engineering is declining, the Falcon Robotics club has young people fired up about engineering. Their program in underwater robots (MATE) and FIRST robotics is becoming a national model, not for building robots, but for building engineers. Teachers Fredi Lajvardi and Allan Cameron will present their story (How kids 'from the mean streets of Phoenix took on the best from M.I.T. in the national underwater bot championship' - Wired Magazine, April 2005) and how every student needs the opportunity to 'do real engineering.'
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via the operator interface by pretested complex task sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
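The patent's parameterized task primitives with run-time binding can be sketched as data: a primitive names an action and carries parameters, some of which are placeholders bound late from the task context. All names below are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    """A parameterized task primitive; '$name' values are placeholders
    bound at run time from the task context."""
    name: str
    params: dict = field(default_factory=dict)

    def bind(self, context):
        bound = {k: context[v[1:]] if isinstance(v, str) and v.startswith("$")
                 else v
                 for k, v in self.params.items()}
        return Primitive(self.name, bound)

# A complex task as a pretested sequence of primitives (illustrative names).
task = [
    Primitive("MOVE_ARM", {"target": "$grasp_point", "speed": 0.10}),
    Primitive("GRASP",    {"force": 5.0}),
    Primitive("MOVE_ARM", {"target": "$stow_point",  "speed": 0.05}),
]
context = {"grasp_point": (0.4, 0.1, 0.2), "stow_point": (0.0, 0.3, 0.1)}
print([p.bind(context) for p in task])
```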
ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.
Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi
2017-08-01
With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
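A toy illustration of the bridge idea: stream a named transform over a TCP socket as a length-prefixed binary frame. The framing below is invented for clarity; the real node speaks the OpenIGTLink wire protocol (default port 18944), whose headers differ.

```python
import socket
import struct

def send_transform(sock, name, matrix12):
    """Send a named 3x4 transform (12 floats: row-major rotation plus
    translation) as one length-prefixed record."""
    name_bytes = name.encode("ascii")
    payload = struct.pack(f"<H{len(name_bytes)}s12f",
                          len(name_bytes), name_bytes, *matrix12)
    sock.sendall(struct.pack("<I", len(payload)) + payload)

# usage sketch (host name is an assumption):
# sock = socket.create_connection(("slicer-host", 18944))
# send_transform(sock, "tool_tip", [1, 0, 0,  0, 1, 0,  0, 0, 1,  0, 0, 0])
```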
An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors
Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal
2011-01-01
Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper. PMID:22247679
Single board system for fuzzy inference
NASA Technical Reports Server (NTRS)
Symon, James R.; Watanabe, Hiroyuki
1991-01-01
The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom designed integrated circuit into a standard VMEbus environment. The Fuzzy Logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the Fuzzy chip and a standard, double height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High level C language functions hide details of the hardware system interface from the applications level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.
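The board's rule-based inference can be mimicked in software with a tiny min-max fuzzy controller: triangular memberships, rule clipping, and a discrete centroid defuzzifier. The rule base and universes here are invented for illustration, not the chip's actual rule set.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def infer(error):
    """Two-rule min-max inference with discrete centroid defuzzification."""
    w_neg = tri(error, -2.0, -1.0, 0.0)   # IF error is negative THEN output negative
    w_pos = tri(error,  0.0,  1.0, 2.0)   # IF error is positive THEN output positive
    num = den = 0.0
    for i in range(-20, 21):              # output universe, step 0.1
        u = i / 10.0
        mu = max(min(w_neg, tri(u, -2.0, -1.0, 0.0)),
                 min(w_pos, tri(u,  0.0,  1.0, 2.0)))
        num += mu * u
        den += mu
    return num / den if den else 0.0

print(infer(0.5))   # positive error -> positive crisp output
```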
Scaling up nanoscale water-driven energy conversion into evaporation-driven engines and generators
Chen, Xi; Goodnight, Davis; Gao, Zhenghan; Cavusoglu, Ahmet H.; Sabharwal, Nina; DeLay, Michael; Driks, Adam; Sahin, Ozgur
2015-01-01
Evaporation is a ubiquitous phenomenon in the natural environment and a dominant form of energy transfer in the Earth's climate. Engineered systems rarely, if ever, use evaporation as a source of energy, despite myriad examples of such adaptations in the biological world. Here, we report evaporation-driven engines that can power common tasks like locomotion and electricity generation. These engines start and run autonomously when placed at air–water interfaces. They generate rotary and piston-like linear motion using specially designed, biologically based artificial muscles responsive to moisture fluctuations. Using these engines, we demonstrate an electricity generator that rests on water while harvesting its evaporation to power a light source, and a miniature car (weighing 0.1 kg) that moves forward as the water in the car evaporates. Evaporation-driven engines may find applications in powering robotic systems, sensors, devices and machinery that function in the natural environment. PMID:26079632
Integration of an intelligent systems behavior simulator and a scalable soldier-machine interface
NASA Astrophysics Data System (ADS)
Johnson, Tony; Manteuffel, Chris; Brewster, Benjamin; Tierney, Terry
2007-04-01
As the Army's Future Combat Systems (FCS) introduce emerging technologies and new force structures to the battlefield, soldiers will increasingly face new challenges in workload management. The next generation warfighter will be responsible for effectively managing robotic assets in addition to performing other missions. Studies of future battlefield operational scenarios involving the use of automation, including the specification of existing and proposed technologies, will provide significant insight into potential problem areas regarding soldier workload. The US Army Tank Automotive Research, Development, and Engineering Center (TARDEC) is currently executing an Army technology objective program to analyze and evaluate the effect of automated technologies and their associated control devices with respect to soldier workload. The Human-Robotic Interface (HRI) Intelligent Systems Behavior Simulator (ISBS) is a human performance measurement simulation system that allows modelers to develop constructive simulations of military scenarios with various deployments of interface technologies in order to evaluate operator effectiveness. One such interface is TARDEC's Scalable Soldier-Machine Interface (SMI). The scalable SMI provides a configurable machine interface application that is capable of adapting to several hardware platforms by recognizing the physical space limitations of the display device. This paper describes the integration of the ISBS and Scalable SMI applications, which will ultimately benefit both systems. The ISBS will be able to use the Scalable SMI to visualize the behaviors of virtual soldiers performing HRI tasks, such as route planning, and the scalable SMI will benefit from stimuli provided by the ISBS simulation environment. The paper describes the background of each system and details of the system integration approach.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.
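A plausible shape of the multi-filter color-plus-geometry rejection described above, using OpenCV: threshold in HSV, then discard contours whose area or circularity does not match the learned object. The thresholds are illustrative, not the team's tuned values.

```python
import cv2
import numpy as np

def find_object(bgr_frame, hsv_lo, hsv_hi, target_area, area_tol=0.5):
    """Color filter followed by size and outline filters, so blobs of the
    right color but wrong shape are rejected. Returns a contour or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        area = cv2.contourArea(c)
        if abs(area - target_area) > area_tol * target_area:
            continue                               # wrong size
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        if 4 * np.pi * area / perimeter ** 2 < 0.6:
            continue                               # outline not compact enough
        if best is None or area > cv2.contourArea(best):
            best = c
    return best
```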
1991-06-05
Surviving fragments include Figure 1, "Conceptual User Interface for the Rapid Runway Repair (RRR) Remote Control System"; a section heading on the communication system; and a reference to Mariani, D., 1988, "Robotic Vehicle Communications Interoperability," RD&E Center Technical Report, US Army Tank-Automotive Command.
Augmented reality and haptic interfaces for robot-assisted surgery.
Yamamoto, Tomonori; Abolhassani, Niki; Jung, Sung; Okamura, Allison M; Judkins, Timothy N
2012-03-01
Current teleoperated robot-assisted minimally invasive surgical systems do not take full advantage of the potential performance enhancements offered by various forms of haptic feedback to the surgeon. Direct and graphical haptic feedback systems can be integrated with vision and robot control systems in order to provide haptic feedback to improve safety and tissue mechanical property identification. An interoperable interface for teleoperated robot-assisted minimally invasive surgery was developed to provide haptic feedback and augmented visual feedback using three-dimensional (3D) graphical overlays. The software framework consists of control and command software, robot plug-ins, image processing plug-ins and 3D surface reconstructions. The feasibility of the interface was demonstrated in two tasks performed with artificial tissue: palpation to detect hard lumps and surface tracing, using vision-based forbidden-region virtual fixtures to prevent the patient-side manipulator from entering unwanted regions of the workspace. The interoperable interface enables fast development and successful implementation of effective haptic feedback methods in teleoperation.
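The forbidden-region virtual fixture mentioned above can be sketched for a spherical region: commanded positions inside the region are projected back to its surface, and a penetration-proportional force is returned for haptic display. The real system derives its regions from vision; the spherical geometry and stiffness here are assumptions.

```python
import numpy as np

def apply_forbidden_region(cmd_pos, center, radius, stiffness=200.0):
    """Clamp a commanded position out of a spherical forbidden region and
    return (safe_position, haptic_force)."""
    cmd = np.asarray(cmd_pos, dtype=float)
    ctr = np.asarray(center, dtype=float)
    offset = cmd - ctr
    dist = float(np.linalg.norm(offset))
    if dist >= radius:
        return cmd, np.zeros(3)                 # outside: pass through
    if dist < 1e-9:                             # at the center: pick a direction
        offset, dist = np.array([1e-9, 0.0, 0.0]), 1e-9
    direction = offset / dist
    safe_pos = ctr + direction * radius         # project onto the boundary
    force = stiffness * (radius - dist) * direction
    return safe_pos, force

pos, force = apply_forbidden_region([0.0, 0.0, 0.05], [0, 0, 0], 0.1)
print(pos, force)    # pushed to the surface, force pointing outward
```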
NASA Technical Reports Server (NTRS)
Creus, Carolina
1991-01-01
Active (dynamic) tactile sensing was explored using a commercially available tactile array sensor. This task required the redesign of the sensor interface and a full understanding of the old sensor hardware implementation. The research proceeded in stages: the first stage involved reverse engineering the old tactile sensor; the second explored the characteristics and behavior of the tactile sensor pad; the next addressed the redesign of the sensor interface using the knowledge gained from the previous two stages; and in the last stage, software to control the tactile sensor was developed to aid in the data acquisition process.
Zygomalas, Apollon; Giokas, Konstantinos; Koutsouris, Dimitrios
2014-01-01
Aim. Modular mini-robots can be used in novel minimally invasive surgery techniques like natural orifice transluminal endoscopic surgery (NOTES) and laparoendoscopic single site (LESS) surgery. The control of these miniature assistants is complicated. The aim of this study is the in silico investigation of a remote controlling interface for modular miniature robots which can be used in minimally invasive surgery. Methods. The conceptual controlling system was developed, programmed, and simulated using professional robotics simulation software. Three different modes of control were programmed. The remote controlling surgical interface was virtually designed as a high scale representation of the respective modular mini-robot, therefore a modular controlling system itself. Results. With the proposed modular controlling system the user could easily identify the conformation of the modular mini-robot and adequately modify it as needed. The arrangement of each module was always known. The in silico investigation gave useful information regarding the controlling mode, the adequate speed of rearrangements, and the number of modules needed for efficient working tasks. Conclusions. The proposed conceptual model may promote the research and development of more sophisticated modular controlling systems. Modular surgical interfaces may improve the handling and the dexterity of modular miniature robots during minimally invasive procedures. PMID:25295187
Mechanically Compliant Electronic Materials for Wearable Photovoltaics and Human-Machine Interfaces
NASA Astrophysics Data System (ADS)
O'Connor, Timothy Francis, III
Applications of stretchable electronic materials for human-machine interfaces are described herein. Intrinsically stretchable organic conjugated polymers and stretchable electronic composites were used to develop stretchable organic photovoltaics (OPVs), mechanically robust wearable OPVs, and human-machine interfaces for gesture recognition, American Sign Language translation, haptic control of robots, and touch emulation for virtual reality, augmented reality, and the transmission of touch. The stretchable and wearable OPVs comprise active layers of poly-3-alkylthiophene:phenyl-C61-butyric acid methyl ester (P3AT:PCBM) and transparent conductive electrodes of poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate) (PEDOT:PSS), and the devices could be fabricated only through a deep understanding of the connection between molecular structure and the co-engineering of electronic performance with mechanical resilience. The talk concludes with the use of composite piezoresistive sensors in two smart glove prototypes. The first integrates stretchable strain sensors comprising a carbon-elastomer composite, a wearable microcontroller, low energy Bluetooth, and a 6-axis accelerometer/gyroscope to construct a fully functional gesture recognition glove capable of wirelessly translating American Sign Language to text on a cell phone screen. The second creates a system for the haptic control of a 3D printed robot arm, as well as the transmission of touch and temperature information.
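The glove's gesture recognition can be caricatured as nearest-template matching over normalized flex-sensor readings; real ASL letters also need the accelerometer/gyroscope, and the templates below are invented for illustration.

```python
import math

# Illustrative per-gesture flex templates (five fingers, normalized 0..1).
TEMPLATES = {
    "A": [0.9, 0.9, 0.9, 0.9, 0.2],
    "B": [0.1, 0.1, 0.1, 0.1, 0.8],
    "L": [0.1, 0.9, 0.9, 0.9, 0.1],
}

def classify(reading):
    """Nearest-template classification of one glove sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda g: dist(TEMPLATES[g], reading))

print(classify([0.85, 0.92, 0.88, 0.90, 0.25]))   # -> "A"
```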
Neurobionics and the brain-computer interface: current applications and future horizons.
Rosenfeld, Jeffrey V; Wong, Yan Tat
2017-05-01
The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study, on video overlays, investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects the type of symbology (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. This HRI research contributes to closure of Human Research Program (HRP) gaps by providing information on how display and control characteristics - those related to guidance, feedback, and command modalities - affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines to design effective human-robot interfaces.
A development of intelligent entertainment robot for home life
NASA Astrophysics Data System (ADS)
Kim, Cheoltaek; Lee, Ju-Jang
2005-12-01
The purpose of this paper was to present the study and design idea for an intelligent entertainment robot with educational purposes (IRFEE). The robot has been designed for home life, considering dependability and interaction. The developed robot has three objectives: (1) autonomous operation, (2) a design emphasizing mobility and robustness, and (3) an interface and software supporting entertainment and education functionalities. Autonomous navigation was implemented using active-vision-based SLAM and a modified EPF algorithm. The two differential wheels and the pan-tilt unit were designed for mobility and robustness, and the exterior was designed with aesthetic elements in mind while minimizing interference. The speech and tracking algorithms provide a good interface with humans. Image transfer and Internet site connection support remote access services and the educational functions.
A Tailored Concept of Operations for NASA LSP Integrated Operations
NASA Technical Reports Server (NTRS)
Owens, Clark V.
2016-01-01
An integral part of the Systems Engineering process is the creation of a Concept of Operations (ConOps) for a given system, with the ConOps initially established early in the system design process and evolved as the system definition and design matures. As Integration Engineers in NASA's Launch Services Program (LSP) at Kennedy Space Center (KSC), our job is to manage the interface requirements for all the robotic space missions that come to our Program for a Launch Service. LSP procures and manages a launch service from one of our many commercial Launch Vehicle Contractors (LVCs) and these commercial companies are then responsible for developing the Interface Control Document (ICD), the verification of the requirements in that document, and all the services pertaining to integrating the spacecraft and launching it into orbit. However, one of the systems engineering tools that have not been employed within LSP to date is a Concept of Operations. The goal of this project is to research the format and content that goes into these various aerospace industry ConOps and tailor the format and content into template form, so the template may be used as an engineering tool for spacecraft integration with future LSP procured launch services.
Weintek interfaces for controlling the position of a robotic arm
NASA Astrophysics Data System (ADS)
Barz, C.; Ilia, M.; Ilut, T.; Pop-Vadean, A.; Pop, P. P.; Dragan, F.
2016-08-01
The paper presents the use of Weintek panels to control the position of a robotic arm, operated step by step on the three motor axes. The PLC control interface is designed with a Weintek touch screen: the HMI Weintek eMT3070a serves as the user interface in the PLC command process. This HMI controls the local PLC, entering coordinates on the X, Y, and Z axes. The setup also allows development in a virtual environment for e-learning and for monitoring the robotic arm's actions.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer; Ezer, Neta; Sandor, Aniko
2011-01-01
Human-robot interaction (HRI) is about understanding and shaping the interactions between humans and robots (Goodrich & Schultz, 2007). It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively (Crandall, Goodrich, Olsen Jr., & Nielsen, 2005). It is also critical to evaluate the effects of human-robot interfaces and command modalities on operator mental workload (Sheridan, 1992) and situation awareness (Endsley, Bolté, & Jones, 2003). By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed that support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for design. Because the factors associated with interfaces and command modalities in HRI are too numerous to address in 3 years of research, the proposed research concentrates on three manageable areas applicable to National Aeronautics and Space Administration (NASA) robot systems. These topic areas emerged from the Fiscal Year (FY) 2011 work that included extensive literature reviews and observations of NASA systems. The three topic areas are: 1) video overlays, 2) camera views, and 3) command modalities. Each area is described in detail below, along with relevance to existing NASA human-robot systems. In addition to studies in these three topic areas, a workshop is proposed for FY12. The workshop will bring together experts in human-robot interaction and robotics to discuss the state of the practice as applicable to research in space robotics. Studies proposed in the area of video overlays consider two factors in the implementation of augmented reality (AR) for operator displays during teleoperation. The first of these factors is the type of navigational guidance provided by AR symbology. In the proposed studies, participants' performance during teleoperation of a robot arm will be compared when they are provided with command-guidance symbology (that is, directing the operator what commands to make) or situation-guidance symbology (that is, providing natural cues so that the operator can infer what commands to make). The second factor for AR symbology is the effects of overlays that are either superimposed or integrated into the external view of the world. A study is proposed in which the effects of superimposed and integrated overlays on operator task performance during teleoperated driving tasks are compared.
Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2003-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents') implemented in the Brahms language run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.
NASA Astrophysics Data System (ADS)
Schieber, Marc H.
2016-07-01
Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews[1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.
Remote secure observing for the Faulkes Telescopes
NASA Astrophysics Data System (ADS)
Smith, Robert J.; Steele, Iain A.; Marchant, Jonathan M.; Fraser, Stephen N.; Mucke-Herzberg, Dorothea
2004-09-01
Since the Faulkes Telescopes are to be used by a wide variety of audiences, both powerful engineering level and simple graphical interfaces exist giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.
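The "only authenticated and safe commands" requirement can be sketched as a mutually authenticated TLS endpoint with a command whitelist, using Python's ssl module. The certificate paths and command set are assumptions; the observatory's actual stack layers SSL, proxies, IPSec and firewalls as described above.

```python
import socket
import ssl

ALLOWED = {"STATUS", "POINT", "EXPOSE", "STOP"}   # assumed command set

def serve_commands(certfile, keyfile, cafile, port=8443):
    """Accept one mutually authenticated TLS client and answer a single
    whitelisted command; anything else is rejected."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile, keyfile)
    ctx.load_verify_locations(cafile)
    ctx.verify_mode = ssl.CERT_REQUIRED           # require a client certificate
    with socket.create_server(("", port)) as server:
        with ctx.wrap_socket(server, server_side=True) as tls:
            conn, _addr = tls.accept()
            with conn:
                text = conn.recv(1024).decode(errors="replace").strip()
                cmd = text.split()[0].upper() if text else ""
                if cmd in ALLOWED:
                    conn.sendall(b"ACK\n")        # forward to the telescope here
                else:
                    conn.sendall(b"REJECTED\n")
```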
Dominici, Nadia; Keller, Urs; Vallery, Heike; Friedli, Lucia; van den Brand, Rubia; Starkey, Michelle L; Musienko, Pavel; Riener, Robert; Courtine, Grégoire
2012-07-01
Central nervous system (CNS) disorders distinctly impair locomotor pattern generation and balance, but technical limitations prevent independent assessment and rehabilitation of these subfunctions. Here we introduce a versatile robotic interface to evaluate, enable and train pattern generation and balance independently during natural walking behaviors in rats. In evaluation mode, the robotic interface affords detailed assessments of pattern generation and dynamic equilibrium after spinal cord injury (SCI) and stroke. In enabling mode, the robot acts as a propulsive or postural neuroprosthesis that instantly promotes unexpected locomotor capacities including overground walking after complete SCI, stair climbing following partial SCI and precise paw placement shortly after stroke. In training mode, robot-enabled rehabilitation, epidural electrical stimulation and monoamine agonists reestablish weight-supported locomotion, coordinated steering and balance in rats with a paralyzing SCI. This new robotic technology and associated concepts have broad implications for both assessing and restoring motor functions after CNS disorders, both in animals and in humans.
ERIC Educational Resources Information Center
Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.
2016-01-01
A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1990-01-01
A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.
Robotic devices and brain-machine interfaces for hand rehabilitation post-stroke.
McConnell, Alistair C; Moioli, Renan C; Brasil, Fabricio L; Vallejo, Marta; Corne, David W; Vargas, Patricia A; Stokes, Adam A
2017-06-28
To review the state of the art of robotic-aided hand physiotherapy for post-stroke rehabilitation, including the use of brain-machine interfaces. Each patient has a unique clinical history and, in response to personalized treatment needs, research into individualized and at-home treatment options has expanded rapidly in recent years. This has resulted in the development of many devices and design strategies for use in stroke rehabilitation. The development progression of robotic-aided hand physiotherapy devices and brain-machine interface systems is outlined, focussing on those with mechanisms and control strategies designed to improve recovery outcomes of the hand post-stroke. A total of 110 commercial and non-commercial hand and wrist devices, spanning the 2 major core designs: end-effector and exoskeleton are reviewed. The growing body of evidence on the efficacy and relevance of incorporating brain-machine interfaces in stroke rehabilitation is summarized. The challenges involved in integrating robotic rehabilitation into the healthcare system are discussed. This review provides novel insights into the use of robotics in physiotherapy practice, and may help system designers to develop new devices.
Recent trends for practical rehabilitation robotics, current challenges and the future.
Yakub, Fitri; Md Khudzari, Ahmad Zahran; Mori, Yasuchika
2014-03-01
This paper presents and studies various selected literature primarily from conference proceedings, journals and clinical tests of the robotic, mechatronics, neurology and biomedical engineering of rehabilitation robotic systems. The present paper focuses on three main categories: types of rehabilitation robots, key technologies with current issues, and future challenges. Literature on fundamental research with some examples from commercialized robots and new robot development projects related to rehabilitation are introduced. Most of the commercialized robots presented in this paper are well known especially to robotics engineers and scholars in the robotic field, but are less known to humanities scholars. The field of rehabilitation robot research is expanding; in light of this, some of the current issues and future challenges in rehabilitation robot engineering are recalled, examined and clarified with future directions. This paper is concluded with some recommendations with respect to rehabilitation robots.
NASA Technical Reports Server (NTRS)
Stecklein, Jonette
2017-01-01
NASA has held an annual robotic mining competition for teams of university/college students since 2010. This competition is yearlong, suitable for a senior university engineering capstone project. It encompasses the full project life cycle from ideation of a robot design, through tele-operation of the robot collecting regolith in simulated Mars conditions, to disposal of the robot systems after the competition. A major required element for this competition is a Systems Engineering Paper in which each team describes the systems engineering approaches used on their project. The score for the Systems Engineering Paper contributes 25% towards the team’s score for the competition’s grand prize. The required use of systems engineering on the project by this competition introduces the students to an intense practical application of systems engineering throughout a full project life cycle.
Experiences in Developing an Experimental Robotics Course Program for Undergraduate Education
ERIC Educational Resources Information Center
Jung, Seul
2013-01-01
An interdisciplinary undergraduate-level robotics course offers students the chance to integrate their engineering knowledge learned throughout their college years by building a robotic system. Robotics is thus a core course in system and control-related engineering education. This paper summarizes the experience of developing robotics courses…
De Momi, E; Ferrigno, G
2010-01-01
The robot and sensors integration for computer-assisted surgery and therapy (ROBOCAST) project (FP7-ICT-2007-215190) is co-funded by the European Union within the Seventh Framework Programme in the field of information and communication technologies. The ROBOCAST project focuses on robot- and artificial-intelligence-assisted keyhole neurosurgery (tumour biopsy and local drug delivery along straight or turning paths). The goal of this project is to assist surgeons with a robotic system controlled by an intelligent high-level controller (HLC) able to gather and integrate information from the surgeon, from diagnostic images, and from an array of on-field sensors. The HLC integrates pre-operative and intra-operative diagnostics data and measurements, intelligence augmentation, multiple-robot dexterity, and multiple sensory inputs in a closed-loop cooperating scheme including a smart interface for improved haptic immersion and integration. This paper, after the overall architecture description, focuses on the intelligent trajectory planner based on risk estimation and human criticism. The current status of development is reported, and first tests on the planner are shown by using a real image stack and risk descriptor phantom. The advantages of using a fuzzy risk description are given by the possibility of upgrading the knowledge on-field without the intervention of a knowledge engineer.
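The planner's fuzzy risk description can be illustrated by scoring candidate trajectories with a membership function of distance to critical structures and aggregating with a fuzzy OR (max); the margin and aggregation choice are assumptions for illustration, not ROBOCAST's actual descriptors.

```python
def risk_membership(distance_mm, safe_margin_mm=5.0):
    """Fuzzy 'risky' membership for one sample point: 1 when touching a
    critical structure, falling linearly to 0 at the safe margin."""
    if distance_mm <= 0.0:
        return 1.0
    if distance_mm >= safe_margin_mm:
        return 0.0
    return 1.0 - distance_mm / safe_margin_mm

def trajectory_risk(samples, distance_to_structures):
    """Aggregate risk along a candidate path as the max membership over
    its sampled points (a conservative fuzzy OR)."""
    return max(risk_membership(distance_to_structures(p)) for p in samples)

# The planner would then prefer the least risky candidate:
# best = min(candidates, key=lambda path: trajectory_risk(path, dist_map))
```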
NASA Technical Reports Server (NTRS)
Torosyan, David
2012-01-01
Just as important as the engineering that goes into building a robot is the method of interaction, or how human users will use the machine. As part of the Human-System Interactions group (Conductor) at JPL, I explored using a web interface to interact with ATHLETE, a prototype lunar rover. I investigated the usefulness of HTML5 and JavaScript as a telemetry viewer, as well as the feasibility of having a rover communicate with a web server. To test my ideas I built a mobile-compatible website, designed primarily for an Android tablet. The website took input from ATHLETE engineers, and upon its completion I conducted a user test to assess its effectiveness.
Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies
2006-07-01
and the use of lightweight portable robotic sensor platforms. ... robotics has reached a point where some generalities of HRI transcend specific... displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor... tasks that do not involve mobility and usually involve camera control or data fusion from sensors. Active search: search tasks that involve mobility
NASA Astrophysics Data System (ADS)
Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan
2010-02-01
The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real-life applications. This system serves as an important building block of a complete vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera vision system, where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested under several user-specified commands issued from the PC.
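As an illustration of the PC-to-microcontroller link, below is a minimal PC-side sketch in Python using pyserial; the single-byte command framing, serial port, and baud rate are assumptions for illustration, since the abstract does not specify the firmware protocol.

import serial  # pyserial

# Hypothetical framing: 'P'/'T' selects the pan or tilt servo, followed
# by one angle byte (0-180 degrees). The real PIC firmware's protocol is
# not described in the abstract.
def send_servo_angle(port, axis, angle_deg):
    assert axis in (b'P', b'T') and 0 <= angle_deg <= 180
    port.write(axis + bytes([angle_deg]))

# Assumed device path and baud rate; requires the hardware to be present.
with serial.Serial('/dev/ttyUSB0', 9600, timeout=1) as port:
    send_servo_angle(port, b'P', 90)   # centre the pan servo
    send_servo_angle(port, b'T', 45)   # tilt the camera down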
Bioengineering solutions for neural repair and recovery in stroke.
Modo, Michel; Ambrosio, Fabrisia; Friedlander, Robert M; Badylak, Stephen F; Wechsler, Lawrence R
2013-12-01
This review discusses emerging bioengineering opportunities for the treatment of stroke and their potential to build on current rehabilitation protocols. Bioengineering is a vast field that ranges from biomaterials to brain-computer interfaces. Biomaterials find application in the delivery of pharmacotherapies, as well as the emerging field of tissue engineering. For the treatment of stroke, these approaches have to be seen in the context of physical therapy in order to maximize functional outcomes. There is also an emergence of rehabilitation that engages engineering solutions, such as robot-assisted training, as well as brain-computer interfaces that can potentially assist in the case of paralysis. Stroke remains the main cause of adult disability with rehabilitation therapy being the focus for chronic impairments. Bioengineering is offering new opportunities to both support and synergize with currently available treatment options, and also promises to potentially dramatically improve available approaches. See the Video Supplementary Digital Content 1 (http://links.lww.com/CONR/A21).
NASA Technical Reports Server (NTRS)
2004-01-01
Topics covered include: COTS MEMS Flow-Measurement Probes; Measurement of an Evaporating Drop on a Reflective Substrate; Airplane Ice Detector Based on a Microwave Transmission Line; Microwave/Sonic Apparatus Measures Flow and Density in Pipe; Reducing Errors by Use of Redundancy in Gravity Measurements; Membrane-Based Water Evaporator for a Space Suit; Compact Microscope Imaging System with Intelligent Controls; Chirped-Superlattice, Blocked-Intersubband QWIP; Charge-Dissipative Electrical Cables; Deep-Sea Video Cameras Without Pressure Housings; RFID and Memory Devices Fabricated Integrally on Substrates; Analyzing Dynamics of Cooperating Spacecraft; Spacecraft Attitude Maneuver Planning Using Genetic Algorithms; Forensic Analysis of Compromised Computers; Document Concurrence System; Managing an Archive of Images; MPT Prediction of Aircraft-Engine Fan Noise; Improving Control of Two Motor Controllers; Electro-deionization Using Micro-separated Bipolar Membranes; Safer Electrolytes for Lithium-Ion Cells; Rotating Reverse-Osmosis for Water Purification; Making Precise Resonators for Mesoscale Vibratory Gyroscopes; Robotic End Effectors for Hard-Rock Climbing; Improved Nutation Damper for a Spin-Stabilized Spacecraft; Exhaust Nozzle for a Multitube Detonative Combustion Engine; Arc-Second Pointer for Balloon-Borne Astronomical Instrument; Compact, Automated Centrifugal Slide-Staining System; Two-Armed, Mobile, Sensate Research Robot; Compensating for Effects of Humidity on Electronic Noses; Brush/Fin Thermal Interfaces; Multispectral Scanner for Monitoring Plants; Coding for Communication Channels with Dead-Time Constraints; System for Better Spacing of Airplanes En Route; Algorithm for Training a Recurrent Multilayer Perceptron; Orbiter Interface Unit and Early Communication System; White-Light Nulling Interferometers for Detecting Planets; and Development of Methodology for Programming Autonomous Agents.
Scaling up nanoscale water-driven energy conversion into evaporation-driven engines and generators
Chen, Xi; Goodnight, Davis; Gao, Zhenghan; ...
2015-06-16
Evaporation is a ubiquitous phenomenon in the natural environment and a dominant form of energy transfer in the Earth’s climate. Engineered systems rarely, if ever, use evaporation as a source of energy, despite myriad examples of such adaptations in the biological world. In this work, we report evaporation-driven engines that can power common tasks like locomotion and electricity generation. These engines start and run autonomously when placed at air–water interfaces. They generate rotary and piston-like linear motion using specially designed, biologically based artificial muscles responsive to moisture fluctuations. Using these engines, we demonstrate an electricity generator that rests on water while harvesting its evaporation to power a light source, and a miniature car (weighing 0.1 kg) that moves forward as the water in the car evaporates. Evaporation-driven engines may find applications in powering robotic systems, sensors, devices and machinery that function in the natural environment.
A motion sensing-based framework for robotic manipulation.
Deng, Hao; Xia, Zeyang; Weng, Shaokui; Gan, Yangzhou; Fang, Peng; Xiong, Jing
2016-01-01
To date, outside of controlled environments, robots normally perform manipulation tasks in cooperation with a human operator. This pattern requires robot operators to be highly trained on varied teach-pendant operating systems. Motion sensing technology, which enables human-machine interaction through a novel and natural gesture interface, inspired us to adopt this user-friendly and straightforward operation mode for robotic manipulation. Thus, in this paper, we present a motion sensing-based framework for robotic manipulation, which recognizes gesture commands captured from a motion sensing input device and drives the corresponding robot actions. For compatibility, a general hardware interface layer was also developed within the framework. Simulation and physical experiments were conducted for preliminary validation. The results show that the proposed framework is an effective approach for general robotic manipulation under motion sensing control.
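As a sketch of the kind of gesture-to-command layer such a framework needs, the following hypothetical Python dispatcher maps recognized gesture labels to robot actions; the gesture names and actions are invented placeholders, and a real recognizer and hardware interface layer would sit on either side of it.

from typing import Callable, Dict

class GestureDispatcher:
    """Map recognized gesture labels to robot actions. The recognizer
    and the robot driver are hypothetical stand-ins."""
    def __init__(self):
        self._handlers: Dict[str, Callable[[], None]] = {}

    def register(self, gesture: str, handler: Callable[[], None]):
        self._handlers[gesture] = handler

    def dispatch(self, gesture: str):
        handler = self._handlers.get(gesture)
        if handler is None:
            print(f"ignoring unknown gesture: {gesture}")
        else:
            handler()

dispatcher = GestureDispatcher()
dispatcher.register("swipe_left", lambda: print("robot: move left"))
dispatcher.register("close_fist", lambda: print("robot: close gripper"))

# A motion-sensing device would feed labels in a loop; simulated here.
for g in ["swipe_left", "wave", "close_fist"]:
    dispatcher.dispatch(g)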
SOFT ROBOTICS. A 3D-printed, functionally graded soft robot powered by combustion.
Bartlett, Nicholas W; Tolley, Michael T; Overvelde, Johannes T B; Weaver, James C; Mosadegh, Bobak; Bertoldi, Katia; Whitesides, George M; Wood, Robert J
2015-07-10
Roboticists have begun to design biologically inspired robots with soft or partially soft bodies, which have the potential to be more robust and adaptable, and safer for human interaction, than traditional rigid robots. However, key challenges in the design and manufacture of soft robots include the complex fabrication processes and the interfacing of soft and rigid components. We used multimaterial three-dimensional (3D) printing to manufacture a combustion-powered robot whose body transitions from a rigid core to a soft exterior. This stiffness gradient, spanning three orders of magnitude in modulus, enables reliable interfacing between rigid driving components (controller, battery, etc.) and the primarily soft body, and also enhances performance. Powered by the combustion of butane and oxygen, this robot is able to perform untethered jumping. Copyright © 2015, American Association for the Advancement of Science.
A novel interface for the telementoring of robotic surgery.
Shin, Daniel H; Dalag, Leonard; Azhar, Raed A; Santomauro, Michael; Satkunasivam, Raj; Metcalfe, Charles; Dunn, Matthew; Berger, Andre; Djaladat, Hooman; Nguyen, Mike; Desai, Mihir M; Aron, Monish; Gill, Inderbir S; Hung, Andrew J
2015-08-01
To prospectively evaluate the feasibility and safety of a novel, second-generation telementoring interface (Connect™; Intuitive Surgical Inc., Sunnyvale, CA, USA) for the da Vinci robot. Robotic surgery trainees were mentored during portions of robot-assisted prostatectomy and renal surgery cases. Cases were assigned as traditional in-room mentoring or remote mentoring using Connect. While viewing two-dimensional, real-time video of the surgical field, remote mentors delivered verbal and visual counsel, using two-way audio and telestration (drawing) capabilities. Perioperative and technical data were recorded. Trainee robotic performance was rated using a validated assessment tool by both mentors and trainees. The mentoring interface was rated using a multi-factorial Likert-based survey. The Mann-Whitney and t-tests were used to determine statistical differences. We enrolled 55 mentored surgical cases (29 in-room, 26 remote). Perioperative variables of operative time and blood loss were similar between in-room and remote mentored cases. Robotic skills assessment showed no significant difference (P > 0.05). Mentors preferred remote over in-room telestration (P = 0.05); otherwise no significant difference existed in evaluation of the interfaces. Remote cases using wired (vs wireless) connections had lower latency and better data transfer (P = 0.005). Three of 18 (17%) wireless sessions were disrupted; one was converted to wired, one continued after restarting Connect, and the third was aborted. A bipolar injury to the colon occurred during one (3%) in-room mentored case; no intraoperative injuries were reported during remote sessions. In a tightly controlled environment, the Connect interface allows trainee robotic surgeons to be telementored in a safe and effective manner while performing basic surgical techniques. Significant steps remain prior to widespread use of this technology. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.
Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface
Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong
2017-01-01
There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advances in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow systematic interaction between humans and devices; for example, eye movement-based wheelchair control. However, existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritation, as they require a metal electrode affixed to the skin with a gel and acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact with the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to the conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% over five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485
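To illustrate what a five-class eye-movement classifier of this kind involves, here is a minimal, hypothetical Python sketch with synthetic data standing in for two-channel electrooculogram windows; the features and classifier are generic choices, not the paper's optimized feature set.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def eog_features(window):
    """Simple amplitude and slope features from a (channels, samples)
    EOG window; illustrative only."""
    return np.array([window.mean(axis=1),
                     np.ptp(window, axis=1),
                     np.diff(window).mean(axis=1)]).ravel()

# Synthetic stand-in data: 200 windows, 2 channels, 50 samples each,
# with 5 classes (e.g., left, right, up, down, blink).
X = rng.normal(size=(200, 2, 50))
y = rng.integers(0, 5, size=200)
feats = np.array([eog_features(w) for w in X])

clf = LogisticRegression(max_iter=1000).fit(feats, y)
print("train accuracy:", clf.score(feats, y))  # meaningless on random data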
Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue
NASA Technical Reports Server (NTRS)
Zornetzer, Steve; Gage, Douglas
2005-01-01
Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.
Knowledge representation system for assembly using robots
NASA Technical Reports Server (NTRS)
Jain, A.; Donath, M.
1987-01-01
Assembly robots combine the benefits of speed and accuracy with the capability of adapting to changes in the work environment. However, an impediment to the use of robots is the complexity of the man-machine interface. This interface can be improved by providing a means of using a priori knowledge and reasoning capabilities for controlling and monitoring the tasks performed by robots. Robots ought to be able to perform complex assembly tasks with only supervisory guidance from human operators. For such supervisory guidance, it is important to express commands in terms of the effects desired, rather than in terms of the motion the robot must undertake to achieve these effects. A suitable knowledge representation can facilitate the conversion of task-level descriptions into explicit instructions to the robot. Such a system would use symbolic relationships describing the a priori information about the robot, its environment, and the tasks specified by the operator to generate the commands for the robot.
Design And Control Of Agricultural Robot For Tomato Plants Treatment And Harvesting
NASA Astrophysics Data System (ADS)
Sembiring, Arnes; Budiman, Arif; Lestari, Yuyun D.
2017-12-01
Although Indonesia is one of the biggest agricultural countries in the world, the implementation of robotic technology, automation, and efficiency enhancement in its agricultural processes is not yet extensive. This research proposes a low-cost agricultural robot architecture. The robot can help farmers survey their farm area, treat tomato plants, and harvest ripe tomatoes. Communication between farmer and robot is carried out over a wireless radio link covering a wide area (120 m radius), combined with Bluetooth to simplify communication between the robot and the farmer's Android smartphone. The robot is equipped with a camera, so farmers can survey the farm in real time on a 7-inch monitor. Farmers control the robot and arm movement through a user interface on the Android smartphone. The user interface contains control icons that allow farmers to drive the robot (forward, reverse, turn right, and turn left) and to cut spotty leaves or harvest the ripe tomatoes.
Developments in brain-machine interfaces from the perspective of robotics.
Kim, Hyun K; Park, Shinsuk; Srinivasan, Mandayam A
2009-04-01
Many patients suffer from the loss of motor skills, resulting from traumatic brain and spinal cord injuries, stroke, and many other disabling conditions. Thanks to technological advances in measuring and decoding the electrical activity of cortical neurons, brain-machine interfaces (BMI) have become a promising technology that can aid paralyzed individuals. In recent studies on BMI, robotic manipulators have demonstrated their potential as neuroprostheses. Restoring motor skills through robot manipulators controlled by brain signals may improve the quality of life of people with disability. This article reviews current robotic technologies that are relevant to BMI and suggests strategies that could improve the effectiveness of a brain-operated neuroprosthesis through robotics.
Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm
Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W
2015-01-01
Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
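The UDP link between the biomimetic model and the robotic arm can be pictured with a minimal Python sketch; the two-double packet layout, host, and port below are assumptions for illustration, not the actual interface to the WAM arm.

import socket
import struct

# Hypothetical packet layout: two little-endian doubles, one per joint
# of the simple two-joint virtual arm described above.
ARM_ADDR = ("127.0.0.1", 9999)   # assumed host and port

def send_joint_angles(sock, shoulder_rad, elbow_rad):
    sock.sendto(struct.pack("<2d", shoulder_rad, elbow_rad), ARM_ADDR)

def recv_joint_angles(sock):
    data, _ = sock.recvfrom(16)
    return struct.unpack("<2d", data)

# Loopback demonstration: one socket plays both model and robot.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(ARM_ADDR)
send_joint_angles(sock, 0.50, 1.20)
print(recv_joint_angles(sock))   # -> (0.5, 1.2)
sock.close()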
Evaluation of a graphic interface to control a robotic grasping arm: a multicenter study.
Laffont, Isabelle; Biard, Nicolas; Chalubert, Gérard; Delahoche, Laurent; Marhic, Bruno; Boyer, François C; Leroux, Christophe
2009-10-01
Grasping robots are still difficult to use for persons with disabilities because of inadequate human-machine interfaces (HMIs). Our purpose was to evaluate the efficacy of a graphic interface enhanced by a panoramic camera to detect out-of-view objects and control a commercialized robotic grasping arm. Multicenter, open-label trial. Four French departments of physical and rehabilitation medicine. Control subjects (N=24; mean age, 33y) and 20 severely impaired patients (mean age, 44y; 5 with muscular dystrophies, 13 with traumatic tetraplegia, and 2 others) completed the study. None of these patients was able to grasp a 50-cL bottle without the robot. Participants were asked to grasp 6 objects scattered around their wheelchair using the robotic arm. They were able to select the desired object through the graphic interface available on their computer screen. Outcome measures were the global success rate, the time needed to select the object on the computer screen, the number of clicks on the HMI, and user satisfaction. We found a significantly lower success rate in patients (81.1% vs 88.7%; chi-square, P=.017). Task duration was significantly higher in patients (71.6s vs 39.1s; P<.001). We set a cut-off for the maximum duration at 79 seconds, representing twice the time needed by control subjects to complete the task. Under these conditions, the success rate for impaired participants was 65% versus 85.4% for control subjects. The mean number of clicks necessary to select the object with the HMI was very close in both groups: patients used (mean ± SD) 7.99±6.07 clicks, whereas controls used 7.04±2.87 clicks. Considering the severity of the patients' impairment, all these differences were considered small. Furthermore, a high satisfaction rate was reported by this population concerning the use of the graphic interface. The graphic interface is of interest in controlling robotic arms for disabled people, with numerous potential applications in daily life.
CESAR robotics and intelligent systems research for nuclear environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.
1992-07-01
The Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) encompasses expertise and facilities to perform basic and applied research in robotics and intelligent systems in order to address a broad spectrum of problems related to nuclear and other environments. For nuclear environments, research focus is derived from applications in advanced nuclear power stations and in environmental restoration and waste management. Several programs at CESAR emphasize the cross-cutting technology issues and are executed in appropriate cooperation with projects that address specific problem areas. Although the main thrust of the CESAR long-term research is on developing highly automated systems that can cooperate and function reliably in complex environments, the development of advanced human-machine interfaces represents a significant part of our research. 11 refs.
NASA Technical Reports Server (NTRS)
Manouchehri, Davoud; Lindsay, Thomas; Ghosh, David
1994-01-01
NASA's Langley Research Center (LaRC) is addressing the problem of isolating the vibrations of the Shuttle remote manipulator system (RMS) from its end-effector and/or payload by modeling an RMS flat-floor simulator with a dynamic payload. Analysis of the model can lead to control techniques that will improve the speed, accuracy, and safety of the RMS in capturing satellites and eventually facilitate berthing with the space station. Rockwell International Corporation, also involved in vibration isolation, has developed a hardware interface unit to isolate the end-effector from the vibrations of an arm on a Shuttle robotic tile processing system (RTPS). To apply the RTPS isolation techniques to long-reach arms like the RMS, engineers have modeled the dynamics of the hardware interface unit with simulation software. By integrating the Rockwell interface model with the NASA LaRC RMS simulator model, investigators can study the use of a hardware interface to isolate dynamic payloads from the RMS. The interface unit uses both active and passive compliance and damping for vibration isolation. Thus equipped, the RMS could be used as a telemanipulator with control characteristics for capture and berthing operations. The hardware interface also has applications in industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony L. Crawford
MODIFIED PAPER TITLE AND ABSTRACT DUE TO SLIGHTLY MODIFIED SCOPE: TITLE: Nonlinear Force Profile Used to Increase the Performance of a Haptic User Interface for Teleoperating a Robotic Hand. Natural movements and force feedback are important elements in using teleoperated equipment if complex and speedy manipulation tasks are to be accomplished in hazardous environments, such as hot cells, glove boxes, decommissioning, explosives disarmament, and space. The research associated with this paper hypothesizes that a user interface and complementary radiation-compatible robotic hand that integrates the human hand's anthropometric properties, speed capability, nonlinear strength profile, reduction of active degrees of freedom during the transition from manipulation to grasping, and just-noticeable-difference force sensation characteristics will enhance a user's teleoperation performance. The main contribution of this research is that a system concisely integrating all these factors has yet to be developed and, furthermore, has yet to be applied to hazardous environments such as those referenced above. In fact, the most prominent slave manipulator teleoperation technology in use today is based on a design patented in 1945 (Patent 2632574) [1]. Robotic hand/user interface systems of similar function to the one developed in this research limit their design input requirements, in the best case, to complementing the hand's anthropometric properties, speed capability, and a linearly scaled force application relationship (e.g., robotic force is a constant 4 times that of the user). In this paper a nonlinear relationship between the force experienced at the user interface and at the robotic hand was devised, based on property differences between manipulation and grasping activities as they pertain to the human hand. The results show that such a relationship, when subjected to a manipulation task and a grasping task, produces increased performance compared to the traditional linear scaling techniques used by other systems. Key Words: Teleoperation, Robotic Hand, Robotic Force Scaling
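A toy Python sketch can make the contrast with constant-gain scaling concrete; the exponent, gain, and reference force below are invented for illustration and are not the paper's tuned profile.

def linear_scale(user_force_n, gain=4.0):
    """Traditional constant-gain scaling: robot force = k * user force."""
    return gain * user_force_n

def nonlinear_scale(user_force_n, k=4.0, p=1.6, f_ref=10.0):
    """Illustrative nonlinear profile: gentle near zero for fine
    manipulation, rising steeply toward grasping-level forces. The
    exponent and reference force are assumptions, not the paper's values."""
    return k * f_ref * (user_force_n / f_ref) ** p

# Compare the two mappings over a range of user forces (newtons).
for f in (1.0, 5.0, 10.0, 20.0):
    print(f, linear_scale(f), round(nonlinear_scale(f), 2))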
He, Yongtian; Nathan, Kevin; Venkatakrishnan, Anusha; Rovekamp, Roger; Beck, Christopher; Ozdemir, Recep; Francisco, Gerard E; Contreras-Vidal, Jose L
2014-01-01
Stroke remains a leading cause of disability, limiting independent ambulation in survivors, and consequently affecting quality of life (QOL). Recent technological advances in neural interfacing with robotic rehabilitation devices are promising in the context of gait rehabilitation. Here, the X1, NASA's powered robotic lower limb exoskeleton, is introduced as a potential diagnostic, assistive, and therapeutic tool for stroke rehabilitation. Additionally, the feasibility of decoding lower limb joint kinematics and kinetics during walking with the X1 from scalp electroencephalographic (EEG) signals--the first step towards the development of a brain-machine interface (BMI) system to the X1 exoskeleton--is demonstrated.
A Novel Passive Robotic Tool Interface
NASA Astrophysics Data System (ADS)
Roberts, Paul
2013-09-01
The increased capability of space robotics has seen their uses increase from simple sample gathering and mechanical adjuncts to humans, to sophisticated multi-purpose investigative and maintenance tools that substitute for humans for many external space tasks. As with all space missions, reducing mass and system complexity is critical. A key component of robotic systems mass and complexity is the number of motors and actuators needed. MDA has developed a passive tool interface that, like a household power drill, permits a single tool actuator to be interfaced with many Tool Tips without requiring additional actuators to manage the changing and storage of these tools. MDA's Multifunction Tool interface permits a wide range of Tool Tips to be designed to a single interface that can be pre-qualified to torque and strength limits such that additional Tool Tips can be added to a mission's "tool kit" simply and quickly.
Autonomous assistance navigation for robotic wheelchairs in confined spaces.
Cheein, Fernando Auat; Carelli, Ricardo; De la Cruz, Celso; Muller, Sandra; Bastos Filho, Teodiano F
2010-01-01
In this work, a visual interface for assisting the navigation of a robotic wheelchair is presented. The interface is developed for navigation in confined spaces such as narrow corridors or corridor ends. The interface provides two navigation modes: non-autonomous and autonomous. Non-autonomous driving of the robotic wheelchair is performed by means of a hand joystick, which directs the motion of the vehicle within the environment. Autonomous driving is engaged when the user of the wheelchair has to turn (90, -90, or 180 degrees) within the environment. The turning strategy is executed by a maneuverability algorithm compatible with the kinematics of the wheelchair and by a SLAM (simultaneous localization and mapping) algorithm. The SLAM algorithm provides the interface with information on the layout of the environment and the pose (position and orientation) of the wheelchair within it. Experimental and statistical results for the interface are also presented in this work.
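For flavor, here is a minimal, assumed proportional turn-in-place controller driven by the SLAM heading estimate; the gain and velocity limit are illustrative and stand in for, rather than reproduce, the paper's maneuverability algorithm.

import math

def angle_error(target_rad, current_rad):
    """Smallest signed difference between two headings."""
    return math.atan2(math.sin(target_rad - current_rad),
                      math.cos(target_rad - current_rad))

def turn_command(target_rad, current_rad, k_p=1.5, w_max=0.6):
    """Proportional turn-in-place command (rad/s) from the SLAM heading
    estimate; gain and limit are illustrative, not the paper's tuning."""
    w = k_p * angle_error(target_rad, current_rad)
    return max(-w_max, min(w_max, w))

# Example: a commanded 90-degree left turn from the current pose.
print(turn_command(math.pi / 2, 0.0))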
Development of wrist rehabilitation robot and interface system.
Yamamoto, Ikuo; Matsui, Miki; Inagawa, Naohiro; Hachisuka, Kenji; Wada, Futoshi; Hachisuka, Akiko; Saeki, Satoru
2015-01-01
The authors have developed a practical wrist rehabilitation robot for hemiplegic patients. It consists of a mechanical rotation unit, a sensor, a grip, and a computer system. A myoelectric sensor is used to monitor extensor carpi radialis longus/brevis and flexor carpi radialis muscle activity during training. The robot can trigger training through the myoelectric sensors, with a biological signal detector and processor, so that patients can undergo effective extension and flexion training while the muscles are activated. In addition, a both-wrist system has been developed for mirror-effect training, the most effective function of the system, so that autonomous training using both wrists is possible. Furthermore, a user-friendly screen interface with easily recognizable touch panels has been developed to provide effective training for patients. The developed robot is small and easy to carry, and the interface system is effective in motivating patients during training. The effectiveness of the robot system has been verified in hospital trials.
Sensing Pressure Distribution on a Lower-Limb Exoskeleton Physical Human-Machine Interface
De Rossi, Stefano Marco Maria; Vitiello, Nicola; Lenzi, Tommaso; Ronsse, Renaud; Koopman, Bram; Persichetti, Alessandro; Vecchi, Fabrizio; Ijspeert, Auke Jan; van der Kooij, Herman; Carrozza, Maria Chiara
2011-01-01
A sensory apparatus to monitor pressure distribution on the physical human-robot interface of lower-limb exoskeletons is presented. We propose a distributed measure of the interaction pressure over the whole contact area between the user and the machine as an alternative measurement method of human-robot interaction. To obtain this measure, an array of newly-developed soft silicone pressure sensors is inserted between the limb and the mechanical interface that connects the robot to the user, in direct contact with the wearer’s skin. Compared to state-of-the-art measures, the advantage of this approach is that it allows for a distributed measure of the interaction pressure, which could be useful for the assessment of safety and comfort of human-robot interaction. This paper presents the new sensor and its characterization, and the development of an interaction measurement apparatus, which is applied to a lower-limb rehabilitation robot. The system is calibrated, and an example of its use during a prototypical gait training task is presented. PMID:22346574
Surgeon Design Interface for Patient-Specific Concentric Tube Robots
Morimoto, Tania K.; Greer, Joseph D.; Hsieh, Michael H.; Okamura, Allison M.
2017-01-01
Concentric tube robots have potential for use in a wide variety of surgical procedures due to their small size, dexterity, and ability to move in highly curved paths. Unlike most existing clinical robots, the design of these robots can be developed and manufactured on a patient- and procedure-specific basis. The design of concentric tube robots typically requires significant computation and optimization, and it remains unclear how the surgeon should be involved. We propose to use a virtual reality-based design environment for surgeons to easily and intuitively visualize and design a set of concentric tube robots for a specific patient and procedure. In this paper, we describe a novel patient-specific design process in the context of the virtual reality interface. We also show a resulting concentric tube robot design, created by a pediatric urologist to access a kidney stone in a pediatric patient. PMID:28656124
Özcan, Alpay; Christoforou, Eftychios; Brown, Daniel; Tsekos, Nikolaos
2011-01-01
The graphical user interface for an MR compatible robotic device has the capability of displaying oblique MR slices in 2D and a 3D virtual environment along with the representation of the robotic arm in order to swiftly complete the intervention. Using the advantages of the MR modality the device saves time and effort, is safer for the medical staff and is more comfortable for the patient. PMID:17946067
Human Factors and Robotics: Current Status and Future Prospects.
ERIC Educational Resources Information Center
Parsons, H. McIlvaine; Kearsley, Greg P.
The principal human factors engineering issue in robotics is the division of labor between automation (robots) and human beings. This issue reflects a prime human factors engineering consideration in systems design--what equipment should do and what operators and maintainers should do. Understanding of capabilities and limitations of robots and…
NASA Technical Reports Server (NTRS)
Stecklein, Jonette
2017-01-01
NASA has held an annual robotic mining competition for teams of university/college students since 2010. This competition is yearlong, suitable for a senior university engineering capstone project. It encompasses the full project life cycle from ideation of a robot design to actual tele-operation of the robot in simulated Mars conditions mining and collecting simulated regolith. A major required element for this competition is a Systems Engineering Paper in which each team describes the systems engineering approaches used on their project. The score for the Systems Engineering Paper contributes 25% towards the team's score for the competition's grand prize. The required use of systems engineering on the project by this competition introduces the students to an intense practical application of systems engineering throughout a full project life cycle.
Medical Engineering and Microneurosurgery: Application and Future.
Morita, Akio; Sora, Shigeo; Nakatomi, Hirofumi; Harada, Kanako; Sugita, Naohiko; Saito, Nobuhito; Mitsuishi, Mamoru
2016-10-15
Robotics and medical engineering can convert traditional surgery into digital and scientific procedures. Here, we describe our work to develop microsurgical robotic systems and to apply engineering technology to the assessment of microsurgical skills. In collaboration between neurosurgeons and an engineering team, we have developed two types of microsurgical robotic systems. The first, the deep surgical system, enables delicate surgical procedures such as vessel suturing in a deep and narrow space. The second type allows for super-fine surgical procedures such as anastomosing artificial vessels 0.3 mm in diameter. Both systems are constructed with master and slave manipulator robots connected over local area networks. The robotic systems allowed secure and accurate procedures in a deep surgical field. In cadaveric models, these systems showed good potential for use in actual human surgeries, but mechanical refinements in thickness and durability are necessary for them to be established as clinical systems. The super-fine robotic system made very intricate surgery possible and will be applied in clinical trials. Another trial involved the digitization of surgical technique and scientific analysis of surgical skills. Robotic and human hand motions were analyzed numerically as we tried to define surgical skillfulness in a digital format. Engineered skill assessment is also feasible and should be useful for microsurgical training. Robotics and medical engineering should bring science into the surgical field and the training of surgeons. Active collaboration between medical and engineering teams, and between academic and industry groups, is mandatory to establish such medical systems and improve patient care.
Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy
1999-01-01
communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier... natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human
ERIC Educational Resources Information Center
Mosley, Pauline Helen; Liu, Yun; Hargrove, S. Keith; Doswell, Jayfus T.
2010-01-01
This paper gives an overview of a new pre-engineering program--Robotics Technician Curriculum--that uses robots to attract underrepresented students to careers in science, technology, engineering, and mathematics (STEM). The curriculum uses a project-based learning environment, which consists of part lecture and part laboratory. This program…
EVA Roadmap: New Space Suit for the 21st Century
NASA Technical Reports Server (NTRS)
Yowell, Robert
1998-01-01
New spacesuit design considerations for the extravehicular activity (EVA) of a manned Martian exploration mission are discussed. Design considerations include: (1) regenerable CO2 removal; (2) a portable life support system (PLSS), including cryogenic oxygen produced by in-situ manufacture; (3) a power supply for the EVA; (4) thermal control systems; (5) systems engineering; (6) space suit systems (materials and mobility); (7) human considerations, such as improved biomedical sensors and astronaut comfort; and (8) displays and controls, and robotic interfaces such as rovers and telerobotic commands.
Teleoperation of Robonaut Using Finger Tracking
NASA Technical Reports Server (NTRS)
Champoux, Rachel G.; Luo, Victor
2012-01-01
With the advent of new finger tracking systems, the idea of a more expressive and intuitive user interface is being explored and implemented. One practical application for this new kind of interface is teleoperating a robot. For humanoid robots, a finger tracking interface is required because of the level of complexity in a human-like hand, for which a joystick isn't accurate enough. Moreover, for some tasks, using one's own hands allows the user to communicate their intentions more effectively than other input. The purpose of this project was to develop a natural user interface for someone to teleoperate a robot that is elsewhere. Specifically, this was designed to control Robonaut on the International Space Station to do tasks too dangerous and/or too trivial for human astronauts. This interface was developed by integrating and modifying 3Gear's software, which includes a library of gestures and the ability to track hands. The end result is an interface in which the user can manipulate objects in real time in the user interface. The information is then relayed to a simulator, the stand-in for Robonaut, at a slight delay.
GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.
Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin
2014-02-01
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves tongue movement; 2) electrooculogram (EOG), which involves eye movement; and 3) electromyogram (EMG), which involves teeth clenching. Each potential has been individually used for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
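One generic way to extract discriminative features from two class-specific covariance matrices is a CSP-style generalized eigendecomposition; the Python sketch below illustrates that idea on synthetic data and is a stand-in for, not a reproduction of, the GOM-Face algorithm.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)

# Synthetic stand-ins for multichannel facial-potential recordings:
# rows are channels, columns are samples.
tongue_only = rng.normal(size=(4, 1000)) * np.array([[3.], [1.], [1.], [1.]])
eye_only = rng.normal(size=(4, 1000)) * np.array([[1.], [1.], [3.], [1.]])

C_tongue = np.cov(tongue_only)
C_eye = np.cov(eye_only)

# Generalized eigendecomposition: spatial filters that maximize
# tongue-movement variance relative to eye-movement variance, one
# generic way to separate two interfering potentials.
vals, vecs = eigh(C_tongue, C_tongue + C_eye)
w = vecs[:, -1]   # most tongue-discriminative filter
print("filter:", np.round(w, 2))
print("variance ratio:", (w @ C_tongue @ w) / (w @ C_eye @ w))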
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, Mai Lee
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affect the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays that investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study, command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects type of symbology (CG and SG) has on operator tasks performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first study by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground-operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot.
From the laboratory to the soldier: providing tactical behaviors for Army robots
NASA Astrophysics Data System (ADS)
Knichel, David G.; Bruemmer, David J.
2008-04-01
The Army Future Combat System (FCS) Operational Requirements Document has identified a number of advanced robot tactical behavior requirements to enable the Future Brigade Combat Team (FBCT). The FBCT advanced tactical behaviors include Sentinel Behavior, Obstacle Avoidance Behavior, and Scaled Levels of Human-Machine Control Behavior. The U.S. Army Training and Doctrine Command (TRADOC) Maneuver Support Center (MANSCEN) has also documented a number of robotic behavior requirements for non-FCS Army forces such as the Infantry Brigade Combat Team (IBCT), Stryker Brigade Combat Team (SBCT), and Heavy Brigade Combat Team (HBCT). The general categories of useful robot tactical behaviors include ground/air mobility behaviors, tactical mission behaviors, manned-unmanned teaming behaviors, and soldier-robot interface behaviors. Many DoD research and development centers are producing the components necessary for artificial tactical behaviors for ground and air robots, including the Army Research Laboratory (ARL), the U.S. Army Research, Development and Engineering Command (RDECOM), the Space and Naval Warfare (SPAWAR) Systems Center, and the US Army Tank-Automotive Research, Development and Engineering Center (TARDEC), as well as non-DoD labs such as the Department of Energy (DOE). With the support of the Joint Ground Robotics Enterprise (JGRE), through DoD and non-DoD labs, the Army Maneuver Support Center has recently concluded successful field trials of ground and air robots with specialized tactical behaviors and sensors to enable semi-autonomous detection, reporting, and marking of explosive hazards, including Improvised Explosive Devices (IEDs) and landmines. A specific goal of this effort was to assess how collaborative behaviors for multiple unmanned air and ground vehicles can reduce risks to Soldiers and increase efficiency for on- and off-route explosive hazard detection, reporting, and marking. This paper discusses experimental results achieved with a robotic countermine system that utilizes autonomous behaviors and a mixed-initiative control scheme to address the challenges of detecting and marking buried landmines. Emerging requirements for robotic countermine operations are outlined, as are the technologies developed under this effort to address them. A first experiment shows that the resulting system was able to find and mark landmines with a very low level of human involvement. In addition, the data indicate that the robotic system is able to decrease the time to find mines and increase detection accuracy and reliability. Finally, the paper presents current efforts to incorporate new countermine sensors and port the resulting behaviors to two fielded military systems for rigorous assessment.
Design of a haptic device with grasp and push-pull force feedback for a master-slave surgical robot.
Hu, Zhenkai; Yoon, Chae-Hyun; Park, Samuel Byeongjun; Jo, Yung-Ho
2016-07-01
We propose a portable haptic device providing grasp (kinesthetic) and push-pull (cutaneous) sensations for optical-motion-capture master interfaces. Although optical-motion-capture master interfaces for surgical robot systems can overcome the stiffness, friction, and coupling problems of mechanical master interfaces, it is difficult to add haptic feedback to an optical-motion-capture master interface without constraining the free motion of the operator's hands. Therefore, we utilized a Bowden-cable-driven mechanism to provide the grasp and push-pull sensations while retaining the free hand motion of the optical-motion-capture master interface. To evaluate the haptic device, we constructed a 2-DOF force sensing/force feedback system and compared the sensed force with the force reproduced by the haptic device. Finally, a needle insertion test was performed to evaluate the performance of the haptic interface in the master-slave system. The results demonstrate that both the grasp force feedback and the push-pull force feedback provided by the haptic interface closely matched the sensed forces of the slave robot. We successfully applied our haptic interface in the optical-motion-capture master-slave system. The results of the needle insertion test showed that our haptic feedback can provide more safety than visual observation alone. We developed a suitable haptic device to produce both kinesthetic grasp force feedback and cutaneous push-pull force feedback. Our future research will include further objective performance evaluations of the optical-motion-capture master-slave robot system with our haptic interface in surgical scenarios.
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
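The "simple analytic geometry" the authors mention can be pictured as back-projecting a clicked pixel through a calibrated pinhole camera and intersecting the ray with the floor plane; the intrinsics and camera height in this Python sketch are assumed values, not the paper's calibration.

import numpy as np

# Assumed calibrated intrinsics (pixels) and camera geometry: optical
# axis parallel to the floor, camera mounted h metres above it. These
# numbers are illustrative only.
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
h = 0.25  # camera height above the floor (m)

def pixel_to_floor(u, v):
    """Back-project pixel (u, v) onto the floor plane. Camera frame:
    x right, y down, z forward; the floor is the plane y = +h."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    if ray[1] <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = h / ray[1]            # stretch the ray until it reaches y = h
    return t * ray            # 3D point (x, y, z) in metres

print(pixel_to_floor(320.0, 300.0))   # a floor point about 2.5 m ahead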
Tele-rehabilitation using in-house wearable ankle rehabilitation robot.
Jamwal, Prashant K; Hussain, Shahid; Mir-Nasiri, Nazim; Ghayesh, Mergen H; Xie, Sheng Q
2018-01-01
This article explores the wide-ranging potential of a wearable ankle robot for in-house rehabilitation. The presented robot has been conceptualized following a brief analysis of the existing technologies, systems, and solutions for in-house physical ankle rehabilitation. Configuration design analysis and component selection for the ankle robot are discussed as part of the conceptual design. The complexities of human-robot interaction are encountered directly while maneuvering a rehabilitation robot; we therefore present a fuzzy-logic-based controller to perform the required robot-assisted ankle rehabilitation treatment. Designs of visual haptic interfaces are also discussed, which make the treatment engaging and motivate the subject to exert more effort and regain lost functions more rapidly. The complex nature of web-based communication between the user and remotely located physiotherapy staff is also discussed. A high-level software architecture appended to the robot ensures user-friendly operation. This software is made up of three components: a patient-related database, a graphical user interface (GUI), and a library of exercises creating a virtual reality specifically developed for ankle rehabilitation.
Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF
NASA Astrophysics Data System (ADS)
Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)
A conceptual guide-dog robot prototype to lead and to recognize a visually handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of a corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detecting algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. The position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms enable the robot to detect the center of the corridor and the position of its follower correctly.
Melidis, Christos; Iizuka, Hiroyuki; Marocco, Davide
2018-05-01
In this paper, we present a novel approach to human-robot control. Taking inspiration from behaviour-based robotics and self-organisation principles, we present an interfacing mechanism with the ability to adapt both towards the user and towards the robotic morphology. The aim is a transparent mechanism connecting user and robot, allowing for a seamless integration of control signals and robot behaviours. Instead of the user adapting to the interface and control paradigm, the proposed architecture allows the user to shape the control motifs in their way of preference, moving away from the case where the user has to read and understand an operation manual or learn to operate a specific device. Starting from a tabula rasa basis, the architecture is able to identify control patterns (behaviours) for the given robotic morphology and successfully merge them with control signals from the user, regardless of the input device used. The structural components of the interface are presented and assessed both individually and as a whole, and both inherent and emergent properties of the architecture are presented and investigated. As a whole, this work highlights the potential for a change in the paradigm of robotic control, and a new level in the taxonomy of human-in-the-loop systems.
Review of surgical robotics user interface: what is the best way to control robotic surgery?
Simorov, Anton; Otte, R Stephen; Kopietz, Courtni M; Oleynikov, Dmitry
2012-08-01
As surgical robots begin to occupy a larger place in operating rooms around the world, continued innovation is necessary to improve our outcomes. A comprehensive review of current surgical robotic user interfaces was performed to describe the modern surgical platforms, identify the benefits, and address the issues of feedback and limitations of visualization. Most robots currently used in surgery employ a master/slave relationship, with the surgeon seated at a work-console, manipulating the master system and visualizing the operation on a video screen. Although enormous strides have been made to advance current technology to the point of clinical use, limitations still exist. A lack of haptic feedback to the surgeon and the inability of the surgeon to be stationed at the operating table are the most notable examples. The future of robotic surgery sees a marked increase in the visualization technologies used in the operating room, as well as in the robots' abilities to convey haptic feedback to the surgeon. This will allow unparalleled sensation for the surgeon and almost eliminate inadvertent tissue contact and injury. A novel design for a user interface will allow the surgeon to have access to the patient bedside, remaining sterile throughout the procedure, employ a head-mounted three-dimensional visualization system, and allow the most intuitive master manipulation of the slave robot to date.
Robot navigation research using the HERMIES mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, D.L.
1989-01-01
In recent years robot navigation has attracted much attention from researchers around the world. Not only are theoretical studies being simulated on sophisticated computers, but many mobile robots are now used as test vehicles for these theoretical studies. Various algorithms have been perfected for navigation in a known static environment, but navigation in an unknown and dynamic environment poses a much more challenging problem for researchers. Many different methodologies have been developed for autonomous robot navigation, but each methodology is usually restricted to a particular type of environment. One important research focus of the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory is autonomous navigation in unknown and dynamic environments using the series of HERMIES mobile robots. The research uses an expert system for high-level planning interfaced with C-coded routines for implementing the plans and for quick processing of data requested by the expert system. In this approach the navigation is not restricted to one methodology, since the expert system can activate the rule module for the methodology best suited to the current situation. Rule modules can be added to the rule base as they are developed and tested. Modules are being developed or enhanced for navigating from a map, searching for a target, exploring, artificial potential-field navigation, navigation using edge detection, etc. This paper reports on the various rule modules and methods of navigation in use or under development at CESAR, using the HERMIES-IIB robot as a testbed. 13 refs., 5 figs., 1 tab.
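As a minimal sketch of the rule-module idea (module names and the selection rule are hypothetical): the high-level planner activates whichever navigation methodology suits the current situation, and new modules can be registered without touching the others.

```python
# Hypothetical sketch of expert-system-style rule-module selection.

def navigate_from_map(state):   return "follow precomputed map path"
def explore(state):             return "frontier-based exploration step"
def potential_field(state):     return "descend artificial potential field"

RULE_MODULES = {                 # new modules can be added as they are tested
    "map_known":     navigate_from_map,
    "map_unknown":   explore,
    "obstacle_near": potential_field,
}

def select_module(state):
    # Simple forward-chaining selection over the current situation.
    if state["obstacle_distance"] < 0.5:
        return RULE_MODULES["obstacle_near"]
    return RULE_MODULES["map_known" if state["has_map"] else "map_unknown"]

state = {"obstacle_distance": 2.0, "has_map": False, "pose": (0, 0)}
print(select_module(state)(state))   # -> frontier-based exploration step
```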
Generic command interpreter for robot controllers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werner, J.
1991-04-09
Generic command interpreter programs have been written for robot controllers at Sandia National Laboratories (SNL). Each interpreter program resides on a robot controller and interfaces the controller with a supervisory program on another (host) computer. We call these interpreter programs monitors because they wait, monitoring a communication line, for commands from the supervisory program. These monitors are designed to interface with the object-oriented software structure of the supervisory programs. The functions of the monitor programs are written in each robot controller's native language but reflect the object-oriented functions of the supervisory programs. These functions and other specifics of the monitor programs written for three different robots at SNL are discussed. 4 refs., 4 figs.
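A rough sketch of the monitor pattern follows; the command set and handler names are invented (the real monitors are written in each controller's native language).

```python
# Hypothetical monitor-style interpreter: parse ASCII commands arriving from
# a supervisory program and dispatch them to native controller routines.

def move_joint(joint, angle):   return f"moving joint {joint} to {angle} deg"
def open_gripper():             return "gripper opened"

COMMANDS = {
    "MOVEJ": lambda args: move_joint(int(args[0]), float(args[1])),
    "OPEN":  lambda args: open_gripper(),
}

def interpret(line):
    name, *args = line.strip().split()
    handler = COMMANDS.get(name)
    return handler(args) if handler else f"ERR unknown command {name!r}"

print(interpret("MOVEJ 3 45.0"))   # -> moving joint 3 to 45.0 deg
print(interpret("OPEN"))           # -> gripper opened
```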
NASA Astrophysics Data System (ADS)
Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan
2016-05-01
With the increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and a Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated intelligence, surveillance, and reconnaissance (ISR) mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. The performance and reliability of the gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of the experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on its high classification accuracy and the minimal training required to perform gesture commands.
Teams begin their preparations for the FIRST competition
NASA Technical Reports Server (NTRS)
2000-01-01
Team 393 from Morristown, Ind., sets up its robot on a table to prepare it for the FIRST (For Inspiration and Recognition of Science and Technology) Southeast Regional competition March 9-11 at the KSC Visitor Complex. KSC is co-sponsoring the team, The Bee Bots, from Morristown Junior and Senior High Schools. On the floor at right is team 386, known as Voltage: The South Brevard First Team. This team is made up of students from Eau Gallie, Satellite, Palm Bay, Melbourne, Bayside and Melbourne Central Catholic High Schools. They are sponsored by KSC as well as Harris Corp., Intersil Corp., Interface & Control Systems, Inc. and Rockwell Collins. Teams of high school students are testing the limits of their imagination using robots they have designed, with the support of business and engineering professionals and corporate sponsors, to compete in a technological battle against other schools' robots. Of the 30 high school teams competing at KSC, 16 are Florida teams co-sponsored by NASA and KSC contractors. Local high schools participating are Astronaut, Bayside, Cocoa Beach, Eau Gallie, Melbourne, Melbourne Central Catholic, Palm Bay, Rockledge, Satellite, and Titusville.
2007-09-01
behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that... Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots... based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic
2011-06-01
effective waypoint navigation algorithm that interfaced with a Java-based graphical user interface (GUI), written by Uzun, for a robot named Bender [2... the angular acceleration, θ̈, or angular rate, θ̇. When considering a joint driven by an electric motor, the inertia and friction can be divided into... interactive simulations that can receive input from user controls, scripts, and other applications, such as Excel and MATLAB. One drawback is that the
The ACE multi-user web-based Robotic Observatory Control System
NASA Astrophysics Data System (ADS)
Mack, P.
2003-05-01
We have developed an observatory control system that can be operated in interactive, remote, or robotic modes. In interactive and remote modes the observer typically acquires the first object and then creates a script through a window interface to complete observations for the rest of the night. The system closes early in the event of bad weather. In robotic mode, observations are submitted ahead of time through a web-based interface. We present observations made with a 1.0-m telescope using these methods.
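A toy sketch of the scripted-observing loop (the data layout and weather check are assumptions, not the ACE system's API):

```python
# Hypothetical night-script runner: execute queued observations, closing
# early if the weather turns bad.

def run_script(observations, weather_ok):
    completed = []
    for obs in observations:
        if not weather_ok():
            print("bad weather -- closing dome early")
            break
        print(f"observing {obs['target']} for {obs['exposure_s']} s")
        completed.append(obs["target"])
    return completed

script = [{"target": "M13", "exposure_s": 120},
          {"target": "M57", "exposure_s": 300}]
run_script(script, weather_ok=lambda: True)
```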
Robust human machine interface based on head movements applied to assistive robotics.
Perez, Elisa; López, Natalia; Orosco, Eugenio; Soria, Carlos; Mut, Vicente; Freire-Bastos, Teodiano
2013-01-01
This paper presents an interface that uses two different sensing techniques and combines their results through a fusion process to obtain the minimum-variance estimate of the orientation of the user's head. The sensing techniques of the interface are based on an inertial sensor and artificial vision. The orientation of the user's head is used to steer the navigation of a robotic wheelchair. A control algorithm for the assistive technology system is also presented. The system was evaluated by four individuals with severe motor disabilities, and a quantitative index was developed in order to objectively evaluate performance. The results obtained are promising, since most users could perform the proposed tasks with the robotic wheelchair.
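The fusion step can be illustrated with the textbook minimum-variance combination of two independent, unbiased estimates (a sketch of the principle, not the paper's exact filter): each estimate is weighted by its inverse variance.

```python
# Minimum-variance fusion of two scalar orientation estimates, assuming
# independent Gaussian errors (illustrative values).

def fuse(x1, v1, x2, v2):
    w1, w2 = 1.0 / v1, 1.0 / v2
    x = (w1 * x1 + w2 * x2) / (w1 + w2)   # fused estimate
    v = 1.0 / (w1 + w2)                   # fused variance, below min(v1, v2)
    return x, v

# Example: IMU yaw 10.0 deg (variance 4.0), vision yaw 12.0 deg (variance 1.0).
print(fuse(10.0, 4.0, 12.0, 1.0))   # -> (11.6, 0.8)
```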
PIR-1 and PIRPL. A Project in Robotics Education. Revised.
ERIC Educational Resources Information Center
Schultz, Charles P.
This paper presents the results of a project in robotics education that included: (1) designing a mobile robot--the Personal Instructional Robot-1 (PIR-1); (2) providing a guide to the purchase and assembly of necessary parts; (3) providing a way to interface the robot with common classroom microcomputers; and (4) providing a language by which the…
A Multi- and Cross-Disciplinary Capstone Experience in Engineering Art: Animatronic Polar Bear
ERIC Educational Resources Information Center
Sirinterlikci, Arif; Toukonen, Kayne; Mason, Steve; Madison, Russel
2005-01-01
An animatronic robot was designed and constructed for the 2003 Annual Student Robotic Technology and Engineering Challenge organized by the Robotics International (RI) association of the Society of Manufacturing Engineers (SME). It was also the senior capstone design project for two of the design team members. After a thorough study of body and…
Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo
2011-01-01
This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that allow motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps, and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and the restoration of motor function in patients with upper-limb sensorimotor impairment, through extensive rehabilitation therapy and active assistance in the execution of activities of daily living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together to provide force assistance. The main novelty that BRAVO introduces is control of the robotic assistive device through active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the intended action, obtained by interpreting the user's current gaze (measured through eye tracking), brain activation (measured through the BCI), and force sensor measurements. © 2011 IEEE
Wireless intraoral tongue control of an assistive robotic arm for individuals with tetraplegia.
Andreasen Struijk, Lotte N S; Egsgaard, Line Lindhardt; Lontis, Romulus; Gaihede, Michael; Bentsen, Bo
2017-11-06
For an individual with tetraplegia, assistive robotic arms provide a potentially invaluable opportunity for rehabilitation. However, there is a lack of available control methods that allow these individuals to fully control such arms. Here we show that it is possible for an individual with tetraplegia to use the tongue to fully control all 14 movements of an assistive robotic arm in three-dimensional space using a wireless intraoral control system, thus allowing for numerous activities of daily living. We developed a tongue-based robotic control method incorporating a multi-sensor inductive tongue interface. One able-bodied individual and one individual with tetraplegia performed a proof-of-concept study, controlling the robot with their tongues using direct actuator control and endpoint control, respectively. After 30 min of training, the able-bodied participant tongue-controlled the assistive robot to pick up a roll of tape in 80% of the attempts. Further, the individual with tetraplegia succeeded in fully tongue-controlling the assistive robot to reach for and touch a roll of tape in 100% of the attempts and to pick up the roll in 50% of the attempts. Furthermore, she controlled the robot to grasp a bottle of water and pour its contents into a cup; her first functional action in 19 years. To our knowledge, this is the first time that an individual with tetraplegia has been able to fully control an assistive robotic arm using a wireless intraoral tongue interface. The tongue interface used to control the robot is currently available for control of computers and powered wheelchairs, and the robot employed in this study is also commercially available. Therefore, the presented results may translate into available solutions within a reasonable time.
INL Generic Robot Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
2005-03-30
The INL Generic Robot Architecture is a generic, extensible software framework that can be applied across a variety of different robot geometries, sensor suites, and low-level proprietary control application programming interfaces (e.g., Mobility, ARIA, Aware, Player).
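The abstraction can be pictured as behaviors coded against an abstract robot interface, with thin adapters wrapping each proprietary API. This sketch is illustrative only; the class and method names are not INL's.

```python
from abc import ABC, abstractmethod

class RobotBase(ABC):
    """Generic interface a behavior codes against (hypothetical)."""
    @abstractmethod
    def drive(self, linear, angular): ...
    @abstractmethod
    def ranges(self): ...

class PlayerAdapter(RobotBase):
    """Stand-in adapter for a Player-style low-level API."""
    def drive(self, linear, angular):
        print(f"player: set_speed({linear}, {angular})")
    def ranges(self):
        return [5.0] * 8          # stub range readings

def wander(robot: RobotBase):
    # Behavior written once, reused across robot geometries and APIs.
    if min(robot.ranges()) < 0.5:
        robot.drive(0.0, 0.8)     # turn away from a nearby obstacle
    else:
        robot.drive(0.4, 0.0)     # cruise straight ahead

wander(PlayerAdapter())           # -> player: set_speed(0.4, 0.0)
```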
Surface EMG signals in very late-stage of Duchenne muscular dystrophy: a case study.
Lobo-Prat, Joan; Janssen, Mariska M H P; Koopman, Bart F J M; Stienen, Arno H A; de Groot, Imelda J M
2017-08-29
Robotic arm supports aim at improving the quality of life of adults with Duchenne muscular dystrophy (DMD) by augmenting their residual functional abilities. A critical component of robotic arm supports is the control interface, as it is responsible for the human-machine interaction. Our previous studies showed the feasibility of using surface electromyography (sEMG) as a control interface to operate robotic arm supports in adults with DMD (22-24 years old). However, in the biomedical engineering community there is an often-raised skepticism about whether adults with DMD at the last stage of their disease have sEMG signals that can be measured and used for control. In this study, sEMG signals from the biceps and triceps brachii muscles were measured for the first time in a 37-year-old man with DMD (Brooke 6) who lost his arm function 15 years ago. The sEMG signals were measured during maximal and sub-maximal voluntary isometric contractions and evaluated in terms of signal-to-noise ratio and co-activation ratio. Despite the profound deterioration of the muscles, we found that sEMG signals from both the biceps and triceps were measurable in this individual, although with a maximum signal amplitude 100 times lower than sEMG from healthy subjects. The participant was able to voluntarily modulate the required level of muscle activation during the sub-maximal voluntary isometric contractions. Despite the low sEMG amplitude and a considerable level of muscle co-activation, simulations of an elbow orthosis using the measured sEMG as the driving signal indicated that the participant's sEMG signals had the potential to provide control of elbow movements. To the best of our knowledge, this is the first time that sEMG signals from a man with DMD at the last stage of the disease have been measured, analyzed, and reported. These findings offer promising perspectives on the use of sEMG as an intuitive and natural control interface for robotic arm supports in adults with DMD up to the last stage of the disease.
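Under common definitions (assumed here; the paper's exact formulas may differ), the two reported measures can be computed as in this sketch: SNR as the RMS of the contraction over the RMS of rest, and co-activation as the antagonist-to-agonist RMS ratio.

```python
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(np.square(x))))

def snr_db(emg_active, emg_rest):
    return 20.0 * np.log10(rms(emg_active) / rms(emg_rest))

def coactivation_ratio(emg_antagonist, emg_agonist):
    return rms(emg_antagonist) / rms(emg_agonist)

# Synthetic stand-ins for recorded sEMG (amplitudes in volts, illustrative).
rng = np.random.default_rng(0)
rest    = 1e-6 * rng.standard_normal(2000)   # ~1 uV noise floor
biceps  = 5e-6 * rng.standard_normal(2000)   # weak voluntary contraction
triceps = 2e-6 * rng.standard_normal(2000)   # antagonist co-activation
print(f"SNR = {snr_db(biceps, rest):.1f} dB, "
      f"co-activation = {coactivation_ratio(triceps, biceps):.2f}")
```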
Robot Teleoperation and Perception Assistance with a Virtual Holographic Display
NASA Technical Reports Server (NTRS)
Goddard, Charles O.
2012-01-01
Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus tracked in position and rotation inside of it. Using this system, a possible interface for this sort of robot control is proposed.
Understanding of and applications for robot vision guidance at KSC
NASA Technical Reports Server (NTRS)
Shawaga, Lawrence M.
1988-01-01
The primary thrust of robotics at KSC is the servicing of Space Shuttle remote umbilical docking functions. For this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in six degrees of freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes of the lab robot, guiding it through a closed-loop visual feedback system to move with the simulated Orbiter interface. This paper addresses an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications are addressed.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance, workload metrics, and their evolution as the participants gained experience with the system were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
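One plausible (assumed, not the paper's) mapping from tracked head orientation to camera pan/tilt adds a dead zone so small head motions do not jitter the camera:

```python
# Hypothetical head-to-camera mapping with dead zone and saturation (degrees).

def head_to_camera(yaw_deg, pitch_deg, gain=1.0, dead_zone=3.0, limit=60.0):
    def shape(angle):
        if abs(angle) < dead_zone:
            return 0.0                         # ignore small head motions
        sign = 1 if angle > 0 else -1
        cmd = gain * (angle - sign * dead_zone)
        return max(-limit, min(limit, cmd))    # respect camera travel limits
    return shape(yaw_deg), shape(pitch_deg)

print(head_to_camera(2.0, -10.0))   # -> (0.0, -7.0)
```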
NASA Technical Reports Server (NTRS)
Mckee, James W.
1989-01-01
The objective is to develop a system that will allow a person not necessarily skilled in the art of programming robots to quickly and naturally create the necessary data and commands to enable a robot to perform a desired task. The system will use a menu-driven graphical user interface. This interface will allow the user to input data to select objects to be moved. There will be an embedded expert system to process the knowledge about objects and the robot to determine how they are to be moved. There will be automatic path planning to avoid obstacles in the workspace and to create a near-optimum path. The system will contain the software to generate the required robot instructions.
Experimental setup for evaluating an adaptive user interface for teleoperation control
NASA Astrophysics Data System (ADS)
Wijayasinghe, Indika B.; Peetha, Srikanth; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Cremer, Sven; Popa, Dan O.
2017-05-01
A vital part of human interaction with a machine is the control interface, which single-handedly can define user satisfaction and the efficiency of performing a task. This paper elaborates on the implementation of an experimental setup to study an adaptive algorithm that can help the user better teleoperate the robot. The formulation of the adaptive interface and the associated learning algorithms is general enough to apply when the mapping between the user controls and the robot actuators is complex and/or ambiguous. The method uses a genetic algorithm to find the optimal parameters that produce the input-output mapping for teleoperation control. In this paper, we describe the experimental setup and the associated results that were used to validate the adaptive interface to a differential-drive robot from two different input devices: a joystick and a Myo gesture-control armband. Results show that after the learning phase, the interface converges to an intuitive mapping that can help even inexperienced users drive the system to a goal location.
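A minimal genetic-algorithm sketch of the idea follows. The two-gain chromosome and the toy fitness function are assumptions for illustration; the study's actual parameterization is richer.

```python
import random

def fitness(genes):
    # Toy objective: suppose the ideal (linear, angular) gains are (0.5, 1.2).
    return -((genes[0] - 0.5) ** 2 + (genes[1] - 1.2) ** 2)

def evolve(pop_size=30, generations=50, sigma=0.1):
    pop = [[random.uniform(0, 2), random.uniform(0, 2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                           # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child = [g + random.gauss(0, sigma) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # -> gains near [0.5, 1.2]
```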
Sutherland, Garnette R; Wolfsberger, Stefan; Lama, Sanju; Zarei-nia, Kourosh
2013-01-01
Intraoperative imaging disrupts the rhythm of surgery despite providing an excellent opportunity for surgical monitoring and assessment. To allow surgery within real-time images, neuroArm, a teleoperated surgical robotic system, was conceptualized. The objective was to design and manufacture a magnetic resonance-compatible robot with a human-machine interface that could reproduce some of the sight, sound, and touch of surgery at a remote workstation. University of Calgary researchers worked with MacDonald, Dettwiler and Associates engineers to produce a requirements document, preliminary design review, and critical design review, followed by the manufacture, preclinical testing, and clinical integration of neuroArm. During the preliminary design review, the scope of the neuroArm project changed to performing microsurgery outside the magnet and stereotaxy inside the bore. neuroArm was successfully manufactured and installed in an intraoperative magnetic resonance imaging operating room. neuroArm was clinically integrated into 35 cases in a graded fashion. As a result of this experience, neuroArm II is in development, and advances in technology will allow microsurgery within the bore of the magnet. neuroArm represents a successful interdisciplinary collaboration. It has positive implications for the future of robotic technology in neurosurgery in that the precision and accuracy of robots will continue to augment human capability.
Graphical programming: A systems approach for telerobotic servicing of space assets
NASA Technical Reports Server (NTRS)
Pinkerton, James T.; Mcdonald, Michael J.; Palmquist, Robert D.; Patten, Richard
1994-01-01
Satellite servicing is in many ways analogous to subsea robotic servicing in the late 1970s. A cost-effective, reliable telerobotic capability had to be demonstrated before the oil companies invested money in deep-water, robot-serviceable production facilities. In the same sense, aerospace engineers will not design satellites for telerobotic servicing until such a quantifiable capability has been demonstrated. New space servicing systems will be markedly different from existing space robot systems. Past space manipulator systems, including the Space Shuttle's robot arm, have used master/slave technologies with poor fidelity, slow operating speeds and, most importantly, in-orbit human operators. In contrast, new systems will be capable of precision operations, conducted at higher rates of speed, and be commanded via ground-control communication links. Challenges presented by this environment include achieving a mandated level of robustness and dependability, radiation hardening, minimum weight and power consumption, and a system which accommodates the inherent communication delay between the ground station and the satellite. There is also a need for a user interface which is easy to use, ensures collision-free motions, and is capable of adjusting to an unknown workcell (for repair operations the condition of the satellite may not be known in advance). This paper describes the novel technologies required to deliver such a capability.
NASA Technical Reports Server (NTRS)
Henderson, A. J., Jr.
2001-01-01
FIRST is the acronym for For Inspiration and Recognition of Science and Technology. FIRST is a 501(c)(3) non-profit organization whose mission is to generate interest in science and engineering among today's young adults and youth. This mission is accomplished through a robot competition held each spring. NASA's Marshall Space Flight Center (MSFC), Education Programs Department, awarded a grant to Lee High School, the sole engineering magnet school in Huntsville, Alabama. MSFC awarded the grant in hopes of fulfilling its goal of giving back invaluable resources to its community and engineers, as well as educating tomorrow's workforce in the high-tech areas of science and technology. Marshall engineers, Lee High School students and teachers, and a host of other volunteers and parents officially initiated the robot design process and competitive strategic game plan. The FIRST Robotics Competition is a national engineering contest which immerses high school students in the exciting world of science and engineering. Teaming with engineers from government agencies, businesses, and universities enables the students to learn about the engineering profession. The students and engineers have six weeks to work together to brainstorm, design, procure, construct, and test their robot. The team then competes in a spirited, no-holds-barred tournament, complete with referees, other FIRST-designed robots, cheerleaders, and time clocks. The partnerships developed between schools, government agencies, businesses, and universities provide an exchange of resources and talent that builds cooperation and exposes students to new and rewarding career options. The result is a fun, exciting, and stimulating environment in which all participants discover the important connections between classroom experiences and real-world applications. This paper highlights the story, engineering development, and evolutionary design of Xtraktor, the rookie robot, a manufacturing marvel and engineering achievement.
NASA Astrophysics Data System (ADS)
Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.
2016-08-01
This article aims to implement some practical applications using the SociBot Desktop social robot. We intend to realize three applications: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory in order to be projected onto its face. The first application is created in the Compose submenu, which contains five file categories (audio, eyes, face, head, and mood) that are helpful in creating the projected sequence. The second application is more complex, the completed program containing audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application aims to change the robot's appearance with a guise created by us. The guise was created in Adobe Photoshop and then loaded into the robot's memory.
Mobility Systems For Robotic Vehicles
NASA Astrophysics Data System (ADS)
Chun, Wendell
1987-02-01
The majority of existing robotic systems can be decomposed into five distinct subsystems: locomotion, control/man-machine interface (MMI), sensors, power source, and manipulator. When designing robotic vehicles, there are two main requirements: first, to design for the environment, and second, for the task. The environment can be correlated with known missions, as can be seen by analyzing existing mobile robots. Ground mobile systems are generally wheeled, tracked, or legged. More recently, underwater vehicles have gained greater attention. For example, Jason Jr. made history by surveying the sunken luxury liner, the Titanic. The next big surge of robotic vehicles will be in space. This will evolve as a result of NASA's commitment to the Space Station. The foreseeable robots will interface with current systems as well as stand-alone, free-flying systems. A space robotic vehicle is similar to its underwater counterpart, with very few differences. Their commonality includes missions and degrees of freedom. The issues of stability and communication are inherent in both systems and environments.
Autonomous caregiver following robotic wheelchair
NASA Astrophysics Data System (ADS)
Ratnam, E. Venkata; Sivaramalingam, Sethurajan; Vignesh, A. Sri; Vasanth, Elanthendral; Joans, S. Mary
2011-12-01
In the last decade, a variety of robotic/intelligent wheelchairs have been proposed to meet the needs of an aging society. Their main research topics are autonomous functions, such as moving toward a goal while avoiding obstacles, and user-friendly interfaces. Although it is desirable for wheelchair users to go out alone, caregivers often accompany them. Therefore we have to consider not only autonomous functions and user interfaces but also how to reduce caregivers' load and support their activities in a communication aspect. From this point of view, we have proposed a robotic wheelchair that moves alongside a caregiver, based on MATLAB processing. In this project we discuss a robotic wheelchair that follows a caregiver, using a microcontroller, an ultrasonic sensor, a keypad, and motor drivers to operate the robot. Images are captured using a camera interfaced with the DM6437 (DaVinci code processor). The captured images are processed using image-processing techniques, converted into voltage levels through a MAX232 level converter, and given serially to the microcontroller unit, while the ultrasonic sensor detects obstacles in front of the robot. The robot has a mode-selection switch offering automatic and manual control: in automatic mode the ultrasonic sensor is used to detect obstacles, while in manual mode the keypad is used to operate the wheelchair. C-language code is predefined in the microcontroller unit, and the robot connected to it is controlled according to this code. The robot's several motors are activated through the motor drivers, which are essentially switches that turn the motors on and off according to the control signals from the microcontroller unit.
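A sketch of the described mode-switched loop, with the hardware calls replaced by stubs (all function names hypothetical):

```python
import random, time

def read_ultrasonic_cm():   return random.uniform(10, 200)   # stub sensor
def read_keypad():          return random.choice("FBLRS")    # stub keypad
def drive(cmd):             print("motor driver:", cmd)

def control_loop(mode, steps=5, stop_cm=40):
    for _ in range(steps):
        if mode == "AUTO":    # ultrasonic obstacle check drives the motors
            drive("stop" if read_ultrasonic_cm() < stop_cm else "forward")
        else:                 # MANUAL: keypad keys map directly to commands
            drive({"F": "forward", "B": "reverse", "L": "left",
                   "R": "right", "S": "stop"}[read_keypad()])
        time.sleep(0.01)

control_loop("AUTO")
```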
Robotic Design Studio: Exploring the Big Ideas of Engineering in a Liberal Arts Environment.
ERIC Educational Resources Information Center
Turbak, Franklyn; Berg, Robbie
2002-01-01
Suggests that it is important to introduce liberal arts students to the essence of engineering. Describes Robotic Design Studio, a course in which students learn how to design, assemble, and program robots made out of LEGO parts, sensors, motors, and small embedded computers. Represents an alternative vision of how robot design can be used to…
ERIC Educational Resources Information Center
McLurkin, J.; Rykowski, J.; John, M.; Kaseman, Q.; Lynch, A. J.
2013-01-01
This paper describes the experiences of using an advanced, low-cost robot in science, technology, engineering, and mathematics (STEM) education. It presents three innovations: It is a powerful, cheap, robust, and small advanced personal robot; it forms the foundation of a problem-based learning curriculum; and it enables a novel multi-robot…
Boninger, Michael L; Wechsler, Lawrence R; Stein, Joel
2014-11-01
The aim of this study was to describe the current state and latest advances in robotics, stem cells, and brain-computer interfaces in rehabilitation and recovery for stroke. The authors of this summary recently reviewed this work as part of a national presentation. The article represents the information included in each area. Each area has seen great advances and challenges as products move to market and experiments are ongoing. Robotics, stem cells, and brain-computer interfaces all have tremendous potential to reduce disability and lead to better outcomes for patients with stroke. Continued research and investment will be needed as the field moves forward. With this investment, the potential for recovery of function is likely substantial.
NASA, Engineering, and Swarming Robots
NASA Technical Reports Server (NTRS)
Leucht, Kurt
2015-01-01
This presentation is an introduction to NASA, to science and engineering, to biologically inspired robotics, and to the Swarmie ant-inspired robot project at KSC. It is geared towards elementary, middle, and high school students and is suitable for use in STEM (science, technology, engineering, and math) outreach events. The first use of this presentation will be on Oct 28, 2015 at Madison Middle School in Titusville, Florida, where the author has been asked by the NASA-KSC Speakers Bureau to speak to the students about the Swarmie robots.
NASA Technical Reports Server (NTRS)
Ezer, Neta; Zumbado, Jennifer Rochlis; Sandor, Aniko; Boyer, Jennifer
2011-01-01
Human-robot systems are expected to have a central role in future space exploration missions that extend beyond low-earth orbit [1]. As part of a directed research project funded by NASA's Human Research Program (HRP), researchers at the Johnson Space Center have started to use a variety of techniques, including literature reviews, case studies, knowledge capture, field studies, and experiments, to understand critical human-robot interaction (HRI) variables for current and future systems. Activities accomplished to date include observations of the International Space Station's Special Purpose Dexterous Manipulator (SPDM), Robonaut, and Space Exploration Vehicle (SEV), as well as interviews with robotics trainers, robot operators, and developers of gesture interfaces. A survey of methods and metrics used in HRI was completed to identify those most applicable to space robotics. These methods and metrics included techniques and tools associated with task performance, the quantification of human-robot interactions and communication, usability, human workload, and situation awareness. The need for more research in areas such as natural interfaces, compensations for loss of signal and poor video quality, psycho-physiological feedback, and common HRI testbeds was identified. The initial findings from these activities and planned future research are discussed.
The Human-Robot Interaction Operating System
NASA Technical Reports Server (NTRS)
Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda
2006-01-01
In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.
Jimenez-Romero, Cristian; Johnson, Jeffrey
2017-01-01
The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent, and (4) programming the appropriate interface in the robot or agent to use the neural controller. Accomplishing these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment NetLogo (educational software that simplifies the study of and experimentation with complex systems). The engine proposed and implemented in NetLogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning, and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
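The engine's neuron update can be illustrated with a standard leaky integrate-and-fire step (a generic textbook form with illustrative constants, not SpikingLab's NetLogo code):

```python
def lif_step(v, i_syn, v_rest=-70.0, v_thresh=-54.0, v_reset=-75.0,
             tau=20.0, r=10.0, dt=1.0):
    """Advance membrane potential v [mV] by dt [ms]; return (v, spiked)."""
    dv = (-(v - v_rest) + r * i_syn) / tau   # leak toward rest plus input drive
    v = v + dt * dv
    if v >= v_thresh:                        # threshold crossing emits a spike
        return v_reset, True
    return v, False

v, t = -70.0, 0
while True:                                  # constant 2.0 nA input current
    v, spiked = lif_step(v, 2.0)
    t += 1
    if spiked:
        print(f"first spike at t = {t} ms")
        break
```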
Robot-aided electrospinning toward intelligent biomedical engineering.
Tan, Rong; Yang, Xiong; Shen, Yajing
2017-01-01
The rapid development of robotics offers new opportunities for traditional biofabrication with higher accuracy and controllability, which provides great potential for intelligent biomedical engineering. This paper reviews the state of the art of robotics in a widely used biomaterial fabrication process, i.e., electrospinning, including its working principle, main applications, challenges, and prospects. First, the principle and technique of electrospinning are introduced by categorizing it into melt electrospinning, solution electrospinning, and near-field electrospinning. Then, the applications of electrospinning in biomedical engineering are introduced briefly from the aspects of drug delivery, tissue engineering, and wound dressing. After that, we summarize the existing problems in traditional electrospinning, such as low production rates, rough nanofibers, and uncontrolled morphology, and discuss how those problems are addressed by robotics via four case studies. Lastly, the challenges and outlook of robotics in electrospinning are discussed.
Robotics research projects report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsia, T.C.
The research results of the Robotics Research Laboratory are summarized. Areas of research include robotic control, a stand-alone vision system for industrial robots, and sensors other than vision that would be useful for image ranging, including ultrasonic and infrared devices. One particular project involves RHINO, a 6-axis robotic arm that can be manipulated by serial transmission of ASCII command strings to its interfaced controller. (LEW)
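Sending ASCII command strings over a serial line might look like the following sketch using pyserial; the port name and the command syntax are assumptions, not the RHINO controller's documented protocol.

```python
import serial  # pyserial

# Hypothetical example only: consult the controller manual for real commands.
with serial.Serial("/dev/ttyUSB0", 9600, timeout=1.0) as port:
    port.write(b"PA,1,500\r")    # invented command: move axis 1 to position 500
    reply = port.readline()      # read any acknowledgement from the controller
    print(reply.decode(errors="replace"))
```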
The Dawning of the Ethics of Environmental Robots.
van Wynsberghe, Aimee; Donhauser, Justin
2017-10-23
Environmental scientists and engineers have been exploring research and monitoring applications of robotics, as well as exploring ways of integrating robotics into ecosystems to aid in responses to accelerating environmental, climatic, and biodiversity changes. These emerging applications of robots and other autonomous technologies present novel ethical and practical challenges. Yet the critical applications of robots for environmental research, engineering, protection, and remediation have received next to no attention in the ethics-of-robotics literature to date. This paper seeks to fill that void and promote the study of environmental robotics. It provides key resources for further critical examination of the issues environmental robots present by explaining and differentiating the sorts of environmental robotics that exist to date and by identifying the unique conceptual, ethical, and practical issues they present.
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A
2013-07-01
User interface development for assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user-experience involvement, and performance evaluation. This paper reviews studies conducted with clinical trials using activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to give the scope of current research and provide suggestions for future studies. We conducted a literature search on assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System (PITTCat). Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include: (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance, to build comparable measures between research groups; (2) studies relevant to the tasks from user priority lists and ICF codes; and (3) appropriate clinical functional assessment tests with consideration of the constraints of assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators.
NASA Center for Intelligent Robotic Systems for Space Exploration
NASA Technical Reports Server (NTRS)
1990-01-01
NASA's program for the civilian exploration of space is a challenge to scientists and engineers to help maintain and further develop the United States' position of leadership in a focused sphere of space activity. Such an ambitious plan requires the contribution and further development of many scientific and technological fields. One research area essential for the success of these space exploration programs is intelligent robotic systems. These systems represent a class of autonomous and semi-autonomous machines that can perform human-like functions with or without human interaction. They are fundamental for activities too hazardous for humans or too distant or complex for remote telemanipulation. To meet this challenge, Rensselaer Polytechnic Institute (RPI) has established an Engineering Research Center for Intelligent Robotic Systems for Space Exploration (CIRSSE). The Center was created with a five-year, $5.5 million grant from NASA awarded to a team of the Robotics and Automation Laboratories. The Robotics and Automation Laboratories of RPI are the result of the 1987 merger of the Robotics and Automation Laboratory of the Department of Electrical, Computer, and Systems Engineering (ECSE) and the Research Laboratory for Kinematics and Robotic Mechanisms of the Department of Mechanical Engineering, Aeronautical Engineering, and Mechanics (ME, AE&M). This report is an examination of the activities that are centered at CIRSSE.
2014-03-14
CAPE CANAVERAL, Fla. – Two young visitors get an up-close look at an engineering model of Robonaut 2, complete with a set of legs, during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
Interactive multi-objective path planning through a palette-based user interface
NASA Astrophysics Data System (ADS)
Shaikh, Meher T.; Goodrich, Michael A.; Yi, Daqing; Hoehne, Joseph
2016-05-01
In a problem where a human uses supervisory control to manage robot path planning, there are times when the human does the path planning and, if satisfied, commits those paths to be executed by the robot, which then executes the plan. In planning a path, the robot often uses an optimization algorithm that maximizes or minimizes an objective. When a human is assigned the task of path planning for a robot, the human may care about multiple objectives. This work proposes a graphical user interface (GUI) designed for interactive robot path planning when an operator may prefer one objective over others or care about how multiple objectives are traded off. The GUI represents multiple objectives using the metaphor of an artist's palette: a distinct color represents each objective, and tradeoffs among objectives are balanced in the way an artist mixes colors to get a desired shade. Human intent is thus analogous to the artist's shade of color. We call the GUI an "Adverb Palette," where the word "Adverb" represents a specific type of objective for the path, such as the adverbs "quickly" and "safely" in the commands "travel the path quickly" and "make the journey safely." The novel interactive interface provides the user an opportunity to evaluate various alternatives (that trade off between different objectives) by allowing her to visualize the instantaneous outcomes that result from her actions on the interface. In addition to assisting analysis of the various solutions given by an optimization algorithm, the palette has the additional feature of allowing the user to define and visualize her own paths by means of waypoints (guiding locations), thereby widening the variety of plans considered. The goal of the Adverb Palette is thus to provide a way for the user and robot to find an acceptable solution even though they use very different representations of the problem. Subjective evaluations suggest that even non-experts in robotics can carry out the planning tasks with a great deal of flexibility using the Adverb Palette.
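The palette metaphor corresponds naturally to a weighted-sum scalarization, sketched below with invented objective names and values: each adverb gets a mixing weight, and candidate paths are ranked by the blended cost.

```python
# Illustrative weighted-sum blending of per-objective path costs (lower = better).

def blended_cost(path, weights):
    total = sum(weights.values())            # normalize, like mixing paint
    return sum(w / total * path[obj] for obj, w in weights.items())

candidates = [
    {"name": "A", "quickly": 0.2, "safely": 0.9},   # fast but risky
    {"name": "B", "quickly": 0.7, "safely": 0.1},   # slower but safe
]
palette = {"quickly": 1.0, "safely": 3.0}           # user mixes in more "safely"
best = min(candidates, key=lambda p: blended_cost(p, palette))
print(best["name"])   # -> B
```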
Tailoring a ConOps for NASA LSP Integrated Operations
NASA Technical Reports Server (NTRS)
Owens, Skip Clark V., III
2017-01-01
An integral part of the systems engineering process is the creation of a Concept of Operations (ConOps) for a given system, with the ConOps initially established early in the system design process and evolved as the system definition and design mature. As Integration Engineers in NASA's Launch Services Program (LSP) at Kennedy Space Center (KSC), our job is to manage the interface requirements for all the robotic space missions that come to our Program for a launch service. LSP procures and manages a launch service from one of our many commercial Launch Vehicle Contractors (LVCs), and these commercial companies are then responsible for developing the Interface Control Document (ICD), verifying the requirements in that document, and providing all the services pertaining to integrating the spacecraft and launching it into orbit. However, a Concept of Operations is one systems engineering tool that has not been employed within LSP to date. The goal of this paper is to research the format and content of various aerospace industry ConOps and tailor that format and content into template form, so the template may be used as an engineering tool for spacecraft integration with future LSP-procured launch services. This tailoring effort was performed as the author's final Master's project in the spring of 2016 for the Stevens Institute of Technology and modified for publication with INCOSE (Owens, 2016).
Table-Top Robotics for Engineering Design
ERIC Educational Resources Information Center
Wilczynski, Vincent; Dixon, Gregg; Ford, Eric
2005-01-01
The Mechanical Engineering Section at the U.S. Coast Guard Academy has developed a comprehensive activity based course to introduce second year students to mechanical engineering design. The culminating design activity for the course requires students to design, construct and test robotic devices that complete engineering challenges. Teams of…
ERIC Educational Resources Information Center
Cobb, Cheryl
2004-01-01
This article describes BEST (Boosting Engineering, Science, and Technology), a hands-on robotics program founded by Texas Instruments engineers Ted Mahler and Steve Marum. BEST links educators with industry to provide middle and high school students with a peek into the exciting world of robotics, with the goal of inspiring and interesting…
Robotic sampling system for an unmanned Mars mission
NASA Technical Reports Server (NTRS)
Chun, Wendell
1989-01-01
A major robotics opportunity for NASA will be the Mars Rover/Sample Return Mission, which could be launched as early as the 1990s. The exploratory portion of this mission will include two autonomous subsystems: the rover vehicle and a sample handling system. The sample handling system is the key to the process of collecting Martian soils. This system could include a core drill, a general-purpose manipulator, tools, containers, a return canister, certification hardware, and a labeling system. Integrated into a functional package, the sample handling system is analogous to a complex robotic workcell. Discussed here are the different components of the system, their interfaces, foreseeable problem areas, and the many options based on the scientific goals of the mission. The various interfaces in the sample handling process (component to component and handling system to rover) will be a major engineering effort. Two critical evaluation criteria that will be imposed on the system are flexibility and reliability. It needs to be flexible enough to adapt to different scenarios and environments and to acquire the most desirable specimens for return to Earth. Scientists may decide to change the distribution and ratio of core samples to rock samples in the canister. The long distance and duration of this planetary mission place a reliability burden on the hardware. The communication time delay between Earth and Mars minimizes operator interaction (teleoperation, supervisory modes) with the sample handler. An intelligent system will be required to plan the actions, make sample choices, interpret sensor inputs, and query unknown surroundings. A combination of autonomous functions and supervised movements will be integrated into the sample handling system.
RoMPS concept review automatic control of space robot, volume 2
NASA Technical Reports Server (NTRS)
Dobbs, M. E.
1991-01-01
Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.
Tonet, Oliver; Marinelli, Martina; Citi, Luca; Rossini, Paolo Maria; Rossini, Luca; Megali, Giuseppe; Dario, Paolo
2008-01-15
Interaction with machines is mediated by human-machine interfaces (HMIs). Brain-machine interfaces (BMIs) are a particular class of HMIs and have so far been studied as a communication means for people who have little or no voluntary control of muscle activity. In this context, low-performing interfaces can be considered as prosthetic applications. On the other hand, for able-bodied users, a BMI would only be practical if conceived as an augmenting interface. In this paper, a method is introduced for pointing out effective combinations of interfaces and devices for creating real-world applications. First, devices for domotics, rehabilitation and assistive robotics, and their requirements, in terms of throughput and latency, are described. Second, HMIs are classified and their performance described, still in terms of throughput and latency. Then device requirements are matched with performance of available interfaces. Simple rehabilitation and domotics devices can be easily controlled by means of BMI technology. Prosthetic hands and wheelchairs are suitable applications but do not attain optimal interactivity. Regarding humanoid robotics, the head and the trunk can be controlled by means of BMIs, while other parts require too much throughput. Robotic arms, which have been controlled by means of cortical invasive interfaces in animal studies, could be the next frontier for non-invasive BMIs. Combining smart controllers with BMIs could improve interactivity and boost BMI applications.
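The matching step described above can be pictured as a simple feasibility check between device requirements and interface performance. The sketch below is a toy illustration under assumed numbers; the device names and the throughput and latency figures are invented for the example, not taken from the paper.

```python
# Hedged sketch of the matching idea: pair each device's throughput and
# latency requirements with interfaces whose measured performance meets
# them. All names and numbers are illustrative only.

devices = {  # required minimum throughput (bits/s), maximum latency (s)
    "domotic light switch": {"throughput": 0.3, "latency": 5.0},
    "prosthetic hand":      {"throughput": 10.0, "latency": 0.5},
    "robotic arm":          {"throughput": 50.0, "latency": 0.2},
}

interfaces = {  # achievable throughput (bits/s), typical latency (s)
    "P300 BMI":          {"throughput": 0.5, "latency": 4.0},
    "motor-imagery BMI": {"throughput": 2.0, "latency": 1.0},
    "invasive cortical": {"throughput": 40.0, "latency": 0.1},
}

def feasible_pairs(devices, interfaces):
    """Yield (interface, device) pairs where performance meets demand."""
    for iname, perf in interfaces.items():
        for dname, req in devices.items():
            if (perf["throughput"] >= req["throughput"]
                    and perf["latency"] <= req["latency"]):
                yield iname, dname

for pair in feasible_pairs(devices, interfaces):
    print(pair)
```

With these invented numbers the non-invasive interfaces satisfy the domotic device but not the arm, mirroring the abstract's conclusion that robotic arms remain a frontier for non-invasive BMIs.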
Yap, Hwa Jen; Taha, Zahari; Md Dawal, Siti Zawiah; Chang, Siow-Wee
2014-01-01
Traditional robotic work cell design and programming are considered inefficient and outdated for current industrial and market demands. In this research, virtual reality (VR) technology is used to improve the human-robot interface, so that complicated commands or programming knowledge are not required. The proposed solution, known as VR-based Programming of a Robotic Work Cell (VR-Rocell), consists of two sub-programmes: VR-Robotic Work Cell Layout (VR-RoWL) and the VR-based Robot Teaching System (VR-RoT). VR-RoWL is developed to assign the layout design for an industrial robotic work cell, while VR-RoT is developed to overcome safety issues and the lack of trained personnel in robot programming. Simple and user-friendly interfaces are designed so that inexperienced users can generate robot commands without damaging the robot or interrupting the production line. The user is able to make numerous attempts to attain an optimum solution. A case study is conducted in the Robotics Laboratory to assemble an electronics casing, and it is found that the output models are compatible with commercial software without loss of information. Furthermore, the generated KUKA commands are workable when loaded into a commercial simulator. The operation of the actual robotic work cell shows that the errors may be due to the dynamics of the KUKA robot rather than the accuracy of the generated programme. Therefore, it is concluded that the virtual-reality-based solution approach can be implemented in an industrial robotic work cell. PMID:25360663
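As a concrete illustration of the command-generation step, a VR teaching tool of this kind might serialize the taught poses into KUKA-style motion statements. The sketch below is our hypothetical reconstruction in Python, not VR-RoT's actual output; the program name, pose values, and simplified KRL-like syntax are all assumptions.

```python
# Illustrative only: one way a VR teaching system might emit KUKA-style
# motion commands from waypoints recorded in the virtual work cell.

def to_krl(waypoints, program_name="VR_TAUGHT_PATH"):
    """Render recorded poses as a minimal KRL-like .src program."""
    lines = [f"DEF {program_name}()"]
    for x, y, z, a, b, c in waypoints:
        lines.append(
            f"  PTP {{X {x:.1f}, Y {y:.1f}, Z {z:.1f}, "
            f"A {a:.1f}, B {b:.1f}, C {c:.1f}}}"
        )
    lines.append("END")
    return "\n".join(lines)

# Poses captured from the virtual robot (mm and degrees, hypothetical).
taught = [(500.0, 0.0, 800.0, 0.0, 90.0, 0.0),
          (500.0, 120.0, 650.0, 0.0, 90.0, 0.0)]
print(to_krl(taught))
```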
Human-rating Automated and Robotic Systems - (How HAL Can Work Safely with Astronauts)
NASA Technical Reports Server (NTRS)
Baroff, Lynn; Dischinger, Charlie; Fitts, David
2009-01-01
Long duration human space missions, as planned in the Vision for Space Exploration, will not be possible without applying unprecedented levels of automation to support the human endeavors. The automated and robotic systems must carry the load of routine housekeeping for the new generation of explorers, as well as assist their exploration science and engineering work with new precision. Fortunately, the state of automated and robotic systems is sophisticated and sturdy enough to do this work - but the systems themselves have never been human-rated as all other NASA physical systems used in human space flight have. Our intent in this paper is to provide perspective on requirements and architecture for the interfaces and interactions between human beings and the astonishing array of automated systems, and on the approach we believe necessary to create human-rated systems and implement them in the space program. We will explain our proposed standard structure for automation and robotic systems, and the process by which we will develop and implement that standard as an addition to NASA's Human Rating requirements. Our work here is based on real experience with both human system and robotic system designs, for surface operations as well as for in-flight monitoring and control, and on the necessities we have discovered for human-systems integration in NASA's Constellation program. We hope this will be an invitation to dialog and to consideration of a new issue facing new generations of explorers and their outfitters.
Robot Contest as a Laboratory for Experiential Engineering Education
ERIC Educational Resources Information Center
Verner, Igor M.; Ahlgren, David J.
2004-01-01
By designing, building, and operating autonomous robots students learn key engineering subjects and develop systems-thinking, problem-solving, and teamwork skills. Such events as the Trinity College Fire-Fighting Home Robot Contest (TCFFHRC) offer rich opportunities for students to apply their skills by requiring design, and implementation of…
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Collinger, Jennifer L.; Kryger, Michael A.; Barbara, Richard; Betler, Timothy; Bowsher, Kristen; Brown, Elke H. P.; Clanton, Samuel T.; Degenhart, Alan D.; Foldes, Stephen T.; Gaunt, Robert A.; Gyulai, Ferenc E.; Harchick, Elizabeth A.; Harrington, Deborah; Helder, John B.; Hemmes, Timothy; Johannes, Matthew S.; Katyal, Kapil D.; Ling, Geoffrey S. F.; McMorland, Angus J. C.; Palko, Karina; Para, Matthew P.; Scheuermann, Janet; Schwartz, Andrew B.; Skidmore, Elizabeth R.; Solzbacher, Florian; Srikameswaran, Anita V.; Swanson, Dennis P.; Swetz, Scott; Tyler‐Kabara, Elizabeth C.; Velliste, Meel; Wang, Wei; Weber, Douglas J.; Wodlinger, Brian
2013-01-01
Our research group recently demonstrated that a person with tetraplegia could use a brain-computer interface (BCI) to control a sophisticated anthropomorphic robotic arm with skill and speed approaching that of an able-bodied person. This multiyear study exemplifies important principles in translating research from foundational theory and animal experiments into a clinical study. We present a roadmap that may serve as an example for other areas of clinical device research as well as an update on study results. Prior to conducting a multiyear clinical trial, years of animal research preceded BCI testing in an epilepsy monitoring unit, and then in a short-term (28 days) clinical investigation. Scientists and engineers developed the necessary robotic and surgical hardware, software environment, data analysis techniques, and training paradigms. Coordination among researchers, funding institutes, and regulatory bodies ensured that the study would provide valuable scientific information in a safe environment for the study participant. Finally, clinicians from neurosurgery, anesthesiology, physiatry, psychology, and occupational therapy all worked in a multidisciplinary team along with the other researchers to conduct a multiyear BCI clinical study. This teamwork and coordination can be used as a model for others attempting to translate basic science into real-world clinical situations. PMID:24528900
Tolikas, Mary; Antoniou, Ayis; Ingber, Donald E
2017-09-01
The Wyss Institute for Biologically Inspired Engineering at Harvard University was formed based on the recognition that breakthrough discoveries cannot change the world if they never leave the laboratory. The Institute's mission is to discover the biological principles that Nature uses to build living things, and to harness these insights to create biologically inspired engineering innovations to advance human health and create a more sustainable world. Since its launch in 2009, the Institute has developed a new model for innovation, collaboration, and technology translation within academia, breaking "silos" to enable collaborations that cross institutional and disciplinary barriers. Institute faculty and staff engage in high-risk research that leads to transformative breakthroughs. The biological principles uncovered are harnessed to develop new engineering solutions for medicine and healthcare, as well as nonmedical areas, such as energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and the formation of new start-ups that are driven by a unique internal business development team including entrepreneurs-in-residence with domain-specific expertise. Here, we describe this novel organizational model that the Institute has developed to change the paradigm of how fundamental discovery, medical technology innovation, and commercial translation are carried out at the academic-industrial interface.
A neurorobotic platform for locomotor prosthetic development in rats and mice
NASA Astrophysics Data System (ADS)
von Zitzewitz, Joachim; Asboth, Leonie; Fumeaux, Nicolas; Hasse, Alexander; Baud, Laetitia; Vallery, Heike; Courtine, Grégoire
2016-04-01
Objectives. We aimed to develop a robotic interface capable of providing finely-tuned, multidirectional trunk assistance adjusted in real-time during unconstrained locomotion in rats and mice. Approach. We interfaced a large-scale robotic structure actuated in four degrees of freedom to exchangeable attachment modules exhibiting selective compliance along distinct directions. This combination allowed high-precision force and torque control in multiple directions over a large workspace. We next designed a neurorobotic platform wherein real-time kinematics and physiological signals directly adjust robotic actuation and prosthetic actions. We tested the performance of this platform in both rats and mice with spinal cord injury. Main Results. Kinematic analyses showed that the robotic interface did not impede locomotor movements of lightweight mice that walked freely along paths with changing directions and height profiles. Personalized trunk assistance instantly enabled coordinated locomotion in mice and rats with severe hindlimb motor deficits. Closed-loop control of robotic actuation based on ongoing movement features enabled real-time control of electromyographic activity in anti-gravity muscles during locomotion. Significance. This neurorobotic platform will support the study of the mechanisms underlying the therapeutic effects of locomotor prosthetics and rehabilitation using high-resolution genetic tools in rodent models.
Extending human proprioception to cyber-physical systems
NASA Astrophysics Data System (ADS)
Keller, Kevin; Robinson, Ethan; Dickstein, Leah; Hahn, Heidi A.; Cattaneo, Alessandro; Mascareñas, David
2016-04-01
Despite advances in computational cognition, there are many cyber-physical systems where human supervision and control is desirable. One pertinent example is the control of a robot arm, which can be found in both humanoid and commercial ground robots. Current control mechanisms require the user to look at several screens of varying perspective on the robot, then give commands through a joystick-like mechanism. This control paradigm fails to provide the human operator with an intuitive state feedback, resulting in awkward and slow behavior and underutilization of the robot's physical capabilities. To overcome this bottleneck, we introduce a new human-machine interface that extends the operator's proprioception by exploiting sensory substitution. Humans have a proprioceptive sense that provides us information on how our bodies are configured in space without having to directly observe our appendages. We constructed a wearable device with vibrating actuators on the forearm, where frequency of vibration corresponds to the spatial configuration of a robotic arm. The goal of this interface is to provide a means to communicate proprioceptive information to the teleoperator. Ultimately we will measure the change in performance (time taken to complete the task) achieved by the use of this interface.
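A minimal sketch of the sensory-substitution mapping described above, assuming a linear joint-angle-to-frequency code; the joint limits, frequency band, and arm state are hypothetical, and the original device may use a different encoding.

```python
# Our illustration, not the authors' code: each robot joint angle is
# linearly mapped onto the vibration frequency of one forearm tactor.

def joint_to_frequency(angle, angle_min, angle_max,
                       f_min=50.0, f_max=250.0):
    """Map a joint angle (rad) to a tactor frequency (Hz)."""
    # Clamp, then interpolate linearly across the joint's range.
    a = max(angle_min, min(angle_max, angle))
    fraction = (a - angle_min) / (angle_max - angle_min)
    return f_min + fraction * (f_max - f_min)

# Hypothetical 3-joint arm state and joint limits (radians).
joint_state = [0.2, -0.8, 1.1]
limits = [(-1.57, 1.57), (-2.0, 0.5), (0.0, 2.5)]

tactor_frequencies = [joint_to_frequency(q, lo, hi)
                      for q, (lo, hi) in zip(joint_state, limits)]
```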
Improved CLARAty Functional-Layer/Decision-Layer Interface
NASA Technical Reports Server (NTRS)
Estlin, Tara; Rabideau, Gregg; Gaines, Daniel; Johnston, Mark; Chouinard, Caroline; Nessnas, Issa; Shu, I-Hsiang
2008-01-01
Improved interface software for communication between the CLARAty Decision and Functional Layers has been developed. [The Coupled Layer Architecture for Robotics Autonomy (CLARAty) was described in "Coupled-Layer Robotics Architecture for Autonomy" (NPO-21218), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 48. To recapitulate: the CLARAty architecture was developed to improve the modularity of robotic software while tightening coupling between planning/execution and basic control subsystems. Whereas prior robotic software architectures typically contained three layers, CLARAty contains two layers: a decision layer (DL) and a functional layer (FL).] Types of communication supported by the present software include sending commands from DL modules to FL modules and sending data updates from FL modules to DL modules. The present software supplants prior interface software that had little error-checking capability, supported data parameters in string form only, supported commanding at only one level of the FL, and supported only limited updates of the state of the robot. The present software offers strong error checking, supports complex data structures and commanding at multiple levels of the FL, and, relative to the prior software, offers a much wider spectrum of state-update capabilities.
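As a rough illustration of what stronger error checking with structured (non-string) parameters can look like, the sketch below validates typed command parameters before dispatch. This is a generic pattern, not the actual CLARAty interface; the module names, actions, and schemas are assumptions.

```python
# Hedged sketch of checked decision-layer -> functional-layer commanding
# with structured parameters; illustrative only.

from dataclasses import dataclass, field

@dataclass
class Command:
    target: str                    # FL module, possibly nested ("arm.gripper")
    action: str
    params: dict = field(default_factory=dict)

SCHEMAS = {  # hypothetical per-action parameter schemas
    ("locomotor", "drive_arc"): {"radius_m": float, "arclen_m": float},
    ("arm.gripper", "set_force"): {"newtons": float},
}

def validate(cmd: Command):
    """Reject malformed commands before they reach the Functional Layer."""
    schema = SCHEMAS.get((cmd.target, cmd.action))
    if schema is None:
        raise ValueError(f"unknown command {cmd.target}.{cmd.action}")
    for name, typ in schema.items():
        if name not in cmd.params:
            raise ValueError(f"missing parameter {name}")
        if not isinstance(cmd.params[name], typ):
            raise TypeError(f"{name} must be {typ.__name__}")

validate(Command("locomotor", "drive_arc",
                 {"radius_m": 2.5, "arclen_m": 1.0}))
```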
Creative Engineering Based Education with Autonomous Robots Considering Job Search Support
NASA Astrophysics Data System (ADS)
Takezawa, Satoshi; Nagamatsu, Masao; Takashima, Akihiko; Nakamura, Kaeko; Ohtake, Hideo; Yoshida, Kanou
The Robotics Course in our Mechanical Systems Engineering Department offers “Robotics Exercise Lessons” as one of its Problem-Solution Based Specialized Subjects. This is intended to motivate students' learning, help them acquire fundamental knowledge and skills in mechanical engineering, and improve their understanding of basic robotics theory. Our current curriculum was established to accomplish this objective based on two pieces of research in 2005: an evaluation questionnaire on the education of our Mechanical Systems Engineering Department given to graduates, and a survey on the kind of human resources companies are seeking and their expectations for our department. This paper reports the academic results and reflections on job search support in recent years as inherited and developed from the previous curriculum.
Toward a practical mobile robotic aid system for people with severe physical disabilities.
Regalbuto, M A; Krouskop, T A; Cheatham, J B
1992-01-01
A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.
Integration of a computerized two-finger gripper for robot workstation safety
NASA Technical Reports Server (NTRS)
Sneckenberger, John E.; Yoshikata, Kazuki
1988-01-01
A microprocessor-based controller has been developed that continuously monitors and adjusts the gripping force applied by a special two-finger gripper. This computerized force-sensing gripper system enables the end-effector gripping action to be independently detected and corrected. The gripping force applied to a manipulated object is monitored in real time for problem situations, situations which can occur during both planned and errant robot arm manipulation. When unspecified force conditions occur at the gripper, the gripping force controller initiates specific reactions to cause dynamic corrections to the continuously variable gripping action. The force controller for this intelligent gripper has been interfaced to the controller of an industrial robot. The gripper and robot controllers communicate to accomplish the successful completion of normal gripper operations as well as to handle unexpected hazardous situations. An example of an unexpected gripping condition would be the sudden deformation of the object being manipulated by the robot. The capabilities of the interfaced gripper-robot system to apply workstation safety measures (e.g., stop the robot) when these unexpected gripping effects occur have been assessed.
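The supervisory behavior described above can be sketched as a simple monitoring loop. This is our illustrative reconstruction, not the original firmware; the thresholds and the proportional correction are assumptions.

```python
# Illustrative control-loop sketch: the grip force is sampled
# continuously and corrected when it leaves the allowed band.

def monitor_grip(read_force, set_force, target, tolerance, stop_robot):
    """One supervisory cycle of a force-sensing gripper.

    read_force -- callable returning the measured grip force (N)
    set_force  -- callable commanding a new grip force (N)
    stop_robot -- callable invoked on an unrecoverable condition
    """
    measured = read_force()
    error = target - measured
    if abs(error) <= tolerance:
        return "ok"
    # A sudden large drop may mean the object deformed or slipped.
    if measured < 0.25 * target:
        stop_robot()                      # workstation safety measure
        return "halted"
    set_force(target + 0.5 * error)       # simple proportional correction
    return "corrected"

state = monitor_grip(read_force=lambda: 9.2, set_force=print,
                     target=10.0, tolerance=0.5,
                     stop_robot=lambda: print("STOP"))
```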
The NASA automation and robotics technology program
NASA Technical Reports Server (NTRS)
Holcomb, Lee B.; Montemerlo, Melvin D.
1986-01-01
The development and objectives of the NASA automation and robotics technology program are reviewed. The objectives of the program are to utilize AI and robotics to increase the probability of mission success; decrease the cost of ground control; and increase the capability and flexibility of space operations. There is a need for real-time computational capability; an effective man-machine interface; and techniques to validate automated systems. Current programs in the areas of sensing and perception, task planning and reasoning, control execution, operator interface, and system architecture and integration are described. Programs aimed at demonstrating the capabilities of telerobotics and system autonomy are discussed.
Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles
NASA Technical Reports Server (NTRS)
Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.
2001-01-01
The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of the human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between man and machine must exist. This interaction should enable useful communication to be exchanged in a natural way between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter. An advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.
System for exchanging tools and end effectors on a robot
Burry, David B.; Williams, Paul M.
1991-02-19
A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot permits the engagement and disengagement of the tool or end effector without the need for a rotational orientation of the tool to the end effector/collet interface. As the tool-changing system is not located on the robot arm, no umbilical cords are located on the robot.
Function-based design process for an intelligent ground vehicle vision system
NASA Astrophysics Data System (ADS)
Nagel, Robert L.; Perry, Kenneth L.; Stone, Robert B.; McAdams, Daniel A.
2010-10-01
An engineering design framework for an autonomous ground vehicle vision system is discussed. We present both the conceptual and physical design by following the design process, development and testing of an intelligent ground vehicle vision system constructed for the 2008 Intelligent Ground Vehicle Competition. During conceptual design, the requirements for the vision system are explored via functional and process analysis considering the flows into the vehicle and the transformations of those flows. The conceptual design phase concludes with a vision system design that is modular in both hardware and software and is based on a laser range finder and camera for visual perception. During physical design, prototypes are developed and tested independently, following the modular interfaces identified during conceptual design. Prototype models, once functional, are implemented into the final design. The final vision system design uses a ray-casting algorithm to process camera and laser range finder data and identify potential paths. The ray-casting algorithm is a single thread of the robot's multithreaded application. Other threads control motion, provide feedback, and process sensory data. Once integrated, both hardware and software testing are performed on the robot. We discuss the robot's performance and the lessons learned.
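A minimal sketch of a ray-casting path check in the spirit of the description above, assuming a 2-D occupancy grid fused from the camera and laser range finder; the grid representation, step size, and ray count are our assumptions rather than details from the competition vehicle.

```python
# Our reconstruction, not the team's code: rays fan out across an
# occupancy grid, and headings whose rays stay clear are candidate paths.

import math

def cast_ray(grid, x, y, heading, max_range, step=0.5):
    """Return distance travelled before hitting an occupied cell."""
    d = 0.0
    while d < max_range:
        cx = int(x + d * math.cos(heading))
        cy = int(y + d * math.sin(heading))
        if not (0 <= cy < len(grid) and 0 <= cx < len(grid[0])):
            break                       # off the map counts as blocked
        if grid[cy][cx]:                # occupied cell
            return d
        d += step
    return d

def clear_headings(grid, x, y, n_rays=36, max_range=20.0):
    """Headings whose rays reach full range without obstruction."""
    return [h for h in (2 * math.pi * i / n_rays for i in range(n_rays))
            if cast_ray(grid, x, y, h, max_range) >= max_range]

grid = [[0, 0, 0, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
headings = clear_headings(grid, x=1.0, y=1.0, n_rays=8, max_range=1.5)
```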
NASA Astrophysics Data System (ADS)
Canfield, Shawn; Edinger, Ben; Frecker, Mary I.; Koopmann, Gary H.
1999-06-01
Recent advances in robotics, tele-robotics, smart material actuators, and mechatronics raise new possibilities for innovative developments in millimeter-scale robotics capable of manipulating objects only fractions of a millimeter in size. These advances can have a wide range of applications in the biomedical community. A potential application of this technology is in minimally invasive surgery (MIS). The focus of this paper is the development of a single degree of freedom prototype to demonstrate the viability of smart materials, force feedback and compliant mechanisms for minimally invasive surgery. The prototype is a compliant gripper that is 7-mm by 17-mm, made from a single piece of titanium that is designed to function as a needle driver for small scale suturing. A custom designed piezoelectric `inchworm' actuator drives the gripper. The integrated system is computer controlled providing a user interface device capable of force feedback. The design methodology described draws from recent advances in three emerging fields in engineering: design of innovative tools for MIS, design of compliant mechanisms, and design of smart materials and actuators. The focus of this paper is on the design of a millimeter-scale inchworm actuator for use with a compliant end effector in MIS.
NASA Technical Reports Server (NTRS)
1988-01-01
Martin Marietta Aero and Naval Systems has advanced the CAD art to a very high level at its Robotics Laboratory. One of the company's major projects is construction of a huge Field Material Handling Robot (FMR) for the Army's Human Engineering Lab. Design of the FMR, intended to move heavy and dangerous material such as ammunition, was a triumph of CAD engineering. Separate computer programs modeled the robot's kinematics and dynamics, yielding such parameters as the strength of materials required for each component, the length of the arms, their degrees of freedom, and the power of the hydraulic system needed. The Robotics Lab went a step further and added data enabling computer simulation and animation of the robot's total operational capability under various loading and unloading conditions. NASA's Integrated Analysis Capability (IAC) engineering database program was used; it contains a series of modules that can stand alone or be integrated with data from sensors or software tools.
Graphical interface between the CIRSSE testbed and CimStation software with MCS/CTOS
NASA Technical Reports Server (NTRS)
Hron, Anna B.
1992-01-01
This research is concerned with developing a graphical simulation of the testbed at the Center for Intelligent Robotic Systems for Space Exploration (CIRSSE) and the interface which allows for communication between the two. Such an interface is useful in telerobotic operations and as a functional interaction tool for testbed users. Creating a simulated model of a real-world system generates inevitable calibration discrepancies between them. This thesis gives a brief overview of the work done to date in the area of workcell representation and communication, describes the development of the CIRSSE interface, and gives a direction for future work in the area of system calibration. The CimStation software used for development of this interface is a highly versatile robotic workcell simulation package which has been programmed for this application with a scale graphical model of the testbed and supporting interface menu code. A need for this tool has been identified for path previewing, as a window on teleoperation, and for calibration of simulated versus real-world models. The interface allows information (i.e., joint angles) generated by CimStation to be sent as motion goal positions to the testbed robots. An option of the interface allows joint angle information generated by supporting testbed algorithms (e.g., trajectory generation, collision avoidance) to be piped through CimStation as a visual preview of the path.
Control of a 2 DoF robot using a brain-machine interface.
Hortal, Enrique; Ubeda, Andrés; Iáñez, Eduardo; Azorín, José M
2014-09-01
In this paper, a non-invasive spontaneous Brain-Machine Interface (BMI) is used to control the movement of a planar robot. To that end, two mental tasks are used to manage the visual interface that controls the robot. The robot used is a PupArm, a force-controlled planar robot designed by the nBio research group at the Miguel Hernández University of Elche (Spain). Two control strategies are compared: hierarchical and directional control. The experimental test (performed by four users) consists of reaching four targets. The errors and time taken during the tests are compared for both control strategies. The advantages and disadvantages of each method are shown after the analysis of the results. The hierarchical control allows an accurate approach to the goals but is slower than the directional control, which is in turn less precise. The results show both strategies are useful for controlling this planar robot. In the future, by adding an extra device such as a gripper, this BMI could be used in assistive applications such as grasping everyday objects in a realistic environment. To compare the behavior of the system taking into account the opinion of the users, a NASA Task Load Index (TLX) questionnaire was filled out after the two sessions were completed.
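The two strategies can be contrasted with a small sketch, assuming the BMI classifier emits one of two mental-task labels per decision; the function names and the specific menu/turn semantics are illustrative, not the authors' code.

```python
# Hedged sketch contrasting the two control strategies with only two
# decodable mental tasks, T1 and T2. Details are illustrative.

def directional_step(task, heading, turn=15.0):
    """Directional control: T1 rotates the motion direction, T2 advances."""
    if task == "T1":
        return (heading + turn) % 360.0, False
    return heading, True            # T2: move one step along heading

def hierarchical_select(task, options, cursor):
    """Hierarchical control: T1 cycles a menu of targets, T2 confirms."""
    if task == "T1":
        return (cursor + 1) % len(options), None
    return cursor, options[cursor]  # T2 commits the highlighted target

heading, advance = directional_step("T1", heading=90.0)
cursor, chosen = hierarchical_select("T2", ["target A", "target B"], cursor=1)
```

The sketch makes the reported tradeoff plausible: hierarchical selection always reaches an exact target (accurate but slow, since the menu must be cycled), while directional stepping moves immediately but accumulates heading error.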
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A.
2013-01-01
Context The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvements, and performance evaluation. This paper is to review studies conducted with clinical trials using activities of daily living (ADLs) tasks to evaluate performance categorized using the International Classification of Functioning, Disability, and Health (ICF) frameworks, in order to give the scope of current research and provide suggestions for future studies. Methods We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and University of Pittsburgh Library System – PITTCat. Results Twenty relevant studies were identified. Conclusion Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for the future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools while prescribing and assessing assistive robotic manipulators. PMID:23820143
Granata, C; Pino, M; Legouverneur, G; Vidal, J-S; Bidaud, P; Rigaud, A-S
2013-01-01
Socially assistive robotics for elderly care is a growing field. However, although robotics has the potential to support the elderly in daily tasks by offering specific services, the development of usable interfaces is still a challenge. Since several factors, such as age- or disease-related changes in perceptual or cognitive abilities and familiarity with computer technologies, influence technology use, they must be considered when designing interfaces for these users. This paper presents findings from usability testing of two different services provided by a socially assistive robot intended for elderly people with cognitive impairment: a grocery shopping list and an agenda application. The main goal of this study is to identify the usability problems of the robot interface for target end-users as well as to isolate the human factors that affect the use of the technology by the elderly. Socio-demographic characteristics and computer experience were examined as factors that could influence task performance. A group of 11 elderly persons with Mild Cognitive Impairment and a group of 11 cognitively healthy elderly individuals took part in this study. Performance measures (task completion time and number of errors) were collected. Cognitive profile, age, and computer experience were found to impact task performance. Participants with cognitive impairment completed the tasks with more errors than the cognitively healthy elderly, while younger participants and those with previous computer experience were faster at completing the tasks, confirming previous findings in the literature. The overall results suggested that the interfaces and contents of the services assessed were usable by older adults with cognitive impairment. However, some usability problems were identified and should be addressed to better meet the needs and capacities of target end-users.
Integration of advanced teleoperation technologies for control of space robots
NASA Technical Reports Server (NTRS)
Stagnaro, Michael J.
1993-01-01
Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between the human brain and the robot equipment. This paper describes development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots for science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.
Controlling the autonomy of a reconnaissance robot
NASA Astrophysics Data System (ADS)
Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David
2004-09-01
In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to the teleoperation of the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions like movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travelling. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.
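A minimal sketch of dynamic mode swapping of the kind described above, with localization and map building running in every mode; the class and behavior names are hypothetical and do not come from HARPIC.

```python
# Illustrative only: operator-selectable control modes with mapping and
# localization always active, regardless of the mode.

MODES = ("manual", "safeguarded", "behavior-based")

class ReconControl:
    def __init__(self):
        self.mode = "safeguarded"

    def set_mode(self, mode):
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode            # operator swaps modes at any time

    def step(self, operator_cmd, sensors, localize, update_map, behaviors):
        localize(sensors)           # always on, whatever the mode
        update_map(sensors)
        if self.mode == "manual":
            return operator_cmd
        if self.mode == "safeguarded":
            # pass the operator command through an obstacle-avoidance filter
            return behaviors["avoid"](operator_cmd, sensors)
        return behaviors["explore"](sensors)   # behavior-based autonomy

ctrl = ReconControl()
ctrl.set_mode("manual")
cmd = ctrl.step(operator_cmd={"v": 0.4, "w": 0.0}, sensors={},
                localize=lambda s: None, update_map=lambda s: None,
                behaviors={})
```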
NASA Astrophysics Data System (ADS)
Popov, E. P.; Iurevich, E. I.
The history and the current status of robotics are reviewed, as are the design, operation, and principal applications of industrial robots. Attention is given to programmable robots, robots with adaptive control and elements of artificial intelligence, and remotely controlled robots. The applications of robots discussed include mechanical engineering, cargo handling during transportation and storage, mining, and metallurgy. The future prospects of robotics are briefly outlined.
User interface for a tele-operated robotic hand system
Crawford, Anthony L
2015-03-24
Disclosed here is a user interface for a robotic hand. The user interface anchors a user's palm in a relatively stationary position and determines various angles of interest necessary for a user's finger to achieve a specific fingertip location. The user interface additionally conducts a calibration procedure to determine the user's applicable physiological dimensions. The user interface uses the applicable physiological dimensions and the specific fingertip location, and treats the user's finger as a two link three degree-of-freedom serial linkage in order to determine the angles of interest. The user interface communicates the angles of interest to a gripping-type end effector which closely mimics the range of motion and proportions of a human hand. The user interface requires minimal contact with the operator and provides distinct advantages in terms of available dexterity, work space flexibility, and adaptability to different users.
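The patent's kinematic idea, treating the finger as a short serial linkage, rests on standard two-link inverse kinematics; the sketch below shows the planar core of that computation for the elbow-down solution. The link lengths and target are hypothetical, and the actual patented method (which handles three degrees of freedom) may differ in detail.

```python
# Worked sketch of planar two-link inverse kinematics: given a fingertip
# target and calibrated link lengths, recover the joint angles of interest.

import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (rad) placing a 2-link planar finger tip at (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside the finger's reachable workspace")
    theta2 = math.acos(c2)                    # elbow-down solution
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Link lengths from a hypothetical calibration (cm) and a target tip pose.
angles = two_link_ik(4.0, 2.5, l1=3.5, l2=2.5)
```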
Collaborative autonomous sensing with Bayesians in the loop
NASA Astrophysics Data System (ADS)
Ahmed, Nisar
2016-10-01
There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for `dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing `human sensors' for information gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly `talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables `plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
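The fusion idea can be reduced to a toy example: a semantic human report enters the filter as a likelihood over a discretized search area. The sketch below is our illustration, with an invented grid and likelihood; real systems model language-derived likelihoods far more carefully.

```python
# Toy sketch of the 'human sensor' idea: fuse a semantic report
# ("the target is near the cabin") into a Bayesian grid belief.

def fuse_human_report(belief, likelihood):
    """One Bayes update: posterior is proportional to prior x P(report | cell)."""
    posterior = [[b * l for b, l in zip(brow, lrow)]
                 for brow, lrow in zip(belief, likelihood)]
    z = sum(sum(row) for row in posterior)
    return [[p / z for p in row] for row in posterior]

# Uniform prior over a 2x3 search grid; the report makes cells near
# the (hypothetical) cabin in column 0 several times as likely.
prior = [[1 / 6] * 3 for _ in range(2)]
report_likelihood = [[0.8, 0.2, 0.1],
                     [0.8, 0.2, 0.1]]
posterior = fuse_human_report(prior, report_likelihood)
```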
NASA Technical Reports Server (NTRS)
Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph
2000-01-01
For many years, the robotic community sought to develop robots that could eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that humans can perform significantly better but that, due to associated hazards, distance, physical limitations, and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually they allow only a limited operator workspace. In this paper a novel haptic interface is presented to enable human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability, and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), which change their viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device, in which a change in the system viscosity will occur proportionally to the force to be transmitted. In this paper, we present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
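The proportional force-reflection idea in the last point can be pictured with a tiny sketch; this is our illustration of the general principle, not the authors' controller, and the gain and saturation values are invented.

```python
# Toy sketch: the commanded ERF excitation grows with the force to be
# mirrored at the master, since viscosity rises with the applied field.

def erf_field_command(force_n, gain_kv_per_n=0.8, e_max_kv=4.0):
    """Map a force to reflect (N) to an ERF field command (kV)."""
    e = gain_kv_per_n * abs(force_n)
    return min(e, e_max_kv)          # saturate at the device limit

# Forces sensed at the robot end-effector (N), hypothetical values.
for f in (0.5, 2.0, 6.0):
    print(f, "->", erf_field_command(f), "kV")
```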
NASA Technical Reports Server (NTRS)
Barlow, Jonathan; Benavides, Jose; Provencher, Chris; Bualat, Maria; Smith, Marion F.; Mora Vargas, Andres
2017-01-01
At the end of 2017, Astrobee will launch three free-flying robots that will navigate the entire US segment of the ISS (International Space Station) and serve as a payload facility. These robots will provide guest science payloads with processor resources, space within the robot for physical attachment, power, communication, propulsion, and human interfaces.
Sports Training Support Method by Self-Coaching with Humanoid Robot
NASA Astrophysics Data System (ADS)
Toyama, S.; Ikeda, F.; Yasaka, T.
2016-09-01
This paper proposes a new training support method called self-coaching with humanoid robots. In the proposed method, two small, inexpensive humanoid robots are used because of their availability. One robot, called the target robot, reproduces the motion of a target player, and the other, called the reference robot, reproduces the motion of an expert player. The target player can recognize the target technique from the reference robot and his/her inadequate skill from the target robot. By modifying the motion of the target robot as self-coaching, the target player can gain a deeper understanding of the technique. Experimental results show the potential of the new training method and identify issues with the self-coaching interface program as future work.
Interactions With Robots: The Truths We Reveal About Ourselves.
Broadbent, Elizabeth
2017-01-03
In movies, robots are often extremely humanlike. Although these robots are not yet reality, robots are currently being used in healthcare, education, and business. Robots provide benefits such as relieving loneliness and enabling communication. Engineers are trying to build robots that look and behave like humans and thus need comprehensive knowledge not only of technology but also of human cognition, emotion, and behavior. This need is driving engineers to study human behavior toward other humans and toward robots, leading to greater understanding of how humans think, feel, and behave in these contexts, including our tendencies for mindless social behaviors, anthropomorphism, uncanny feelings toward robots, and the formation of emotional attachments. However, in considering the increased use of robots, many people have concerns about deception, privacy, job loss, safety, and the loss of human relationships. Human-robot interaction is a fascinating field and one in which psychologists have much to contribute, both to the development of robots and to the study of human behavior.
Anthropomorphic Robot Design and User Interaction Associated with Motion
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.
2016-01-01
Though in its original concept a robot was conceived to have some human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble human form in some way have continued to be introduced. They are called anthropomorphic robots. The fact that the user interface to all robots is now highly mediated means that the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Often interaction with them may be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction; their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But anthropomorphic kinematics and dynamics imply that the impact of the design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to (1) improve the user's direct manual control over robot limbs and body positions, (2) improve the user's ability to detect anomalous robot behavior which could signal malfunction, and (3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving operator ability to detect malfunctions and understand the intention of robot movement.
Design, fabrication and control of soft robots.
Rus, Daniela; Tolley, Michael T
2015-05-28
Conventionally, engineers have employed rigid materials to fabricate precise, predictable robotic systems, which are easily modelled as rigid members connected at discrete joints. Natural systems, however, often match or exceed the performance of robotic systems with deformable bodies. Cephalopods, for example, achieve amazing feats of manipulation and locomotion without a skeleton; even vertebrates such as humans achieve dynamic gaits by storing elastic energy in their compliant bones and soft tissues. Inspired by nature, engineers have begun to explore the design and control of soft-bodied robots composed of compliant materials. This Review discusses recent developments in the emerging field of soft robotics.
A High School Level Course On Robot Design And Construction
NASA Astrophysics Data System (ADS)
Sadler, Paul M.; Crandall, Jack L.
1984-02-01
The Robotics Design and Construction Class at Sehome High School was developed to offer gifted and/or highly motivated students an in-depth introduction to a modern engineering topic. The course includes instruction in basic electronics, digital and radio electronics, construction skills, robotics literacy, construction of the HERO 1 Heathkit Robot, computer/ robot programming, and voice synthesis. A key element which leads to the success of the course is the involvement of various community assets including manpower and financial assistance. The instructors included a physics/electronics teacher, a computer science teacher, two retired engineers, and an electronics technician.
Miniature surgical robot for laparoendoscopic single-incision colectomy.
Wortman, Tyler D; Meyer, Avishai; Dolghi, Oleg; Lehman, Amy C; McCormick, Ryan L; Farritor, Shane M; Oleynikov, Dmitry
2012-03-01
This study aimed to demonstrate the effectiveness of using a multifunctional miniature in vivo robotic platform to perform a single-incision colectomy. Standard laparoscopic techniques require multiple ports. A miniature robotic platform to be inserted completely into the peritoneal cavity through a single incision has been designed and built. The robot can be quickly repositioned, thus enabling multiquadrant access to the abdominal cavity. The miniature in vivo robotic platform used in this study consists of a multifunctional robot and a remote surgeon interface. The robot is composed of two arms with shoulder and elbow joints. Each forearm is equipped with specialized interchangeable end effectors (i.e., graspers and monopolar electrocautery). Five robotic colectomies were performed in a porcine model. For each procedure, the robot was completely inserted into the peritoneal cavity, and the surgeon manipulated the user interface to control the robot to perform the colectomy. The robot mobilized the colon from its lateral retroperitoneal attachments and assisted in the placement of a standard stapler to transect the sigmoid colon. This objective was completed for all five colectomies without any complications. The adoption of both laparoscopic and single-incision colectomies currently is constrained by the inadequacies of existing instruments. The described multifunctional robot provides a platform that overcomes existing limitations by operating completely within one incision in the peritoneal cavity and by improving visualization and dexterity. By repositioning the small robot to the area of the colon to be mobilized, the ability of the surgeon to perform complex surgical tasks is improved. Furthermore, the success of the robot in performing a completely in vivo colectomy suggests the feasibility of using this robotic platform to perform other complex surgeries through a single incision.
Petri net controllers for distributed robotic systems
NASA Technical Reports Server (NTRS)
Lefebvre, D. R.; Saridis, George N.
1992-01-01
Petri nets are a well established modelling technique for analyzing parallel systems. When coupled with an event-driven operating system, Petri nets can provide an effective means for integrating and controlling the functions of distributed robotic applications. Recent work has shown that Petri net graphs can also serve as remarkably intuitive operator interfaces. In this paper, the advantages of using Petri nets as high-level controllers to coordinate robotic functions are outlined, the considerations for designing Petri net controllers are discussed, and simple Petri net structures for implementing an interface for operator supervision are presented. A detailed example is presented which illustrates these concepts for a sensor-based assembly application.
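The abstract stops short of code; as a loose illustration of the idea of a Petri net coordinating robotic functions, the sketch below executes a toy net in which a transition fires only when all of its input places hold tokens. All place and transition names are invented, and a real controller of the kind described would couple firing to an event-driven operating system rather than a simple loop.

```python
# Minimal Petri net executor: places hold token counts, transitions fire
# when every input place is marked. Names are illustrative only.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise RuntimeError(f"{name} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Toy assembly cell: a part must be sensed before the arm may grasp it.
net = PetriNet({"part_ready": 1, "arm_idle": 1})
net.add_transition("grasp", inputs=["part_ready", "arm_idle"],
                   outputs=["holding_part"])
net.add_transition("place", inputs=["holding_part"],
                   outputs=["arm_idle", "part_placed"])

for t in ("grasp", "place"):
    if net.enabled(t):
        net.fire(t)
print(net.marking)
# {'part_ready': 0, 'arm_idle': 1, 'holding_part': 0, 'part_placed': 1}
```

The same marking structure doubles as an operator display: showing which places are marked is exactly the kind of intuitive supervision interface the paper describes.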
Skin-inspired hydrogel-elastomer hybrids with robust interfaces and functional microstructures
NASA Astrophysics Data System (ADS)
Yuk, Hyunwoo; Zhang, Teng; Parada, German Alberto; Liu, Xinyue; Zhao, Xuanhe
2016-06-01
Inspired by mammalian skins, soft hybrids integrating the merits of elastomers and hydrogels have potential applications in diverse areas including stretchable and bio-integrated electronics, microfluidics, tissue engineering, soft robotics and biomedical devices. However, existing hydrogel-elastomer hybrids have limitations such as weak interfacial bonding, low robustness and difficulties in patterning microstructures. Here, we report a simple yet versatile method to assemble hydrogels and elastomers into hybrids with extremely robust interfaces (interfacial toughness over 1,000 J m^-2) and functional microstructures such as microfluidic channels and electrical circuits. The proposed method is generally applicable to various types of tough hydrogels and diverse commonly used elastomers including polydimethylsiloxane Sylgard 184, polyurethane, latex, VHB and Ecoflex. We further demonstrate applications enabled by the robust and microstructured hydrogel-elastomer hybrids including anti-dehydration hydrogel-elastomer hybrids, stretchable and reactive hydrogel-elastomer microfluidics, and stretchable hydrogel circuit boards patterned on elastomer.
System for exchanging tools and end effectors on a robot
Burry, D.B.; Williams, P.M.
1991-02-19
A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot permits the engagement and disengagement of the tool or end effector without the need for rotational orientation of the tool to the end effector/collet interface. Because the tool-changing system is not located on the robot arm, no umbilical cords are located on the robot. 12 figures.
Wearable computer for mobile augmented-reality-based controlling of an intelligent robot
NASA Astrophysics Data System (ADS)
Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino
2000-10-01
An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal, the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.
Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K
2008-01-01
A redesigned motion control system for the medical robot Aesop allows its movements to be automated and programmed. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
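As a rough sketch of the gaze-centred camera behaviour described above, the fragment below applies proportional control with a deadband so that small gaze jitter does not move the camera. The frame size, gains, deadband, and function names are all assumptions for illustration, not details from the paper.

```python
# Sketch: keep the user's gaze point centered in the video frame by
# nudging the camera. Gains, deadband and the pan/tilt convention are
# invented; the actual Aesop control interface differs.

FRAME_W, FRAME_H = 640, 480
K_PAN, K_TILT = 0.002, 0.002     # proportional gains (rad per pixel, assumed)
DEADBAND = 40                    # pixels: ignore small gaze jitter

def camera_step(gaze_x, gaze_y):
    """Return (pan, tilt) increments that re-center the gaze point."""
    err_x = gaze_x - FRAME_W / 2
    err_y = gaze_y - FRAME_H / 2
    pan = K_PAN * err_x if abs(err_x) > DEADBAND else 0.0
    tilt = -K_TILT * err_y if abs(err_y) > DEADBAND else 0.0
    return pan, tilt

print(camera_step(520, 240))   # gaze right of center -> pan right
print(camera_step(330, 250))   # inside deadband -> no motion
```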
Robot Control Through Brain Computer Interface For Patterns Generation
NASA Astrophysics Data System (ADS)
Belluomo, P.; Bucolo, M.; Fortuna, L.; Frasca, M.
2011-09-01
A Brain Computer Interface (BCI) system processes and translates neuronal signals, which mainly come from EEG instruments, into commands for controlling electronic devices. This system can allow people with motor disabilities to control external devices through the real-time modulation of their brain waves. In this context, an EEG-based BCI system that enables creative luminous artistic representations is presented here. The system, designed and realized in our laboratory, interfaces the BCI2000 platform, which performs real-time analysis of EEG signals, with a pair of moving luminescent twin robots. Experiments are also presented.
Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi
2015-03-01
This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event related potentials (ERPs) like P300. While both eye movements and ERPs have been separately used for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them together. In this way, the eye movements and ERPs complement each other. Therefore, it can provide better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete tasks effectively by using the proposed interface, and the best completion times are relatively short, very close to those achieved by hand.
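The paper's threshold algorithm distinguishes blinks, winks, gaze, and frowns; the sketch below shows only the simplest piece of such a scheme, amplitude-threshold blink detection on a vertical EOG channel with a refractory period. The sampling rate, threshold, and synthetic test signal are invented for illustration.

```python
import numpy as np

# Sketch of threshold-based blink detection on a vertical EOG channel.
# The threshold and refractory period are illustrative, not the paper's.

FS = 250                     # sampling rate in Hz (assumed)
THRESH = 150.0               # amplitude threshold in microvolts (assumed)
REFRACTORY = int(0.3 * FS)   # ignore re-crossings for 300 ms

def detect_blinks(veog):
    """Return sample indices where a blink-like deflection starts."""
    events, last = [], -REFRACTORY
    for i, v in enumerate(veog):
        if v > THRESH and i - last >= REFRACTORY:
            events.append(i)
            last = i
    return events

# Synthetic trace: noise plus two blink-shaped bumps at samples 100 and 350.
t = np.arange(2 * FS)
sig = 20 * np.random.randn(len(t))
for center in (100, 350):
    sig = sig + 300 * np.exp(-0.5 * ((t - center) / 12.0) ** 2)
print(detect_blinks(sig))    # two events, near samples 86 and 336
```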
SSSFD manipulator engineering using statistical experiment design techniques
NASA Technical Reports Server (NTRS)
Barnes, John
1991-01-01
The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulations and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
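As a hedged illustration of how a Taguchi orthogonal array reduces the number of simulation runs, the sketch below screens three hypothetical two-level kinematic factors with an L4 array (four runs instead of the full eight) and estimates main effects; the factors and the dexterity metric are invented stand-ins, not the study's actual parameters.

```python
# Sketch: screening 3 two-level design factors with a Taguchi L4
# orthogonal array instead of a full 2^3 factorial. Factors and the
# performance function are hypothetical stand-ins for kinematic parameters.

L4 = [  # rows: runs, columns: factor levels (0 or 1)
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
LEVELS = {"link_len": (0.5, 0.7), "offset": (0.0, 0.1), "twist": (0.0, 0.3)}

def dexterity(link_len, offset, twist):
    # Hypothetical performance metric (bigger is better).
    return 1.0 / (1.0 + (link_len - 0.6) ** 2 + offset + twist)

names = list(LEVELS)
scores = [dexterity(**{n: LEVELS[n][lv] for n, lv in zip(names, run)})
          for run in L4]

# Main effect of each factor: mean score at level 1 minus level 0.
for j, n in enumerate(names):
    hi = sum(s for run, s in zip(L4, scores) if run[j] == 1) / 2
    lo = sum(s for run, s in zip(L4, scores) if run[j] == 0) / 2
    print(f"{n}: effect = {hi - lo:+.3f}")
```

The orthogonality of the array (every pair of columns contains each level combination exactly once) is what lets four runs estimate all three main effects independently.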
Affordance Templates for Shared Robot Control
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kim
2014-01-01
This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
Control Robotics Programming Technology. Technology Learning Activity. Teacher Edition.
ERIC Educational Resources Information Center
Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.
This Technology Learning Activity (TLA) for control robotics programming technology in grades 6-10 is designed to teach students to construct and program computer-controlled devices using a LEGO DACTA set and computer interface and to help them understand how control technology and robotics affect them and their lifestyle. The suggested time for…
ERIC Educational Resources Information Center
Sullivan, Amanda; Bers, Marina Umaschi
2016-01-01
In recent years there has been an increasing focus on the missing "T" of technology and "E" of engineering in early childhood STEM (science, technology, engineering, mathematics) curricula. Robotics offers a playful and tangible way for children to engage with both T and E concepts during their foundational early childhood…
Distributed Automated Medical Robotics to Improve Medical Field Operations
2010-04-01
ROBOT PATIENT INTERFACE: Robotic trauma diagnosis and intervention is performed using instruments and tools mounted on the end of a robotic manipulator... manipulator to respond quickly enough to accommodate for motion due to high inertia and inaccuracies caused by low stiffness at the tool point. Ultrasonic... program was licensed to Intuitive Surgical, Inc. and subsequently morphed into the da Vinci surgical system. The da Vinci has been widely applied in
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.
Rutkowski, Tomasz M
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based-only control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
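The abstract specifies that the BCI output reaches the robot or virtual agent over UDP; the sketch below shows one plausible shape for such a link, with the port, JSON message format, and field names assumed rather than taken from the project.

```python
import json
import socket

# Sketch of a BCI-to-robot link of the kind described above: the decoded
# intention is serialized and sent over UDP. The port number and message
# fields are assumptions, not the project's actual protocol.

ROBOT_ADDR = ("127.0.0.1", 9000)

def send_intention(sock, command, confidence):
    msg = json.dumps({"cmd": command, "conf": confidence}).encode()
    sock.sendto(msg, ROBOT_ADDR)

# Robot side: bind, receive, and dispatch.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(ROBOT_ADDR)
recv_sock.settimeout(1.0)

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_intention(send_sock, "turn_left", 0.87)

data, _ = recv_sock.recvfrom(1024)
decoded = json.loads(data)
print("robot received:", decoded["cmd"], decoded["conf"])
```

UDP's connectionless, fire-and-forget semantics suit this kind of low-latency command stream, since a stale intention is better dropped than retransmitted.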
The magic glove: a gesture-based remote controller for intelligent mobile robots
NASA Astrophysics Data System (ADS)
Luo, Chaomin; Chen, Yue; Krishnan, Mohan; Paulik, Mark
2012-01-01
This paper describes the design of a gesture-based Human Robot Interface (HRI) for an autonomous mobile robot entered in the 2010 Intelligent Ground Vehicle Competition (IGVC). While the robot is meant to operate autonomously in the various Challenges of the competition, an HRI is useful in moving the robot to the starting position and after run termination. In this paper, a user-friendly gesture-based embedded system called the Magic Glove is developed for remote control of a robot. The system, worn by the operator as a glove, consists of a microcontroller and sensors and is capable of recognizing hand signals, which are then transmitted through wireless communication to the robot. The design of the Magic Glove included contributions on two fronts: hardware configuration and algorithm development. A triple-axis accelerometer used to detect hand orientation passes the information to a microcontroller, which interprets the corresponding vehicle control command. A Bluetooth device interfaced to the microcontroller then transmits the information to the vehicle, which acts accordingly. The user-friendly Magic Glove was first successfully demonstrated in a Player/Stage simulation environment. The gesture-based functionality was then also successfully verified on an actual robot and demonstrated to judges at the 2010 IGVC.
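As an illustrative sketch of the Magic Glove's orientation-to-command mapping, the fragment below thresholds a three-axis accelerometer reading (in units of g) to pick a drive command; the thresholds and the command set are invented, since the abstract does not give them.

```python
# Sketch of the glove idea: map hand orientation from a 3-axis
# accelerometer to drive commands. Thresholds and command names are
# invented for illustration.

TILT_THRESH = 0.5   # fraction of gravity (assumed)

def gesture_to_command(ax, ay, az):
    """ax, ay, az: accelerometer reading in units of g."""
    if ax > TILT_THRESH:
        return "FORWARD"      # palm tilted forward
    if ax < -TILT_THRESH:
        return "REVERSE"
    if ay > TILT_THRESH:
        return "TURN_RIGHT"
    if ay < -TILT_THRESH:
        return "TURN_LEFT"
    return "STOP"             # hand level: no motion

for reading in [(0.8, 0.0, 0.6), (0.0, -0.7, 0.7), (0.1, 0.1, 1.0)]:
    print(reading, "->", gesture_to_command(*reading))
```

In the actual system, the resulting command string would then be sent to the vehicle over the Bluetooth link the abstract describes.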
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Do you have workflows with many manual tasks that slow down your business? Or do you scale back workflows because there are simply too many manual tasks? Basic workflow robots can automate some common tasks, but not everything. This presentation will show how advanced robots called "expression robots" can be set up to perform everything from simple tasks such as moving, creating folders, renaming, changing or creating an attribute, and revising, to more complex tasks like creating a PDF, or even launching a session of Creo Parametric and performing a specific modeling task. Expression robots are able to utilize the Java API and Info*Engine to do almost anything you can imagine! Best of all, these tools are supported by PTC and will work with later releases of Windchill. Limited knowledge of Java, Info*Engine, and XML is required. The attendee will learn what tasks expression robots are capable of performing, what is involved in setting up an expression robot, and will gain a basic understanding of simple Info*Engine tasks.
Charter for Systems Engineer Working Group
NASA Technical Reports Server (NTRS)
Suffredini, Michael T.; Grissom, Larry
2015-01-01
This charter establishes the International Space Station Program (ISSP) Mobile Servicing System (MSS) Systems Engineering Working Group (SEWG). The MSS SEWG is established to provide a mechanism for Systems Engineering for the end-to-end MSS function. The MSS end-to-end function includes the Space Station Remote Manipulator System (SSRMS), the Mobile Remote Servicer (MRS) Base System (MBS), Robotic Work Station (RWS), Special Purpose Dexterous Manipulator (SPDM), Video Signal Converters (VSC), and Operations Control Software (OCS), the Mobile Transporter (MT), and by interfaces between and among these elements, and United States On-Orbit Segment (USOS) distributed systems, and other International Space Station Elements and Payloads, (including the Power Data Grapple Fixtures (PDGFs), MSS Capture Attach System (MCAS) and the Mobile Transporter Capture Latch (MTCL)). This end-to-end function will be supported by the ISS and MSS ground segment facilities. This charter defines the scope and limits of the program authority and document control that is delegated to the SEWG and it also identifies the panel core membership and specific operating policies.
ERIC Educational Resources Information Center
Faria, Carlos; Vale, Carolina; Machado, Toni; Erlhagen, Wolfram; Rito, Manuel; Monteiro, Sérgio; Bicho, Estela
2016-01-01
Robotics has been playing an important role in modern surgery, especially in procedures that require extreme precision, such as neurosurgery. This paper addresses the challenge of teaching robotics to undergraduate engineering students, through an experiential learning project of robotics fundamentals based on a case study of robot-assisted…
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
Health Care Robotics: A Progress Report
NASA Technical Reports Server (NTRS)
Fiorini, P.; Ali, K.; Seraji, H.
1997-01-01
This paper describes the approach followed in the design of a service robot for health care applications, covering the architecture of the subsystem, the features of the manipulator arm, and the operator interface.
A small, cheap, and portable reconnaissance robot
NASA Astrophysics Data System (ADS)
Kenyon, Samuel H.; Creary, D.; Thi, Dan; Maynard, Jeffrey
2005-05-01
While there is much interest in human-carriable mobile robots for defense/security applications, existing examples are still too large/heavy, and there are not many successful small human-deployable mobile ground robots, especially ones that can survive being thrown/dropped. We have developed a prototype small short-range teleoperated indoor reconnaissance/surveillance robot that is semi-autonomous. It is self-powered, self-propelled, spherical, and meant to be carried and thrown by humans into indoor, yet relatively unstructured, dynamic environments. The robot uses multiple channels for wireless control and feedback, with the potential for inter-robot communication, swarm behavior, or distributed sensor network capabilities. The primary reconnaissance sensor for this prototype is visible-spectrum video. This paper focuses more on the software issues, both the onboard intelligent real time control system and the remote user interface. The communications, sensor fusion, intelligent real time controller, etc. are implemented with onboard microcontrollers. We based the autonomous and teleoperation controls on a simple finite state machine scripting layer. Minimal localization and autonomous routines were designed to best assist the operator, execute whatever mission the robot may have, and promote its own survival. We also discuss the advantages and pitfalls of an inexpensive, rapidly-developed semi-autonomous robotic system, especially one that is spherical, and the importance of human-robot interaction as considered for the human-deployment and remote user interface.
A universal six-joint robot controller
NASA Technical Reports Server (NTRS)
Bihn, D. G.; Hsia, T. C.
1987-01-01
A general purpose six-axis robotic manipulator controller was designed and implemented to serve as a research tool for the investigation of the practical and theoretical aspects of various control strategies in robotics. An 80286-based Intel System 310 running the Xenix operating system hosts the servo software as well as the higher-level software (e.g., kinematics and path planning). A Multibus-compatible interface board was designed and constructed to handle I/O signals from the robot manipulator's joint motors. From the design point of view, the universal controller is capable of driving robot manipulators equipped with D.C. joint motors and position optical encoders. To test its functionality, the controller is connected to the joint motor D.C. power amplifier of a PUMA 560 arm, completely bypassing the manufacturer-supplied Unimation controller. A controller algorithm consisting of local PD control laws was written and installed into the Xenix operating system. Additional software drivers were implemented to allow application programs access to the interface board. All software was written in the C language.
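The original controller was written in C; as a language-neutral sketch of the local PD law the abstract mentions, the fragment below closes a 1 kHz loop around a toy single joint with unit inertia. The gains, servo rate, and plant model are illustrative only.

```python
# Sketch of a local PD law of the kind the abstract describes, one joint
# per loop iteration. Gains, timestep, and the plant are illustrative.

KP, KD = 40.0, 6.0
DT = 0.001                      # 1 kHz servo rate (assumed)

def pd_torque(q_des, q, q_dot):
    """PD law: torque from position error and joint velocity."""
    return KP * (q_des - q) - KD * q_dot

# Toy single-joint plant: unit inertia, no gravity or friction.
q, q_dot, q_des = 0.0, 0.0, 1.0
for _ in range(3000):           # simulate 3 s
    tau = pd_torque(q_des, q, q_dot)
    q_dot += tau * DT           # integrate acceleration (inertia = 1)
    q += q_dot * DT
print(f"final position: {q:.4f}")   # settles near the 1.0 setpoint
```

Because each joint is regulated independently, the same loop is simply replicated six times in a controller like the one described.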
NASA Astrophysics Data System (ADS)
Erickson, David; Lacheray, Hervé; Lai, Gilbert; Haddadi, Amir
2014-06-01
This paper presents the latest advancements of the Haptics-based Immersive Tele-robotic System (HITS) project, a next generation Improvised Explosive Device (IED) disposal (IEDD) robotic interface containing an immersive telepresence environment for a remotely-controlled three-articulated-robotic-arm system. While the haptic feedback enhances the operator's perception of the remote environment, a third teleoperated dexterous arm, equipped with multiple vision sensors and cameras, provides stereo vision with proper visual cues, and a 3D photo-realistic model of the potential IED. This decentralized system combines various capabilities including stable and scaled motion, singularity avoidance, cross-coupled hybrid control, active collision detection and avoidance, compliance control and constrained motion to provide a safe and intuitive control environment for the operators. Experimental results and validation of the current system are presented through various essential IEDD tasks. This project demonstrates that a two-armed anthropomorphic Explosive Ordnance Disposal (EOD) robot interface can achieve complex neutralization techniques against realistic IEDs without the operator approaching at any time.
SLAM algorithm applied to robotics assistance for navigation in unknown environments.
Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo
2010-02-17
The combination of robotic tools with assistive technology opens a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or user preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low-level behavior strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested on a population of seven volunteers: three elderly, two below-elbow amputees and two young normally limbed subjects. The experiments were performed within a closed, low-dynamics environment. Subjects took an average time of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the communication between both in real time, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair. This advantage can be exploited for autonomous wheelchair navigation.
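The paper's EKF uses line and corner features and runs alongside the MCI; as a heavily simplified sketch of the underlying predict/update cycle, the fragment below tracks a planar robot pose plus a single point landmark under a range-bearing measurement. All noise values, the landmark, and the measurement are invented for illustration.

```python
import numpy as np

# Heavily simplified sketch of the EKF cycle behind feature-based SLAM:
# state = robot pose (x, y, theta) plus one point landmark (lx, ly).
# The paper uses line and corner features; a point landmark stands in.

def predict(mu, P, v, w, dt, Q):
    x, y, th = mu[:3]
    mu = mu.copy()
    mu[0] += v * dt * np.cos(th)
    mu[1] += v * dt * np.sin(th)
    mu[2] += w * dt
    F = np.eye(5)                       # motion Jacobian (landmark static)
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    return mu, F @ P @ F.T + Q

def update(mu, P, z, R):
    """z = (range, bearing) to the landmark."""
    dx, dy = mu[3] - mu[0], mu[4] - mu[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    z_hat = np.array([r, np.arctan2(dy, dx) - mu[2]])
    H = np.array([                      # measurement Jacobian
        [-dx / r, -dy / r, 0, dx / r, dy / r],
        [dy / q, -dx / q, -1, -dy / q, dx / q],
    ])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    return mu + K @ innov, (np.eye(5) - K @ H) @ P

mu = np.array([0.0, 0.0, 0.0, 2.0, 1.0])   # initial guess, landmark ~(2, 1)
P = np.eye(5) * 0.5
Q = np.diag([1e-3, 1e-3, 1e-3, 0.0, 0.0])  # no process noise on landmark
R = np.diag([0.05, 0.02])
mu, P = predict(mu, P, v=1.0, w=0.0, dt=0.1, Q=Q)
mu, P = update(mu, P, z=np.array([2.05, 0.46]), R=R)
print(np.round(mu, 3))
```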
ROBOTICS IN HAZARDOUS ENVIRONMENTS - REAL DEPLOYMENTS BY THE SAVANNAH RIVER NATIONAL LABORATORY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kriikku, E.; Tibrea, S.; Nance, T.
The Research & Development Engineering (R&DE) section in the Savannah River National Laboratory (SRNL) engineers, integrates, tests, and supports deployment of custom robotics, systems, and tools for use in radioactive, hazardous, or inaccessible environments. Mechanical and electrical engineers, computer control professionals, specialists, machinists, welders, electricians, and mechanics adapt and integrate commercially available technology with in-house designs, to meet the needs of Savannah River Site (SRS), Department of Energy (DOE), and other governmental agency customers. This paper discusses five R&DE robotic and remote system projects.
Regenerative Engineering and Bionic Limbs.
James, Roshan; Laurencin, Cato T
2015-03-01
Amputations of the upper extremity are severely debilitating; current treatments support very basic limb movement, and patients undergo extensive physiotherapy and psychological counselling. There is no prosthesis that allows amputees near-normal function. With an increasing number of amputees due to injuries sustained in accidents, natural calamities and international conflicts, there is a growing requirement for novel strategies and new discoveries. Advances have been made in technology, materials and prosthesis integration, where researchers are now exploring artificial prostheses that integrate with the residual tissues and function based on signal impulses received from the residual nerves. Efforts are focused on challenging experts in different disciplines to integrate ideas and technologies to allow for the regeneration of injured tissues, the recording of tissue signals, and feedback to facilitate responsive movements and gradations of muscle force. A fully functional replacement and regenerative or integrated prosthesis will rely on interfacing biological processes with robotic systems to allow individual control of movement, such as at the elbow, forearm, digits and thumb in the upper extremity. Regenerative engineering, focused on the regeneration of complex tissue and organ systems, will be realized by the cross-fertilization of advances over the past thirty years in the fields of tissue engineering, nanotechnology, stem cell science, and developmental biology. The convergence of toolboxes created within each discipline will allow interdisciplinary teams from engineering, science, and medicine to realize new strategies and mergers of disparate technologies, such as biophysics, smart bionics, and the healing power of the mind. Tackling the clinical challenges, interfacing the biological process with bionic technologies, engineering biological control of the electronic systems, and feedback will be the important goals in regenerative engineering over the next two decades.
HRI usability evaluation of interaction modes for a teleoperated agricultural robotic sprayer.
Adamides, George; Katsanos, Christos; Parmet, Yisrael; Christou, Georgios; Xenos, Michalis; Hadzilacos, Thanasis; Edan, Yael
2017-07-01
Teleoperation of an agricultural robotic system requires effective and efficient human-robot interaction. This paper investigates the usability of different interaction modes for agricultural robot teleoperation. Specifically, we examined the overall influence of two types of output devices (PC screen, head mounted display), two types of peripheral vision support mechanisms (single view, multiple views), and two types of control input devices (PC keyboard, PS3 gamepad) on observed and perceived usability of a teleoperated agricultural sprayer. A modular user interface for teleoperating an agricultural robot sprayer was constructed and field-tested. Evaluation included eight interaction modes: the different combinations of the 3 factors. Thirty representative participants used each interaction mode to navigate the robot along a vineyard and spray grape clusters based on a 2 × 2 × 2 repeated measures experimental design. Objective metrics of the effectiveness and efficiency of the human-robot collaboration were collected. Participants also completed questionnaires related to their user experience with the system in each interaction mode. Results show that the most important factor for human-robot interface usability is the number and placement of views. The type of robot control input device was also a significant factor for certain dependent measures, whereas the effect of the screen output type was only significant for the participants' perceived workload index. Specific recommendations for mobile field robot teleoperation to improve HRI awareness for the agricultural spraying task are presented.
Telerobotic management system: coordinating multiple human operators with multiple robots
NASA Astrophysics Data System (ADS)
King, Jamie W.; Pretty, Raymond; Brothers, Brendan; Gosine, Raymond G.
2003-09-01
This paper describes an application called the Telerobotic Management System (TMS) for coordinating multiple operators with multiple robots in applications such as underground mining. TMS utilizes several graphical interfaces to allow the user to define a partially ordered plan for multiple robots. This plan is then converted to a Petri net for execution and monitoring. TMS uses a distributed framework that allows robots and operators to integrate easily with the application: they join the network and advertise their capabilities through services. TMS then decides whether tasks should be dispatched to a robot or a remote operator based on the services offered by the robots and operators.
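As a minimal sketch of the service-based dispatch idea described above, the fragment below routes each task to the first agent, robot or human operator, that advertises the required service; all names and the task format are invented.

```python
# Sketch of capability-based dispatch: agents (robots or operators)
# advertise services, and each task is routed to an agent offering the
# required service. Names are illustrative only.

class Agent:
    def __init__(self, name, services):
        self.name, self.services = name, set(services)

def dispatch(task, agents):
    for agent in agents:
        if task["service"] in agent.services:
            return agent.name
    return None   # no capable agent: the task stays queued

agents = [Agent("robot1", {"haul", "drill"}),
          Agent("operator1", {"inspect", "drill"})]
for task in [{"id": 1, "service": "drill"}, {"id": 2, "service": "inspect"}]:
    print(task["id"], "->", dispatch(task, agents))
```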
Tolikas, Mary; Antoniou, Ayis
2017-01-01
The Wyss Institute for Biologically Inspired Engineering at Harvard University was formed based on the recognition that breakthrough discoveries cannot change the world if they never leave the laboratory. The Institute's mission is to discover the biological principles that Nature uses to build living things, and to harness these insights to create biologically inspired engineering innovations to advance human health and create a more sustainable world. Since its launch in 2009, the Institute has developed a new model for innovation, collaboration, and technology translation within academia, breaking "silos" to enable collaborations that cross institutional and disciplinary barriers. Institute faculty and staff engage in high-risk research that leads to transformative breakthroughs. The biological principles uncovered are harnessed to develop new engineering solutions for medicine and healthcare, as well as nonmedical areas, such as energy, architecture, robotics, and manufacturing. These technologies are translated into commercial products and therapies through collaborations with clinical investigators, corporate alliances, and the formation of new start-ups that are driven by a unique internal business development team including entrepreneurs-in-residence with domain-specific expertise. Here, we describe this novel organizational model that the Institute has developed to change the paradigm of how fundamental discovery, medical technology innovation, and commercial translation are carried out at the academic-industrial interface.
Simulation-based intelligent robotic agent for Space Station Freedom
NASA Technical Reports Server (NTRS)
Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.
1990-01-01
A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.
Predictive Interfaces for Long-Distance Tele-Operations
NASA Technical Reports Server (NTRS)
Wheeler, Kevin R.; Martin, Rodney; Allan, Mark B.; Sunspiral, Vytas
2005-01-01
We address the development of predictive tele-operator interfaces for humanoid robots with respect to two basic challenges. Firstly, we address automating the transition from fully tele-operated systems towards degrees of autonomy. Secondly, we develop compensation for the time delay that exists when sending telemetry data from a remote operation point to robots located in low Earth orbit and beyond. Humanoid robots have a great advantage over other robotic platforms for use in space-based construction and maintenance because they can use the same tools as astronauts do. The major disadvantage is that they are difficult to control due to the large number of degrees of freedom, which makes it difficult to synthesize autonomous behaviors using conventional means. We are working with the NASA Johnson Space Center's Robonaut, which is an anthropomorphic robot with fully articulated hands, arms, and neck. We have trained hidden Markov models that make use of the command data, sensory streams, and other relevant data sources to predict a tele-operator's intent. This allows us to achieve subgoal-level commanding without the use of predefined command dictionaries, and to create sub-goal autonomy via sequence generation from generative models. Our method works as a means to incrementally transition from manual tele-operation to semi-autonomous, supervised operation. The multi-agent laboratory experiments conducted by Ambrose et al. have shown that it is feasible to directly tele-operate multiple Robonauts with humans to perform complex tasks such as truss assembly. However, once a time delay is introduced into the system, the rate of teleoperation slows down to a bump-and-wait type of activity. We would like to maintain the same interface to the operator despite time delays. To this end, we are developing an interface which will allow us to predict the intentions of the operator while interacting with a 3D virtual representation of the expected state of the robot. The predictive interface anticipates the intention of the operator, and then uses this prediction to initiate appropriate sub-goal autonomy tasks.
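The abstract describes hidden Markov models trained to predict a tele-operator's intent; the sketch below shows the normalized forward algorithm over a toy two-intent model with quantized observations. The states, symbols, and all probabilities are invented, and the real system conditions on command and sensory streams rather than a single symbol stream.

```python
import numpy as np

# Sketch of intent prediction with a discrete HMM: given a stream of
# quantized operator inputs, run the forward algorithm to maintain a
# belief over task intentions. The model here is invented for illustration.

STATES = ["reach_for_tool", "retract_arm"]
A = np.array([[0.9, 0.1],       # state transition probabilities
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],  # emission probs over observation symbols
              [0.1, 0.3, 0.6]]) # symbols: 0 move-out, 1 pause, 2 move-in
PI = np.array([0.5, 0.5])

def forward_belief(observations):
    """Return P(state | o_1..o_t) after each observation."""
    alpha = PI * B[:, observations[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return beliefs

for belief in forward_belief([0, 0, 1, 2, 2]):
    print({s: round(p, 2) for s, p in zip(STATES, belief)})
```

Once the belief in one intent crosses a threshold, a supervisory layer can trigger the corresponding sub-goal autonomy behavior, which is the transition strategy the paper outlines.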
New generation emerging technologies for neurorehabilitation and motor assistance.
Frisoli, Antonio; Solazzi, Massimiliano; Loconsole, Claudio; Barsotti, Michele
2016-12-01
This paper illustrates the application of emerging technologies and human-machine interfaces to the neurorehabilitation and motor assistance fields. The contribution focuses on wearable technologies, and in particular on robotic exoskeletons, as tools for increasing freedom of movement and the ability to perform Activities of Daily Living (ADLs). This would result in a deep improvement in quality of life, also in terms of improved function of internal organs and general health status. Furthermore, the integration of these robotic systems with advanced bio-signal-driven human-machine interfaces can increase the degree of patient participation in robotic training by recognizing the user's intention and assisting the patient in rehabilitation tasks, a fundamental aspect in eliciting motor learning.
CHIMERA II - A real-time multiprocessing environment for sensor-based robot control
NASA Technical Reports Server (NTRS)
Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.
1989-01-01
A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control systems, is demonstrated.
[History of robotics: from Archytas of Tarentum until da Vinci robot. (Part I)].
Sánchez Martín, F M; Millán Rodríguez, F; Salvador Bayarri, J; Palou Redorta, J; Rodríguez Escovar, F; Esquena Fernández, S; Villavicencio Mavrich, H
2007-02-01
Robotic surgery is the newest technological option in urology. To understand how the new robots work, it is interesting to know their history. The desire to design machines imitating humans has persisted for more than 4000 years. There are references to King-su Tse (classical China) building automata around 500 B.C. Archytas of Tarentum (around 400 B.C.) is considered the father of mechanical engineering and one of the classic referents of Western robotics. Heron of Alexandria, Hsieh-Fec, Al-Jazari, Roger Bacon, Juanelo Turriano, Leonardo da Vinci, Vaucanson and von Kempelen were robot inventors of the Middle Ages, Renaissance and Classicism. In the nineteenth century, automaton production reached a peak and all branches of engineering underwent great development. In 1942 Asimov published the three laws of robotics, based on advances in mechanics, electronics and informatics. In the twentieth century, robots able to perform very complex self-governing work were developed, such as the da Vinci Surgical System (Intuitive Surgical Inc, Sunnyvale, CA, USA), a very sophisticated robot that assists surgeons.
Ando, Noriyasu; Kanzaki, Ryohei
2017-09-01
The use of mobile robots is an effective method of validating sensory-motor models of animals in a real environment. The well-identified insect sensory-motor systems have been the major targets for modeling. Furthermore, mobile robots implemented with such insect models attract engineers who aim to draw advantages from organisms. However, directly comparing the robots with real insects is still difficult, even if we successfully model the biological systems, because of the physical differences between them. We developed a hybrid robot to bridge the gap. This hybrid robot is an insect-controlled robot, in which a tethered male silkmoth (Bombyx mori) drives the robot in order to localize an odor source. This robot has the following three advantages: 1) from a biomimetic perspective, the robot enables us to evaluate the potential performance of future insect-mimetic robots; 2) from a biological perspective, the robot enables us to manipulate the closed loop of an onboard insect for further understanding of its sensory-motor system; and 3) the robot enables comparison with insect models as a reference biological system. In this paper, we review the recent works regarding insect-controlled robots and discuss their significance for both engineering and biology.
NASA Astrophysics Data System (ADS)
Noor, Ahmed K.
2013-12-01
Some of the recent attempts at improving and transforming engineering education are reviewed. The attempts aim at providing the entry level engineers with the skills needed to address the challenges of future large-scale complex systems and projects. Some of the frontier sectors and future challenges for engineers are outlined. The major characteristics of the coming intelligence convergence era (the post-information age) are identified. These include the prevalence of smart devices and environments, the widespread applications of anticipatory computing and predictive / prescriptive analytics, as well as a symbiotic relationship between humans and machines. Devices and machines will be able to learn from, and with, humans in a natural collaborative way. The recent game changers in learnscapes (learning paradigms, technologies, platforms, spaces, and environments) that can significantly impact engineering education in the coming era are identified. Among these are open educational resources, knowledge-rich classrooms, immersive interactive 3D learning, augmented reality, reverse instruction / flipped classroom, gamification, robots in the classroom, and adaptive personalized learning. Significant transformative changes in, and mass customization of, learning are envisioned to emerge from the synergistic combination of the game changers and other technologies. The realization of the aforementioned vision requires the development of a new multidisciplinary framework of emergent engineering for relating innovation, complexity and cybernetics, within the future learning environments. The framework can be used to treat engineering education as a complex adaptive system, with dynamically interacting and communicating components (instructors, individual, small, and large groups of learners). The emergent behavior resulting from the interactions can produce a progressively better, and continuously improving, learning environment. As a first step towards the realization of the vision, intelligent adaptive cyber-physical ecosystems need to be developed to facilitate collaboration between the various stakeholders of engineering education, and to accelerate the development of a skilled engineering workforce. The major components of the ecosystems include integrated knowledge discovery and exploitation facilities, blended learning and research spaces, novel ultra-intelligent software agents, multimodal and autonomous interfaces, and networked cognitive and tele-presence robots.
Initial Experiments with the Leap Motion as a User Interface in Robotic Endonasal Surgery.
Travaglini, T A; Swaney, P J; Weaver, Kyle D; Webster, R J
The Leap Motion controller is a low-cost, optically-based hand tracking system that has recently been introduced on the consumer market. Prior studies have investigated its precision and accuracy, toward evaluating its usefulness as a surgical robot master interface. Yet due to the diversity of potential slave robots and surgical procedures, as well as the dynamic nature of surgery, it is challenging to make general conclusions from published accuracy and precision data. Thus, our goal in this paper is to explore the use of the Leap in the specific scenario of endonasal pituitary surgery. We use it to control a concentric tube continuum robot in a phantom study, and compare user performance using the Leap to previously published results using the Phantom Omni. We find that the users were able to achieve nearly identical average resection percentage and overall surgical duration with the Leap.
Brain computer interface for operating a robot
NASA Astrophysics Data System (ADS)
Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed
2013-10-01
A Brain-Computer Interface (BCI) is a hardware/software based system that translates the Electroencephalogram (EEG) signals produced by brain activity into commands to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals of a trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We have used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.
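As a rough sketch of how classified cognitive states such as Push and Pull might be stabilized into robot motion commands, the fragment below takes a majority vote over a sliding window before acting; the window length and the command mapping are assumptions, not details from the paper.

```python
from collections import Counter, deque

# Sketch: turn a noisy stream of classified cognitive states ("push",
# "pull", "neutral") into stable robot motion commands by majority vote
# over a sliding window. Window size and mapping are invented.

COMMANDS = {"push": "forward", "pull": "backward", "neutral": "stop"}
WINDOW = 5

def command_stream(states):
    window = deque(maxlen=WINDOW)
    for s in states:
        window.append(s)
        winner, count = Counter(window).most_common(1)[0]
        # Only act once the window is full and the vote is decisive.
        if len(window) == WINDOW and count > WINDOW // 2:
            yield COMMANDS[winner]
        else:
            yield COMMANDS["stop" if "stop" in COMMANDS else "neutral"] \
                if False else COMMANDS["neutral"]

stream = ["push", "push", "neutral", "push", "push", "pull", "pull",
          "pull", "pull", "pull"]
print(list(command_stream(stream)))
# ['stop', 'stop', 'stop', 'stop', 'forward', 'forward', 'stop',
#  'backward', 'backward', 'backward']
```

The deliberate latency of the vote trades responsiveness for safety, a common choice when single misclassified EEG frames must not jerk the robot.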
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Basuki, I.; Asto, B. I. G. P.; Anifah, L.
2018-04-01
The development of robotics in Indonesia has been very encouraging, the barometer being the success of the Indonesian Robot Contest. The focus of the research is the production of a teaching module, the planning of the mechanical design, a control system based on microprocessor technology, and the maneuverability of the robot. Contextual Teaching and Learning (CTL) is a learning strategy in which the teacher brings the real world into the classroom and encourages students to make connections between the knowledge they possess and its application in everyday life. The development model used in this research is the 4-D model, which consists of four stages: Define, Design, Develop, and Disseminate. The research applied a development research design with the aim of producing a learning tool, in the form of smart educational robot modules and a kit based on Contextual Teaching and Learning, at the Department of Electrical Engineering to improve the skills of electrical engineering students. Socialization questionnaires showed that the competencies of students majoring in electrical engineering are currently limited to conventional machines. The average validator assessment was 3.34, which falls in the good category. The modules developed are expected to eventually produce an intelligent robot tool for teaching.
Duffy, Rebecca M; Feinberg, Adam W
2014-01-01
Skeletal muscle is a scalable actuator system used throughout nature from the millimeter to meter length scales and over a wide range of frequencies and force regimes. This adaptability has spurred interest in using engineered skeletal muscle to power soft robotics devices and in biotechnology and medical applications. However, the challenges to doing this are similar to those facing the tissue engineering and regenerative medicine fields; specifically, how do we translate our understanding of myogenesis in vivo to the engineering of muscle constructs in vitro to achieve functional integration with devices? To do this, researchers are developing a number of ways to engineer the cellular microenvironment to guide skeletal muscle tissue formation. This includes understanding the role of substrate stiffness and the mechanical environment, engineering the spatial organization of biochemical and physical cues to guide muscle alignment, and developing bioreactors for mechanical and electrical conditioning. Examples of engineered skeletal muscle that can potentially be used in soft robotics include 2D cantilever-based skeletal muscle actuators and 3D skeletal muscle tissues engineered using scaffolds or directed self-organization. Integration into devices has led to basic muscle-powered devices such as grippers and pumps as well as more sophisticated muscle-powered soft robots that walk and swim. Looking forward, current and future challenges include identifying the best source of muscle precursor cells to expand and differentiate into myotubes, replacing cardiomyocytes with skeletal muscle tissue as the bio-actuator of choice for soft robots, and vascularization and innervation to enable control and nourishment of larger muscle tissue constructs.
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
2014-03-14
CAPE CANAVERAL, Fla. – Students from Hagerty High School in Oviedo, Fla., participants in FIRST Robotics, show off their robots' capabilities at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
2014-03-14
CAPE CANAVERAL, Fla. – A child gets an up-close look at Charli, an autonomous walking robot developed by Virginia Tech Robotics, during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
NASA Astrophysics Data System (ADS)
Kryuchkov, B. I.; Usov, V. M.; Chertopolokhov, V. A.; Ronzhin, A. L.; Karpov, A. A.
2017-05-01
Extravehicular activity (EVA) on the lunar surface, necessary for the future exploration of the Moon, involves extensive use of robots. One factor in safe EVA is proper interaction between cosmonauts and robots in extreme environments. This requires a simple and natural man-machine interface, e.g. a multimodal contactless interface based on recognition of gestures and the cosmonaut's poses. When travelling in the "Follow Me" (master/slave) mode, a robot uses onboard tools to track the cosmonaut's position and movements, and builds its itinerary on the basis of these data. Interaction in the cosmonaut-robot system on the lunar surface differs significantly from that on the Earth's surface. For example, a person dressed in a space suit has limited fine motor skills. In addition, EVA is quite tiring for cosmonauts, and a tired human performs movements less accurately and makes mistakes more often. All this leads to new requirements for the convenience of a man-machine interface designed for EVA. To improve the reliability and stability of human-robot communication, it is necessary to provide options for duplicating commands at each task stage and for gesture recognition. New tools and techniques for space missions must be examined first in laboratory conditions, and then in field tests (proof tests at the site of application). The article analyzes methods for detecting and tracking the movements, and recognizing the gestures, of a cosmonaut during EVA, which can be used in the design of a human-machine interface. A scenario for testing these methods by constructing a virtual environment simulating EVA on the lunar surface is proposed. The simulation involves environment visualization and modeling of the use of the robot's "vision" to track a moving cosmonaut dressed in a spacesuit.
Analyzing Robotic Kinematics Via Computed Simulations
NASA Technical Reports Server (NTRS)
Carnahan, Timothy M.
1992-01-01
Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.
Supersmart Robots: The Next Generation of Robots Has Evolutionary Capabilities
ERIC Educational Resources Information Center
Simkins, Michael
2008-01-01
Robots that can learn new behaviors. Robots that can reproduce themselves. Science fiction? Not anymore. Roboticists at Cornell's Computational Synthesis Lab have developed just such engineered creatures that offer interesting implications for education. The team, headed by Hod Lipson, was intrigued by the question, "How can you get robots to be…
2017 Robotic Mining Competition
2017-05-24
Team members from the New York University Tandon School of Engineering transport their robot to the mining arena during NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. are using their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
NASA Astrophysics Data System (ADS)
See, Swee Lan; Tan, Mitchell; Looi, Qin En
This paper presents findings from a descriptive study of social gaming. A video-enhanced diary method was used to understand the user experience in social gaming. From this experiment, we found that natural human behavior and gamers' decision-making processes can be elicited and examined during human-computer interaction. This is new information that we should consider, as it can help us build better human-computer interfaces and human-robotic interfaces in the future.
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand
2009-01-01
The R4SA GUI mentioned in the immediately preceding article is a user-friendly interface for controlling one or more robots. This GUI makes it possible to perform meaningful real-time field experiments and research in robotics at an unmatched level of fidelity, within minutes of setup. It provides such powerful graphing modes as that of a digitizing oscilloscope that displays up to 250 variables at rates between 1 and 200 Hz. This GUI can be configured as multiple intuitive interfaces for acquisition of data, command, and control, to enable rapid testing of subsystems or an entire robot system while simultaneously performing analysis of data. The R4SA software establishes an intuitive component-based design environment that can be easily reconfigured for any robotic platform by creating or editing setup configuration files. The R4SA GUI enables event-driven and conditional sequencing similar to that of Mars Exploration Rover (MER) operations. It has been certified as part of the MER ground support equipment and, therefore, is allowed to be utilized in conjunction with MER flight hardware. The R4SA GUI could also be adapted for use in embedded computing systems, other than that of the MER, for commanding and real-time analysis of data.
Automation and robotics for the Space Exploration Initiative: Results from Project Outreach
NASA Technical Reports Server (NTRS)
Gonzales, D.; Criswell, D.; Heer, E.
1991-01-01
A total of 52 submissions were received in the Automation and Robotics (A&R) area during Project Outreach. About half of the submissions (24) contained concepts that were judged to have high utility for the Space Exploration Initiative (SEI) and were analyzed further by the robotics panel. These 24 submissions are analyzed here. Three types of robots were proposed in the high scoring submissions: structured task robots (STRs), teleoperated robots (TORs), and surface exploration robots. Several advanced TOR control interface technologies were proposed in the submissions. Many A&R concepts or potential standards were presented or alluded to by the submitters, but few specific technologies or systems were suggested.
The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.
Arnold, Thomas; Scheutz, Matthias
2017-06-01
Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we propose to surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.
NASA Technical Reports Server (NTRS)
Brewer, W. V.; Rasis, E. P.; Shih, H. R.
1993-01-01
Results from NASA/HBCU Grant No. NAG-1-1125 are summarized, including designs developed for model fabrication, drafted exploratory concepts, the interfacing of a computer with the robot and end-effector, and capability enhancements.
Implementation of an i.v.-compounding robot in a hospital-based cancer center pharmacy.
Yaniv, Angela W; Knoer, Scott J
2013-11-15
The implementation of a robotic device for compounding patient-specific chemotherapy doses is described, including a review of data on the robot's performance over a 13-month period. The automated system prepares individualized i.v. chemotherapy doses in a variety of infusion bags and syringes; more than 50 drugs are validated for use in the machine. The robot is programmed to recognize the physical parameters of syringes and vials and uses photographic identification, barcode identification, and gravimetric measurements to ensure that the correct ingredients are compounded and the final dose is accurate. The implementation timeline, including site preparation, logistics planning, installation, calibration, staff training, development of a pharmacy information system (PIS) interface, and validation by the state board of pharmacy, was about 10 months. In its first 13 months of operation, the robot was used to prepare 7384 medication doses; 85 doses (1.2%) found to be outside the desired accuracy range (±4%) were manually modified by pharmacy staff. Ongoing system monitoring has identified mechanical and materials-related problems including vial-recognition failures (in many instances, these issues were resolved by the system operator and robotic compounding proceeded successfully), interface issues affecting robot-PIS communication, and human errors such as the loading of an incorrect vial or bag into the machine. Through staff training, information technology improvements, and workflow adjustments, the robot's throughput has been steadily improved. An i.v.-compounding robot was successfully implemented in a cancer center pharmacy. The robot performs compounding tasks safely and accurately and has been integrated into the pharmacy's workflow.
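As a hedged illustration of the ±4% gravimetric accuracy check described above (the function and variable names are hypothetical, and the direct mass-to-dose mapping is a simplifying assumption, not the vendor's actual algorithm):

```python
# Sketch of the gravimetric dose check: the robot weighs the final
# preparation and flags doses outside the +/-4% accuracy range.
DOSE_TOLERANCE = 0.04  # +/-4% range cited in the abstract

def check_dose(expected_mg: float, measured_mg: float) -> bool:
    """Return True if the gravimetrically measured dose is within tolerance."""
    # A real system would also account for container tare weight and
    # drug-specific densities; we assume a 1:1 mass-to-dose mapping here.
    deviation = abs(measured_mg - expected_mg) / expected_mg
    return deviation <= DOSE_TOLERANCE

# Example: a 500 mg dose weighed at 478 mg deviates by 4.4% and would be
# routed to pharmacy staff for manual modification.
print(check_dose(500.0, 478.0))  # False
```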
Some Applications of Gröbner Bases in Robotics and Engineering
NASA Astrophysics Data System (ADS)
Abłamowicz, Rafał
Gröbner bases in polynomial rings have numerous applications in geometry, applied mathematics, and engineering. We show a few applications of Gröbner bases in robotics, formulated in the language of Clifford algebras, and in engineering to the theory of curves, including Fermat and Bézier cubics, and interpolation functions used in finite element theory.
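As a brief illustration of the kind of computation involved (not an example from the paper), the inverse kinematics of a planar two-link arm with unit link lengths can be solved by computing a lexicographic Gröbner basis with SymPy. Joint angles are encoded algebraically as pairs (c_i, s_i) with c_i^2 + s_i^2 = 1, turning the trigonometric equations into polynomials:

```python
from sympy import symbols, groebner

c1, s1, c2, s2 = symbols('c1 s1 c2 s2')
px, py = 1, 1  # desired end-effector position (an arbitrary reachable example)

# Forward kinematics of the two-link arm, written as polynomial equations:
#   x = c1 + (c1*c2 - s1*s2),  y = s1 + (s1*c2 + c1*s2)
eqs = [
    c1 + c1*c2 - s1*s2 - px,
    s1 + s1*c2 + c1*s2 - py,
    c1**2 + s1**2 - 1,
    c2**2 + s2**2 - 1,
]

# A lexicographic Groebner basis triangularizes the system, so the solutions
# can be read off by back-substitution, much like Gaussian elimination.
G = groebner(eqs, c1, s1, c2, s2, order='lex')
for g in G:
    print(g)
```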
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
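As a schematic sketch of the sense-move-repeat behavior (the synthetic hazard field and the greedy move rule below are assumptions for illustration, not the patented method):

```python
import random

def sense_hazard(pos):
    """Stand-in for a real hazard sensor (radiation, gas, etc.)."""
    x, y = pos
    return max(0.0, 10.0 - ((x - 5) ** 2 + (y - 5) ** 2) ** 0.5)  # synthetic field

def survey(start, steps=20):
    hazard_map = {}  # location -> measured hazard intensity
    pos = start
    for _ in range(steps):
        hazard_map[pos] = sense_hazard(pos)
        # Candidate moves: 4-connected neighbors of the current cell.
        candidates = [(pos[0] + dx, pos[1] + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        unvisited = [c for c in candidates if c not in hazard_map]
        if not unvisited:
            break
        # Greedy rule for the sketch: move toward lower expected hazard,
        # breaking ties randomly; a real survey would balance coverage too.
        pos = min(unvisited, key=lambda c: (sense_hazard(c), random.random()))
    return hazard_map

levels = survey((0, 0))
print(sorted(levels.items())[:5])
```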
Design of the arm-wrestling robot's force acquisition system based on Qt
NASA Astrophysics Data System (ADS)
Huo, Zhixiang; Chen, Feng; Wang, Yongtao
2017-03-01
As a robot combining entertainment with medical rehabilitation, the arm-wrestling robot is of great research significance. In order to collect the arm-wrestling robot's force signals, the design and implementation of the arm-wrestling robot's force acquisition system is introduced in this paper. The system is based on an MP4221 data acquisition card and is programmed in Qt. It runs successfully, collecting the analog signals on a PC. The interface of the system is simple and its real-time performance is good. Test results show the feasibility of the approach for the arm-wrestling robot.
Robotics--The New Silent Majority: Engineering Robot Applications and Education.
ERIC Educational Resources Information Center
Kimbler, D. L.
1984-01-01
The impact of robotics in education is discussed in terms of academic assistance to industry in robotics as well as academic problems in handling the demands put upon it. Some potential solutions that can have lasting impact on educational systems are proposed. (JN)
2014-03-14
CAPE CANAVERAL, Fla. – Students gather to watch as a DARwin-OP miniature humanoid robot from Virginia Tech Robotics demonstrates its soccer abilities at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torres, P.; Luque de Castro, M.D.
1996-12-31
A fully automated method for the determination of organochlorine pesticides in vegetables is proposed. The overall system acts as an "analytical black box" because a robotic station performs the preliminary operations, from weighing to capping of the leached analytes and placement in the autosampler of an automated gas chromatograph with electron capture detection. The method has been applied to the determination of lindane, heptachlor, captan, chlordane and methoxychlor in tea, marjoram, cinnamon, pennyroyal, and mint, with good results in most cases. A gas chromatograph has been interfaced to a robotic station for the determination of pesticides in vegetables. 15 refs., 4 figs., 2 tabs.
Bakkum, Douglas J.; Gamblen, Philip M.; Ben-Ary, Guy; Chao, Zenas C.; Potter, Steve M.
2007-01-01
Here, we and others describe an unusual neurorobotic project, a merging of art and science called MEART, the semi-living artist. We built a pneumatically actuated robotic arm to create drawings, as controlled by a living network of neurons from rat cortex grown on a multi-electrode array (MEA). Such embodied cultured networks formed a real-time closed-loop system which could now behave and receive electrical stimulation as feedback on its behavior. We used MEART and simulated embodiments, or animats, to study the network mechanisms that produce adaptive, goal-directed behavior. This approach to neural interfacing will help instruct the design of other hybrid neural-robotic systems we call hybrots. The interfacing technologies and algorithms developed have potential applications in responsive deep brain stimulation systems and for motor prosthetics using sensory components. In a broader context, MEART educates the public about neuroscience, neural interfaces, and robotics. It has paved the way for critical discussions on the future of bio-art and of biotechnology. PMID:18958276
Graphical user interface for a robotic workstation in a surgical environment.
Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A
2016-08-01
Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motions endangering the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the workflow of the surgeon, providing the ability to change control parameters live in the system as well as the ability to connect additional ancillary systems is necessary. Additionally the surgeon should always be able to get an overview over the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and the implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.
2014-03-14
CAPE CANAVERAL, Fla. – Andrew Nick of Kennedy Space Center's Swamp Works shows off RASSOR, a robotic miner, at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
Case Studies of a Robot-Based Game to Shape Interests and Hone Proportional Reasoning Skills
ERIC Educational Resources Information Center
Alfieri, Louis; Higashi, Ross; Shoop, Robin; Schunn, Christian D.
2015-01-01
Background: Robot-math is a term used to describe mathematics instruction centered on engineering, particularly robotics. This type of instruction seeks first to make the mathematics skills useful for robotics-centered challenges, and then to help students extend (transfer) those skills. A robot-math intervention was designed to target the…
Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory
ERIC Educational Resources Information Center
dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.
2017-01-01
Remote robotics laboratories have been successfully used for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…
NASA Astrophysics Data System (ADS)
Zhao, Ming-fu; Hu, Xin-Yu; Shao, Yun; Luo, Bin-bin; Wang, Xin
2008-10-01
This article analyzes the football robots currently in common use in China, with the aim of improving the capability of the football robot hardware platform, and presents the design of a football robot based on a DSP core controller combined with a fuzzy-PID control algorithm. Experiments showed that, owing to the advantages of the DSP, such as fast operation, a variety of interfaces, and low power dissipation, the design greatly improves the football robot's movement performance, control precision, and real-time performance.
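The paper gives no code, so purely as a hedged sketch of one common fuzzy-PID variant (the membership function, gain ranges, and time step below are invented for the example): the degree to which the error is "large" interpolates the PID gains, so control is aggressive far from the setpoint and gentle near it.

```python
def membership(e_abs, small=0.5, large=2.0):
    """Degree (0..1) to which |error| is 'large', with a linear ramp."""
    if e_abs <= small:
        return 0.0
    if e_abs >= large:
        return 1.0
    return (e_abs - small) / (large - small)

class FuzzyPID:
    def __init__(self, kp=(1.0, 3.0), ki=(0.2, 0.05), kd=(0.05, 0.2), dt=0.01):
        # Each gain has a ('small error', 'large error') pair of values.
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        mu = membership(abs(error))  # 0 = "small error", 1 = "large error"
        # Interpolate each gain between its small- and large-error value.
        kp = self.kp[0] + mu * (self.kp[1] - self.kp[0])
        ki = self.ki[0] + mu * (self.ki[1] - self.ki[0])
        kd = self.kd[0] + mu * (self.kd[1] - self.kd[0])
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

pid = FuzzyPID()
print(pid.update(setpoint=1.0, measured=0.0))  # large error -> high gains
```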
Online Learning Techniques for Improving Robot Navigation in Unfamiliar Domains
2010-12-01
Chan, Joshua L; Mazilu, Dumitru; Miller, Justin G; Hunt, Timothy; Horvath, Keith A; Li, Ming
2016-10-01
Real-time magnetic resonance imaging (rtMRI) guidance provides significant advantages during transcatheter aortic valve replacement (TAVR) as it provides superior real-time visualization and accurate device delivery tracking. However, performing a TAVR within an MRI scanner remains difficult due to a constrained procedural environment. To address these concerns, a magnetic resonance (MR)-compatible robotic system to assist in TAVR deployments was developed. This study evaluates the technical design and interface considerations of an MR-compatible robotic-assisted TAVR system with the purpose of demonstrating that such a system can be developed and executed safely and precisely in a preclinical model. An MR-compatible robotic surgical assistant system was built for TAVR deployment. This system integrates a 5-degree-of-freedom (DoF) robotic arm with a 3-DoF robotic valve delivery module. A user interface system was designed for procedural planning and real-time intraoperative manipulation of the robot. The robotic device was constructed of plastic materials, pneumatic actuators, and fiber-optical encoders. The mechanical profile and MR compatibility of the robotic system were evaluated. The system-level error based on a phantom model was 1.14 ± 0.33 mm. A self-expanding prosthesis was successfully deployed in eight Yorkshire swine under rtMRI guidance. Post-deployment imaging and necropsy confirmed placement of the stent within 3 mm of the aortic valve annulus. These phantom and in vivo studies demonstrate the feasibility and advantages of robotic-assisted TAVR under rtMRI guidance. This robotic system increases the precision of valve deployments, diminishes environmental constraints, and improves the overall success of TAVR.
SLAM algorithm applied to robotics assistance for navigation in unknown environments
2010-01-01
Background: The combination of robotic tools with assistance technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms, or learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas the navigation of the mobile robot inside an environment is commanded by a Muscle-Computer Interface (MCI).

Methods: A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented. A low-level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents.

Results: The entire system was tested with seven volunteers: three elderly subjects, two below-elbow amputees and two young normally limbed subjects. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface.

Conclusions: The integration of a highly demanding processing algorithm (SLAM) with an MCI, and the real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model as a motorized wheelchair; this advantage can be exploited for wheelchair autonomous navigation. PMID:20163735
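As a hedged illustration of the EKF machinery involved (simplified to a single point landmark with a range-bearing observation, rather than the paper's line and corner features; all noise values below are invented), one sequential EKF-SLAM measurement update looks like this:

```python
import numpy as np

def ekf_update(x, P, z, R):
    """One EKF update. State x = [xr, yr, theta, xl, yl] (robot pose + landmark)."""
    xr, yr, th, xl, yl = x
    dx, dy = xl - xr, yl - yr
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # Predicted measurement: range and bearing to the landmark.
    z_hat = np.array([r, np.arctan2(dy, dx) - th])
    # Jacobian of the measurement with respect to the state.
    H = np.array([
        [-dx / r, -dy / r,  0,  dx / r, dy / r],
        [ dy / q, -dx / q, -1, -dy / q, dx / q],
    ])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])            # robot pose + landmark estimate
P = np.diag([0.1, 0.1, 0.05, 0.5, 0.5])
z = np.array([2.3, np.arctan2(1.0, 2.0) + 0.05])   # noisy range/bearing reading
R = np.diag([0.05, 0.01])
x, P = ekf_update(x, P, z, R)
print(np.round(x, 3))
```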
ROBOSIM Modeling of NASA and DoD Robotic Concepts
NASA Technical Reports Server (NTRS)
Fernandez, Kenneth R.
2005-01-01
Dr. Fernandez will discuss using ROBOSIM to model a robotic minesweeper for DoD and to model NASA's use of the Shuttle robot arm to examine shuttle tiles. He will show some of the actual robotic simulations that were developed, and provide some insight on solving the challenging issues involved with developing robotic simulations. Dr. Fernandez developed an earlier version of ROBOSIM with his Ph.D. advisor, Dr. George E. Cook, professor of Electrical Engineering at Vanderbilt University. After being honored as a NASA Administrator's Fellow, he chose Alabama A&M University as the location where he would do a year of teaching and a year of research, provided by the NASA Fellowship Grant. Dr. Trent Montgomery, Associate Dean of Engineering/Chairman Electrical Engineering Department, was his host for the NASA fellowship position at Alabama A&M. Mr. Lionel Macklin is a student at Alabama A&M University who developed the model of the minesweeper concept as his senior project.
System and method for seamless task-directed autonomy for robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis; Bruemmer, David; Few, Douglas
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
2014-07-16
Limbed robot RoboSimian was developed at NASA Jet Propulsion Laboratory, seen here with Brett Kennedy, supervisor of the JPL Robotic Vehicles and Manipulators Group, and Chuck Bergh, a senior engineer in JPL Robotic Hardware Systems Group.
RAFCON: A Graphical Tool for Engineering Complex, Robotic Tasks
2016-10-09
Robotic tasks are becoming increasingly complex, and with them so are the robotic systems. This requires new tools to manage this complexity and to...execution of robotic tasks, called RAFCON. These tasks are described in hierarchical state machines supporting concurrency. A formal notation of this concept
Exploring TeleRobotics: A Radio-Controlled Robot
ERIC Educational Resources Information Center
Deal, Walter F., III; Hsiung, Steve C.
2007-01-01
Robotics is a rich and exciting multidisciplinary area to study and learn about electronics and control technology. The interest in robotic devices and systems provides the technology teacher with an excellent opportunity to make many concrete connections between electronics, control technology, and computers and science, engineering, and…
A review on the mechanical design elements of ankle rehabilitation robot.
Khalid, Yusuf M; Gouwanda, Darwin; Parasuraman, Subramanian
2015-06-01
Ankle rehabilitation robots are developed to enhance ankle strength, flexibility and proprioception after injury and to promote motor learning and ankle plasticity in patients with drop foot. This article reviews the design elements that have been incorporated into the existing robots, for example, backdrivability, safety measures and type of actuation. It also discusses numerous challenges faced by engineers in designing this robot, including robot stability and its dynamic characteristics, universal evaluation criteria to assess end-user comfort, safety and training performance, and the scientific basis for the optimal rehabilitation strategies to improve ankle condition. This article can serve as a reference for designing robots with better stability and dynamic characteristics and good safety measures against internal and external events. It can also serve as a guideline for engineers to report their designs and findings. © IMechE 2015.
Robotic Mining Competition - Opening Ceremony
2018-05-15
On the second day of NASA's 9th Robotic Mining Competition, May 15, team members from the South Dakota School of Mines & Engineering work on their robot miner in the RobotPits in the Educator Resource Center at Kennedy Space Center Visitor Complex in Florida. Second from right is Kennedy Space Center Director Bob Cabana. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Analysis of the position of robotic cell components and its impact on energy consumption by robot
NASA Astrophysics Data System (ADS)
Banas, W.; Gwiazda, A.; Monica, Z.; Sekala, A.; Foit, K.
2016-08-01
The location of elements in a robot cell is very important: it must provide reasonable access to the technological points. This is a basic condition, but it is worth considering shifting these elements according to other criteria as well. One of these criteria can be energy consumption. This is an economic parameter, and in most cases improving it also shortens the working time of the industrial robot. Most conventional mechanical systems do not need to consume power in standby mode, only when moving. A robot, because of its construction, has its motors enabled and is ready to move even when it is not moving; in this case the servo speed is zero, and during such a stop the servos hum while holding position. At low speed the motor torque is reduced, which increases power consumption. Larger robots are fitted with brakes that mechanically hold the position when the robot does not move. When switched off, the robot engages the brakes and its servo drives remember the position. The brakes must be released when the robot is to move, and the drives then hold the position.
Bridging the gap between motor imagery and motor execution with a brain-robot interface.
Bauer, Robert; Fels, Meike; Vukelić, Mathias; Ziemann, Ulf; Gharabaghi, Alireza
2015-03-01
According to electrophysiological studies motor imagery and motor execution are associated with perturbations of brain oscillations over spatially similar cortical areas. By contrast, neuroimaging and lesion studies suggest that at least partially distinct cortical networks are involved in motor imagery and execution. We sought to further disentangle this relationship by studying the role of brain-robot interfaces in the context of motor imagery and motor execution networks. Twenty right-handed subjects performed several behavioral tasks as indicators for imagery and execution of movements of the left hand, i.e. kinesthetic imagery, visual imagery, visuomotor integration and tonic contraction. In addition, subjects performed motor imagery supported by haptic/proprioceptive feedback from a brain-robot-interface. Principal component analysis was applied to assess the relationship of these indicators. The respective cortical resting state networks in the α-range were investigated by electroencephalography using the phase slope index. We detected two distinct abilities and cortical networks underlying motor control: a motor imagery network connecting the left parietal and motor areas with the right prefrontal cortex and a motor execution network characterized by transmission from the left to right motor areas. We found that a brain-robot-interface might offer a way to bridge the gap between these networks, opening thereby a backdoor to the motor execution system. This knowledge might promote patient screening and may lead to novel treatment strategies, e.g. for the rehabilitation of hemiparesis after stroke. Copyright © 2014 Elsevier Inc. All rights reserved.
2010-03-01
piece of tissue. Full Mobility Manipulator Robot: The primary challenge with the design of a full mobility robot is meeting the competing design... streamed through an embedded plug-in for VLC player using asf/wmv encoding with 200 ms buffering. A benchtop test of the remote user interface was... encountered in ensuring quality video is being made available to the surgeon. A significant challenge has been to consistently provide high quality video
2017 Robotic Mining Competition
2017-05-23
Team Raptor members from the University of North Dakota College of Engineering and Mines check their robot, named "Marsbot," in the RoboPit at NASA's 8th Annual Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their uniquely-designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Interface for Physics Simulation Engines
NASA Technical Reports Server (NTRS)
Damer, Bruce
2007-01-01
DSS-Prototyper is an open-source, real-time 3D virtual environment software that supports design simulation for the new Vision for Space Exploration (VSE). This is a simulation of NASA's proposed Robotic Lunar Exploration Program, second mission (RLEP2). It simulates the Lunar Surface Access Module (LSAM), which is designed to carry up to four astronauts to the lunar surface for durations of a week or longer. This simulation shows the virtual vehicle making approaches and landings on a variety of lunar terrains. The physics of the descent engine thrust vector, production of dust, and the dynamics of the suspension are all modeled in this set of simulations. The RLEP2 simulations are drivable (by keyboard or joystick) virtual rovers with controls for speed and motor torque, and can be articulated into higher or lower centers of gravity (depending on driving hazards) to enable drill placement. Gravity also can be set to lunar, terrestrial, or zero-g. This software has been used to support NASA's Marshall Space Flight Center in simulations of proposed vehicles for robotically exploring the lunar surface for water ice, and could be used to model all other aspects of the VSE from the Ares launch vehicles and Crew Exploration Vehicle (CEV) to the International Space Station (ISS). This simulator may be installed and operated on any Windows PC with an installed 3D graphics card.
Can Robots and Humans Get Along?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholtz, Jean
2007-06-01
Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.
Control Architecture for Robotic Agent Command and Sensing
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel
2008-01-01
Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
Building a Relationship between Robot Characteristics and Teleoperation User Interfaces.
Mortimer, Michael; Horan, Ben; Seyedmahmoudian, Mehdi
2017-03-14
The Robot Operating System (ROS) provides roboticists with a standardized and distributed framework for real-time communication between robotic systems using a microkernel environment. This paper looks at how ROS metadata, Unified Robot Description Format (URDF), Semantic Robot Description Format (SRDF), and its message description language, can be used to identify key robot characteristics to inform User Interface (UI) design for the teleoperation of heterogeneous robot teams. Logical relationships between UI components and robot characteristics are defined by a set of relationship rules created using relevant and available information including developer expertise and ROS metadata. This provides a significant opportunity to move towards a rule-driven approach for generating the designs of teleoperation UIs; in particular the reduction of the number of different UI configurations required to teleoperate each individual robot within a heterogeneous robot team. This approach is based on using an underlying rule set identifying robots that can be teleoperated using the same UI configuration due to having the same or similar robot characteristics. Aside from reducing the number of different UI configurations an operator needs to be familiar with, this approach also supports consistency in UI configurations when a teleoperator is periodically switching between different robots. To achieve this aim, a Matlab toolbox is developed providing users with the ability to define rules specifying the relationship between robot characteristics and UI components. Once rules are defined, selections that best describe the characteristics of the robot type within a particular heterogeneous robot team can be made. A main advantage of this approach is that rather than specifying discrete robots comprising the team, the user can specify characteristics of the team more generally allowing the system to deal with slight variations that may occur in the future. In fact, by using the defined relationship rules and characteristic selections, the toolbox can automatically identify a reduced set of UI configurations required to control possible robot team configurations, as opposed to the traditional ad-hoc approach to teleoperation UI design. In the results section, three test cases are presented to demonstrate how the selection of different robot characteristics builds a number of robot characteristic combinations, and how the relationship rules are used to determine a reduced set of required UI configurations needed to control each individual robot in the robot team.
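The paper's toolbox is implemented in Matlab; purely as an illustrative sketch of the underlying idea (the characteristics, rules, and robot names below are invented, not drawn from the paper), relationship rules map robot characteristics, as might be read from URDF/SRDF metadata, to required UI components, and robots whose component sets match can share one UI configuration:

```python
RULES = {  # characteristic -> UI components it requires (illustrative only)
    "differential_drive": {"joystick_2dof"},
    "pan_tilt_camera":    {"camera_view", "pan_tilt_slider"},
    "arm_6dof":           {"end_effector_widget", "joint_sliders"},
    "gripper":            {"gripper_toggle"},
}

def ui_components(characteristics):
    """Union of UI components required by a robot's characteristics."""
    needed = set()
    for c in characteristics:
        needed |= RULES.get(c, set())
    return frozenset(needed)

team = {
    "ugv_scout":   ["differential_drive", "pan_tilt_camera"],
    "ugv_relay":   ["differential_drive", "pan_tilt_camera"],
    "manipulator": ["arm_6dof", "gripper"],
}

# Group robots by identical UI requirements -> reduced set of configurations.
configs = {}
for robot, chars in team.items():
    configs.setdefault(ui_components(chars), []).append(robot)

for i, (components, robots) in enumerate(configs.items(), 1):
    print(f"UI config {i}: {sorted(components)} -> {robots}")
```

Here three robots collapse to two UI configurations, mirroring the reduction the paper describes.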
2014-03-14
CAPE CANAVERAL, Fla. – A miniature humanoid robot known as DARwin-OP, from Virginia Tech Robotics, plays soccer with a red tennis ball for a crowd of students at the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
The role of assistive robotics in the lives of persons with disability.
Brose, Steven W; Weber, Douglas J; Salatin, Ben A; Grindle, Garret G; Wang, Hongwu; Vazquez, Juan J; Cooper, Rory A
2010-06-01
Robotic assistive devices are used increasingly to improve the independence and quality of life of persons with disabilities. Devices as varied as robotic feeders, smart-powered wheelchairs, independent mobile robots, and socially assistive robots are becoming more clinically relevant. There is a growing importance for the rehabilitation professional to be aware of available systems and ongoing research efforts. The aim of this article is to describe the advances in assistive robotics that are relevant to professionals serving persons with disabilities. This review breaks down relevant advances into categories of Assistive Robotic Systems, User Interfaces and Control Systems, Sensory and Feedback Systems, and User Perspectives. An understanding of the direction that assistive robotics is taking is important for the clinician and researcher alike; this review is intended to address this need.
NASA Astrophysics Data System (ADS)
Ayres, R.; Miller, S.
1982-06-01
The characteristics, applications, and operational capabilities of currently available robots are examined. Designed to function at tasks of a repetitive, hazardous, or uncreative nature, robot appendages are controlled by microprocessors which permit some simple on-the-job decision-making, and have served for sample gathering on the Mars Viking lander. Critical developmental areas concern active sensors at the robot grappler-object interface, where sufficient data must be gathered for the central processor to which the robot is attached to conclude the state of completion and suitability of the workpiece. Although present robots must be programmed through every step of a particular industrial process, thus limiting each robot to specialized tasks, the potential for closed cells of batch-processing robot-run units is noted to be close to realization. Finally, consideration is given to methods for retraining the human workforce that robots replace.
A Preliminary Study Exploring the Use of Fictional Narrative in Robotics Activities
ERIC Educational Resources Information Center
Williams, Douglas; Ma, Yuxin; Prejean, Louise
2010-01-01
Educational robotics activities are gaining in popularity. Though some research data suggest that educational robotics can be an effective approach in teaching mathematics, science, and engineering, research is needed to generate the best practices and strategies for designing these learning environments. Existing robotics activities typically do…
ERIC Educational Resources Information Center
Kitts, Christopher; Quinn, Neil
2004-01-01
Santa Clara University's Robotic Systems Laboratory conducts an aggressive robotic development and operations program in which interdisciplinary teams of undergraduate students build and deploy a wide range of robotic systems, ranging from underwater vehicles to spacecraft. These year-long projects expose students to the breadth of and…
Robots as Language Learning Tools
ERIC Educational Resources Information Center
Collado, Ericka
2017-01-01
Robots are machines that resemble different forms, usually those of humans or animals, that can perform preprogrammed or autonomous tasks (Robot, n.d.). With the emergence of STEM programs, there has been a rise in the use of robots in educational settings. STEM programs are those where students study science, technology, engineering and…
Students Learn Programming Faster through Robotic Simulation
ERIC Educational Resources Information Center
Liu, Allison; Newsom, Jeff; Schunn, Chris; Shoop, Robin
2013-01-01
Schools everywhere are using robotics education to engage kids in applied science, technology, engineering, and mathematics (STEM) activities, but teaching programming can be challenging due to lack of resources. This article reports on using Robot Virtual Worlds (RVW) and curriculum available on the Internet to teach robot programming. It also…
Robot computer problem solving system
NASA Technical Reports Server (NTRS)
Merriam, E. W.; Becker, J. D.
1973-01-01
A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made on a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
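As a hedged illustration of the calibration step (not the authors' code; the linear gaze model and all numbers below are synthetic), an affine gaze-to-3D map can be fit by least squares to samples gathered while the user tracks the moving robot:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
targets = rng.uniform(-0.5, 0.5, size=(n, 3))  # known robot positions (m)

# Synthetic "gaze features" (e.g. binocular pupil coordinates), linearly
# related to the target plus noise; a simplifying assumption for this demo.
true_map = rng.normal(size=(3, 4))
gaze = targets @ true_map[:, :3].T + true_map[:, 3] + rng.normal(0, 0.01, (n, 3))

# Fit an affine map gaze -> 3D point by least squares.
A = np.hstack([gaze, np.ones((n, 1))])  # homogeneous features
coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)

pred = A @ coeffs
rmse = np.sqrt(((pred - targets) ** 2).mean(axis=0))
print("per-axis RMSE (m):", np.round(rmse, 4))
```

The real system gathers thousands of such samples along a Peano curve, which is what makes the fully automated calibration possible.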
Force-sensed interface for control and training space robot
NASA Astrophysics Data System (ADS)
Moiseev, O. S.; Sarsadskikh, A. S.; Povalyaev, N. D.; Gorbunov, V. I.; Kulakov, F. M.; Vasilev, V. V.
2018-05-01
A method of positional and force-torque control of robots is proposed. Prototypes of the system and the master handle have been created. Algorithms for bias estimation and gravity compensation of the force-torque sensor, and for force-torque trajectory correction, are described.
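A minimal sketch of what such bias estimation and gravity compensation can look like under the standard rigid-tool model (the mass, center of mass, and example readings below are assumptions, not values from the paper): subtract a constant bias and the tool's weight, rotated into the sensor frame, from each raw reading.

```python
import numpy as np

g_world = np.array([0.0, 0.0, -9.81])   # gravity in the world frame (m/s^2)
tool_mass = 0.8                          # kg, assumed known from identification
tool_com = np.array([0.02, 0.0, 0.05])   # center of mass in sensor frame (m)

def compensate(f_raw, t_raw, R_ws, f_bias, t_bias):
    """Remove bias and tool gravity from a raw force/torque reading.

    R_ws: 3x3 rotation of the sensor frame with respect to the world frame.
    """
    f_gravity = R_ws.T @ (tool_mass * g_world)  # tool weight in sensor frame
    t_gravity = np.cross(tool_com, f_gravity)   # induced torque about sensor
    f = f_raw - f_bias - f_gravity
    t = t_raw - t_bias - t_gravity
    return f, t

# Bias is typically estimated by averaging readings over several known poses
# where the external contact force is zero.
R = np.eye(3)                           # sensor aligned with the world frame
f_raw = np.array([0.1, 0.0, -7.75])     # example raw reading (N)
t_raw = np.array([0.0, 0.16, 0.0])      # example raw reading (N*m)
print(compensate(f_raw, t_raw, R, np.zeros(3), np.zeros(3)))
```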
Matching brain-machine interface performance to space applications.
Citi, Luca; Tonet, Oliver; Marinelli, Martina
2009-01-01
A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and a HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
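As a schematic rendering of the matching method (the classes and all latency/throughput ranges below are invented placeholders, not the paper's data), effective combinations can be found by intersecting performance ranges:

```python
interfaces = {  # (latency_s range, throughput_bits_per_s range), illustrative
    "noninvasive_BMI": ((0.5, 5.0), (0.1, 5.0)),
    "joystick":        ((0.05, 0.3), (5.0, 50.0)),
}
applications = {
    "rover_driving":      ((0.2, 5.0), (0.5, 10.0)),
    "robotic_arm_teleop": ((0.05, 0.5), (10.0, 100.0)),
}

def overlaps(a, b):
    """True if the closed intervals a and b intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# A combination is effective when both measures overlap.
for hmi, (lat_i, thr_i) in interfaces.items():
    for app, (lat_a, thr_a) in applications.items():
        if overlaps(lat_i, lat_a) and overlaps(thr_i, thr_a):
            print(f"{hmi} is a candidate for {app}")
```

With these placeholder numbers the BMI qualifies for rover driving but not for arm teleoperation, mirroring the abstract's conclusion that simpler devices suit current BMI performance.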
An assembly-type master-slave catheter and guidewire driving system for vascular intervention.
Cha, Hyo-Jeong; Yi, Byung-Ju; Won, Jong Yun
2017-01-01
Current vascular intervention inevitably exposes both the operator and the patient to a large amount of X-rays during the procedure. The purpose of this study is to propose a new catheter driving system that assists the operator by reducing X-ray exposure and providing a convenient user interface. For this, an assembly-type 4-degree-of-freedom master-slave system was designed and tested to verify its efficiency. First, current vascular intervention procedures were analyzed to develop a new robotic procedure that enables the use of conventional vascular intervention devices, such as catheters and guidewires, which are commercially available. The parts of the slave robot that contact the devices were designed to be easily assembled to and disassembled from the main body of the slave robot for sterilization. The master robot is compactly designed to conduct insertion and rotational motion and is able to switch from the guidewire driving mode to the catheter driving mode or vice versa. A phantom resembling the human arteries was developed, and the master-slave robotic system was tested using the phantom. The contact force of the guidewire tip according to the shape of the arteries was measured and reflected to the user through the master robot during the phantom experiment. This system can drastically reduce radiation exposure by replacing human effort with a robotic system in procedures with high radiation exposure. Other benefits of the proposed robot system are its low cost, achieved by employing currently available devices, and its easy human interface.
Stanford Aerospace Research Laboratory research overview
NASA Technical Reports Server (NTRS)
Ballhaus, W. L.; Alder, L. J.; Chen, V. W.; Dickson, W. C.; Ullman, M. A.
1993-01-01
Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
Mobile app for human-interaction with sitter robots
NASA Astrophysics Data System (ADS)
Das, Sumit Kumar; Sahu, Ankita; Popa, Dan O.
2017-05-01
Human environments are often unstructured and unpredictable, thus making the autonomous operation of robots in such environments very difficult. Despite many remaining challenges in perception, learning, and manipulation, more and more studies involving assistive robots have been carried out in recent years. In hospital environments, and in particular in patient rooms, there are well-established practices with respect to the type of furniture, patient services, and schedule of interventions. As a result, adding a robot into semi-structured hospital environments is an easier problem to tackle, with results that could have positive benefits for the quality of patient care and the help that robots can offer to nursing staff. When working in a healthcare facility, robots need to interact with patients and nurses through Human-Machine Interfaces (HMIs) that are intuitive to use; they should maintain awareness of their surroundings and offer safety guarantees for humans. While fully autonomous operation for robots is not yet technically feasible, direct teleoperation control of the robot would also be extremely cumbersome, as it requires expert user skills and levels of concentration not available to many patients. Therefore, in our current study we present a traded control scheme, in which the robot and the human each perform the tasks they are best suited for. The human-robot communication and control scheme is realized through a mobile tablet app that can be customized for robot sitters in hospital environments. The role of the mobile app is to augment the verbal commands given to a robot through natural speech, camera and other native interfaces, while providing failure mode recovery options for users. Our app can access video feed and sensor data from robots, assist the user with decision making during pick and place operations, monitor the user's health over time, and provide conversational dialogue during sitting sessions. In this paper, we present the software and hardware framework that enables a patient sitter HMI, and we include experimental results with a small number of users that demonstrate that the concept is sound and scalable.
Visual and tactile interfaces for bi-directional human robot communication
NASA Astrophysics Data System (ADS)
Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin
2013-05-01
Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMUs) enable classification of arm and hand gestures for communication with a robot without the line-of-sight required by computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, measure the classification accuracy of visual signal interfaces, and provide an integration example involving two robotic platforms.
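A toy sketch of the IMU-based gesture classification idea follows; the nearest-template scheme, feature vectors, and gesture names are illustrative assumptions rather than the classifier actually used in this research.

```python
# A toy sketch of gesture classification from IMU features, standing in for
# the paper's classifier (whose details are not given here). Feature vectors
# and gesture templates below are invented placeholders.
import numpy as np

TEMPLATES = {
    "halt":     np.array([0.9, 0.1, 0.0]),   # e.g. mean |accel| per axis
    "rally":    np.array([0.2, 0.8, 0.3]),
    "move_out": np.array([0.1, 0.2, 0.9]),
}

def classify(features: np.ndarray, threshold: float = 0.5) -> str:
    """Nearest-template classification with a reject option, so noisy
    motions that match nothing are not sent to the robot."""
    name, dist = min(((n, np.linalg.norm(features - t))
                      for n, t in TEMPLATES.items()), key=lambda x: x[1])
    return name if dist < threshold else "unknown"

print(classify(np.array([0.85, 0.15, 0.05])))  # halt
```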
Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan
2017-01-01
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training. PMID:28804712
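The pipeline can be sketched as follows, assuming an LDA classifier on synthetic fNIRS features; the study's actual classifier and features may differ, so this is only a schematic of the four-class-to-four-command mapping.

```python
# A schematic of the four-class pipeline: classify a window of fNIRS features
# into one of four motor-imagery classes, then map the class to a high-level
# navigation command. The classifier choice (LDA) and synthetic features are
# illustrative assumptions, not the study's exact method.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

CLASSES = ["left_hand", "right_hand", "left_foot", "right_foot"]
COMMANDS = {"left_hand": "turn_left", "right_hand": "turn_right",
            "left_foot": "move_forward", "right_foot": "move_backward"}

rng = np.random.default_rng(0)
# Synthetic training data: 40 trials x 8 channels of mean HbO change per class.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 8)) for i in range(4)])
y = np.repeat(np.arange(4), 40)

clf = LinearDiscriminantAnalysis().fit(X, y)

def decode(window: np.ndarray) -> str:
    return COMMANDS[CLASSES[int(clf.predict(window[None, :])[0])]]

print(decode(rng.normal(loc=2, scale=0.5, size=8)))  # likely "move_forward"
```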
Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng
2017-01-01
A brain-machine interface (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, it is still a complex task for BMI users to control the grasping and lifting of objects with the robotic arm, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments on object-manipulation tasks with obstacle avoidance in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. The experimental results obtained from eight subjects verify the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only). The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was significantly reduced with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving efficiency and reducing cognitive load during the grasping and lifting processes. PMID:29163123
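The division of labor in such a hybrid Gaze-BMI can be sketched as follows: gaze continuously selects where to act, while a discrete BMI event decides when to advance the grasp/lift sequence. The state names and trigger semantics are illustrative assumptions.

```python
# A conceptual sketch of the hybrid Gaze-BMI control split: gaze continuously
# selects *where* to act, while a discrete BMI trigger decides *when* to act.
# Phase names and trigger semantics are illustrative assumptions.

def gaze_bmi_step(gaze_xy, bmi_trigger: bool, state: dict):
    """One control step: update the target from gaze; on a BMI trigger,
    advance the grasp/lift sequence one phase."""
    state["target"] = gaze_xy                     # eye tracker output
    if bmi_trigger:                               # e.g. detected EEG event
        phases = ["approach", "grasp", "lift", "release"]
        i = phases.index(state["phase"])
        state["phase"] = phases[(i + 1) % len(phases)]
    return state

state = {"target": (0.0, 0.0), "phase": "approach"}
state = gaze_bmi_step((0.42, 0.17), bmi_trigger=True, state=state)
print(state)  # {'target': (0.42, 0.17), 'phase': 'grasp'}
```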
Wireless brain-machine interface using EEG and EOG: brain wave classification and robot control
NASA Astrophysics Data System (ADS)
Oh, Sechang; Kumar, Prashanth S.; Kwon, Hyeokjun; Varadan, Vijay K.
2012-04-01
A brain-machine interface (BMI) links a user's brain activity directly to an external device. It enables a person to control devices using only thought. Hence, it has gained significant interest in the design of assistive devices and systems for people with disabilities. In addition, BMI has also been proposed to replace humans with robots in the performance of dangerous tasks like explosives handling/defusing, hazardous materials handling, fire fighting, etc. There are mainly two types of BMI based on the measurement method of brain activity: invasive and non-invasive. Invasive BMI can provide pristine signals, but it is expensive and surgery may lead to undesirable side effects. Recent advances in non-invasive BMI have opened the possibility of generating robust control signals from noisy brain activity signals like EEG and EOG. A practical implementation of a non-invasive BMI such as robot control requires: acquisition of brain signals with a robust wearable unit, noise filtering and signal processing, identification and extraction of relevant brain wave features, and finally, an algorithm to determine control signals based on the wave features. In this work, we developed a wireless brain-machine interface with a small platform and established a BMI that can be used to control the movement of a robot by using the extracted features of the EEG and EOG signals. The system records and classifies EEG as alpha, beta, delta, and theta waves. The classified brain waves are then used to define the level of attention. The acceleration, deceleration, or stopping of the robot is controlled based on the attention level of the wearer. In addition, left and right movements of the eyeball control the direction of the robot.
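A minimal sketch of the band-power classification and attention-based speed control might look like this; the band edges are conventional, but the sampling rate and the attention formula are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of band-power classification of EEG into the four named
# bands, with an attention index used to scale robot speed. Band edges are
# standard; the attention formula is an illustrative assumption.
import numpy as np

FS = 256  # assumed sampling rate, Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray) -> dict:
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

def robot_speed(eeg: np.ndarray, v_max: float = 1.0) -> float:
    p = band_powers(eeg)
    # Higher beta relative to alpha+theta is read as higher attention here.
    attention = p["beta"] / (p["alpha"] + p["theta"] + 1e-12)
    return v_max * min(1.0, attention)

t = np.arange(FS) / FS
eeg = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 10 * t)
print(round(robot_speed(eeg), 2))  # near v_max: beta-dominant test signal
```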
Human-robot skills transfer interfaces for a flexible surgical robot.
Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G
2014-09-01
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
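The context-dependent reward-weighted refinement can be sketched roughly as below, with two invented candidate objectives whose weights depend on the task phase; the Gaussian exploration and exponential weighting are common choices assumed here, not necessarily the authors' exact algorithm.

```python
# A compact sketch of reward-weighted self-refinement with context-dependent
# candidate objectives, in the spirit described above; the two candidate
# rewards and the Gaussian exploration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def candidate_rewards(theta, phase):
    # Two hypothetical objectives; their relevance depends on the task phase.
    r_track = -np.sum((theta - 1.0) ** 2)   # e.g. follow the cut path
    r_effort = -np.sum(theta ** 2)          # e.g. minimize motion effort
    w = np.array([0.9, 0.1]) if phase == "cutting" else np.array([0.3, 0.7])
    return w @ np.array([r_track, r_effort])

def refine(theta, phase, n_rollouts=50, sigma=0.2, beta=5.0):
    """Sample perturbed policies, weight them by exp(beta * reward),
    and return the reward-weighted average (one refinement step)."""
    samples = theta + sigma * rng.standard_normal((n_rollouts, theta.size))
    rewards = np.array([candidate_rewards(s, phase) for s in samples])
    w = np.exp(beta * (rewards - rewards.max()))
    return (w[:, None] * samples).sum(0) / w.sum()

theta = np.zeros(3)
for _ in range(20):
    theta = refine(theta, phase="cutting")
print(theta.round(2))  # drifts toward the weighted optimum near [0.9 0.9 0.9]
```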
Brain-Computer Interfaces in Medicine
Shih, Jerry J.; Krusienski, Dean J.; Wolpaw, Jonathan R.
2012-01-01
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. PMID:22325364
Richer Connections to Robotics through Project Personalization
ERIC Educational Resources Information Center
Veltman, Melanie; Davidson, Valerie; Deyell, Bethany
2012-01-01
In this work, we describe youth outreach activities carried out under the Chair for Women in Science and Engineering for Ontario (CWSE-ON) program. Specifically, we outline our design and implementation of robotics workshops to introduce and engage middle and secondary school students in engineering and computer science. Toward the goal of…
An Engineering Mentor's Take on "FIRST" Robotics
ERIC Educational Resources Information Center
Jackson, Jim
2013-01-01
In this article, the author describes a program that he says has "made being smart cool." "FIRST" (For Inspiration and Recognition of Science and Technology) Robotics has made a significant contribution toward progress in advancing science, technology, engineering, and mathematics (STEM) courses and STEM careers with young people. "FIRST" Robotics…
JPL-20170926-TECHf-0001-Robot Descends into Alaska Moulin
2017-09-26
JPL engineer Andy Klesh lowers a robotic submersible into a moulin. Klesh and JPL's John Leichty used robots and probes to explore the Matanuska Glacier in Alaska this past July. Image Credit: NASA/JPL-Caltech
A Guide for Developing Human-Robot Interaction Experiments in the Robotic… (ARL-TR-7683, US Army Research Laboratory)
2016-05-01
[Abstract garbled in the source; recoverable fragments: "…Kunkler (2006) suggested that the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback)…", plus a cited reference: Davies B., "A review of robotics in surgery," Proceedings of the Institution of Mechanical Engineers, Part H: Journal…]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mann, R.C.; Weisbin, C.R.; Pin, F.G.
1989-01-01
This paper reviews ongoing and planned research with mobile autonomous robots at the Oak Ridge National Laboratory (ORNL), Center for Engineering Systems Advanced Research (CESAR). Specifically we report on results obtained with the robot HERMIES-IIB in navigation, intelligent sensing, learning, and on-board parallel computing in support of these functions. We briefly summarize an experiment with HERMIES-IIB that demonstrates the capability of smooth transitions between robot autonomy and tele-operation. This experiment results from collaboration among teams at the Universities of Florida, Michigan, Tennessee, and Texas; and ORNL in a program targeted at robotics for advanced nuclear power stations. We conclude by summarizing ongoing R&D with our new mobile robot HERMIES-III, which is equipped with a seven degree-of-freedom research manipulator arm. 12 refs., 4 figs.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
During final matches at the 1999 Southeastern Regional robotic competition at the KSC Visitor Complex, referees in opposite corners and student teams watch as two robots raise their pillow disks to a height of eight feet, one of the goals of the competition. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve the pillow disks from the floor, climb onto a platform (with flags), as well as raise the cache of pillows, maneuvered by student teams behind protective walls. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers by pairing engineers and corporations with student teams.
Robot Task Commander with Extensible Programming Environment
NASA Technical Reports Server (NTRS)
Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)
2014-01-01
A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
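A toy sketch of the block-and-connection idea follows: library blocks expose named input and output connections, and a task sequence wires them together before being dispatched; the block names and execution stub are hypothetical, not the patented implementation.

```python
# A toy sketch of the block-and-connection idea: library blocks with named
# input/output connections are wired into a task sequence. Block names and
# the execution stub are hypothetical illustrations.

class Block:
    def __init__(self, name, fn, inputs, outputs):
        self.name, self.fn, self.inputs, self.outputs = name, fn, inputs, outputs

    def run(self, data):
        # Read named inputs from the shared data, write named outputs back.
        return dict(zip(self.outputs, self.fn(*[data[k] for k in self.inputs])))

LIBRARY = {
    "move_to": Block("move_to", lambda pose: (f"moved:{pose}",),
                     inputs=["pose"], outputs=["status"]),
    "grasp":   Block("grasp", lambda status: (f"grasped after {status}",),
                     inputs=["status"], outputs=["result"]),
}

def run_task(sequence, data):
    """Execute blocks in order, wiring each block's outputs into the shared
    data dictionary that later blocks read their inputs from."""
    for name in sequence:
        data.update(LIBRARY[name].run(data))
    return data

print(run_task(["move_to", "grasp"], {"pose": (0.3, 0.1, 0.5)})["result"])
```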
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
Four robots vie for position on the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Student teams, shown behind protective walls, play defense by taking away competitors' pillows and generally harassing opposing machines. Two of the robots have lifted their caches of pillows above the field, a movement which earns them points. Along with the volunteer referees, at the edge of the playing field, judges at right watch the action. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis
2016-08-01
Reaching and grasping are two of the most affected functions after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation (FES) with robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface that detects the user's movement intentions to trigger the assistance. The platform has been tested in a single session with a stroke patient. The results show how the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.
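The Feedback Error Learning idea can be sketched as follows: a fixed feedback controller corrects tracking error online, and its output simultaneously trains a feedforward stimulation profile, so the required FES intensity is anticipated on later trials. The gains and the toy linear "arm" are illustrative assumptions.

```python
# A minimal sketch of Feedback Error Learning for adjusting FES intensity:
# a fixed feedback controller corrects the error online, while its output
# also trains a feedforward term, so stimulation anticipates the need over
# repeated trials. Gains and the linear "arm" are illustrative assumptions.
import numpy as np

class FELStimulator:
    def __init__(self, kp=2.0, lr=0.1, n_steps=100):
        self.kp, self.lr = kp, lr
        self.ff = np.zeros(n_steps)  # learned feedforward FES profile

    def trial(self, target, plant_gain=0.5):
        angle, errs = 0.0, []
        for t in range(len(self.ff)):
            err = target[t] - angle
            fb = self.kp * err                 # feedback FES component
            u = self.ff[t] + fb                # total stimulation intensity
            self.ff[t] += self.lr * fb         # FEL: feedback error trains ff
            angle = plant_gain * u             # toy static "arm" response
            errs.append(abs(err))
        return np.mean(errs)

fel = FELStimulator()
target = np.linspace(0, 1, 100)                # desired reach trajectory
for k in range(5):
    print(f"trial {k}: mean error {fel.trial(target):.3f}")  # shrinks over trials
```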
Investigating the Usefulness of Soldier Aids for Autonomous Unmanned Ground Vehicles, Part 2
2015-03-01
In the past, robot operation has been a high-cognitive… increase performance and reduce perceived workload. The aids were overlays displaying what an autonomous robot perceived in the environment and the… subsequent course of action planned by the robot. Eight active-duty US Army Soldiers completed 16 scenario missions using an operator interface…
Development of hardwares and computer interface for a two-degree-of-freedom robot
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.; Pooran, Farhad J.
1987-01-01
The research results obtained are reviewed. The robot actuator, the selection of the data acquisition system, and the design of the power amplifier are discussed, followed by the machine design of the robot manipulator and the integration of the developed hardware into the open-loop system. Current and future research work is also addressed.
A self-paced motor imagery based brain-computer interface for robotic wheelchair control.
Tsui, Chun Sing Louis; Gan, John Q; Hu, Huosheng
2011-10-01
This paper presents a simple self-paced motor imagery based brain-computer interface (BCI) to control a robotic wheelchair. An innovative control protocol is proposed to enable a 2-class self-paced BCI for wheelchair control, in which the user performs path planning and fully controls the wheelchair, except for automatic obstacle avoidance based on a laser range finder when necessary. In order for users to train their motor imagery control online safely and easily, simulated robot navigation in a specially designed environment was developed. This allowed users to practice motor imagery control with the core self-paced BCI system in a simulated scenario before controlling the wheelchair. The self-paced BCI can then be applied to control a real robotic wheelchair using a protocol similar to that used for controlling the simulated robot. Our emphasis is on allowing more potential users to use the BCI-controlled wheelchair with minimal training; a simple 2-class self-paced system is adequate with the novel control protocol, resulting in a better transition from offline training to online control. Experimental results have demonstrated the usefulness of the online practice under the simulated scenario, and the effectiveness of the proposed self-paced BCI for robotic wheelchair control.
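One way to sketch a 2-class self-paced protocol is below: classifier output is acted on only after it stays above a confidence threshold for a dwell period (so no-control periods issue nothing), and an obstacle check can override any command. The thresholds and command names are illustrative assumptions, not the paper's protocol.

```python
# A schematic of a 2-class self-paced protocol: the BCI output is ignored
# unless it stays above a confidence threshold for a dwell time (so "no
# control" periods do nothing), and a laser-scanner check can override any
# command. Thresholds and command meanings are illustrative assumptions.

def self_paced_step(p_left: float, state: dict,
                    obstacle_ahead: bool, thresh=0.8, dwell=5):
    """p_left is the classifier's confidence in the 'left' imagery class;
    1 - p_left is the confidence in 'right'."""
    if obstacle_ahead:
        state["dwell"] = 0
        return "stop"                      # automatic obstacle avoidance
    intent = "left" if p_left >= thresh else (
             "right" if p_left <= 1 - thresh else None)
    if intent is None:
        state["pending"], state["dwell"] = None, 0
    elif intent == state.get("pending"):
        state["dwell"] += 1
    else:
        state["pending"], state["dwell"] = intent, 1
    if intent and state["dwell"] >= dwell:
        state["dwell"] = 0
        return f"turn_{intent}"
    return "keep_going"                    # self-paced: no command issued

state = {"pending": None, "dwell": 0}
for p in [0.5, 0.9, 0.92, 0.95, 0.91, 0.93]:
    print(self_paced_step(p, state, obstacle_ahead=False))
# prints keep_going x5, then turn_left once the dwell criterion is met
```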
Development of a robotic device for facilitating learning by children who have severe disabilities.
Cook, Albert M; Meng, Max Q H; Gu, Jason J; Howery, Kathy
2002-09-01
This paper presents technical aspects of a robot manipulator developed to facilitate learning by young children who are generally unable to grasp objects or speak. The severity of these physical disabilities also limits assessment of their cognitive and language skills and abilities. The CRS robot manipulator was adapted for use by children who have disabilities. Our emphasis is on the technical control aspects of the development of an interface and communication environment between the child and the robot arm. The system is designed so that each child has user control and control procedures that are individually adapted. Control interfaces include large push buttons, keyboards, laser pointer, and head-controlled switches. Preliminary results have shown that young children who have severe disabilities can use the robotic arm system to complete functional play-related tasks. Developed software allows the child to accomplish a series of multistep tasks by activating one or more single switches. Through a single switch press the child can replay a series of preprogrammed movements that have a development sequence. Children using this system engaged in three-step sequential activities and were highly responsive to the robotic tasks. This was in marked contrast to other interventions using toys and computer games.
Towards Rehabilitation Robotics: Off-the-Shelf BCI Control of Anthropomorphic Robotic Arms.
Athanasiou, Alkinoos; Xygonakis, Ioannis; Pandria, Niki; Kartsidis, Panagiotis; Arfaras, George; Kavazidi, Kyriaki Rafailia; Foroglou, Nicolas; Astaras, Alexander; Bamidis, Panagiotis D
2017-01-01
Advances in neural interfaces have demonstrated remarkable results in the direction of replacing and restoring lost sensorimotor function in human patients. Noninvasive brain-computer interfaces (BCIs) are popular due to considerable advantages including simplicity, safety, and low cost, while recent advances aim at improving past technological and neurophysiological limitations. Taking into account the neurophysiological alterations of disabled individuals, investigating brain connectivity features for implementation of BCI control holds special importance. Off-the-shelf BCI systems are based on fast, reproducible detection of mental activity and can be implemented in neurorobotic applications. Moreover, social Human-Robot Interaction (HRI) is increasingly important in rehabilitation robotics development. In this paper, we present our progress and goals towards developing off-the-shelf BCI-controlled anthropomorphic robotic arms for assistive technologies and rehabilitation applications. We account for robotics development, BCI implementation, and qualitative assessment of HRI characteristics of the system. Furthermore, we present two illustrative experimental applications of the BCI-controlled arms, a study of motor imagery modalities on healthy individuals' BCI performance, and a pilot investigation on spinal cord injured patients' BCI control and brain connectivity. We discuss strengths and limitations of our design and propose further steps on development and neurophysiological study, including implementation of connectivity features as BCI modality. PMID:28948168
Space station automation and robotics study. Operator-systems interface
NASA Technical Reports Server (NTRS)
1984-01-01
This is the final report of a Space Station Automation and Robotics Planning Study, which was a joint project of the Boeing Aerospace Company, Boeing Commercial Airplane Company, and Boeing Computer Services Company. The study is in support of the Advanced Technology Advisory Committee established by NASA in accordance with a mandate by the U.S. Congress. Boeing support complements that provided to the NASA Contractor study team by four aerospace contractors, the Stanford Research Institute (SRI), and the California Space Institute. This study identifies automation and robotics (A&R) technologies that can be advanced by requirements levied by the Space Station Program. The methodology used in the study is to establish functional requirements for the operator system interface (OSI), identify the technologies needed to meet these requirements, and forecast the availability of these technologies. The OSI would perform path planning, tracking and control, object recognition, fault detection and correction, and plan modifications in connection with extravehicular (EV) robot operations.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
Students cheer their team during final matches at the 1999 Southeastern Regional robotic competition at the KSC Visitor Complex. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve pillow-like disks from the floor, climb onto a platform (with flags), as well as raise the cache of pillows, maneuvered by student teams behind protective walls. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers by pairing engineers and corporations with student teams.
Experiential Learning of Electronics Subject Matter in Middle School Robotics Courses
ERIC Educational Resources Information Center
Rihtaršic, David; Avsec, Stanislav; Kocijancic, Slavko
2016-01-01
The purpose of this paper is to investigate whether the experiential learning of electronics subject matter is effective in the middle school open learning of robotics. Electronics is often ignored in robotics courses. Since robotics courses are typically comprised of computer-related subjects, and mechanical and electrical engineering, these…
Dancing Robots: Integrating Art, Music, and Robotics in Singapore's Early Childhood Centers
ERIC Educational Resources Information Center
Sullivan, Amanda; Bers, Marina Umaschi
2018-01-01
In recent years, Singapore has increased its national emphasis on technology and engineering in early childhood education. Their newest initiative, the Playmaker Programme, has focused on teaching robotics and coding in preschool settings. Robotics offers a playful and collaborative way for children to engage with foundational technology and…
Creating Hybrid Learning Experiences in Robotics: Implications for Supporting Teaching and Learning
ERIC Educational Resources Information Center
Frerichs, Saundra Wever; Barker, Bradley; Morgan, Kathy; Patent-Nygren, Megan; Rezac, Micaela
2012-01-01
Geospatial and Robotics Technologies for the 21st Century (GEAR-Tech-21), teaches science, technology, engineering and mathematics (STEM) through robotics, global positioning systems (GPS), and geographic information systems (GIS) activities for youth in grades 5-8. Participants use a robotics kit, handheld GPS devices, and GIS technology to…
A Gradient Optimization Approach to Adaptive Multi-Robot Control
2009-09-01
…implemented for deploying a group of three flying robots with downward-facing cameras to monitor an environment on the ground. Thirdly, the multi-robot… theoretically proven, and implemented on multi-robot platforms. (Thesis supervisor: Daniela Rus, Professor of Electrical Engineering and Computer…) …often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul…
2014-03-14
CAPE CANAVERAL, Fla. – A visitor to the Robot Rocket Rally takes an up-close look at RASSOR, a robotic miner developed by NASA Kennedy Space Center's Swamp Works. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
2014-03-14
CAPE CANAVERAL, Fla. – Students observe as Otherlab shows off a life-size, inflatable robot from its "" program. The demonstration was one of several provided during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
Robotic Mining Competition - Awards Ceremony
2018-05-18
NASA's 9th Annual Robotic Mining Competition concludes with an awards ceremony May 18, 2018, at the Apollo/Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. The University of Alabama Team Astrobotics received first place for their Systems Engineering Paper. At left is retired NASA astronaut Jerry Ross. At right is Jonette Stecklein, lead systems engineering paper judge. More than 40 student teams from colleges and universities around the U.S. participated in the competition, May 14-18, by using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Robotic Mining Competition - Awards Ceremony
2018-05-18
NASA's 9th Annual Robotic Mining Competition concludes with an awards ceremony May 18, 2018, at the Apollo/Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. The team from The University of Akron received third place for their Systems Engineering Paper. At left is retired NASA astronaut Jerry Ross. At right is Jonette Stecklein, lead systems engineering paper judge. More than 40 student teams from colleges and universities around the U.S. participated in the competition, May 14-18, by using their mining robots to dig in a supersized sandbox filled with BP-1, or simulated lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
A Human Machine Interface for EVA
NASA Astrophysics Data System (ADS)
Hartmann, L.
EVA astronauts work in a challenging environment that includes a high rate of muscle fatigue, haptic and proprioception impairment, lack of dexterity and interaction with robotic equipment. Currently they are heavily dependent on support from on-board crew and ground station staff for information and robotics operation. They are limited to the operation of simple controls on the suit exterior and external robot controls that are difficult to operate because of the heavy gloves that are part of the EVA suit. A wearable human machine interface (HMI) inside the suit provides a powerful alternative for robot teleoperation, procedure checklist access, generic equipment operation via virtual control panels and general information retrieval and presentation. The HMI proposed here includes speech input and output, a simple 6 degree of freedom (dof) pointing device and a heads-up display (HUD). The essential characteristic of this interface is that it offers an alternative to the standard keyboard and mouse interface of a desktop computer. The astronaut's speech is used as input to command mode changes, execute arbitrary computer commands and generate text. The HMI can respond with speech also in order to confirm selections, provide status and feedback and present text output. A candidate 6 dof pointing device is Measurand's Shapetape, a flexible "tape" substrate to which is attached an optic fiber with embedded sensors. Measurement of the modulation of the light passing through the fiber can be used to compute the shape of the tape and, in particular, the position and orientation of the end of the Shapetape. It can be used to provide any kind of 3D geometric information including robot teleoperation control. The HUD can overlay graphical information onto the astronaut's visual field including robot joint torques, end effector configuration, procedure checklists and virtual control panels. With suitable tracking information about the position and orientation of the EVA suit, the overlaid graphical information can be registered with the external world. For example, information about an object can be positioned on or beside the object. This wearable HMI supports many applications during EVA including robot teleoperation, procedure checklist usage, operation of virtual control panels and general information or documentation retrieval and presentation. Whether the robot end effector is a mobile platform for the EVA astronaut or is an assistant to the astronaut in an assembly or repair task, the astronaut can control the robot via a direct manipulation interface. Embedded in the suit or the astronaut's clothing, Shapetape can measure the user's arm/hand position and orientation, which can be directly mapped into the workspace coordinate system of the robot. Motion of the user's hand can generate corresponding motion of the robot end effector in order to reposition the EVA platform or to manipulate objects in the robot's grasp. Speech input can be used to execute commands and mode changes without the astronaut having to withdraw from the teleoperation task. Speech output from the system can provide feedback without affecting the user's visual attention. The procedure checklist guiding the astronaut's detailed activities can be presented on the HUD and manipulated (e.g., move, scale, annotate, mark tasks as done, consult prerequisite tasks) by spoken command.
Virtual control panels for suit equipment, equipment being repaired or arbitrary equipment on the space station can be displayed on the HUD and can be operated by speech commands or by hand gestures. For example, an antenna being repaired could be pointed under the control of the EVA astronaut. Additionally, arbitrary computer activities such as information retrieval and presentation can be carried out using similar interface techniques. Considering the risks, expense and physical challenges of EVA work, it is appropriate that EVA astronauts have considerable support from station crew and ground station staff. Reducing their dependence on such personnel may, under many circumstances, improve performance and reduce risk. For example, the EVA astronaut is likely to have the best viewpoint at a robotic worksite. Direct access to the procedure checklist can help provide temporal context and continuity throughout an EVA. Access to station facilities through an HMI such as the one described here could be invaluable during an emergency or in a situation in which a fault occurs. The full paper will describe the HMI operation and applications in the EVA context in more detail and will describe current laboratory prototyping activities.
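The direct-manipulation mapping described can be sketched as a frame change plus motion scaling from the measured hand pose to an end-effector setpoint; the fixed transform, scale, and function names below are illustrative assumptions.

```python
# A minimal sketch of the direct-manipulation mapping: a measured hand
# position (e.g. from Shapetape) is re-expressed in the robot workspace frame
# and scaled, yielding an end-effector setpoint. The fixed transform and
# scale factor are illustrative assumptions.
import numpy as np

R_WS = np.eye(3)                  # assumed rotation: suit frame -> workspace frame
T_WS = np.array([0.5, 0.0, 0.2])  # assumed translation between the frames
SCALE = 2.0                       # motion scaling: small hand motion, larger reach

def hand_to_effector(hand_pos_suit: np.ndarray) -> np.ndarray:
    """Map a hand position in the suit frame to an end-effector setpoint."""
    return SCALE * (R_WS @ hand_pos_suit) + T_WS

print(hand_to_effector(np.array([0.1, 0.0, -0.05])))  # [0.7 0.  0.1]
```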
Basic Operational Robotics Instructional System
NASA Technical Reports Server (NTRS)
Todd, Brian Keith; Fischer, James; Falgout, Jane; Schweers, John
2013-01-01
The Basic Operational Robotics Instructional System (BORIS) is a six-degree-of-freedom rotational robotic manipulator system simulation used for training of fundamental robotics concepts, with in-line shoulder, offset elbow, and offset wrist. BORIS is used to provide generic robotics training to aerospace professionals including flight crews, flight controllers, and robotics instructors. It uses forward kinematic and inverse kinematic algorithms to simulate joint and end-effector motion, combined with a multibody dynamics model, moving-object contact model, and X-Windows based graphical user interfaces, coordinated in the Trick Simulation modeling environment. The motivation for development of BORIS was the need for a generic system for basic robotics training. Before BORIS, introductory robotics training was done with either the SRMS (Shuttle Remote Manipulator System) or SSRMS (Space Station Remote Manipulator System) simulations. The unique construction of each of these systems required some specialized training that distracted students from the ideas and goals of the basic robotics instruction.
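As a reduced illustration of the forward/inverse kinematic pairing such a trainer exercises, here is a two-link planar example; BORIS itself is a six-degree-of-freedom arm, so the link lengths and geometry below are not its actual model.

```python
# An illustrative two-link planar example of the forward/inverse kinematic
# pairing that a robotics trainer exercises; deliberately reduced from the
# 6-DOF case, with assumed link lengths.
import math

L1, L2 = 1.0, 0.8  # assumed link lengths

def fk(q1, q2):
    """End-effector position from joint angles (radians)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def ik(x, y, elbow_up=True):
    """Analytic inverse kinematics; raises if the point is unreachable."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if abs(c2) > 1:
        raise ValueError("target outside workspace")
    q2 = math.acos(c2) * (1 if elbow_up else -1)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

q = ik(1.2, 0.6)
print([round(v, 3) for v in fk(*q)])  # [1.2, 0.6] -- IK inverts FK
```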
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady state visually evoked potential (SSVEP), and event related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes if the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953
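The mode arbitration in such a hybrid BCI can be sketched as a simple router: SSVEP/ERD events drive navigation, and P300 selections are consulted only in recognition mode. Event names and mode transitions below are illustrative assumptions.

```python
# A schematic of mode arbitration in a hybrid BCI like the one described:
# SSVEP/ERD detections drive navigation, while P300 selections are consulted
# only in object-recognition mode. Event names are illustrative assumptions.

def hybrid_step(mode: str, ssvep: str | None, erd: bool, p300_choice: int | None):
    """Route whichever protocol is relevant to the current mode."""
    if mode == "navigate":
        if erd:
            return "navigate", "walk_forward"      # ERD as a 'go' signal
        if ssvep in ("left", "right"):
            return "navigate", f"turn_{ssvep}"
        return "navigate", "idle"
    if mode == "recognize":
        if p300_choice is not None:
            return "navigate", f"select_object_{p300_choice}"
        return "recognize", "flash_candidates"
    raise ValueError(mode)

print(hybrid_step("navigate", ssvep="left", erd=False, p300_choice=None))
print(hybrid_step("recognize", ssvep=None, erd=False, p300_choice=2))
```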
Sengül, Ali; van Elk, Michiel; Rognini, Giulio; Aspell, Jane Elizabeth; Bleuler, Hannes; Blanke, Olaf
2012-01-01
The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in the field of surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of the crossmodal congruency effects, comparable to changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered strongly with tactile stimuli that were connected with the hand via the tool, reflecting a remapping of peripersonal space. Such remapping was not only observed when the virtual-robotic tools were actively used (Experiment 1), but also when the tools were passively held (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality. We discuss our data with respect to learning and human factors in the field of surgical robotics and discuss the use of new technologies in the field of cognitive neuroscience. PMID:23227142
Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie
2003-01-01
A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.
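The agent roles described can be caricatured in a few lines: personal agents front people, comm agents front devices, and proxy agents relate platforms by relaying between them. The routing below is invented for illustration and is not the Brahms implementation.

```python
# A toy sketch of the agent roles described: personal agents front people,
# comm agents front devices, proxy agents relate platforms. The message
# routing below is invented for illustration, not the Brahms implementation.

class Agent:
    def __init__(self, name):
        self.name, self.inbox = name, []

    def receive(self, msg):
        self.inbox.append(msg)

class ProxyAgent(Agent):
    """Relates one platform to others by forwarding between their agents."""
    def __init__(self, name, peers):
        super().__init__(name)
        self.peers = peers

    def receive(self, msg):
        for peer in self.peers:
            peer.receive(f"{self.name} relayed: {msg}")

rover_comm = Agent("rover comm agent")          # device-facing agent
astronaut = Agent("astronaut personal agent")   # person-facing agent
habitat_proxy = ProxyAgent("habitat proxy", peers=[rover_comm, astronaut])

habitat_proxy.receive("EVA-1: rover, proceed to waypoint 3")
print(rover_comm.inbox[0])
print(astronaut.inbox[0])
```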
A study of space-rated connectors using a robot end-effector
NASA Technical Reports Server (NTRS)
Nguyen, Charles C.
1995-01-01
The main research activities have been directed toward the study of the Robot Operated Materials Processing System (ROMPS), developed at GSFC under a flight project to investigate commercially promising in-space material processes and to design reflyable robot automated systems to be used in the above processes for low-cost operations. The research activities can be divided into two phases. Phase 1 dealt with testing of ROMPS robot mechanical interfaces and compliant device using a Stewart Platform testbed and Phase 2 with computer simulation study of the ROMPS robot control system. This report provides a summary of the results obtained in Phase 1 and Phase 2.
RoboJockey: Designing an Entertainment Experience with Robots.
Yoshida, Shigeo; Shirokura, Takumi; Sugiura, Yuta; Sakamoto, Daisuke; Ono, Tetsuo; Inami, Masahiko; Igarashi, Takeo
2016-01-01
The RoboJockey entertainment system consists of a multitouch tabletop interface for multiuser collaboration. RoboJockey enables a user to choreograph a mobile robot or a humanoid robot by using a simple visual language. With RoboJockey, a user can coordinate the mobile robot's actions with a combination of back, forward, and rotating movements and coordinate the humanoid robot's actions with a combination of arm and leg movements. Every action is automatically performed to background music. RoboJockey was demonstrated to the public during two pilot studies, and the authors observed users' behavior. Here, they report the results of their observations and discuss the RoboJockey entertainment experience.
Mouraviev, Vladimir; Klein, Martina; Schommer, Eric; Thiel, David D; Samavedi, Srinivas; Kumar, Anup; Leveillee, Raymond J; Thomas, Raju; Pow-Sang, Julio M; Su, Li-Ming; Mui, Engy; Smith, Roger; Patel, Vipul
2016-03-01
In pursuit of improving the quality of residents' education, the Southeastern Section of the American Urological Association (SES AUA) hosts an annual robotic training course for its residents. The workshop involves performing a robotic live porcine nephrectomy as well as virtual reality robotic training modules. The aim of this study was to evaluate the workload levels of urology residents when performing a live porcine nephrectomy and the virtual reality robotic surgery training modules employed during this workshop. Twenty-one residents from 14 SES AUA programs participated in 2015. On the first day, residents were taught with didactic lectures by faculty. On the second day, trainees were divided into two groups. Half were asked to perform training modules on the Mimic da Vinci-Trainer (MdVT, Mimic Technologies, Inc., Seattle, WA, USA) for 4 h, while the other half performed nephrectomy procedures on a live porcine model using the da Vinci Si robot (Intuitive Surgical Inc., Sunnyvale, CA, USA). After the first 4 h the groups changed places for another 4-h session. All trainees were asked to complete the one-page NASA-TLX questionnaire following both the MdVT simulation and live animal model sessions. A significant interface-by-TLX interaction was observed, which was further analyzed to determine whether the scores of each of the six TLX scales varied across the two interfaces. The means of the TLX scores observed at the two interfaces were similar. The only significant difference was observed for frustration, which was significantly higher for the simulation than the animal model, t(20) = 4.12, p = 0.001. This could be due to trainees' familiarity with live anatomical structures over skill-set simulations, which remain a real challenge to novice surgeons. Another reason might be that the simulator provides performance metrics for specific performance traits as well as composite scores for entire exercises. Novice trainees experienced substantial mental workload while performing tasks on both the simulator and the live animal model during the robotics course. The NASA-TLX profiles demonstrated that the live animal model and the MdVT were similar in difficulty, as indicated by their comparable workload profiles.
Deploying the ODIS robot in Iraq and Afghanistan
NASA Astrophysics Data System (ADS)
Smuda, Bill; Schoenherr, Edward; Andrusz, Henry; Gerhart, Grant
2005-05-01
The wars in Iraq and Afghanistan have shown the importance of robotic technology as a force multiplier and a tool for moving soldiers out of harm's way. Situations on the ground make soldiers performing checkpoint operations easy targets for snipers and suicide bombers. Robotics technology reduces risk to soldiers and other personnel at checkpoints. Early user involvement in innovative and aggressive development and acquisition strategies is the key to moving robotic and associated technology into the hands of the user. This paper updates activity associated with rapid development of the Omni-Directional Inspection System (ODIS) robot for under-vehicle inspection and reports on our field experience with robotics in Iraq and Afghanistan. In February of 2004, two TARDEC engineers departed for a mission to Iraq and Afghanistan with ten ODIS robots. Six robots were deployed in the Green Zone in Baghdad. Two robots were deployed at Kandahar Army Airfield and two were deployed at Bagram Army Airfield in Afghanistan. The TARDEC engineers who performed this mission trained the soldiers and provided initial on-site support. They also trained Exponent employees assigned to the Rapid Equipping Force in ODIS repair. We will discuss our initial deployment, lessons learned and future plans.
Analysis on the workspace of palletizing robot based on AutoCAD
NASA Astrophysics Data System (ADS)
Li, Jin-quan; Zhang, Rui; Guan, Qi; Cui, Fang; Chen, Kuan
2017-10-01
In this paper, a four-degree-of-freedom articulated palletizing robot is the object of research. Based on an analysis of the overall configuration of the robot, a kinematic mathematical model is established by the D-H method to determine the workspace of the robot. To meet design and analysis needs, an AutoCAD-based 2D and 3D workspace simulation interface for the palletizing robot is developed using AutoCAD's secondary development technology and the AutoLISP language. Finally, using the AutoCAD plugin, the influence of each structural parameter on the shape and position of the workspace is analyzed by varying the parameters one at a time. This study lays a foundation for the design, control and planning of palletizing robots.
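As a rough illustration of the D-H workspace computation the abstract describes, the following Python sketch builds link transforms from a Denavit-Hartenberg table and samples random joint angles to trace out the reachable workspace as a point cloud. The D-H parameters and joint limits here are hypothetical placeholders, not the paper's robot.

    # Monte Carlo workspace sketch from standard D-H parameters (hypothetical arm).
    import numpy as np

    def dh_matrix(theta, d, a, alpha):
        """Standard Denavit-Hartenberg link transform."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    # Hypothetical 4-DOF arm: (d, a, alpha) per link; theta is the joint variable.
    LINKS = [(0.40, 0.10, np.pi / 2), (0.0, 0.60, 0.0),
             (0.0, 0.50, 0.0), (0.0, 0.10, 0.0)]
    LIMITS = [(-np.pi, np.pi), (-np.pi / 2, np.pi / 2),
              (-np.pi / 2, np.pi / 2), (-np.pi, np.pi)]

    def workspace_cloud(n=10000, seed=0):
        """Sample end-effector positions over the joint limits."""
        rng = np.random.default_rng(seed)
        pts = np.empty((n, 3))
        for i in range(n):
            T = np.eye(4)
            for (d, a, alpha), (lo, hi) in zip(LINKS, LIMITS):
                T = T @ dh_matrix(rng.uniform(lo, hi), d, a, alpha)
            pts[i] = T[:3, 3]  # translation column = end-effector position
        return pts

    print(workspace_cloud(5))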
The AGINAO Self-Programming Engine
NASA Astrophysics Data System (ADS)
Skaba, Wojciech
2013-01-01
The AGINAO is a project to create a human-level artificial general intelligence system (HL AGI) embodied in the Aldebaran Robotics' NAO humanoid robot. The dynamical and open-ended cognitive engine of the robot is represented by an embedded and multi-threaded control program that is self-crafted rather than hand-crafted, and is executed on a simulated Universal Turing Machine (UTM). The actual structure of the cognitive engine emerges as a result of placing the robot in a natural preschool-like environment and running a core start-up system that executes self-programming of the cognitive layer on top of the core layer. The data from the robot's sensory devices supply the training samples for the machine learning methods, while the commands sent to actuators enable testing hypotheses and obtaining feedback. The individual self-created subroutines are supposed to reflect the patterns and concepts of the real world, while the overall program structure reflects the spatial and temporal hierarchy of the world's dependencies. This paper focuses on the details of the self-programming approach, limiting the discussion of the applied cognitive architecture to a necessary minimum.
Application of historical mobility testing to sensor-based robotic performance
NASA Astrophysics Data System (ADS)
Willoughby, William E.; Jones, Randolph A.; Mason, George L.; Shoop, Sally A.; Lever, James H.
2006-05-01
The U.S. Army Engineer Research and Development Center (ERDC) has conducted on-/off-road experimental field testing with full-sized and scale-model military vehicles for more than fifty years. Some 4000 acres of local terrain are available for tailored field evaluations or verification/validation of future robotic designs in a variety of climatic regimes. Field testing and data collection procedures, as well as techniques for quantifying terrain in engineering terms, have been developed and refined into algorithms and models for predicting vehicle-terrain interactions and resulting forces or speeds of military-sized vehicles. Based on recent experiments with Matilda, Talon, and Pacbot, these predictive capabilities appear to be relevant to most robotic systems currently in development. Utilization of current testing capabilities with sensor-based vehicle drivers, or use of the procedures for terrain quantification from sensor data, would immediately apply some fifty years of historical knowledge to the development, refinement, and implementation of future robotic systems. Additionally, translation of sensor-collected terrain data into engineering terms would allow assessment of robotic performance prior to deployment of the actual system and ensure maximum system performance in the theater of operation.
VEVI: A Virtual Reality Tool For Robotic Planetary Explorations
NASA Technical Reports Server (NTRS)
Piguet, Laurent; Fong, Terry; Hine, Butler; Hontalas, Phil; Nygren, Erik
1994-01-01
The Virtual Environment Vehicle Interface (VEVI), developed by the NASA Ames Research Center's Intelligent Mechanisms Group, is a modular operator interface for direct teleoperation and supervisory control of robotic vehicles. Virtual environments enable the efficient display and visualization of complex data. This characteristic allows operators to perceive and control complex systems in a natural fashion, utilizing the highly evolved human sensory system. VEVI utilizes real-time, interactive, 3D graphics and position/orientation sensors to produce a range of interface modalities, from flat-panel (windowed or stereoscopic) screen displays to head-mounted, head-tracking stereo displays. The interface provides generic video control capability and has been used to control wheeled, legged, air-bearing, and underwater vehicles in a variety of different environments. VEVI was designed and implemented to be modular, distributed and easily operated through long-distance communication links, using a communication paradigm called SYNERGY.
Castellini, Claudio; Artemiadis, Panagiotis; Wininger, Michael; Ajoudani, Arash; Alimusaj, Merkur; Bicchi, Antonio; Caputo, Barbara; Craelius, William; Dosen, Strahinja; Englehart, Kevin; Farina, Dario; Gijsberts, Arjan; Godfrey, Sasha B.; Hargrove, Levi; Ison, Mark; Kuiken, Todd; Marković, Marko; Pilarski, Patrick M.; Rupp, Rüdiger; Scheme, Erik
2014-01-01
One of the hottest topics in rehabilitation robotics is that of proper control of prosthetic devices. Despite decades of research, the state of the art lags dramatically behind expectations. To shed light on this issue, in June 2013 the first international workshop on Present and future of non-invasive peripheral nervous system (PNS)–Machine Interfaces (MI; PMI) was convened, hosted by the International Conference on Rehabilitation Robotics. The keyword PMI has been selected to denote human–machine interfaces targeted at the limb-deficient, mainly upper-limb amputees, dealing with signals gathered from the PNS in a non-invasive way, that is, from the surface of the residuum. The workshop was intended to provide an overview of the state of the art and future perspectives of such interfaces; this paper is a collection of opinions expressed by each researcher/group involved. PMID:25177292
Fifth Grade Students' Understanding of Ratio and Proportion in an Engineering Robotics Program
ERIC Educational Resources Information Center
Ortiz, Araceli Martinez
2010-01-01
The research described in this dissertation explores the impact of utilizing a LEGO-robotics integrated engineering and mathematics program to support fifth grade students' learning of ratios and proportion in an extracurricular program. The research questions guiding this research study were (1) how do students' test results compare for students…
Lyons, Kenneth R; Joshi, Sanjay S
2013-06-01
Here we demonstrate the use of a new single-signal surface electromyography (sEMG) brain-computer interface (BCI) to control a mobile robot in a remote location. Previous work on this BCI has shown that users are able to perform cursor-to-target tasks in two-dimensional space using only a single sEMG signal by continuously modulating the signal power in two frequency bands. Using the cursor-to-target paradigm, targets are shown on the screen of a tablet computer so that the user can select them, commanding the robot to move in different directions for a fixed distance/angle. A Wifi-enabled camera transmits video from the robot's perspective, giving the user feedback about robot motion. Current results show a case study with a C3-C4 spinal cord injury (SCI) subject using a single auricularis posterior muscle site to navigate a simple obstacle course. Performance metrics for operation of the BCI as well as completion of the telerobotic command task are developed. It is anticipated that this noninvasive and mobile system will open communication opportunities for the severely paralyzed, possibly using only a single sensor.
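The control principle described above (modulating the power of one sEMG channel in two frequency bands to steer a two-dimensional cursor) can be sketched as follows. The band edges, sampling rate, and log scaling are invented placeholders, not the study's actual parameters.

    # Two-band sEMG power -> 2D velocity command (illustrative parameters only).
    import numpy as np

    FS = 1000.0  # assumed sampling rate, Hz

    def band_power(window, lo, hi, fs=FS):
        """Mean spectral power of the signal window in the [lo, hi] Hz band."""
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        mask = (freqs >= lo) & (freqs <= hi)
        return spectrum[mask].mean()

    def cursor_velocity(window):
        """Map the two band powers to a 2D velocity command (log-scaled)."""
        vx = np.log1p(band_power(window, 30.0, 60.0))
        vy = np.log1p(band_power(window, 80.0, 120.0))
        return vx, vy

    window = np.random.default_rng(0).standard_normal(256)  # stand-in for sEMG
    print(cursor_velocity(window))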
NASA Technical Reports Server (NTRS)
Ambrose, Robert; Askew, Scott; Bluethmann, William; Diftler, Myron
2001-01-01
NASA began with the challenge of building a robot for doing assembly, maintenance, and diagnostic work in the 0-g environment of space. A robot with human form was then chosen as the best means of achieving that mission. The goal was not to build a machine to look like a human, but rather to build a system that could do the same work. Robonaut could be inserted into the existing space environment, designed for a population of astronauts, and be able to perform many of the same tasks, with the same tools, and use the same interfaces. Rather than change that world to accommodate the robot, Robonaut accepts that it exists for humans, and must conform to it. While it would be easier to build a robot if all the interfaces could be changed, this is not the reality of space at present, where NASA has invested billions of dollars building spacecraft like the Space Shuttle and International Space Station. It is not possible to go back in time and redesign those systems to accommodate full automation, but a robot can be built that adapts to them. This paper describes that design process, and the resultant solution, that NASA has named Robonaut.
Brain-Machine Interface control of a robot arm using actor-critic reinforcement learning.
Pohlmeyer, Eric A; Mahmoudi, Babak; Geng, Shijia; Prins, Noeline; Sanchez, Justin C
2012-01-01
Here we demonstrate how a marmoset monkey can use a reinforcement learning (RL) Brain-Machine Interface (BMI) to effectively control the movements of a robot arm for a reaching task. In this work, an actor-critic RL algorithm used neural ensemble activity in the monkey's motor cortex to control the robot movements during a two-target decision task. This novel approach to decoding offers unique advantages for BMI control applications. Compared to supervised learning decoding methods, the actor-critic RL algorithm does not require an explicit set of training data to create a static control model, but rather it incrementally adapts the model parameters according to its current performance, in this case requiring only a very basic feedback signal. We show how this algorithm achieved high performance (94%) when mapping the monkey's neural states to robot actions, and only needed to experience a few trials before obtaining accurate real-time control of the robot arm. Since RL methods responsively adapt and adjust their parameters, they can provide a method to create BMIs that are robust against perturbations caused by changes in either the neural input space or the output actions they generate under different task requirements or goals.
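As a rough, generic illustration of such an actor-critic decoding scheme (not the authors' implementation), the sketch below maps a neural feature vector to one of two actions and adapts both actor and critic online from a scalar reward, as the abstract describes. Feature size, learning rates, and the linear models are invented placeholders.

    # Generic actor-critic decoder sketch with a one-step TD error.
    import numpy as np

    rng = np.random.default_rng(0)
    N_FEATURES, N_ACTIONS = 16, 2
    actor_w = np.zeros((N_ACTIONS, N_FEATURES))   # action-preference weights
    critic_w = np.zeros(N_FEATURES)               # state-value weights
    ALPHA_A, ALPHA_C = 0.05, 0.1                  # learning rates (invented)

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def step(features, reward):
        """One decoding trial: pick an action, then adapt from the reward."""
        probs = softmax(actor_w @ features)
        action = rng.choice(N_ACTIONS, p=probs)
        td_error = reward - critic_w @ features   # episodic one-step TD error
        critic_w[:] += ALPHA_C * td_error * features
        grad = -probs
        grad[action] += 1.0                       # d log pi / d preferences
        actor_w[:] += ALPHA_A * td_error * np.outer(grad, features)
        return action

    features = rng.standard_normal(N_FEATURES)    # stand-in for neural ensemble state
    print(step(features, reward=1.0))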
Interface colloidal robotic manipulator
Aronson, Igor; Snezhko, Oleksiy
2015-08-04
A magnetic colloidal system confined at the interface between two immiscible liquids and energized by an alternating magnetic field dynamically self-assembles into localized asters and arrays of asters. The colloidal system exhibits locomotion and shape change. By controlling a small external magnetic field applied parallel to the interface, structures can capture, transport, and position target particles.
2016-11-14
necessary capability to build a high-density communication highway between 86 billion brain neurons and intelligent vehicles or robots. ... The final outcome of the INI using the TDT system ... will be beneficial to wounded warriors suffering from loss of limb function, so that, using sophisticated bidirectional robotic limbs, these
2014-03-14
CAPE CANAVERAL, Fla. – A visitor to the Robot Rocket Rally tries his hand at virtual reality in a demonstration of the Oculus Rift technology, provided by the Open Source Robotics Foundation. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
ERIC Educational Resources Information Center
Guo, Yi; Zhang, Shubo; Ritter, Arthur; Man, Hong
2014-01-01
Despite the increasing importance of robotics, there is a significant challenge involved in teaching this to undergraduate students in biomedical engineering (BME) and other related disciplines in which robotics techniques could be readily applied. This paper addresses this challenge through the development and pilot testing of a bio-microrobotics…
Robotic Design for the Classroom
NASA Technical Reports Server (NTRS)
Culbert, Chris; Burns, Kaylynn
2001-01-01
This slide presentation reviews the use of robotic design to interest students in science and engineering. It describes one program, BEST, and resources that are available to design and create a robot. BEST is a competition for sixth and seventh graders that is designed to engage gifted and talented students. A couple of scenarios involving the use of a robot are outlined.
A Case Study: Motivational Attributes of 4-H Participants Engaged in Robotics
ERIC Educational Resources Information Center
Smith, Mariah Lea
2013-01-01
Robotics has gained a great deal of popularity across the United States as a means to engage youth in science, technology, engineering, and math. Understanding what motivates youth and adults to participate in a robotics project is critical to understanding how to engage others. By developing a robotics program built on a proper understanding of…
ERIC Educational Resources Information Center
Ortiz, Octavio Ortiz; Pastor Franco, Juan Ángel; Alcover Garau, Pedro María; Herrero Martín, Ruth
2017-01-01
This paper describes a study of teaching a programming language in a C programming course by having students assemble and program a low-cost mobile robot. Writing their own programs to define the robot's behavior raised students' motivation. Working in small groups, students programmed the robots by using the control structures of structured…
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
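Although the paper's protocol is grounded in information theory, its core loop can be illustrated with a simple Bayesian sketch: the system repeatedly poses a binary question whose answer arrives flipped with some known probability, and updates a posterior over candidate symbols until one is near-certain. Everything concrete below (the three-symbol alphabet, the 20% error rate) is an invented placeholder, not the paper's language or protocol.

    # Noisy-binary-input symbol selection via Bayesian updates (illustrative).
    import numpy as np

    SYMBOLS = ["left", "straight", "right"]   # hypothetical curve primitives
    EPS = 0.2                                 # assumed input error probability

    def select_symbol(answer_fn, threshold=0.95, max_queries=30):
        """Ask 'is the target in this subset?' until one symbol is near-certain."""
        post = np.full(len(SYMBOLS), 1.0 / len(SYMBOLS))
        for _ in range(max_queries):
            subset = np.argsort(post)[len(SYMBOLS) // 2:]   # most likely half
            bit = answer_fn(subset)                          # noisy user answer
            in_subset = np.isin(np.arange(len(SYMBOLS)), subset)
            like = np.where(in_subset,
                            1 - EPS if bit else EPS,
                            EPS if bit else 1 - EPS)
            post = like * post
            post /= post.sum()
            if post.max() >= threshold:
                break
        return SYMBOLS[int(post.argmax())]

    rng = np.random.default_rng(0)
    target = 2  # the user intends "right"
    noisy = lambda subset: bool((target in subset) ^ (rng.random() < EPS))
    print(select_symbol(noisy))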
Mobile tele-echography: user interface design.
Cañero, Cristina; Thomos, Nikolaos; Triantafyllidis, George A; Litos, George C; Strintzis, Michael Gerassimos
2005-03-01
Ultrasound imaging allows evaluation of the urgency of a patient's condition. However, in some instances, a well-trained sonographer is unavailable to perform such echography. To cope with this issue, the Mobile Tele-Echography Using an Ultralight Robot (OTELO) project aims to develop a fully integrated end-to-end mobile tele-echography system using an ultralight remote-controlled robot for population groups that are not served locally by medical experts. This paper focuses on the user interface of the OTELO system, consisting of the following parts: an ultrasound video transmission system providing real-time images of the scanned area, an audio/video conference to communicate with the paramedical assistant and with the patient, and a virtual-reality environment providing visual and haptic feedback to the expert while capturing the expert's hand movements. These movements are reproduced by the robot at the patient site while holding the ultrasound probe against the patient's skin. In addition, the user interface includes an image processing facility for enhancing the received images and the possibility of including them in a database.
Rover Wheel-Actuated Tool Interface
NASA Technical Reports Server (NTRS)
Matthews, Janet; Ahmad, Norman; Wilcox, Brian
2007-01-01
A report describes an interface for utilizing some of the mobility features of a mobile robot for general-purpose manipulation of tools and other objects. The robot in question, now undergoing conceptual development for use on the Moon, is the All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) rover, which is designed to roll over gentle terrain or walk over rough or steep terrain. Each leg of the robot is a six-degree-of-freedom general purpose manipulator tipped by a wheel with a motor drive. The tool interface includes a square cross-section peg, equivalent to a conventional socket-wrench drive, that rotates with the wheel. The tool interface also includes a clamp that holds a tool on the peg, and a pair of fold-out cameras that provides close-up stereoscopic images of the tool and its vicinity. The field of view of the imagers is actuated by the clamp mechanism and is specific to each tool. The motor drive can power any of a variety of tools, including rotating tools for helical fasteners, drills, and such clamping tools as pliers. With the addition of a flexible coupling, it could also power another tool or remote manipulator at a short distance. The socket drive can provide very high torque and power because it is driven by the wheel motor.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into the actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for planning the coordination of activities to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
NASA Astrophysics Data System (ADS)
Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory
2005-05-01
Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.
Yandell, Matthew B; Quinlivan, Brendan T; Popov, Dmitry; Walsh, Conor; Zelik, Karl E
2017-05-18
Wearable assistive devices have demonstrated the potential to improve mobility outcomes for individuals with disabilities, and to augment healthy human performance; however, these benefits depend on how effectively power is transmitted from the device to the human user. Quantifying and understanding this power transmission is challenging due to complex human-device interface dynamics that occur as biological tissues and physical interface materials deform and displace under load, absorbing and returning power. Here we introduce a new methodology for quickly estimating interface power dynamics during movement tasks using common motion capture and force measurements, and then apply this method to quantify how a soft robotic ankle exosuit interacts with and transfers power to the human body during walking. We partition exosuit end-effector power (i.e., power output from the device) into power that augments ankle plantarflexion (termed augmentation power) vs. power that goes into deformation and motion of interface materials and underlying soft tissues (termed interface power). We provide empirical evidence of how human-exosuit interfaces absorb and return energy, reshaping exosuit-to-human power flow and resulting in three key consequences: (i) During exosuit loading (as applied forces increased), about 55% of exosuit end-effector power was absorbed into the interfaces. (ii) However, during subsequent exosuit unloading (as applied forces decreased) most of the absorbed interface power was returned viscoelastically. Consequently, the majority (about 75%) of exosuit end-effector work over each stride contributed to augmenting ankle plantarflexion. (iii) Ankle augmentation power (and work) was delayed relative to exosuit end-effector power, due to these interface energy absorption and return dynamics. Our findings elucidate the complexities of human-exosuit interface dynamics during transmission of power from assistive devices to the human body, and provide insight into improving the design and control of wearable robots. We conclude that in order to optimize the performance of wearable assistive devices it is important, throughout design and evaluation phases, to account for human-device interface dynamics that affect power transmission and thus human augmentation benefits.
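A minimal sketch of the power accounting described above, under the assumption (consistent with the abstract) that each power term is the dot product of the applied force with the velocity of the relevant point: the device end-effector for end-effector power and the underlying anatomical landmark for augmentation power, with interface power as their difference. Signal names and numbers below are invented stand-ins for motion capture and force data.

    # Exosuit power partition sketch: P = F . v, interface = end-effector - augmentation.
    import numpy as np

    def partition_power(force, v_end_effector, v_anatomical):
        """Per-sample powers (W) from a force vector (N) and velocities (m/s)."""
        p_end = np.sum(force * v_end_effector, axis=1)  # device output power
        p_aug = np.sum(force * v_anatomical, axis=1)    # power augmenting the joint
        p_int = p_end - p_aug                           # absorbed/returned by interface
        return p_end, p_aug, p_int

    rng = np.random.default_rng(0)
    f = rng.uniform(0, 300, (100, 3))             # applied force over 100 samples
    v_ee = rng.normal(0, 0.2, (100, 3))           # end-effector velocity
    v_an = v_ee - rng.normal(0, 0.05, (100, 3))   # anatomical point lags the device
    p_end, p_aug, p_int = partition_power(f, v_ee, v_an)
    print(p_end.mean(), p_aug.mean(), p_int.mean())

Integrating each power series over a stride (e.g., with a trapezoidal rule) would give the per-stride work quantities the abstract compares.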
Human-machine interfaces based on EMG and EEG applied to robotic systems.
Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo
2008-03-26
Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals. One is based on the EMG signal and the other is based on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which need only modest hardware and software resources, so that they are, computationally and financially speaking, low-cost solutions. Both interfaces were applied to robotic systems, and their performance is analyzed here. The EMG-based HMI was tested in a mobile robot, while the EEG-based HMI was tested in a mobile robot and a robotic manipulator as well. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to accomplish ten eye blinks with each eye, in order to test the eye blink detection algorithm. An average correct-detection rate of about 95%, achieved by individuals able to blink both eyes, supports the conclusion that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered cases of meningitis and epilepsy) to test the system. All of them managed to deal with the HMI in only one training session. Most of them learned how to use the HMI in less than 15 minutes. The minimum and maximum training times observed were 3 and 50 minutes, respectively. These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the resulting autonomous wheelchair to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, with the aim of helping individuals with severe motor dysfunctions.
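The abstract does not describe the eye-blink detection algorithm itself; the following is only a generic illustration of how such a detector is commonly built: rectify the EMG, smooth it into an envelope, and flag threshold crossings. The sampling rate, window, and threshold are invented placeholders.

    # Generic envelope-threshold blink detector (illustrative, not the paper's).
    import numpy as np

    def detect_blinks(emg, fs=1000.0, threshold=3.0, win_s=0.05):
        """Return sample indices where the smoothed envelope rises past threshold."""
        n = int(fs * win_s)
        envelope = np.convolve(np.abs(emg), np.ones(n) / n, mode="same")
        above = envelope > threshold * np.median(envelope)
        onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
        return onsets

    rng = np.random.default_rng(0)
    emg = rng.normal(0, 1.0, 2000)
    emg[800:880] += rng.normal(0, 8.0, 80)   # synthetic blink burst
    print(detect_blinks(emg))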
Internet Based Robot Control Using CORBA Based Communications
2009-12-01
SUBTLE: Situation Understanding Bot through Language and Environment
2016-01-06
a 4-day "hackathon" by Stuart Young's small robots group, which successfully ported the SUBTLE MURI NLP robot interface to the Packbot platform they ... null element restoration, a step typically ignored in NLP systems, allows for correct parsing of imperatives and questions, critical structures
Flexible automation of cell culture and tissue engineering tasks.
Knoll, Alois; Scherer, Torsten; Poggendorf, Iris; Lütkemeyer, Dirk; Lehmann, Jürgen
2004-01-01
Until now, the predominant use cases of industrial robots have been routine handling tasks in the automotive industry. In biotechnology and tissue engineering, in contrast, only very few tasks have been automated with robots. New developments in robot platform and robot sensor technology, however, make it possible to automate plants that largely depend on human interaction with the production process, e.g., for material and cell culture fluid handling, transportation, operation of equipment, and maintenance. In this paper we present a robot system that lends itself to automating routine tasks in biotechnology but also has the potential to automate other production facilities that are similar in process structure. After motivating the design goals, we describe the system and its operation, illustrate sample runs, and give an assessment of the advantages. We conclude this paper by giving an outlook on possible further developments.
1999-03-06
Robots, maneuvered by student teams behind protective walls, raise their caches of pillow-like disks to earn points in competition while spectators in the bleachers and on the sidelines cheer their favorite teams. Held at the KSC Visitor Complex, the 1999 Southeastern Regional robotic competition, sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST, comprises 27 teams pairing high school students with engineer mentors and corporations, pitting gladiator robots against each other in an athletic-style competition. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spend two minutes each trying to grab, claw and hoist the pillows onto their machines. Teams play defense by taking away competitors' pillows and generally harassing opposing machines. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers
Robotic Mining Competition - Activities
2018-05-16
During the third day of NASA's 9th Robotic Mining Competition, May 16, Al Feinberg, left, with Kennedy Space Center's Communication and Public Engagement, and Kurt Leucht, with Kennedy's Engineering Directorate, provide commentary as robot miners dig in the dirt in the mining arena at NASA's Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. will use their mining robots to dig in a supersized sandbox filled with BP-1, or simulated Lunar soil, gravel and rocks, and participate in other competition requirements. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's deep space missions.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
Student teams behind protective walls operate remote controls to maneuver their robots around the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. The robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Teams played defense by taking away competitors' pillows and generally harassing opposing machines. On the side of the field are the judges, including (far left) Deputy Director for Launch and Payload Processing Loren Shriver and former KSC Director of Shuttle Processing Robert Sieck. A giant screen TV displays the action on the field. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992
NASA Technical Reports Server (NTRS)
Schenker, Paul S. (Editor)
1992-01-01
Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)
The GENIUS Grid Portal and robot certificates: a new tool for e-Science
Barbera, Roberto; Donvito, Giacinto; Falzone, Alberto; La Rocca, Giuseppe; Milanesi, Luciano; Maggi, Giorgio Pietro; Vicario, Saverio
2009-01-01
Background Grid technology is the computing model which allows users to share a wide plethora of distributed computational resources regardless of their geographical location. Up to now, the high security policy requested in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates. Methods Robot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful for instance to automate grid service monitoring, data processing production, distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates. Results The work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can thus be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness to a wide number of potential users. Conclusion The adoption of Grid portals extended with robot certificates can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities. PMID:19534747
Efforts toward an autonomous wheelchair - biomed 2011.
Barrett, Steven; Streeter, Robert
2011-01-01
An autonomous wheelchair is in development to provide mobility to those with significant physical challenges. The overall goal of the project is to develop a wheelchair that is fully autonomous, with the ability to navigate about an environment and negotiate obstacles. As a starting point for the project, we have reverse engineered the joystick control system of an off-the-shelf, commercially available wheelchair. The joystick control has been replaced with a microcontroller-based system. The microcontroller has the capability to interface with a number of subsystems currently under development, including wheel odometers, obstacle avoidance sensors, and ultrasonic-based wall sensors. This paper will discuss the microcontroller-based system and provide a detailed system description. Results of this study may be adapted to commercial or military robot control.
NASA Technical Reports Server (NTRS)
Bell, Jerome A.; Stephens, Elaine; Barton, Gregg
1991-01-01
An overview is provided of the Space Exploration Initiative (SEI) concepts for telecommunications, information systems, and navigation (TISN), and engineering and architecture issues are discussed. The SEI program data system is reviewed to identify mission TISN interfaces, and reference TISN concepts are described for nominal, degraded, and mission-critical data services. The infrastructures reviewed include telecommunications for robotics support, autonomous navigation without earth-based support, and information networks for tracking and data acquisition. Four options for TISN support architectures are examined which relate to unique SEI exploration strategies. Detailed support estimates are given for: (1) a manned stay on Mars; (2) permanent lunar and Martian settlements; (3) short-duration missions; and (4) systematic exploration of the moon and Mars.
Engineering the evolution of self-organizing behaviors in swarm robotics: a case study.
Trianni, Vito; Nolfi, Stefano
2011-01-01
Evolutionary robotics (ER) is a powerful approach for the automatic synthesis of robot controllers, as it requires little a priori knowledge about the problem to be solved in order to obtain good solutions. This is particularly true for collective and swarm robotics, in which the desired behavior of the group is an indirect result of the control and communication rules followed by each individual. However, the experimenter must make several arbitrary choices in setting up the evolutionary process, in order to define the correct selective pressures that can lead to the desired results. In some cases, only a deep understanding of the obtained results can point to the critical aspects that constrain the system, which can be later modified in order to re-engineer the evolutionary process towards better solutions. In this article, we discuss the problem of engineering the evolutionary machinery that can lead to the desired result in the swarm robotics context. We also present a case study about self-organizing synchronization in a swarm of robots, in which some arbitrarily chosen properties of the communication system hinder the scalability of the behavior to large groups. We show that by modifying the communication system, artificial evolution can synthesize behaviors that scale properly with the group size.
NASA Astrophysics Data System (ADS)
Bar-Cohen, Yoseph
2005-04-01
Human errors have long been recognized as a major factor in the reliability of nondestructive evaluation results. To minimize such errors, there is an increasing reliance on automatic inspection tools that allow faster and more consistent tests. Crawlers and various manipulation devices are commonly used to perform a variety of inspection procedures, including C-scans with contour-following capability to rapidly inspect complex structures. The emergence of robots has been the result of the need to deal with parts that are too complex to handle by a simple automatic system. Economic factors continue to hamper the wide use of robotics for inspection applications; however, technology advances are increasingly changing this paradigm. Autonomous robots, which may look like humans, can potentially address the need to inspect structures whose configuration is not predetermined. The operation of such robots that mimic biology may take place in harsh or hazardous environments that are too dangerous for human presence. Biomimetic technologies such as artificial intelligence, artificial muscles, artificial vision and numerous others are increasingly becoming common engineering tools. Inspired by science fiction, making biomimetic robots is increasingly becoming an engineering reality, and in this paper the state of the art will be reviewed and the outlook for the future will be discussed.
A Web-Remote/Robotic/Scheduled Astronomical Data Acquisition System
NASA Astrophysics Data System (ADS)
Denny, Robert
2011-03-01
Traditionally, remote/robotic observatory operating systems have been custom made for each observatory. While data reduction pipelines need to be tailored for each investigation, the data acquisition process (especially for stare-mode optical images) is often quite similar across investigations. Since 1999, DC-3 Dreams has focused on providing and supporting a remote/robotic observatory operating system which can be adapted to a wide variety of physical hardware and optics while achieving the highest practical observing efficiency and safe/secure web browser user controls. ACP Expert consists of three main subsystems: (1) a robotic list-driven data acquisition engine which controls all aspects of the observatory, (2) a constraint-driven dispatch scheduler with a long-term database of requests, and (3) a built-in "zero admin" web server and dynamic web pages which provide a remote capability for immediate execution and monitoring as well as entry and monitoring of dispatch-scheduled observing requests. No remote desktop login is necessary for observing, thus keeping the system safe and consistent. All routine operation is via the web browser. A wide variety of telescope mounts, CCD imagers, guiding sensors, filter selectors, focusers, instrument-package rotators, weather sensors, and dome control systems are supported via the ASCOM standardized device driver architecture. The system is most commonly employed on commercial 1-meter and smaller observatories used by universities and advanced amateurs for both science and art. One current project, the AAVSO Photometric All-Sky Survey (APASS), uses ACP Expert to acquire large volumes of data in dispatch-scheduled mode. In its first 18 months of operation (North then South), 40,307 sky images were acquired in 117 photometric nights, resulting in 12,107,135 stars detected two or more times. These stars had measures in 5 filters. The northern station covered 754 fields (6446 square degrees) at least twice, the southern station covered 951 fields (8500 square degrees) at least twice. The database of photometric calibrations is available from AAVSO. The paper will cover the ACP web interface, including the use of AJAX and JSON within a micro-content framework, as well as dispatch scheduler and acquisition engine operation.
Bio-robots automatic navigation with electrical reward stimulation.
Sun, Chao; Zhang, Xinlu; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2012-01-01
Bio-robots controlled by external stimulation through a brain-computer interface (BCI) suffer from dependence on real-time guidance by human operators. Current automatic navigation methods for bio-robots focus on controlling rules that force animals to obey man-made commands, with the animals' intelligence ignored. This paper proposes a new method to realize automatic navigation for bio-robots, with electrical micro-stimulation as real-time rewards. Due to the reward-seeking instinct and trial-and-error capability, a bio-robot can be steered to keep walking along the right route with rewards and to correct its direction spontaneously when rewards are withheld. In navigation experiments, rat-robots learned the controlling method in a short time. The results show that our method simplifies the controlling logic and successfully realizes automatic navigation for rat-robots. Our work might have significant implications for the further development of bio-robots with hybrid intelligence.
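The reward rule the abstract describes can be caricatured in a few lines: stimulate while the animal stays on the route, withhold the reward otherwise, and let the animal's reward-seeking behavior correct the course. The on-route tolerance below is an invented placeholder, not a value from the paper.

    # Caricature of the reward-based navigation rule (illustrative only).
    def reward_controller(cross_track_error_m, on_route_tol=0.15):
        """Return True to deliver an electrical reward pulse, False to withhold."""
        return abs(cross_track_error_m) <= on_route_tol

    for err in (0.05, 0.10, 0.30):
        print(err, "reward" if reward_controller(err) else "no reward")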
Analysis and prediction of meal motion by EMG signals
NASA Astrophysics Data System (ADS)
Horihata, S.; Iwahara, H.; Yano, K.
2007-12-01
The lack of carers for senior citizens and physically handicapped persons in our country has now become a huge issue and has created a great need for carer robots. The usual carer robots (many of which have switches or joysticks for their interfaces), however, are neither easy to use nor very popular. Therefore, haptic devices have been adopted as a human-machine interface that enables intuitive operation. At this point, a method is being tested that seeks to prevent erroneous operations by interpreting the user's signals; it matches intended motions with EMG signals.
Mobile robotics research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morse, W.D.
Sandia is a National Security Laboratory providing scientific and engineering solutions to meet national needs for both government and industry. As part of this mission, the Intelligent Systems and Robotics Center conducts research and development in robotics and intelligent machine technologies. An overview of Sandia's mobile robotics research is provided. Recent achievements and future directions in the areas of coordinated mobile manipulation, small smart machines, world modeling, and special application robots are presented.
Recent Trends in Robotics Research
NASA Astrophysics Data System (ADS)
Ejiri, Masakazu
My views on recent trends in the strategy and practice of Japan's robotics research are briefly introduced. To meet ever-increasing public expectations, robotics researchers and engineers have to be more seriously concerned about robots' intrinsic weaknesses. Examples of these are power-related and reliability issues. Resolving these issues will increase the feasibility of creating successful new industry, and the likelihood of robotics becoming a key technology for providing a safe and stress-free society in the future.
ANSO study: evaluation in an indoor environment of a mobile assistance robotic grasping arm.
Coignard, P; Departe, J P; Remy Neris, O; Baillet, A; Bar, A; Drean, D; Verier, A; Leroux, C; Belletante, P; Le Guiet, J L
2013-12-01
To evaluate the reliability and functional acceptability of the "Synthetic Autonomous Majordomo" (SAM) robotic aid system (a mobile Neobotix base equipped with a semi-automatic vision interface and a Manus robotic arm). An open, multicentre, controlled study. We included 29 tetraplegic patients (23 patients with spinal cord injuries, 3 with locked-in syndrome and 4 with other disorders; mean ± SD age: 37.83 ± 13.3) and 34 control participants (mean ± SD age: 32.44 ± 11.2). The reliability of the user interface was evaluated in three multi-step scenarios: selection of the room in which the object to be retrieved was located (in the presence or absence of visual control by the user), selection of the object to be retrieved, the grasping of the object itself and the robot's return to the user with the object. A questionnaire was used to assess the robot's user acceptability. The SAM system was stable and reliable: both patients and control participants experienced few failures when completing the various stages of the scenarios. The graphic interface was effective for selecting and grasping the object, even in the absence of visual control. Users and carers were generally satisfied with SAM, although only a quarter of patients said that they would consider using the robot in their activities of daily living. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
Localization of Mobile Robots Using an Extended Kalman Filter in a LEGO NXT
ERIC Educational Resources Information Center
Pinto, M.; Moreira, A. P.; Matos, A.
2012-01-01
The inspiration for this paper comes from a successful experiment conducted with students in the "Mobile Robots" course in the fifth year of the integrated Master's program in the Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto (FEUP), Porto, Portugal. One of the topics in this Mobile Robots…
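For context on the technique named in the title, the sketch below shows the two steps of an extended Kalman filter for a planar robot: an odometry-driven prediction through a unicycle motion model and a bearing-only landmark update. It is a generic textbook formulation with invented noise values, not the course's LEGO NXT code.

    # Generic planar-robot EKF sketch: predict from odometry, update from a bearing.
    import numpy as np

    Q = np.diag([1e-4, 1e-4, 1e-3])   # assumed process noise
    R = np.array([[1e-2]])            # assumed bearing measurement noise

    def predict(x, P, v, w, dt):
        """Propagate pose (x, y, theta) through the unicycle motion model."""
        px, py, th = x
        x_new = np.array([px + v * dt * np.cos(th),
                          py + v * dt * np.sin(th),
                          th + w * dt])
        F = np.array([[1, 0, -v * dt * np.sin(th)],
                      [0, 1,  v * dt * np.cos(th)],
                      [0, 0, 1]])
        return x_new, F @ P @ F.T + Q

    def update(x, P, z, landmark):
        """Correct the pose with a measured bearing to a known landmark."""
        dx, dy = landmark[0] - x[0], landmark[1] - x[1]
        z_hat = np.arctan2(dy, dx) - x[2]
        q = dx * dx + dy * dy
        H = np.array([[dy / q, -dx / q, -1.0]])      # measurement Jacobian
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        innov = np.array([np.arctan2(np.sin(z - z_hat), np.cos(z - z_hat))])
        x = x + (K @ innov).ravel()
        return x, (np.eye(3) - K @ H) @ P

    x, P = np.zeros(3), np.eye(3) * 0.01
    x, P = predict(x, P, v=0.2, w=0.1, dt=0.1)
    x, P = update(x, P, z=0.8, landmark=(1.0, 1.0))
    print(x)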
A Multidisciplinary PBL Robot Control Project in Automation and Electronic Engineering
ERIC Educational Resources Information Center
Hassan, Houcine; Domínguez, Carlos; Martínez, Juan-Miguel; Perles, Angel; Capella, Juan-Vicente; Albaladejo, José
2015-01-01
This paper presents a multidisciplinary problem-based learning (PBL) project consisting of the development of a robot arm prototype and the implementation of its control system. The project is carried out as part of Industrial Informatics (II), a compulsory third-year course in the Automation and Electronic Engineering (AEE) degree program at the…
Memetic Engineering as a Basis for Learning in Robotic Communities
NASA Technical Reports Server (NTRS)
Truszkowski, Walter F.; Rouff, Christopher; Akhavannik, Mohammad H.
2014-01-01
This paper represents a new contribution to the growing literature on memes. While most memetic thought has been focused on its implications on humans, this paper speculates on the role that memetics can have on robotic communities. Though speculative, the concepts are based on proven advanced multi agent technology work done at NASA - Goddard Space Flight Center and Lockheed Martin. The paper is composed of the following sections: 1) An introductory section which gently leads the reader into the realm of memes. 2) A section on memetic engineering which addresses some of the central issues with robotic learning via memes. 3) A section on related work which very concisely identifies three other areas of memetic applications, i.e., news, psychology, and the study of human behaviors. 4) A section which discusses the proposed approach for realizing memetic behaviors in robots and robotic communities. 5) A section which presents an exploration scenario for a community of robots working on Mars. 6) A final section which discusses future research which will be required to realize a comprehensive science of robotic memetics.
Interface evaluation for soft robotic manipulators
NASA Astrophysics Data System (ADS)
Moore, Kristin S.; Rodes, William M.; Csencsits, Matthew A.; Kwoka, Martha J.; Gomer, Joshua A.; Pagano, Christopher C.
2006-05-01
The results of two usability experiments evaluating an interface for the operation of OctArm, a biologically inspired robotic arm modeled after an octopus tentacle, are reported. Because such 'continuum' robotic limbs give the operator many degrees of freedom (DOF) to control and do not map intuitively to standard input devices, they present unique challenges for human operators. Two modes have been developed to control the arm and reduce the DOF under the explicit direction of the operator. In coupled velocity (CV) mode, a joystick controls changes in arm curvature. In end-effector (EE) mode, a joystick controls the arm by moving the position of an endpoint along a straight line. In Experiment 1, participants used the two modes to grasp objects placed at different locations in a virtual reality modeling language (VRML) environment. Objective measures of performance and subjective preferences were recorded. Results revealed lower grasp times and a subjective preference for the CV mode. Recommendations for improving the interface included providing additional feedback and implementation of an error recovery function. In Experiment 2, only the CV mode was tested, with improved training of participants and several changes to the interface. The error recovery function was implemented, allowing participants to reverse through previously attained positions. The mean time to complete the trials in the second usability test was reduced by more than 4 minutes compared with the first usability test, confirming that the interface changes improved performance. The results of these tests will be incorporated into future versions of the arm and improve future usability tests.
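As an illustration of the coupled velocity (CV) idea, the sketch below rate-integrates joystick deflection into a per-section curvature command with saturation. The gains, limits, and section count are invented placeholders, not OctArm's parameters.

    # Rate-control sketch of a CV-style curvature command (illustrative values).
    class CoupledVelocityMode:
        def __init__(self, n_sections=3, gain=0.5, kappa_max=8.0):
            self.kappa = [0.0] * n_sections   # curvature (1/m) per arm section
            self.gain, self.kappa_max = gain, kappa_max

        def step(self, section, joystick, dt):
            """Integrate joystick deflection (-1..1) into section curvature."""
            k = self.kappa[section] + self.gain * joystick * dt
            self.kappa[section] = max(-self.kappa_max, min(self.kappa_max, k))
            return self.kappa[section]

    cv = CoupledVelocityMode()
    for _ in range(10):                       # hold the stick forward for 1 s
        cv.step(section=0, joystick=0.8, dt=0.1)
    print(cv.kappa)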
2016-07-01
ARL-TR-7729, July 2016. US Army Research Laboratory (ARL) Robotics Collaborative Technology Alliance 2014 Capstone ... National Robotics Engineering Center, Pittsburgh, PA; Robert Dean, Terence Keegan, and Chip Diberardino, General Dynamics Land Systems, Westminster
ERIC Educational Resources Information Center
Silva, E.; Almeida, J.; Martins, A.; Baptista, J. P.; Campos Neves, B.
2013-01-01
Robotics research in Portugal is increasing every year, but few students embrace it as one of their first choices for study. Until recently, job offers for engineers were plentiful, and those looking for a degree in science and technology would avoid areas considered to be demanding, like robotics. At the undergraduate level, robotics programs are…
Designing a Microhydraulically-driven Mini-robotic Squid
2016-05-20
Applications for microrobots include remote monitoring, surveillance, search and rescue, nanoassembly, medicine, and in-vivo surgery. Robotics platforms ... A thesis by Kevin Dehan Meng (B.S., U.S. Air ...), submitted to the Department
2014-03-14
CAPE CANAVERAL, Fla. – Bruce Yost of NASA's Ames Research Center discusses a small satellite, known as PhoneSat, during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
2014-03-14
CAPE CANAVERAL, Fla. – Ron Diftler of NASA's Johnson Space Center in Houston demonstrates the leg movements of Robonaut 2 during the Robot Rocket Rally. The three-day event at Florida's Kennedy Space Center Visitor Complex is highlighted by exhibits, games and demonstrations of a variety of robots, with exhibitors ranging from school robotics clubs to veteran NASA scientists and engineers. Photo credit: NASA/Kim Shiflett
Considerations for human-machine interfaces in tele-operations
NASA Technical Reports Server (NTRS)
Newport, Curt
1991-01-01
Numerous factors impact the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how the operator interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.
Robotics technology discipline
NASA Technical Reports Server (NTRS)
Montemerlo, Melvin D.
1990-01-01
Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.
Starting a Robotics Program in Your County
ERIC Educational Resources Information Center
Habib, Maria A.
2012-01-01
The current mission mandates of the National 4-H Headquarters are Citizenship, Healthy Living, and Science. Robotics programs are excellent in fulfilling the Science mandate. Robotics engages students in STEM (Science, Engineering, Technology, and Mathematics) fields by providing interactive, hands-on, minds-on, cross-disciplinary learning…
Motivating Students with Robotics
ERIC Educational Resources Information Center
Brand, Brenda; Collver, Michael; Kasarda, Mary
2008-01-01
In recent years, the need to advance the number of individuals pursuing science, technology, engineering, and mathematics fields has gained much attention. The Montgomery County/Virginia Tech Robotics Collaborative (MCVTRC), a yearlong high school robotics program housed in an educational shop facility in Montgomery County, Virginia, seeks to…
Gravish, Nick; Lauder, George V
2018-03-29
For centuries, designers and engineers have looked to biology for inspiration. Biologically inspired robots are just one example of the application of knowledge of the natural world to engineering problems. However, recent work by biologists and interdisciplinary teams has flipped this approach, using robots and physical models to set the course for experiments on biological systems and to generate new hypotheses for biological research. We call this approach robotics-inspired biology; it involves performing experiments on robotic systems aimed at the discovery of new biological phenomena or generation of new hypotheses about how organisms function that can then be tested on living organisms. This new and exciting direction has emerged from the extensive use of physical models by biologists and is already making significant advances in the areas of biomechanics, locomotion, neuromechanics and sensorimotor control. Here, we provide an introduction and overview of robotics-inspired biology, describe two case studies and suggest several directions for the future of this exciting new research area. © 2018. Published by The Company of Biologists Ltd.
Final matches of the FIRST regional robotic competition at KSC
NASA Technical Reports Server (NTRS)
1999-01-01
During final matches at the 1999 Southeastern Regional robotic competition at the KSC Visitor Complex, referees and judges (blue shirts at left) watch as two robots raise their pillow disks to a height of eight feet, one of the goals of the competition. KSC Deputy Director for Launch and Payload Processing Loren Shriver is one of the judges. Thirty schools from around the country have converged at KSC for the event that pits gladiator robots against each other in an athletic-style competition. The robots have to retrieve the disks from the floor, climb onto a platform (with flags), as well as raise the cache of pillows, maneuvered by student teams behind protective walls. KSC is hosting the event being sponsored by the nonprofit organization For Inspiration and Recognition of Science and Technology, known as FIRST. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers by pairing engineers and corporations with student teams.
Application of industrial robots in automatic disassembly line of waste LCD displays
NASA Astrophysics Data System (ADS)
Wang, Sujuan
2017-11-01
In the automatic disassembly line for waste LCD displays, LCD displays are disassembled into plastic shells, metal shields, circuit boards, and LCD panels. Two industrial robots are used to cut the metal shields and remove the circuit boards in this automatic disassembly line. This paper describes in detail the functions of these two industrial robots and the solutions to the critical issues of model selection, interfacing with PLCs, and workflow design.
Zhang, Chen; Sun, Chao; Gao, Liqiang; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang
2013-01-01
Bio-robots based on a brain-computer interface (BCI) suffer from navigation schemes that fail to take the animal's own characteristics into account. This paper proposes a new method for automatic bio-robot navigation that combines a reward-generating algorithm based on Reinforcement Learning (RL) with the learning intelligence of the animal itself. Given graded electrical rewards, the animal (e.g., a rat) seeks to maximize reward while exploring an unknown environment. Since the rat has excellent spatial recognition, the rat-robot and the RL algorithm can converge on an optimal route by co-learning. This work offers significant inspiration for the practical development of bio-robot navigation with hybrid intelligence.
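The abstract does not detail the reward-generating algorithm, so the following is a minimal tabular Q-learning sketch of the co-learning idea: the algorithm learns which grid moves should attract the graded electrical reward so that the rat-robot converges on a route to the goal. Grid size, reward values, and hyperparameters are assumptions, not taken from the paper.

```python
import random

W, H, GOAL = 5, 5, (4, 4)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # E, W, N, S moves
Q = {((x, y), a): 0.0 for x in range(W) for y in range(H)
     for a in range(len(ACTIONS))}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def step(s, a):
    """Move on the grid; the goal pays a large graded reward, steps cost a little."""
    dx, dy = ACTIONS[a]
    ns = (min(max(s[0] + dx, 0), W - 1), min(max(s[1] + dy, 0), H - 1))
    return ns, (10.0 if ns == GOAL else -0.1)

for _ in range(500):                           # co-learning episodes
    s = (0, 0)
    while s != GOAL:
        a = (random.randrange(4) if random.random() < eps
             else max(range(4), key=lambda a: Q[(s, a)]))
        ns, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(ns, b)] for b in range(4))
                              - Q[(s, a)])
        s = ns

s, path = (0, 0), [(0, 0)]                     # greedy rollout of the learned route
while s != GOAL and len(path) < 50:
    s, _ = step(s, max(range(4), key=lambda a: Q[(s, a)]))
    path.append(s)
print(path)
```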
A prototype home robot with an ambient facial interface to improve drug compliance.
Takacs, Barnabas; Hanak, David
2008-01-01
We have developed a prototype home robot to improve drug compliance. The robot is a small mobile device, capable of autonomous behaviour, as well as remotely controlled operation via a wireless datalink. The robot is capable of face detection and also has a display screen to provide facial feedback to help motivate patients and thus increase their level of compliance. An RFID reader can identify tags attached to different objects, such as bottles, for fluid intake monitoring. A tablet dispenser allows drug compliance monitoring. Despite some limitations, experience with the prototype suggests that simple and low-cost robots may soon become feasible for care of people living alone or in isolation.
Cloud-based robot remote control system for smart factory
NASA Astrophysics Data System (ADS)
Wu, Zhiming; Li, Lianzhong; Xu, Yang; Zhai, Jingmei
2015-12-01
With the development of internet technologies and the wide application of robots, there is a clear trend toward integrating networks and robots. A cloud-based robot remote control system for smart factories is proposed, which enables remote users to control robots and thereby realize intelligent production. To achieve this, a three-layer system architecture is designed, comprising a user layer, a service layer, and a physical layer. The remote control applications running on the cloud server are developed on Microsoft Azure. Moreover, DIV+CSS technologies are used to design the human-machine interface, lowering maintenance cost and improving development efficiency. Finally, an experiment is implemented to verify the feasibility of the approach.
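As a rough illustration of the three-layer architecture (the paper's actual Azure services are not described in the abstract), the sketch below shows a user layer posting JSON commands, a service layer validating and routing them, and a physical layer executing them. Command names and fields are invented for illustration.

```python
import json, queue

ALLOWED = {"move_joint", "stop", "get_status"}   # assumed command vocabulary
to_robot = queue.Queue()                         # service layer -> physical layer

def service_layer(raw_request):
    """Validate a user-layer request, then enqueue it for the robot."""
    cmd = json.loads(raw_request)
    if cmd.get("op") not in ALLOWED:
        return {"ok": False, "error": "unknown op"}
    to_robot.put(cmd)
    return {"ok": True}

def physical_layer():
    """Drain queued commands as the robot controller would."""
    while not to_robot.empty():
        cmd = to_robot.get()
        print("executing:", cmd["op"], cmd.get("args", {}))

print(service_layer('{"op": "move_joint", "args": {"joint": 2, "deg": 15}}'))
physical_layer()
```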
Health Care Robotics: A Progress Report
NASA Technical Reports Server (NTRS)
Fiorini, Paolo; Ali, Khaled; Seraji, Homayoun
1997-01-01
This paper describes the approach followed in the design of a service robot for health care applications. Under the auspices of the NASA Technology Transfer program, a partnership was established between JPL and RWI, a manufacturer of mobile robots, to design and evaluate a mobile robot for health care assistance to the elderly and the handicapped. The main emphasis of the first phase of the project is on the development of a multi-modal operator interface and its evaluation by health care professionals and users. This paper describes the architecture of the system, the evaluation method used, and some preliminary results of the user evaluation.
Development of a telepresence robot for medical consultation
NASA Astrophysics Data System (ADS)
Bugtai, Nilo T.; Ong, Aira Patrice R.; Angeles, Patrick Bryan C.; Cervera, John Keen P.; Ganzon, Rachel Ann E.; Villanueva, Carlos A. G.; Maniquis, Samuel Nazirite F.
2017-02-01
There are numerous efforts to add value to telehealth applications in the country. In this study, the design of a telepresence robot to facilitate remote medical consultations in the wards of the Philippine General Hospital is proposed. This includes the design of a robot capable of supporting a medical consultation with clear audio and video at both ends. The system gives the operating doctor full control of the telepresence robot through a user-friendly interface. The results show that the telepresence robot provides a stable and reliable mobile medical service.
A CLIPS-based expert system for the evaluation and selection of robots
NASA Technical Reports Server (NTRS)
Nour, Mohamed A.; Offodile, Felix O.; Madey, Gregory R.
1994-01-01
This paper describes the development of a prototype expert system for intelligent selection of robots for manufacturing operations. The paper first develops a comprehensive, three-stage process to model the robot selection problem. The decisions involved in this model easily lend themselves to an expert system application. A rule-based system, based on the selection model, is developed using the CLIPS expert system shell. Data about actual robots is used to test the performance of the prototype system. Further extensions to the rule-based system for data handling and interfacing capabilities are suggested.
Activities report of the Department of Engineering
NASA Astrophysics Data System (ADS)
Acoustics, aerodynamics, fluid mechanics, design, electrical, materials science, mechanical, control, robotics, soil mechanics, structural engineering, thermodynamics, and turbomachine engineering research are described.
Roberts, Luke; Park, Hae Won; Howard, Ayanna M
2012-01-01
Rehabilitation robots in home environments have the potential to dramatically improve quality of life for individuals who experience disabling circumstances due to injury or chronic health conditions. Unfortunately, although classes of robotic systems for rehabilitation exist, these devices are typically not designed for children. Since over 150 million children in the world live with a disability, this poses a unique challenge for deploying such robotics for this target demographic. To overcome this barrier, we discuss a system that uses a wireless arm glove input device to enable interaction with a robotic playmate during various play scenarios. Results from testing the system with 20 human subjects show that the system has potential, but certain aspects need to be improved before deployment with children.
Event-Based Control Strategy for Mobile Robots in Wireless Environments.
Socas, Rafael; Dormido, Sebastián; Dormido, Raquel; Fabregas, Ernesto
2015-12-02
In this paper, a new event-based control strategy for mobile robots is presented. It has been designed to work in wireless environments where a centralized controller has to exchange information with the robots over an RF (radio frequency) interface. The event-based architectures have been developed for differential wheeled robots, although they can be applied to other kinds of robots in a straightforward way. The solution has been tested on classical navigation algorithms, such as wall following and obstacle avoidance, in scenarios with a single robot or multiple robots. A comparison between the proposed architectures and the classical discrete-time strategy is also carried out. The experimental results show that the proposed solution uses communication resources far more efficiently than the classical discrete-time strategy while achieving the same accuracy.
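A minimal send-on-delta simulation conveys the core idea, assuming a simple integrator plant and an arbitrary threshold (neither is specified in the abstract): the state is transmitted over the RF link only when it drifts beyond a threshold from the last value sent, so far fewer messages are needed than with periodic sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, delta = 0.05, 0.02            # sample period [s], send-on-delta threshold [m]
setpoint = 1.0
x, x_sent, messages = 0.0, 0.0, 0

for _ in range(200):
    u = 1.5 * (setpoint - x_sent)             # controller acts on last *sent* state
    x += dt * u + rng.normal(0.0, 0.002)      # simple integrator plant + noise
    if abs(x - x_sent) > delta:               # event condition: state drifted enough
        x_sent = x                            # this is the only RF transmission
        messages += 1

print(f"final state {x:.3f}; {messages} messages instead of 200 periodic samples")
```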
Electronics and Software Engineer for Robotics Project Intern
NASA Technical Reports Server (NTRS)
Teijeiro, Antonio
2017-01-01
I was assigned to mentor high school students for the 2017 FIRST Robotics Competition. Using a team-based approach, I worked with the students to program the robot and applied my electrical background to build the robot from start to finish. I worked with students who had an interest in electrical engineering to teach them about voltage, current, pulse width modulation, solenoids, electromagnets, relays, DC motors, DC motor controllers, crimping and soldering electrical components, Java programming, and robotic simulation. For the simulation, we worked together to generate graphics files, write simulator description format code, operate Linux, and operate SOLIDWORKS. Upon completion of the FRC season, I transitioned to providing full-time support for the LCS hardware team. During this phase of my internship I helped my co-intern write test steps for two networking hardware DVTs, as well as run cables and update cable running lists.
2007-01-06
NASA engineers Scott Olive (left) and Bo Clarke answer questions during the 2007 FIRST (For Inspiration and Recognition of Science and Technology) Robotics Competition regional kickoff event held Saturday, Jan. 6, 2007, at StenniSphere, the visitor center at NASA Stennis Space Center near Bay St. Louis, Miss. The SSC employees and FIRST Robotics volunteer mentors are standing near a mock-up of the playing field for the FIRST Robotics' 2007 `Rack n' Roll' challenge. Roughly 300 students and adult volunteers - representing 29 high schools from four states - attended the kickoff to hear the rules of `Rack n' Roll.' The teams will spend the next six weeks building and programming robots from parts kits they received Saturday, then battle their creations at regional spring competitions in New Orleans, Houston, Atlanta and other cities around the nation. FIRST aims to inspire students in the pursuit of engineering and technology studies and careers.
1999-03-06
Four robots vie for position on the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. Powered by 12-volt batteries and operated by remote control, the robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Student teams, shown behind protective walls, play defense by taking away competitors' pillows and generally harassing opposing machines. Two of the robots have lifted their caches of pillows above the field, a movement which earns them points. Along with the volunteer referees, at the edge of the playing field, judges at right watch the action. FIRST is a nonprofit organization, For Inspiration and Recognition of Science and Technology. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Mindstorms Robots and the Application of Cognitive Load Theory in Introductory Programming
ERIC Educational Resources Information Center
Mason, Raina; Cooper, Graham
2013-01-01
This paper reports on a series of introductory programming workshops, initially targeting female high school students, which utilised Lego Mindstorms robots. Cognitive load theory (CLT) was applied to the instructional design of the workshops, and a controlled experiment was also conducted investigating aspects of the interface. Results indicated…
ERIC Educational Resources Information Center
Burleson, Winslow S.; Harlow, Danielle B.; Nilsen, Katherine J.; Perlin, Ken; Freed, Natalie; Jensen, Camilla Nørgaard; Lahey, Byron; Lu, Patrick; Muldner, Kasia
2018-01-01
As computational thinking becomes increasingly important for children to learn, we must develop interfaces that leverage the ways that young children learn to provide opportunities for them to develop these skills. Active Learning Environments with Robotic Tangibles (ALERT) and Robopad, an analogous on-screen virtual spatial programming…
Hiding the system from the user: Moving from complex mental models to elegant metaphors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David J. Bruemmer
2007-08-01
In previous work, increased complexity of robot behaviors and the accompanying interface design often led to operator confusion and/or a fight for control between the robot and operator. We believe the reason for the conflict was that the design of the interface and interactions presented too much of the underlying robot design model to the operator. Since the design model includes the implementation of sensors, behaviors, and sophisticated algorithms, the result was that the operator’s cognitive efforts were focused on understanding the design of the robot system as opposed to focusing on the task at hand. This paper illustrates how this very problem emerged at the INL and how the implementation of new metaphors for interaction has allowed us to hide the design model from the user and allow the user to focus more on the task at hand. Supporting the user’s focus on the task rather than on the design model allows increased use of the system and significant performance improvement in a search task with novice users.
Six axis force feedback input device
NASA Technical Reports Server (NTRS)
Ohm, Timothy (Inventor)
1998-01-01
The present invention is a low friction, low inertia, six-axis force feedback input device comprising an arm with double-jointed, tendon-driven revolute joints, a decoupled tendon-driven wrist, and a base with encoders and motors. The input device functions as a master robot manipulator of a microsurgical teleoperated robot system including a slave robot manipulator coupled to an amplifier chassis, which is coupled to a control chassis, which is coupled to a workstation with a graphical user interface. The amplifier chassis is coupled to the motors of the master robot manipulator and the control chassis is coupled to the encoders of the master robot manipulator. A force feedback can be applied to the input device and can be generated from the slave robot to enable a user to operate the slave robot via the input device without physically viewing the slave robot. Also, the force feedback can be generated from the workstation to represent fictitious forces to constrain the input device's control of the slave robot to be within imaginary predetermined boundaries.
Brain-computer interfaces in medicine.
Shih, Jerry J; Krusienski, Dean J; Wolpaw, Jonathan R
2012-03-01
Brain-computer interfaces (BCIs) acquire brain signals, analyze them, and translate them into commands that are relayed to output devices that carry out desired actions. BCIs do not use normal neuromuscular output pathways. The main goal of BCI is to replace or restore useful function to people disabled by neuromuscular disorders such as amyotrophic lateral sclerosis, cerebral palsy, stroke, or spinal cord injury. From initial demonstrations of electroencephalography-based spelling and single-neuron-based device control, researchers have gone on to use electroencephalographic, intracortical, electrocorticographic, and other brain signals for increasingly complex control of cursors, robotic arms, prostheses, wheelchairs, and other devices. Brain-computer interfaces may also prove useful for rehabilitation after stroke and for other disorders. In the future, they might augment the performance of surgeons or other medical professionals. Brain-computer interface technology is the focus of a rapidly growing research and development enterprise that is greatly exciting scientists, engineers, clinicians, and the public in general. Its future achievements will depend on advances in 3 crucial areas. Brain-computer interfaces need signal-acquisition hardware that is convenient, portable, safe, and able to function in all environments. Brain-computer interface systems need to be validated in long-term studies of real-world use by people with severe disabilities, and effective and viable models for their widespread dissemination must be implemented. Finally, the day-to-day and moment-to-moment reliability of BCI performance must be improved so that it approaches the reliability of natural muscle-based function. Copyright © 2012 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.
2014-05-23
CAPE CANAVERAL, Fla. -- Kennedy Space Center engineer Marc Seibert presents the Communication Award to the University of New Hampshire team members during NASA's 2014 Robotic Mining Competition award ceremony inside the Space Shuttle Atlantis attraction at the Kennedy Space Center Visitor Complex in Florida. The team moved 10 kilograms of simulated Martian soil with its robot while using the least amount of communication power. More than 35 teams from colleges and universities around the U.S. designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. The competition includes on-site mining, writing a systems engineering paper, performing outreach projects for K-12 students, slide presentation and demonstrations, and team spirit. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Kim Shiflett
Software development to support sensor control of robot arc welding
NASA Technical Reports Server (NTRS)
Silas, F. R., Jr.
1986-01-01
The development of software for a Digital Equipment Corporation MINC-23 Laboratory Computer to provide functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.
ERIC Educational Resources Information Center
Cruz-Martin, A.; Fernandez-Madrigal, J. A.; Galindo, C.; Gonzalez-Jimenez, J.; Stockmans-Daou, C.; Blanco-Claraco, J. L.
2012-01-01
LEGO Mindstorms NXT robots are being increasingly used in undergraduate courses, mostly in robotics-related subjects. But other engineering topics, like the ones found in data acquisition, control and real-time subjects, also have difficult concepts that can be well understood only with good lab exercises. Such exercises require physical…
NASA Technical Reports Server (NTRS)
Murphy, Gloria A.
2010-01-01
Embry Riddle Aeronautical University's Daytona Beach Campus Lunabotics Team took the opportunity to share the love of space, engineering and technology through the educational outreach portion of the competition. Through visits to elementary schools and high schools, and through support of science fairs and robotics competitions, younger generations were introduced to space, engineering and robotics. This report documents the outreach activities of team Aether.
ERIC Educational Resources Information Center
Bianco, Andrew S.
2014-01-01
All technology educators have favorite lessons and projects that they most desire to teach. Many teachers might ask why teach robotics when there are many other concepts to cover with the students? The answer to this question is to engage students in science, technology, engineering, and math (commonly referred to as STEM) concepts. In order for…
Kotov and Williams with SSRMS arm training session in Node 1 / Unity module
2007-04-18
ISS014-E-19587 (17 April 2007) --- Cosmonaut Oleg V. Kotov (foreground), Expedition 15 flight engineer representing Russia's Federal Space Agency, and astronaut Sunita L. Williams, flight engineer, participate in a Space Station Remote Manipulator System (SSRMS) training session using the Robotic Onboard Trainer (ROBOT) simulator in the Unity node of the International Space Station.
2016-06-14
Nature is a major source of inspiration for robotics and aerospace engineering, giving rise to biologically inspired structures. Tensegrity robots mimic a structure similar to muscles and bones to produce a robust three-dimensional skeletal structure that is able to adapt. Vytas SunSpiral will present his work on biologically inspired robotics for advancing NASA space exploration missions.
Kampmann, Peter; Kirchner, Frank
2014-01-01
With the increasing complexity of robotic missions and the development towards long-term autonomous systems, the need for multi-modal sensing of the environment increases. Until now, the use of tactile sensor systems has been mostly based on sensing one modality of forces in the robotic end-effector. The use of a multi-modal tactile sensory system is motivated, combining static and dynamic force sensor arrays with an absolute force measurement system. This publication focuses on the development of a compact sensor interface for a fiber-optic sensor array, as optical measurement principles tend to have bulky interfaces. Mechanical, electrical and software approaches are combined to realize an integrated structure that provides decentralized pre-processing of the tactile measurements. Local behaviors are implemented using this setup to show the effectiveness of this approach. PMID:24743158
Towards a new modality-independent interface for a robotic wheelchair.
Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo
2014-05-01
This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic, unsupervised navigation system. It provides the flexibility to choose different modalities to command the wheelchair, in addition to being suitable for people with different levels of disability. Users can command the wheelchair with eye blinks, eye movements, head movements, by sip-and-puff, and through brain signals. The wheelchair can also operate like an auto-guided vehicle, following metallic tapes, or in an autonomous way. The system is provided with an easy-to-use and flexible graphical user interface onboard a personal digital assistant, which allows users to choose the commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.
Human-Robot Control Strategies for the NASA/DARPA Robonaut
NASA Technical Reports Server (NTRS)
Diftler, M. A.; Culbert, Chris J.; Ambrose, Robert O.; Huber, E.; Bluethmann, W. J.
2003-01-01
The Robotic Systems Technology Branch at the NASA Johnson Space Center (JSC) is currently developing robot systems to reduce the Extra-Vehicular Activity (EVA) and planetary exploration burden on astronauts. One such system, Robonaut, is capable of interfacing with external Space Station systems that currently have only human interfaces. Robonaut is human scale, anthropomorphic, and designed to approach the dexterity of a space-suited astronaut. Robonaut can perform numerous human rated tasks, including actuating tether hooks, manipulating flexible materials, soldering wires, grasping handrails to move along space station mockups, and mating connectors. More recently, developments in autonomous control and perception for Robonaut have enabled dexterous, real-time man-machine interaction. Robonaut is now capable of acting as a practical autonomous assistant to the human, providing and accepting tools by reacting to body language. A versatile, vision-based algorithm for matching range silhouettes is used for monitoring human activity as well as estimating tool pose.
Reusable science tools for analog exploration missions: xGDS Web Tools, VERVE, and Gigapan Voyage
NASA Astrophysics Data System (ADS)
Lee, Susan Y.; Lees, David; Cohen, Tamar; Allan, Mark; Deans, Matthew; Morse, Theodore; Park, Eric; Smith, Trey
2013-10-01
The Exploration Ground Data Systems (xGDS) project led by the Intelligent Robotics Group (IRG) at NASA Ames Research Center creates software tools to support multiple NASA-led planetary analog field experiments. The two primary tools that fall under the xGDS umbrella are the xGDS Web Tools (xGDS-WT) and the Visual Environment for Remote Virtual Exploration (VERVE). IRG has also developed a hardware and software system that is closely integrated with our xGDS tools and is used in multiple field experiments, called Gigapan Voyage. xGDS-WT, VERVE, and Gigapan Voyage are examples of IRG projects that improve the ratio of science return versus development effort by creating generic and reusable tools that leverage existing technologies in both hardware and software. xGDS Web Tools provides software for gathering and organizing mission data for science and engineering operations, including tools for planning traverses, monitoring autonomous or piloted vehicles, visualization, documentation, analysis, and search. VERVE provides high-performance three-dimensional (3D) user interfaces used by scientists, robot operators, and mission planners to visualize robot data in real time. Gigapan Voyage is a gigapixel image capturing and processing tool that improves situational awareness and scientific exploration in human and robotic analog missions. All of these technologies emphasize software reuse and leverage open-source and/or commercial-off-the-shelf tools to greatly improve the utility and reduce the development and operational cost of future similar technologies. Over the past several years these technologies have been used in many NASA-led robotic field campaigns including the Desert Research and Technology Studies (DRATS), the Pavilion Lake Research Project (PLRP), the K10 Robotic Follow-Up tests, and most recently the NASA Extreme Environment Mission Operations (NEEMO) field experiments. A major objective of these joint robot and crew experiments is to improve NASA's understanding of how to most effectively execute and increase science return from exploration missions. This paper focuses on an integrated suite of xGDS software and compatible hardware tools: xGDS Web Tools, VERVE, and Gigapan Voyage; how they are used; and the design decisions that were made to allow them to be easily developed, integrated, tested, and reused by multiple NASA field experiments and robotic platforms.
A scanning laser rangefinder for a robotic vehicle
NASA Technical Reports Server (NTRS)
Lewis, R. A.; Johnston, A. R.
1977-01-01
A scanning Laser Rangefinder (LRF) which operates in conjunction with a minicomputer as part of a robotic vehicle is described. The description, in sufficient detail for replication, modification, and maintenance, includes both hardware and software. Also included is a discussion of functional requirements relative to a detailing of the instrument and its performance, a summary of the robot system in which the LRF functions, the software organization, interfaces and description, and the applications to which the LRF has been put.
Pandora’s Box: Lethally-Armed Ground Robots in Operations in Iraq and Afghanistan
2010-10-27
Others debate whether Isaac Asimov's famous Three Laws of Robotics (featured in his book I, Robot and the movie of the same name) could be applied in... http://www.tgdaily.com/hardware-features/43441-engineers-rewrite-asimovs-three-laws (accessed 7 September 2010).
RoMPS concept review automatic control of space robot
NASA Technical Reports Server (NTRS)
1991-01-01
The Robot operated Material Processing in Space (RoMPS) experiment is being performed to explore the marriage of two emerging space commercialization technologies: materials processing in microgravity and robotics. This concept review presents engineering drawings and limited technical descriptions of the RoMPS programs' electrical and software systems.
Developing Creative Behavior in Elementary School Students with Robotics
ERIC Educational Resources Information Center
Nemiro, Jill; Larriva, Cesar; Jawaharlal, Mariappan
2017-01-01
The School Robotics Initiative (SRI), a problem-based robotics program for elementary school students, was developed with the objective of reaching students early on to instill an interest in Science, Technology, Engineering, and Math disciplines. The purpose of this exploratory, observational study was to examine how the SRI fosters student…
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
Liu, Ju-Chi; Chou, Hung-Chyun; Chen, Chien-Hsiu; Lin, Yi-Tseng; Kuo, Chung-Hsien
2016-01-01
A highly efficient time-shift correlation algorithm was proposed to deal with the peak-time uncertainty of the P300 evoked potential for a P300-based brain-computer interface (BCI). The time-shift correlation series data were used as the input nodes of an artificial neural network (ANN), and the classification of four LED visual stimuli was selected as the output node. Two operating modes, fast-recognition mode (FM) and accuracy-recognition mode (AM), were realized. The proposed BCI system was implemented on an embedded system for commanding an adult-size humanoid robot, and its performance was evaluated by investigating the ground-truth trajectories of the robot. When the humanoid robot walked in a spacious area, the FM was used to control it with a higher information transfer rate (ITR). When the robot walked in a crowded area, the AM was used for high recognition accuracy to reduce the risk of collision. The experimental results showed that, over 100 trials, the accuracy rate of the FM was 87.8% and the average ITR was 52.73 bits/min. The accuracy rate improved to 92% for the AM, while the average ITR decreased to 31.27 bits/min due to the stricter recognition constraints.
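The following sketch illustrates the time-shift correlation step on synthetic data (the paper's exact template, lags, and sampling rate are not given in the abstract): an epoch is correlated with a P300 template at several lags, absorbing peak-latency jitter, and the resulting correlation series would then feed the ANN.

```python
import numpy as np

fs = 250                                      # sampling rate [Hz] (assumed)
t = np.arange(0, 0.8, 1.0 / fs)               # one 800 ms epoch
template = np.exp(-0.5 * ((t - 0.30) / 0.05) ** 2)   # idealized P300 at 300 ms

def timeshift_correlation(epoch, template, shifts):
    """Normalized correlation of the epoch vs. the template at each lag (samples)."""
    feats = []
    for s in shifts:
        feats.append(np.corrcoef(epoch, np.roll(template, s))[0, 1])
    return np.array(feats)

# Synthetic epoch whose P300 peaks 40 ms late, plus noise:
epoch = np.exp(-0.5 * ((t - 0.34) / 0.05) ** 2) + 0.3 * np.random.randn(t.size)
shifts = range(-15, 16, 5)                    # lags from -60 ms to +60 ms
feats = timeshift_correlation(epoch, template, shifts)
print(dict(zip([s / fs for s in shifts], np.round(feats, 2))))
```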
Cortical Spiking Network Interfaced with Virtual Musculoskeletal Arm and Robotic Arm.
Dura-Bernal, Salvador; Zhou, Xianlian; Neymotin, Samuel A; Przekwas, Andrzej; Francis, Joseph T; Lytton, William W
2015-01-01
Embedding computational models in the physical world is a critical step towards constraining their behavior and building practical applications. Here we aim to drive a realistic musculoskeletal arm model using a biomimetic cortical spiking model, and make a robot arm reproduce the same trajectories in real time. Our cortical model consisted of a 3-layered cortex, composed of several hundred spiking model-neurons, which display physiologically realistic dynamics. We interconnected the cortical model to a two-joint musculoskeletal model of a human arm, with realistic anatomical and biomechanical properties. The virtual arm received muscle excitations from the neuronal model, and fed back proprioceptive information, forming a closed-loop system. The cortical model was trained using spike timing-dependent reinforcement learning to drive the virtual arm in a 2D reaching task. Limb position was used to simultaneously control a robot arm using an improved network interface. Virtual arm muscle activations responded to motoneuron firing rates, with virtual arm muscle lengths encoded via population coding in the proprioceptive population. After training, the virtual arm performed reaching movements which were smoother and more realistic than those obtained using a simplistic arm model. This system provided access to both spiking network properties and to arm biophysical properties, including muscle forces. The use of a musculoskeletal virtual arm and the improved control system allowed the robot arm to perform movements which were smoother than those reported in our previous paper using a simplistic arm. This work provides a novel approach consisting of bidirectionally connecting a cortical model to a realistic virtual arm, and using the system output to drive a robotic arm in real time. Our techniques are applicable to the future development of brain neuroprosthetic control systems, and may enable enhanced brain-machine interfaces with the possibility for finer control of limb prosthetics.
NASA Astrophysics Data System (ADS)
Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo
2017-06-01
The authors previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need to use any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data, without using any commercially provided CAM system. The STL format represents curved surface geometry as a mesh of triangles. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. A smart spline interpolation method is then proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
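The abstract does not specify the spline family, so the sketch below uses Catmull-Rom interpolation, one standard choice, to convey the smoothing step: coarse CLS waypoints are interpolated by a C1-continuous curve that the robot can traverse smoothly. The waypoint values are illustrative.

```python
import numpy as np

def catmull_rom(points, samples_per_seg=10):
    """Interpolate a polyline of 3-D cutter locations with Catmull-Rom splines."""
    P = np.asarray(points, dtype=float)
    P = np.vstack([P[0], P, P[-1]])            # duplicate endpoints for tangents
    out = []
    for i in range(1, len(P) - 2):
        p0, p1, p2, p3 = P[i - 1], P[i], P[i + 1], P[i + 2]
        for t in np.linspace(0.0, 1.0, samples_per_seg, endpoint=False):
            out.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3))
    out.append(P[-2])                          # include the final waypoint
    return np.array(out)

# Coarse zigzag CLS waypoints (x, y, z), purely illustrative:
coarse = [(0, 0, 0), (10, 0, -1), (10, 5, -1), (0, 5, -2), (0, 10, -2)]
smooth = catmull_rom(coarse)
print(smooth.shape)      # (41, 3): 4 segments x 10 samples + final point
```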
Google glass-based remote control of a mobile robot
NASA Astrophysics Data System (ADS)
Yu, Song; Wen, Xi; Li, Wei; Chen, Genshe
2016-05-01
In this paper, we present an approach to remote control of a mobile robot via Google Glass, a compact, multi-function wearable device. Google Glass provides a new human-machine interface (HMI) for controlling a robot without the need for a regular computer monitor, because its micro projector can display live video of the robot's surroundings. To do this, we first develop a protocol to establish a WI-FI connection between Google Glass and the robot, and then implement five types of robot behaviors: Moving Forward, Turning Left, Turning Right, Taking Pause, and Moving Backward, which are controlled by sliding and clicking the touchpad located on the right side of the temple. To demonstrate the effectiveness of the proposed Google Glass-based remote control system, we navigate a virtual Surveyor robot through a maze. Experimental results demonstrate that the proposed control system achieves the desired performance.
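The wire protocol itself is not described in the abstract; the sketch below shows one plausible mapping from touchpad gestures to the five behaviors over a TCP socket, with a stand-in robot server. Gesture names, port number, and command strings are assumptions.

```python
import socket, threading, time

GESTURE_TO_CMD = {
    "SWIPE_FORWARD": "MOVE_FORWARD",
    "SWIPE_BACKWARD": "MOVE_BACKWARD",
    "SWIPE_LEFT": "TURN_LEFT",
    "SWIPE_RIGHT": "TURN_RIGHT",
    "TAP": "PAUSE",
}

def robot_server(host="127.0.0.1", port=9750):
    """Stand-in for the robot: print each newline-delimited command it receives."""
    with socket.create_server((host, port)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("r") as f:
            for line in f:
                print("robot executes:", line.strip())

threading.Thread(target=robot_server, daemon=True).start()
time.sleep(0.2)                                # let the server start listening

# Stand-in for the Glass side: translate touchpad gestures into commands.
with socket.create_connection(("127.0.0.1", 9750)) as s:
    for gesture in ["SWIPE_FORWARD", "SWIPE_LEFT", "TAP"]:
        s.sendall((GESTURE_TO_CMD[gesture] + "\n").encode())
time.sleep(0.2)                                # let the server print its output
```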
NASA Astrophysics Data System (ADS)
Wedeking, Gregory A.; Zierer, Joseph J.; Jackson, John R.
2010-07-01
The University of Texas Center for Electromechanics (UT-CEM) is making a major upgrade to the robotic tracking system on the Hobby-Eberly Telescope (HET) as part of the Wide Field Upgrade (WFU). The upgrade centers on a seven-fold increase in payload and necessitated a complete redesign of all tracker supporting structure and motion control systems, including the tracker bridge, ten drive systems, carriage frames, a hexapod, and many other subsystems. The cost and sensitivity of the scientific payload, coupled with the tracker system mass increase, necessitated major upgrades to personnel and hardware safety systems. To optimize the kinematic design of the entire tracker, UT-CEM developed novel uses of constraints and drivers to interface with a commercially available CAD package (SolidWorks). For example, to optimize volume usage and minimize obscuration, the CAD software was exercised to accurately determine the tracker/hexapod operational space needed to meet science requirements. To verify hexapod controller models, actuator travel requirements were graphically measured and compared to the well-defined equations of motion for Stewart platforms. To ensure critical hardware safety during various failure modes, UT-CEM engineers developed Visual Basic drivers that interface with the CAD software and quickly tabulate distance measurements between critical pieces of optical hardware and adjacent components for thousands of possible hexapod configurations. These advances and techniques, applicable to any challenging robotic system design, are documented and describe new ways to use commercially available software tools to more clearly define hardware requirements and help ensure safe operation.
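The actuator-travel check mentioned above rests on the standard Stewart-platform inverse kinematics: each leg length is the distance between its base anchor and its platform anchor after the commanded pose (translation p, rotation R) is applied, l_i = ||p + R b_i - a_i||. A minimal numeric sketch follows; the anchor geometry is illustrative, not the HET hexapod's.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the vertical axis (other axes omitted for brevity)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def actuator_lengths(p, R, base_pts, plat_pts):
    """Inverse kinematics: required leg lengths for pose (p, R)."""
    return np.linalg.norm(p + plat_pts @ R.T - base_pts, axis=1)

# Six anchors evenly spaced on circles (radius 1.0 m base, 0.6 m platform):
ang = np.deg2rad(np.arange(0, 360, 60))
base_pts = np.column_stack([np.cos(ang), np.sin(ang), np.zeros(6)])
plat_pts = 0.6 * np.column_stack([np.cos(ang), np.sin(ang), np.zeros(6)])

pose_p = np.array([0.02, -0.01, 0.8])          # platform offset at 0.8 m height
lengths = actuator_lengths(pose_p, rot_z(np.deg2rad(3)), base_pts, plat_pts)
print(np.round(lengths, 4))                    # one travel value per leg
```

Sweeping this over thousands of candidate poses is exactly the kind of tabulation the Visual Basic drivers described above automate.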
Mechatronic design of haptic forceps for robotic surgery.
Rizun, P; Gunn, D; Cox, B; Sutherland, G
2006-12-01
Haptic feedback increases operator performance and comfort during telerobotic manipulation. Feedback of grasping pressure is critical in many microsurgical tasks, yet no haptic interface for surgical tools is commercially available. Literature on the psychophysics of touch was reviewed to define the spectrum of human touch perception and the fidelity requirements of an ideal haptic interface. Mechanical design and control literature was reviewed to translate the psychophysical requirements to engineering specification. High-fidelity haptic forceps were then developed through an iterative process between engineering and surgery. The forceps are a modular device that integrate with a haptic hand controller to add force feedback for tool actuation in telerobotic or virtual surgery. Their overall length is 153 mm and their mass is 125 g. A contact-free voice coil actuator generates force feedback at frequencies up to 800 Hz. Maximum force output is 6 N (2 N continuous) and the force resolution is 4 mN. The forceps employ a contact-free magnetic position sensor as well as micro-machined accelerometers to measure opening/closing acceleration. Position resolution is 0.6 µm with 1.3 µm RMS noise. The forceps can simulate stiffness greater than 20 N/mm or impedances smaller than 15 g with no noticeable haptic artifacts or friction. As telerobotic surgery evolves, haptics will play an increasingly important role. Copyright 2006 John Wiley & Sons, Ltd.
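As a rough sketch of the virtual-stiffness rendering such a device performs (the paper's actual control law is not given in the abstract), the loop below commands a voice-coil force F = -k(x - x_wall) - b·v once the jaw squeeze crosses a virtual tissue boundary, clamped to the 6 N actuator limit. The wall stiffness here is deliberately far below the device's 20 N/mm capability so that the naive explicit integration stays stable; all gains and the jaw model are assumptions.

```python
k_wall = 2_000.0   # virtual tissue stiffness [N/m]; well below 20 N/mm so the
                   # simple integration below remains stable
b_wall = 5.0       # virtual damping [N*s/m]
x_wall = 0.004     # virtual tissue surface at 4 mm of squeeze
f_user = 1.0       # constant squeeze force applied by the operator [N]
dt = 1.0 / 800.0   # one tick of the 800 Hz feedback loop mentioned above

x, v, m = 0.0, 0.0, 0.015     # squeeze displacement [m], velocity, moving mass [kg]
for _ in range(800):          # simulate 1 s of squeezing into the virtual tissue
    f_fb = -k_wall * (x - x_wall) - b_wall * v if x > x_wall else 0.0
    f_fb = max(min(f_fb, 6.0), -6.0)   # clamp to the 6 N maximum force output
    v += dt * (f_user + f_fb) / m      # semi-implicit Euler on the jaw dynamics
    x += dt * v
print(f"settled at {x * 1000:.2f} mm (expect 4 mm + 1 N / 2 N/mm = 4.5 mm)")
```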
Systems engineering interfaces: A model based approach
NASA Astrophysics Data System (ADS)
Fosse, E.; Delp, C. L.
The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.
NASA Technical Reports Server (NTRS)
Erickson, Jon D. (Editor)
1992-01-01
The present volume on cooperative intelligent robotics in space discusses sensing and perception, Space Station Freedom robotics, cooperative human/intelligent robot teams, and intelligent space robotics. Attention is given to space robotics reasoning and control, ground-based space applications, intelligent space robotics architectures, free-flying orbital space robotics, and cooperative intelligent robotics in space exploration. Topics addressed include proportional proximity sensing for telerobots using coherent laser radar, ground operation of the mobile servicing system on Space Station Freedom, teleprogramming a cooperative space robotic workcell for space stations, and knowledge-based task planning for the special-purpose dextrous manipulator. Also discussed are dimensions of complexity in learning from interactive instruction, an overview of the dynamic predictive architecture for robotic assistants, recent developments at the Goddard engineering testbed, and parallel fault-tolerant robot control.
An Interactive Astronaut-Robot System with Gesture Control
Liu, Jinguo; Luo, Yifan; Ju, Zhaojie
2016-01-01
Human-robot interaction (HRI) plays an important role in future planetary exploration mission, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. Support vector machine (SVM) is employed to recognize hand gestures and particle swarm optimization (PSO) algorithm is used to optimize the parameters of SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system. PMID:27190503
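A minimal sketch of the PSO-tuned SVM technique named above: particles search the (C, gamma) plane to maximize cross-validated accuracy. The paper's feature set and swarm settings are not given in the abstract, so synthetic data and illustrative bounds stand in for the data-glove features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic 4-class stand-in for gesture feature vectors:
X, y = make_classification(n_samples=300, n_features=15, n_classes=4,
                           n_informative=8, random_state=0)

def fitness(params):
    """Cross-validated accuracy of an SVM with log10-encoded (C, gamma)."""
    C, gamma = 10.0 ** params
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

rng = np.random.default_rng(0)
n, lo, hi = 12, np.array([-2.0, -4.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, (n, 2)); vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()]

for _ in range(15):                      # standard PSO velocity/position update
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()]

print("best log10(C), log10(gamma):", gbest, "accuracy:", pbest_f.max())
```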
Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos
2007-01-01
The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D)-MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.
NASA Astrophysics Data System (ADS)
Chen, ChuXin; Trivedi, Mohan M.
1992-03-01
This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Performance Evaluation and Software Design for EVA Robotic Assistant Stereo Vision Heads
NASA Technical Reports Server (NTRS)
DiPaolo, Daniel
2003-01-01
The purpose of this project was to aid the EVA Robotic Assistant project by evaluating and designing the necessary interfaces for two stereo vision heads - the TracLabs Biclops pan-tilt-verge head, and the Helpmate Zebra pan-tilt-verge head. The first half of the project consisted of designing the necessary software interface so that the other modules of the EVA Robotic Assistant had proper access to all of the functionalities offered by each of the stereo vision heads. This half took most of the project time, due to a lack of ready-made CORBA drivers for either of the heads. Once this was overcome, the evaluation stage of the project began. The second half of the project was to take these interfaces and to evaluate each of the stereo vision heads in terms of usefulness to the project. In the key project areas such as stability and reliability, the Zebra pan-tilt-verge head came out on top. However, the Biclops did have many advantages over the Zebra, such as lower power consumption, faster communications, and a simpler, cleaner API. Overall, the Biclops pan-tilt-verge head outperformed the Zebra pan-tilt-verge head.
Creating the brain and interacting with the brain: an integrated approach to understanding the brain
Morimoto, Jun; Kawato, Mitsuo
2015-01-01
In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568
2008-07-15
This photograph shows the rasp protruding from the back of the scoop on the engineering model of NASA's Phoenix Mars Lander Robotic Arm in the Payload Interoperability Testbed at the University of Arizona, Tucson.
Robotic Mining Competition Awards Ceremony
2017-05-26
Inside the Apollo-Saturn V Center at the Kennedy Space Center Visitor Complex in Florida, Pat Simpkins, director of the Engineering Directorate at Kennedy Space Center, speaks to the teams during the award ceremony for NASA's 8th Annual Robotic Mining Competition. More than 40 student teams from colleges and universities around the U.S. used their uniquely designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participated in other competition requirements, May 22-26, at the visitor complex. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
Robotic Mining Competition Awards Ceremony
2017-05-26
Kurt Leucht, a NASA engineer and event emcee, welcomes guests to the awards ceremony for NASA's 8th Annual Robotic Mining Competition in the Apollo-Saturn V Center at the Kennedy Space Center Visitor Complex in Florida. More than 40 student teams from colleges and universities around the U.S. used their uniquely designed mining robots to dig in a supersized sandbox filled with BP-1, or simulated Martian soil, and participated in other competition requirements, May 22-26, at the visitor complex. The Robotic Mining Competition is a NASA Human Exploration and Operations Mission Directorate project designed to encourage students in science, technology, engineering and math, or STEM fields. The project provides a competitive environment to foster innovative ideas and solutions that could be used on NASA's Journey to Mars.
1999-03-06
Student teams behind protective walls operate remote controls to maneuver their robots around the playing field during the 1999 FIRST Southeastern Regional robotic competition held at KSC. The robotic gladiators spent two minutes each trying to grab, claw and hoist large, satin pillows onto their machines. Teams played defense by taking away competitors' pillows and generally harassing opposing machines. On the side of the field are the judges, including (far left) Deputy Director for Launch and Payload Processing Loren Shriver and former KSC Director of Shuttle Processing Robert Sieck. A giant screen TV displays the action on the field. The competition comprised 27 teams, pairing high school students with engineer mentors and corporations. The FIRST robotics competition is designed to provide students with a hands-on, inside look at engineering and other professional careers.
Meal assistance robot with ultrasonic motor
NASA Astrophysics Data System (ADS)
Kodani, Yasuhiro; Tanaka, Kanya; Wakasa, Yuji; Akashi, Takuya; Oka, Masato
2007-12-01
In this paper, we construct a robot that helps people with disabilities of the upper extremities, as well as patients with advanced-stage amyotrophic lateral sclerosis (ALS), to eat using their residual abilities. In particular, many patients with advanced-stage ALS use a pacemaker and must avoid electromagnetic waves. We therefore adopt ultrasonic motors, which do not generate electromagnetic waves, as the driving sources. We also address shortcomings of conventional meal-assistance robots. Moreover, we introduce an eye-movement interface so that people without the use of their extremities can also operate our system: the user operates the robot not with the hands or feet but with eye movements.
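To illustrate the eye-movement interface idea, the sketch below maps a normalized gaze offset to discrete commands. The function name, dead-zone threshold, and command set are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of an eye-movement command interface, assuming a gaze
# tracker that reports normalized pupil displacement (x, y) in [-1, 1].
# Names and thresholds are illustrative, not from the paper.

def gaze_to_command(x: float, y: float, dead_zone: float = 0.3) -> str:
    """Map a normalized gaze offset to a discrete robot command."""
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "HOLD"            # gaze near center: keep current pose
    if abs(x) >= abs(y):         # dominant horizontal movement
        return "SPOON_LEFT" if x < 0 else "SPOON_RIGHT"
    return "SPOON_UP" if y > 0 else "SPOON_DOWN"

# Example: a glance to the right selects the dish on the right.
print(gaze_to_command(0.8, 0.1))   # -> SPOON_RIGHT
```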
Development of the auto-steering software and equipment technology (ASSET)
NASA Astrophysics Data System (ADS)
McKay, Mark D.; Anderson, Matthew O.; Wadsworth, Derek C.
2003-09-01
The Idaho National Engineering and Environmental Laboratory (INEEL), through collaboration with INSAT Co., has developed a low-cost robotic auto-steering system for parallel contour swathing. The capability to perform parallel contour swathing while minimizing "skip" and "overlap" is a necessity for cost-effective crop management within precision agriculture. Current methods for performing parallel contour swathing consist of using a Differential Global Positioning System (DGPS) coupled with a light bar system to prompt an operator where to steer. The complexity of operating heavy equipment, ensuring proper chemical mixture and application, and steering to a light bar indicator can be overwhelming to an operator. To simplify these tasks, an inexpensive robotic steering system has been developed and tested on several farming implements. This development leveraged research conducted by the INEEL and Utah State University. The INEEL-INSAT Auto-Steering Software and Equipment Technology provides the following: 1) the ability to drive in a straight line within +/- 2 feet while traveling at least 15 mph, 2) interfaces to a Real Time Kinematic (RTK) DGPS and sub-meter DGPS, 3) safety features such as emergency stop, steering wheel deactivation, computer watchdog deactivation, etc., and 4) a low-cost, field-ready system that is easily adapted to other systems.
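The core of straight-line guidance for parallel swathing is keeping the cross-track error from the current swath line within tolerance. The following minimal sketch, assuming a local planar frame (east/north meters) already derived from DGPS fixes, computes the signed offset from an A-B line and turns it into a clamped proportional steering command; the gain and limits are illustrative, not ASSET's values.

```python
import math

# Minimal sketch of straight-line swathing guidance in a local planar frame.
# Gain and steering limits are illustrative assumptions, not ASSET's values.

def cross_track_error(a, b, p):
    """Signed distance (m) of position p from the swath line through a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    # The 2-D cross product gives the signed perpendicular offset from A-B.
    return ((px - ax) * dy - (py - ay) * dx) / length

def steering_command(xte_m, gain=0.5, max_angle_deg=30.0):
    """Proportional steering: turn back toward the line, clamped to limits."""
    angle = -gain * math.degrees(math.atan(xte_m))
    return max(-max_angle_deg, min(max_angle_deg, angle))

# Example: 0.4 m north of an east-running swath line.
xte = cross_track_error((0.0, 0.0), (100.0, 0.0), (50.0, 0.4))
print(f"offset {xte:.2f} m -> steer {steering_command(xte):.1f} deg")
```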
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with the means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart fundamentally from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
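The Mars-Earth data-rate challenge follows directly from free-space path loss, which grows with the square of distance. The back-of-envelope sketch below, using an assumed X-band downlink and an average Earth-Mars range, shows roughly how much harder a Mars link is than a low-Earth-orbit link; all numbers are illustrative, not from the paper.

```python
import math

# Back-of-envelope sketch of why an adequate Mars-Earth data rate is hard:
# free-space path loss grows with distance squared. All values below are
# illustrative assumptions, not from the paper.

c = 3.0e8                      # speed of light, m/s
freq_hz = 8.4e9                # X-band downlink (assumed)
mars_dist_m = 2.25e11          # ~1.5 AU average Earth-Mars distance

# Free-space path loss in dB: 20*log10(4*pi*d/lambda)
wavelength = c / freq_hz
fspl_db = 20 * math.log10(4 * math.pi * mars_dist_m / wavelength)
print(f"Free-space path loss at Mars range: {fspl_db:.0f} dB")

# Compared with a ~1000 km low Earth orbit link, the Mars link loses roughly
# 107 dB more, which is why deep-space links need large apertures, high
# transmit power, and comparatively low data rates.
leo_db = 20 * math.log10(4 * math.pi * 1.0e6 / wavelength)
print(f"Extra loss vs. LEO: {fspl_db - leo_db:.0f} dB")
```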
NASA Technical Reports Server (NTRS)
Morring, Frank, Jr.
2004-01-01
Robotic technology being developed out of necessity to keep the Hubble Space Telescope operating could also lead to new levels of man-machine teamwork in deep-space exploration down the road, if it survives the near-term scramble for funding. Engineers here who have devoted their NASA careers to the concept of humans servicing the telescope in orbit are planning modifications to International Space Station (ISS) robots that would leave the humans on the ground. The work, forced by post-Columbia flight rules that killed a planned shuttle-servicing mission to Hubble, marks another step in the evolution of robot partners for human space explorers. "Hubble has always been a pathfinder for this agency," says Mike Weiss, Hubble deputy program manager, technical. "When the space station was flown and assembled, Hubble was the pathfinder, not just for modularity, but for operations, for assembly techniques. Exploration is the next step. Things we're going to do on Hubble are going to be applied to exploration. It's not just putting a robot in space. It's operating a robot in space. It's adapting that robot to what needs to be done the next time you're up there."
NASA Technical Reports Server (NTRS)
Heise, James; Hull, Bethanne J.; Bauer, Jonathan; Beougher, Nathan G.; Boe, Caleb; Canahui, Ricardo; Charles, John P.; Cooper, Zachary Davis Job; DeShaw, Mark A.; Fontanella, Luan Gasparetto;
2012-01-01
The Iowa State University team, Team LunaCY, is composed of the following sub-teams: the main student organization, the Lunabotics Club; a senior mechanical engineering design course, ME 415; a senior multidisciplinary design course, ENGR 466; and a senior design course from Wartburg College in Waverly, Iowa. Team LunaCY designed and fabricated ART-E III, Astra Robotic Tractor-Excavator the Third, for the team's third appearance in the NASA Lunabotics Mining Competition. While designing ART-E III, the team had four main goals for this year's competition: to reduce the total weight of the robot, to increase the amount of regolith simulant mined, to reduce dust, and to make ART-E III autonomous. After much design work and research, a final robot design was chosen that achieved all four of Team LunaCY's goals. One change Team LunaCY made this year was to recruit engineering students at Iowa State University's electrical, computer, and software engineering club fest to accomplish the task of making ART-E III autonomous. Team LunaCY chose LabVIEW to program the robot, and various sensors were installed to measure the distance between the robot and its surroundings so that ART-E III could maneuver autonomously. Team LunaCY also built a testing arena in which to test prototypes and ART-E III. To best replicate the competition arena at the Kennedy Space Center, a regolith simulant was made from sand, QuickCrete, and fly ash to cover the floor of the arena. Team LunaCY also installed fans to ventilate the arena and used proper safety attire when working in it. With the additional practice in the testing arena and an innovative robot design, Team LunaCY expects to make a strong appearance at the 2012 NASA Lunabotics Mining Competition.
Ground Fluidization Promotes Rapid Running of a Lightweight Robot
2013-01-01
SCMs) (Wood et al., 2008) have enabled the development of small, lightweight robots (∼10 cm, ∼20 g) (Hoover et al., 2010; Birkmeyer et al., 2009) such...communicated to the controller through a Bluetooth wireless interface. 2.1.2. Model granular media We used 3.0±0.2 mm diameter glass particles (density
Compendium of Abstracts. Volume 2
2010-08-01
researched for various applications such as self-healing and fluid transport. One method of creating these vascular systems is through a process called...Daniel J. Dexterous robotic manipulators that rely on joystick-type interfaces for teleoperation require considerable time and effort to master...and lack an intuitive basis for human-robot interaction. This hampers operator performance, increases cognitive workload, and limits overall
An Embedded Systems Laboratory to Support Rapid Prototyping of Robotics and the Internet of Things
ERIC Educational Resources Information Center
Hamblen, J. O.; van Bekkum, G. M. E.
2013-01-01
This paper describes a new approach for a course and laboratory designed to allow students to develop low-cost prototypes of robotic and other embedded devices that feature Internet connectivity, I/O, networking, a real-time operating system (RTOS), and object-oriented C/C++. The application programming interface (API) libraries provided permit…
2000 FIRST Robotics Competition
NASA Technical Reports Server (NTRS)
Purman, Richard
2000-01-01
The New Horizons Regional Education Center (NHREC) in Hampton, VA sought and received NASA funding to support its participation in the 2000 FIRST Robotics competition. FIRST, Inc. (For Inspiration and Recognition of Science and Technology) is an organization which encourages the application of creative science, math, and computer science principles to solve real-world engineering problems. The FIRST competition is an international engineering contest featuring high school, government, and business partnerships.
2012-06-15
Summer is a time of educational activity at Stennis Space Center. In June 2012, 25 young people age 13-15 attended the annual Astro STARS (Spaceflight, Technology, Astronomy and Robotics at Stennis) camp at the rocket engine test facility. During the five-day camp, participants engaged in hands-on experiences in a variety of areas, including engineering and robotics. On the final day, campers launched model rockets they had assembled.
Assistant Personal Robot (APR): Conception and Application of a Tele-Operated Assisted Living Robot.
Clotet, Eduard; Martínez, Dani; Moreno, Javier; Tresanchez, Marcel; Palacín, Jordi
2016-04-28
This paper presents the technical description, mechanical design, electronic components, software implementation and possible applications of a tele-operated mobile robot designed as an assisted living tool. This robotic concept has been named Assistant Personal Robot (or APR for short) and has been designed as a remotely telecontrolled robotic platform built to provide social and assistive services to elderly people and those with impaired mobility. The APR features a fast high-mobility motion system adapted for tele-operation in plain indoor areas, which incorporates a high-priority collision avoidance procedure. This paper presents the mechanical architecture, electrical fundamentals and software implementation required in order to develop the main functionalities of an assistive robot. The APR uses a tablet in order to implement the basic peer-to-peer videoconference and tele-operation control combined with a tactile graphic user interface. The paper also presents the development of some applications proposed in the framework of an assisted living robot.
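A high-priority collision avoidance procedure of this kind can be thought of as a veto layer between the operator and the motors. The sketch below, with an assumed planar ranger and a hypothetical stop threshold, illustrates the idea; it is not the APR's actual implementation.

```python
# Minimal sketch of a high-priority collision-avoidance override for a
# tele-operated base, assuming a planar ranger (e.g., LIDAR) returning
# distances in meters. Names and the 0.5 m threshold are assumptions,
# not taken from the APR paper.

STOP_DISTANCE_M = 0.5

def safe_velocity(teleop_cmd, ranges):
    """Pass the operator's command through unless an obstacle is too close."""
    v, w = teleop_cmd                      # linear (m/s), angular (rad/s)
    if v > 0 and min(ranges) < STOP_DISTANCE_M:
        return (0.0, w)                    # veto forward motion, allow turning
    return (v, w)

# Example: the operator commands forward motion, but a 0.3 m reading stops it.
print(safe_velocity((0.6, 0.1), [2.4, 0.3, 1.8]))   # -> (0.0, 0.1)
```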
Daluja, Sachin; Golenberg, Lavie; Cao, Alex; Pandya, Abhilash K; Auner, Gregory W; Klein, Michael D
2009-01-01
Robotic surgery has gradually gained acceptance due to its numerous advantages such as tremor filtration, increased dexterity and motion scaling. There remains, however, a significant scope for improvement, especially in the areas of surgeon-robot interface and autonomous procedures. Previous studies have attempted to identify factors affecting a surgeon's performance in a master-slave robotic system by tracking hand movements. These studies relied on conventional optical or magnetic tracking systems, making their use impracticable in the operating room. This study concentrated on building an intrinsic movement capture platform using microcontroller-based hardware wired to a surgical robot. Software was developed to enable tracking and analysis of hand movements while surgical tasks were performed. Movement capture was then applied toward automating movements of the robotic instruments. By emulating control signals, recorded surgical movements were replayed by the robot's end-effectors. Though this work uses a surgical robot as the platform, the ideas and concepts put forward are applicable to telerobotic systems in general.
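The record-and-replay concept is essentially a timestamped log of control samples that is later re-emitted with the original timing. The following sketch illustrates the idea; sample_source and send_to_robot are placeholder hooks for whatever capture and actuation interfaces the platform exposes, and nothing here is the study's actual code.

```python
import time

# Illustrative sketch of record-and-replay of control signals. The two
# callables passed in are placeholders, not the study's real interfaces.

def record(sample_source, duration_s=5.0, period_s=0.01):
    """Capture (timestamp, sample) pairs from a control-signal source."""
    t0, log = time.monotonic(), []
    while (now := time.monotonic()) - t0 < duration_s:
        log.append((now - t0, sample_source()))
        time.sleep(period_s)               # fixed sampling period (assumed)
    return log

def replay(log, send_to_robot):
    """Re-emit recorded samples, preserving the captured inter-sample timing."""
    t0 = time.monotonic()
    for t, sample in log:
        time.sleep(max(0.0, t - (time.monotonic() - t0)))
        send_to_robot(sample)
```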
NASA Astrophysics Data System (ADS)
Tamura, Sho; Maeyama, Shoichi
Rescue robots have been actively developed since the Hanshin-Awaji (Kobe) Earthquake. Recently, rescue robots have also been developed to reduce the risk of secondary disasters in NBC terror attacks and critical accidents. Against this background, a project to develop a mobile RT system for collapsed buildings was started, and this research participates in that project. Image pointing is useful as a control interface for a rescue robot because it allows the robot to be controlled by a simple operation; however, the conventional method cannot work on rough terrain. In this research, we propose a system that controls the robot so that it arrives at the target position on rough terrain. The system is constructed from a method that converts the pointed destination into a vector and a method that controls the 3D-localized robot to follow that vector. Finally, the proposed system is evaluated through remote-control experiments with a mobile robot on a slope, confirming its feasibility.
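The destination-vector idea can be illustrated with a simple steering law: given the robot's localized planar pose and the pointed target, turn toward the target and slow down while the heading error is large. The gains and goal tolerance in the sketch below are assumptions, not the authors' values.

```python
import math

# Hedged sketch of vector following: store the pointed destination in the
# world frame, then steer the localized robot along it. Gains and the goal
# tolerance are illustrative assumptions.

def follow_vector(pose, target, k_ang=1.5, v_max=0.3, goal_tol=0.2):
    """Return (linear, angular) velocity steering pose toward target."""
    x, y, heading = pose
    dx, dy = target[0] - x, target[1] - y
    if math.hypot(dx, dy) < goal_tol:
        return (0.0, 0.0)                       # arrived at destination
    # Heading error wrapped to [-pi, pi]
    err = math.atan2(dy, dx) - heading
    err = math.atan2(math.sin(err), math.cos(err))
    # Slow down while turning sharply; rough terrain favors caution.
    v = v_max * max(0.0, math.cos(err))
    return (v, k_ang * err)

# Example: robot at the origin facing east, target 2 m east and 1 m north.
print(follow_vector((0.0, 0.0, 0.0), (2.0, 1.0)))
```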
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville
2005-01-01
Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). The JTARS charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established comprising recognized leaders in the field of space robotics, including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.
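As a purely illustrative sketch of what an interface standard enables, the abstract base class below defines a minimal contract that any compliant mobile robot could implement, letting heterogeneous systems share information and cooperate. JTARS itself specifies manuals and standards documents, not a Python API; every name here is invented.

```python
from abc import ABC, abstractmethod

# Invented illustration of an interoperability contract: any robot exposing
# these operations can be commanded and monitored by other compliant systems.
# This is not JTARS; JTARS defines standards documents, not this API.

class MobileRobotInterface(ABC):
    @abstractmethod
    def get_pose(self) -> tuple:
        """Return (x, y, heading) in a shared world frame."""

    @abstractmethod
    def set_velocity(self, linear: float, angular: float) -> None:
        """Command base motion in SI units (m/s, rad/s)."""

    @abstractmethod
    def report_status(self) -> dict:
        """Publish health and resource data other systems can consume."""
```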