Sample records for computer integrated robotics

  1. Integrating Mobile Robotics and Vision with Undergraduate Computer Science

    ERIC Educational Resources Information Center

    Cielniak, G.; Bellotto, N.; Duckett, T.

    2013-01-01

    This paper describes the integration of robotics education into an undergraduate Computer Science curriculum. The proposed approach delivers mobile robotics as well as covering the closely related field of Computer Vision and is directly linked to the research conducted at the authors' institution. The paper describes the most relevant details of…

  2. Computer assisted surgery with 3D robot models and visualisation of the telesurgical action.

    PubMed

    Rovetta, A

    2000-01-01

    This paper deals with the support of virtual reality computing in surgical robotics procedures. Computer support gives a direct representation of the surgical theatre, and modelling the procedure as it is planned and carried out promotes a sense of safety and reliability. Robots similar to the ones used by the manufacturing industry can be used with little modification as very effective surgical tools. They have high precision and repeatability and are versatile in integrating with the medical instrumentation. Integrated surgical rooms, with computer- and robot-assisted intervention, are now in operation. The computer serves as a decision-making aid, and the robot works as a very effective tool.

  3. Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    NASA Technical Reports Server (NTRS)

    Zornetzer, Steve; Gage, Douglas

    2005-01-01

    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion.

  4. Computer hardware and software for robotic control

    NASA Technical Reports Server (NTRS)

    Davis, Virgil Leon

    1987-01-01

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based, real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall system.
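
    The supervisory pattern described in this record, with smart distributed subsystems appearing as intelligent peripherals to a coordinating computer, can be illustrated with a short sketch. This is not the KSC implementation; the `Peripheral`/`Supervisor` interface below is hypothetical and exists only to show the coordination idea.

    ```python
    from dataclasses import dataclass


    @dataclass
    class Status:
        name: str
        ready: bool
        detail: str = ""


    class Peripheral:
        """Hypothetical common interface a supervisory computer might expect
        from each smart subsystem (arm controller, vision tracker, ...)."""

        def __init__(self, name: str):
            self.name = name

        def command(self, cmd: str) -> None:
            print(f"{self.name}: executing {cmd!r}")

        def status(self) -> Status:
            return Status(self.name, ready=True)


    class Supervisor:
        """Coordinates subsystems much like the supervisory computer in the record."""

        def __init__(self, peripherals):
            self.peripherals = peripherals

        def run_step(self, plan):
            # Only dispatch when every subsystem reports ready.
            if all(p.status().ready for p in self.peripherals):
                for target, cmd in plan:
                    target.command(cmd)


    arm = Peripheral("robot arm controller")
    vision = Peripheral("vision tracking")
    process = Peripheral("process controller")
    Supervisor([arm, vision, process]).run_step(
        [(vision, "track target"), (arm, "move to tracked pose")]
    )
    ```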

  5. Integration of a sensor based multiple robot environment for space applications: The Johnson Space Center Teleoperator Branch Robotics Laboratory

    NASA Technical Reports Server (NTRS)

    Hwang, James; Campbell, Perry; Ross, Mike; Price, Charles R.; Barron, Don

    1989-01-01

    An integrated operating environment was designed to incorporate three general purpose robots, sensors, and end effectors, including Force/Torque Sensors, Tactile Array sensors, Tactile force sensors, and Force-sensing grippers. The design and implementation of: (1) the teleoperation of a general purpose PUMA robot; (2) an integrated sensor hardware/software system; (3) the force-sensing gripper control; (4) the host computer system for dual Robotic Research arms; and (5) the Ethernet integration are described.

  6. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing.

    PubMed

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-03-06

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user's home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered.

  7. Off the Shelf Cloud Robotics for the Smart Home: Empowering a Wireless Robot through Cloud Computing

    PubMed Central

    Ramírez De La Pinta, Javier; Maestre Torreblanca, José María; Jurado, Isabel; Reyes De Cozar, Sergio

    2017-01-01

    In this paper, we explore the possibilities offered by the integration of home automation systems and service robots. In particular, we examine how advanced computationally expensive services can be provided by using a cloud computing approach to overcome the limitations of the hardware available at the user’s home. To this end, we integrate two wireless low-cost, off-the-shelf systems in this work, namely, the service robot Rovio and the home automation system Z-wave. Cloud computing is used to enhance the capabilities of these systems so that advanced sensing and interaction services based on image processing and voice recognition can be offered. PMID:28272305
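
    The cloud-offloading pattern described in the two records above (capture locally, compute remotely) can be sketched as follows. Neither the endpoint URL nor the response schema comes from the paper; both are hypothetical, the `requests` library is assumed to be available, and the Rovio and Z-wave APIs are not modeled here.

    ```python
    import requests  # assumed available; any HTTP client would do

    CLOUD_ENDPOINT = "https://cloud.example.org/vision/classify"  # hypothetical service


    def classify_snapshot(jpeg_bytes: bytes) -> str:
        """Send a camera frame to a cloud vision service and return its label.

        The heavy image processing runs remotely, so the robot's onboard
        hardware only needs to capture and transmit the frame.
        """
        resp = requests.post(
            CLOUD_ENDPOINT,
            files={"frame": ("frame.jpg", jpeg_bytes, "image/jpeg")},
            timeout=5.0,
        )
        resp.raise_for_status()
        return resp.json().get("label", "unknown")  # hypothetical response schema


    # Usage (with a frame previously grabbed from the robot's camera):
    # label = classify_snapshot(open("frame.jpg", "rb").read())
    # if label == "person":
    #     send_home_automation_event("presence_detected")  # hypothetical Z-wave hook
    ```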

  8. Robot Design

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Martin Marietta Aero and Naval Systems has advanced the CAD art to a very high level at its Robotics Laboratory. One of the company's major projects is construction of a huge Field Material Handling Robot (FMR) for the Army's Human Engineering Lab. Design of the FMR, intended to move heavy and dangerous material such as ammunition, was a triumph of CAD engineering. Separate computer programs modeled the robot's kinematics and dynamics, yielding such parameters as the strength of materials required for each component, the length of the arms, their degrees of freedom, and the power of the hydraulic system needed. The Robotics Lab went a step further and added data enabling computer simulation and animation of the robot's total operational capability under various loading and unloading conditions. NASA's Integrated Analysis Capability (IAC) engineering database program was used. The program contains a series of modules that can stand alone or be integrated with data from sensors or software tools.

  9. New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots

    PubMed Central

    Gonzalez-de-Soto, Mariano; Pajares, Gonzalo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976

  10. New trends in robotics for agriculture: integration and assessment of a real fleet of robots.

    PubMed

    Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo

    2014-01-01

    Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis.
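
    The fully centralized end of the architecture spectrum discussed in these two records (a central computer runs all processes and dispatches work to the units) can be sketched with plain Python queues and threads. The fleet size and task names are illustrative; this is not the RHEA architecture itself.

    ```python
    import queue
    import threading


    class GroundMobileUnit(threading.Thread):
        """Minimal stand-in for one tractor-based unit in a fleet."""

        def __init__(self, name: str, tasks: "queue.Queue[str]"):
            super().__init__(daemon=True)
            self.name = name
            self.tasks = tasks

        def run(self):
            while True:
                task = self.tasks.get()
                if task is None:  # shutdown sentinel
                    break
                print(f"{self.name}: executing {task}")
                self.tasks.task_done()


    # Central computer: one process owns planning and hands out work.
    task_queues = {f"unit-{i}": queue.Queue() for i in range(3)}
    units = [GroundMobileUnit(name, q) for name, q in task_queues.items()]
    for u in units:
        u.start()

    mission = ["treat row 1", "treat row 2", "treat row 3"]
    for unit_name, task in zip(task_queues, mission):
        task_queues[unit_name].put(task)

    for q in task_queues.values():
        q.join()      # wait until the unit reports the task done
        q.put(None)   # then ask it to shut down
    ```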

  11. Using Haptic and Auditory Interaction Tools to Engage Students with Visual Impairments in Robot Programming Activities

    ERIC Educational Resources Information Center

    Howard, A. M.; Park, Chung Hyuk; Remy, S.

    2012-01-01

    The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of core computer-science curriculum at a number of universities. Unfortunately, for students with visual…

  12. Integrating Software Modules For Robot Control

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.; Khosla, Pradeep; Stewart, David B.

    1993-01-01

    Reconfigurable, sensor-based control system uses state variables in systematic integration of reusable control modules. Designed for open-architecture hardware including many general-purpose microprocessors, each having own local memory plus access to global shared memory. Implemented in software as extension of Chimera II real-time operating system. Provides transparent computing mechanism for intertask communication between control modules and generic process-module architecture for multiprocessor realtime computation. Used to control robot arm. Proves useful in variety of other control and robotic applications.
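
    A minimal sketch of the state-variable style of integration described above: reusable modules communicate only through a shared table of state variables, so they can be reconfigured without modifying each other. The classes below are illustrative and are not the Chimera II API.

    ```python
    class StateVarTable:
        """Global shared table through which control modules exchange data."""

        def __init__(self):
            self._vars = {}

        def write(self, name, value):
            self._vars[name] = value

        def read(self, name):
            return self._vars[name]


    class Module:
        """Reusable control module: reads inputs, writes outputs, no direct coupling."""

        inputs: tuple = ()
        outputs: tuple = ()

        def cycle(self, table: StateVarTable):
            raise NotImplementedError


    class JointSensor(Module):
        outputs = ("q_measured",)

        def cycle(self, table):
            table.write("q_measured", 0.42)  # placeholder measurement


    class PDController(Module):
        inputs = ("q_measured", "q_desired")
        outputs = ("torque",)

        def cycle(self, table):
            error = table.read("q_desired") - table.read("q_measured")
            table.write("torque", 5.0 * error)  # proportional term only, for brevity


    table = StateVarTable()
    table.write("q_desired", 1.0)
    for module in (JointSensor(), PDController()):  # fixed execution order per cycle
        module.cycle(table)
    print(table.read("torque"))
    ```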

  13. Systems integration for the Kennedy Space Center (KSC) Robotics Applications Development Laboratory (RADL)

    NASA Technical Reports Server (NTRS)

    Davis, V. Leon; Nordeen, Ross

    1988-01-01

    A laboratory for developing robotics technology for hazardous and repetitive Shuttle and payload processing activities is discussed. An overview of the computer hardware and software responsible for integrating the laboratory systems is given. The center's anthropomorphic robot is placed on a track allowing it to be moved to different stations. Various aspects of the laboratory equipment are described, including industrial robot arm control, smart systems integration, the supervisory computer, programmable process controller, real-time tracking controller, image processing hardware, and control display graphics. Topics of research include: automated loading and unloading of hypergolics for space vehicles and payloads; the use of mobile robotics for security, fire fighting, and hazardous spill operations; nondestructive testing for SRB joint and seal verification; Shuttle Orbiter radiator damage inspection; and Orbiter contour measurements. The possibility of expanding the laboratory in the future is examined.

  14. Integrating surgical robots into the next medical toolkit.

    PubMed

    Lai, Fuji; Entin, Eileen

    2006-01-01

    Surgical robots hold much promise for revolutionizing the field of surgery and improving surgical care. However, despite the potential advantages they offer, there are multiple barriers to adoption and integration into practice that may prevent these systems from realizing their full potential benefit. This study elucidated some of the most salient considerations that need to be addressed for integration of new technologies such as robotic systems into the operating room of the future as it evolves into a complex system of systems. We conducted in-depth interviews with operating room team members and other stakeholders to identify potential barriers in areas of workflow, teamwork, training, clinical acceptance, and human-system interaction. The findings of this study will inform an approach for the design and integration of robotics and related computer-assisted technologies into the next medical toolkit for "computer-enhanced surgery" to improve patient safety and healthcare quality.

  15. Bioinspired decision architectures containing host and microbiome processing units.

    PubMed

    Heyde, K C; Gallagher, P W; Ruder, W C

    2016-09-27

    Biomimetic robots have been used to explore and explain natural phenomena ranging from the coordination of ants to the locomotion of lizards. Here, we developed a series of decision architectures inspired by the information exchange between a host organism and its microbiome. We first modeled the biochemical exchanges of a population of synthetically engineered E. coli. We then built a physical, differential drive robot that contained an integrated, onboard computer vision system. A relay was established between the simulated population of cells and the robot's microcontroller. By placing the robot within a target-containing, two-dimensional arena, we explored how different aspects of the simulated cells and the robot's microcontroller could be integrated to form hybrid decision architectures. We found that distinct decision architectures allow us to develop models of computation with specific strengths such as runtime efficiency or minimal memory allocation. Taken together, our hybrid decision architectures provide a new strategy for developing bioinspired control systems that integrate both living and nonliving components.
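
    The relay between a simulated cell population and a robot microcontroller described above can be sketched as a simple loop in which a cell-derived signal and an onboard vision cue jointly select an action. The signal model, threshold, and action names below are hypothetical placeholders, not the paper's biochemical model.

    ```python
    import random


    def simulated_population_signal(n_cells: int = 100) -> float:
        """Stand-in for the simulated E. coli population: returns an aggregate
        'reporter' level. The real model in the paper is biochemical; this is
        just noise around a set point, for illustration."""
        return sum(random.gauss(1.0, 0.2) for _ in range(n_cells)) / n_cells


    def microcontroller_decision(signal: float, camera_sees_target: bool) -> str:
        """Hybrid decision: combine the cell-derived signal with onboard vision."""
        if camera_sees_target and signal > 1.0:
            return "drive_toward_target"
        if signal > 1.0:
            return "search"
        return "idle"


    for step in range(3):
        action = microcontroller_decision(simulated_population_signal(), step == 2)
        print(f"step {step}: {action}")
    ```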

  16. Robotic space simulation integration of vision algorithms into an orbital operations simulation

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1987-01-01

    In order to successfully plan and analyze future space activities, computer-based simulations of activities in low earth orbit will be required to model and integrate vision and robotic operations with vehicle dynamics and proximity operations procedures. The orbital operations simulation (OOS) is configured and enhanced as a testbed for robotic space operations. Vision integration algorithms are being developed in three areas: preprocessing, recognition, and attitude/attitude rates. The vision program (Rice University) was modified for use in the OOS. Systems integration testing is now in progress.

  17. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by the medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
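
    The core idea of the bridge (forwarding messages over TCP/IP between the ROS side and the image-computing side) can be sketched with plain sockets. This is not the ROS-IGTL-Bridge node and does not implement the OpenIGTLink wire format; the port numbers are assumptions.

    ```python
    import socket
    import threading


    def relay(src: socket.socket, dst: socket.socket) -> None:
        """Forward raw bytes from one endpoint to the other until EOF."""
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)


    def bridge(ros_side_addr, igtl_side_addr):
        """Connect to both endpoints and relay traffic in both directions,
        analogous to the bidirectional streaming the bridge node provides."""
        a = socket.create_connection(ros_side_addr)
        b = socket.create_connection(igtl_side_addr)
        threading.Thread(target=relay, args=(a, b), daemon=True).start()
        relay(b, a)


    # bridge(("localhost", 18944), ("localhost", 18945))  # hypothetical ports
    ```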

  18. Teaching of Computer Science Topics Using Meta-Programming-Based GLOs and LEGO Robots

    ERIC Educational Resources Information Center

    Štuikys, Vytautas; Burbaite, Renata; Damaševicius, Robertas

    2013-01-01

    The paper's contribution is a methodology that integrates two educational technologies (GLO and LEGO robot) to teach Computer Science (CS) topics at the school level. We present the methodology as a framework of 5 components (pedagogical activities, technology driven processes, tools, knowledge transfer actors, and pedagogical outcomes) and…

  19. Action and language integration: from humans to cognitive robots.

    PubMed

    Borghi, Anna M; Cangelosi, Angelo

    2014-07-01

    The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistic approaches, can be a powerful means that can help researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; (d) the way in which the integration between action and language can be addressed by developmental robotics and Human-Robot Interaction. Copyright © 2014 Cognitive Science Society, Inc.

  20. Vision Based Autonomous Robotic Control for Advanced Inspection and Repair

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2014-01-01

    The advanced inspection system is an autonomous control and analysis system that improves the inspection and remediation operations for ground and surface systems. It uses optical imaging technology with intelligent computer vision algorithms to analyze physical features of the real-world environment to make decisions and learn from experience. The advanced inspection system plans to control a robotic manipulator arm, an unmanned ground vehicle and cameras remotely, automatically and autonomously. There are many computer vision, image processing and machine learning techniques available as open source for using vision as a sensory feedback in decision-making and autonomous robotic movement. My responsibilities for the advanced inspection system are to create a software architecture that integrates and provides a framework for all the different subsystem components; identify open-source algorithms and techniques; and integrate robot hardware.

  1. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
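
    A minimal sketch of the three-level distributed control framework described in this record: embedded first-level controllers, a coordinating second-level controller that virtually centralizes control and feedback data, and a third-level controller issuing the task command. The class names and the trivial joint model are illustrative, not the patented implementation.

    ```python
    class JointController:
        """First-level controller embedded in one integrated component (e.g. a joint)."""

        def __init__(self, name: str):
            self.name = name
            self.position = 0.0

        def apply(self, command: float) -> float:
            self.position = command          # trivial plant model
            return self.position             # feedback


    class Coordinator:
        """Second-level controller: coordinates components via their embedded
        controllers and keeps all control and feedback data in one place."""

        def __init__(self, joints):
            self.joints = joints
            self.datastore = {}              # 'virtually centralized' control/feedback data

        def execute(self, setpoints):
            for joint, cmd in zip(self.joints, setpoints):
                self.datastore[joint.name] = {"command": cmd, "feedback": joint.apply(cmd)}
            return self.datastore


    class TaskController:
        """Third-level controller: issues the high-level command for an autonomous task."""

        def __init__(self, coordinator):
            self.coordinator = coordinator

        def perform(self, task_setpoints):
            return self.coordinator.execute(task_setpoints)


    joints = [JointController(f"joint_{i}") for i in range(3)]
    print(TaskController(Coordinator(joints)).perform([0.1, 0.5, -0.2]))
    ```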

  2. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

    NASA Astrophysics Data System (ADS)

    Lee, J.; Kim, K.

    A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on, namely bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.

  3. Direct kinematics solution architectures for industrial robot manipulators: Bit-serial versus parallel

    NASA Technical Reports Server (NTRS)

    Lee, J.; Kim, K.

    1991-01-01

    A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Specifically, two distinct implementations are elaborated on, namely bit-serial and parallel. Performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.
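
    Both records above concern computing the end-effector pose of a 6-link manipulator from Denavit-Hartenberg transformations. A plain NumPy sketch of that direct-kinematics chain is shown below; the DH parameters are placeholders rather than those of any particular robot, and the CORDIC hardware mapping is not reproduced.

    ```python
    import numpy as np


    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg link transform."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])


    def forward_kinematics(joint_angles, dh_params):
        """Chain the six link transforms to get the end-effector pose."""
        T = np.eye(4)
        for theta, (d, a, alpha) in zip(joint_angles, dh_params):
            T = T @ dh_transform(theta, d, a, alpha)
        return T


    # Placeholder (d, a, alpha) per link for a 6-link arm; values are illustrative only.
    dh_params = [(0.3, 0.0, np.pi / 2), (0.0, 0.4, 0.0), (0.0, 0.1, np.pi / 2),
                 (0.35, 0.0, -np.pi / 2), (0.0, 0.0, np.pi / 2), (0.08, 0.0, 0.0)]
    pose = forward_kinematics(np.deg2rad([10, 20, -30, 40, 25, 15]), dh_params)
    print(pose[:3, 3])  # end-effector position
    ```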

  4. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, A.M.; Wagner, D.G.; Teese, G.D.

    1994-06-28

    An apparatus is described for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm. 5 figures.

  5. Autonomous mobile robot for radiologic surveys

    DOEpatents

    Dudar, Aed M.; Wagner, David G.; Teese, Gregory D.

    1994-01-01

    An apparatus for conducting radiologic surveys. The apparatus comprises in the main a robot capable of following a preprogrammed path through an area, a radiation monitor adapted to receive input from a radiation detector assembly, ultrasonic transducers for navigation and collision avoidance, and an on-board computer system including an integrator for interfacing the radiation monitor and the robot. Front and rear bumpers are attached to the robot by bumper mounts. The robot may be equipped with memory boards for the collection and storage of radiation survey information. The on-board computer system is connected to a remote host computer via a UHF radio link. The apparatus is powered by a rechargeable 24-volt DC battery, and is stored at a docking station when not in use and/or for recharging. A remote host computer contains a stored database defining paths between points in the area where the robot is to operate, including but not limited to the locations of walls, doors, stationary furniture and equipment, and sonic markers if used. When a program consisting of a series of paths is downloaded to the on-board computer system, the robot conducts a floor survey autonomously at any preselected rate. When the radiation monitor detects contamination, the robot resurveys the area at reduced speed and resumes its preprogrammed path if the contamination is not confirmed. If the contamination is confirmed, the robot stops and sounds an alarm.
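
    The survey behaviour described in both patent records (resurvey at reduced speed after a detection, stop and alarm only on confirmation) maps naturally onto a small control loop. The speeds and the stubbed detector below are illustrative, not values from the patent.

    ```python
    import random

    NORMAL_SPEED, REDUCED_SPEED = 0.5, 0.1  # m/s, illustrative values only


    def radiation_detected() -> bool:
        """Stub for the radiation monitor; replace with real detector input."""
        return random.random() < 0.05


    def survey(path):
        for waypoint in path:
            speed = NORMAL_SPEED
            print(f"surveying {waypoint} at {speed} m/s")
            if radiation_detected():
                # Possible contamination: resurvey the same spot at reduced speed.
                speed = REDUCED_SPEED
                print(f"possible contamination at {waypoint}, resurveying at {speed} m/s")
                if radiation_detected():
                    print(f"contamination confirmed at {waypoint}: stopping, sounding alarm")
                    return waypoint
                print("contamination not confirmed, resuming preprogrammed path")
        return None


    survey(["hallway A", "room 101", "room 102", "dock"])
    ```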

  6. Laboratory systems integration: robotics and automation.

    PubMed

    Felder, R A

    1991-01-01

    Robotic technology is going to have a profound impact on the clinical laboratory of the future. Faced with increased pressure to reduce health care spending yet increase services to patients, many laboratories are looking for alternatives to the inflexible or "fixed" automation found in many clinical analyzers. Robots are being examined by many clinical pathologists as an attractive technology which can adapt to the constant changes in laboratory testing. Already, laboratory designs are being altered to accommodate robotics and automated specimen processors. However, the use of robotics and computer intelligence in the clinical laboratory is still in its infancy. Successful examples of robotic automation exist in several laboratories. Investigators have used robots to automate endocrine testing, high performance liquid chromatography, and specimen transportation. Large commercial laboratories are investigating the use of specimen processors which combine the use of fixed automation and robotics. Robotics has also reduced the exposure of medical technologists to specimens infected with viral pathogens. The successful examples of clinical robotics applications were a result of the cooperation of clinical chemists, engineers, and medical technologists. At the University of Virginia we have designed and implemented a robotic critical care laboratory. Initial clinical experience suggests that robotic performance is reliable; however, staff acceptance and utilization require continuing education. We are also developing a robotic cyclosporine assay, which promises to greatly reduce the labor costs of this analysis. The future will bring lab-wide automation that will fully integrate computer artificial intelligence and robotics. Specimens will be transported by mobile robots. Specimen processing, aliquotting, and scheduling will be automated.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Clinical applicability of robot-guided contact-free laser osteotomy in cranio-maxillo-facial surgery: in-vitro simulation and in-vivo surgery in minipig mandibles.

    PubMed

    Baek, K-W; Deibel, W; Marinov, D; Griessen, M; Bruno, A; Zeilhofer, H-F; Cattin, Ph; Juergens, Ph

    2015-12-01

    Laser was being used in medicine soon after its invention. However, it has been possible to excise hard tissue with lasers only recently, and the Er:YAG laser is now established in the treatment of damaged teeth. Recently experimental studies have investigated its use in bone surgery, where its major advantages are freedom of cutting geometry and precision. However, these advantages become apparent only when the system is used with robotic guidance. The main challenge is ergonomic integration of the laser and the robot, otherwise the surgeon's space in the operating theatre is obstructed during the procedure. Here we present our first experiences with an integrated, miniaturised laser system guided by a surgical robot. An Er:YAG laser source and the corresponding optical system were integrated into a composite casing that was mounted on a surgical robotic arm. The robot-guided laser system was connected to a computer-assisted preoperative planning and intraoperative navigation system, and the laser osteotome was used in an operating theatre to create defects of different shapes in the mandibles of 6 minipigs. Similar defects were created on the opposite side with a piezoelectric (PZE) osteotome and a conventional drill guided by a surgeon. The performance was analysed from the points of view of the workflow, ergonomics, ease of use, and safety features. The integrated robot-guided laser osteotome can be ergonomically used in the operating theatre. The computer-assisted and robot-guided laser osteotome is likely to be suitable for clinical use for ostectomies that require considerable accuracy and individual shape. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  8. Artificial intelligence and robotics in high throughput post-genomics.

    PubMed

    Laghaee, Aroosha; Malcolm, Chris; Hallam, John; Ghazal, Peter

    2005-09-15

    The shift of post-genomics towards a systems approach has offered an ever-increasing role for artificial intelligence (AI) and robotics. Many disciplines (e.g. engineering, robotics, computer science) bear on the problem of automating the different stages involved in post-genomic research with a view to developing quality assured high-dimensional data. We review some of the latest contributions of AI and robotics to this end and note the limitations arising from the current independent, exploratory way in which specific solutions are being presented for specific problems without regard to how these could be eventually integrated into one comprehensible integrated intelligent system.

  9. Robust Software Architecture for Robots

    NASA Technical Reports Server (NTRS)

    Aghazanian, Hrand; Baumgartner, Eric; Garrett, Michael

    2009-01-01

    Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.

  10. ROBOTICS IN HAZARDOUS ENVIRONMENTS - REAL DEPLOYMENTS BY THE SAVANNAH RIVER NATIONAL LABORATORY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kriikku, E.; Tibrea, S.; Nance, T.

    The Research & Development Engineering (R&DE) section in the Savannah River National Laboratory (SRNL) engineers, integrates, tests, and supports deployment of custom robotics, systems, and tools for use in radioactive, hazardous, or inaccessible environments. Mechanical and electrical engineers, computer control professionals, specialists, machinists, welders, electricians, and mechanics adapt and integrate commercially available technology with in-house designs, to meet the needs of Savannah River Site (SRS), Department of Energy (DOE), and other governmental agency customers. This paper discusses five R&DE robotic and remote system projects.

  11. Preliminary results of BRAVO project: brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks.

    PubMed

    Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo

    2011-01-01

    This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots for motor impaired users to perform complex visuomotor tasks that require a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and regain/restoration of motor function in patients with upper limb sensorimotor impairment through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together for providing force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through the active prediction of intention/action. The system will actually integrate the information about the movement carried out by the user with a prediction of the performed action through an interpretation of current gaze of the user (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE

  12. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

    A mobile inspection robot has been proposed for the NASA Space Station. It will be a free-flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction, and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real-time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure are discussed.
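
    A minimal sketch of the blackboard control pattern mentioned above: independent knowledge sources read from and post to a shared blackboard, and a simple control loop fires whichever source is applicable. The knowledge sources and data keys are invented for illustration and are not taken from the Flying Eye simulation.

    ```python
    class Blackboard:
        """Shared data structure that knowledge sources read from and post to."""

        def __init__(self):
            self.entries = {}

        def post(self, key, value):
            self.entries[key] = value

        def get(self, key):
            return self.entries.get(key)


    class KnowledgeSource:
        def applicable(self, bb: Blackboard) -> bool: ...
        def execute(self, bb: Blackboard) -> None: ...


    class SpeechInterpreter(KnowledgeSource):
        def applicable(self, bb):
            return bb.get("utterance") is not None and bb.get("goal") is None

        def execute(self, bb):
            bb.post("goal", f"goto:{bb.get('utterance').split()[-1]}")


    class PathPlanner(KnowledgeSource):
        def applicable(self, bb):
            return bb.get("goal") is not None and bb.get("path") is None

        def execute(self, bb):
            bb.post("path", ["berth", "truss", bb.get("goal").split(":")[1]])


    bb = Blackboard()
    bb.post("utterance", "move to airlock")
    sources = [SpeechInterpreter(), PathPlanner()]
    # Simple control loop: fire any applicable knowledge source until none apply.
    while any(ks.applicable(bb) for ks in sources):
        next(ks for ks in sources if ks.applicable(bb)).execute(bb)
    print(bb.entries)
    ```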

  13. Navigation of a robot-integrated fluorescence laparoscope in preoperative SPECT/CT and intraoperative freehand SPECT imaging data: a phantom study

    NASA Astrophysics Data System (ADS)

    van Oosterom, Matthias Nathanaël; Engelen, Myrthe Adriana; van den Berg, Nynke Sjoerdtje; KleinJan, Gijs Hendrik; van der Poel, Henk Gerrit; Wendler, Thomas; van de Velde, Cornelis Jan Hadde; Navab, Nassir; van Leeuwen, Fijs Willem Bernhard

    2016-08-01

    Robot-assisted laparoscopic surgery is becoming an established technique for prostatectomy and is increasingly being explored for other types of cancer. Linking intraoperative imaging techniques, such as fluorescence guidance, with the three-dimensional insights provided by preoperative imaging remains a challenge. Navigation technologies may provide a solution, especially when directly linked to both the robotic setup and the fluorescence laparoscope. We evaluated the feasibility of such a setup. Preoperative single-photon emission computed tomography/X-ray computed tomography (SPECT/CT) or intraoperative freehand SPECT (fhSPECT) scans were used to navigate an optically tracked robot-integrated fluorescence laparoscope via an augmented reality overlay in the laparoscopic video feed. The navigation accuracy was evaluated in soft tissue phantoms, followed by studies in a human-like torso phantom. Navigation accuracies found for SPECT/CT-based navigation were 2.25 mm (coronal) and 2.08 mm (sagittal). For fhSPECT-based navigation, these were 1.92 mm (coronal) and 2.83 mm (sagittal). All errors remained below the <1-cm detection limit for fluorescence imaging, allowing refinement of the navigation process using fluorescence findings. The phantom experiments performed suggest that SPECT-based navigation of the robot-integrated fluorescence laparoscope is feasible and may aid fluorescence-guided surgery procedures.

  14. Molecular robots with sensors and intelligence.

    PubMed

    Hagiya, Masami; Konagaya, Akihiko; Kobayashi, Satoshi; Saito, Hirohide; Murata, Satoshi

    2014-06-17

    CONSPECTUS: What we can call a molecular robot is a set of molecular devices such as sensors, logic gates, and actuators integrated into a consistent system. The molecular robot is supposed to react autonomously to its environment by receiving molecular signals and making decisions by molecular computation. Building such a system has long been a dream of scientists; however, despite extensive efforts, systems having all three functions (sensing, computation, and actuation) have not been realized yet. This Account introduces an ongoing research project that focuses on the development of molecular robotics funded by MEXT (Ministry of Education, Culture, Sports, Science and Technology, Japan). This 5 year project started in July 2012 and is titled "Development of Molecular Robots Equipped with Sensors and Intelligence". The major issues in the field of molecular robotics all correspond to a feedback (i.e., plan-do-see) cycle of a robotic system. More specifically, these issues are (1) developing molecular sensors capable of handling a wide array of signals, (2) developing amplification methods of signals to drive molecular computing devices, (3) accelerating molecular computing, (4) developing actuators that are controllable by molecular computers, and (5) providing bodies of molecular robots encapsulating the above molecular devices, which implement the conformational changes and locomotion of the robots. In this Account, the latest contributions to the project are reported. There are four research teams in the project that specialize on sensing, intelligence, amoeba-like actuation, and slime-like actuation, respectively. The molecular sensor team is focusing on the development of molecular sensors that can handle a variety of signals. This team is also investigating methods to amplify signals from the molecular sensors. The molecular intelligence team is developing molecular computers and is currently focusing on a new photochemical technology for accelerating DNA-based computations. They also introduce novel computational models behind various kinds of molecular computers necessary for designing such computers. The amoeba robot team aims at constructing amoeba-like robots. The team is trying to incorporate motor proteins, including kinesin and microtubules (MTs), for use as actuators implemented in a liposomal compartment as a robot body. They are also developing a methodology to link DNA-based computation and molecular motor control. The slime robot team focuses on the development of slime-like robots. The team is evaluating various gels, including DNA gel and BZ gel, for use as actuators, as well as the body material to disperse various molecular devices in it. They also try to control the gel actuators by DNA signals coming from molecular computers.

  15. Design And Implementation Of Integrated Vision-Based Robotic Workcells

    NASA Astrophysics Data System (ADS)

    Chen, Michael J.

    1985-01-01

    Reports have been sparse on large-scale, intelligent integration of complete robotic systems for automating the microelectronics industry. This paper describes the application of state-of-the-art computer-vision technology for manufacturing of miniaturized electronic components. The concepts of FMS (Flexible Manufacturing Systems), work cells, and work stations, and their control hierarchy, are illustrated in this paper. Several computer-controlled work cells used in the production of thin-film magnetic heads are described. These cells use vision for in-process control of head-fixture alignment and real-time inspection of production parameters. The vision sensor and other optoelectronic sensors, coupled with transport mechanisms such as steppers, x-y-z tables, and robots, have created complete sensorimotor systems. These systems greatly increase the manufacturing throughput as well as the quality of the final product. This paper uses these automated work cells to illustrate the underlying design philosophy and principles in the fabrication of vision-based robotic systems.

  16. Cyber-physical approach to the network-centric robotics control task

    NASA Astrophysics Data System (ADS)

    Muliukha, Vladimir; Ilyashenko, Alexander; Zaborovsky, Vladimir; Lukashin, Alexey

    2016-10-01

    Complex engineering tasks concerning the control of groups of mobile robots remain poorly developed. In our work we formalize them using a cyber-physical approach, which extends the range of engineering and physical methods for the design of complex technical objects by studying the informational aspects of communication and interaction between objects and with the external environment [1]. The paper analyzes network-centric methods for the control of cyber-physical objects. Robots, or cyber-physical objects, interact with each other by transmitting information via computer networks using a preemptive queueing system with a randomized push-out mechanism [2],[3]. The main field of application for the results of our work is space robotics. Cyber-physical systems are singled out as a special class of designed objects because of the need to integrate various components responsible for computing, communications, and control processes. Network-centric solutions allow universal means of organizing information exchange to be used to integrate different technologies within the control system.
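
    The queueing mechanism cited above (a preemptive priority queue with a randomized push-out rule) can be sketched as follows. The buffer size, push-out probability, and service policy are illustrative assumptions; see [2],[3] for the actual model.

    ```python
    import random


    class PushOutQueue:
        """Finite buffer with two traffic classes and a randomized push-out rule:
        when the buffer is full, an arriving high-priority packet displaces a
        low-priority one with probability alpha. Parameter values are illustrative."""

        def __init__(self, capacity: int, alpha: float):
            self.capacity = capacity
            self.alpha = alpha
            self.buffer = []  # list of (priority, payload); priority 0 = high

        def offer(self, priority: int, payload) -> bool:
            if len(self.buffer) < self.capacity:
                self.buffer.append((priority, payload))
                return True
            if priority == 0 and random.random() < self.alpha:
                # Push out the most recent low-priority packet, if any.
                for i in range(len(self.buffer) - 1, -1, -1):
                    if self.buffer[i][0] == 1:
                        self.buffer[i] = (priority, payload)
                        return True
            return False  # packet lost

        def serve(self):
            """Preemptive priority service: high-priority packets go first."""
            if not self.buffer:
                return None
            self.buffer.sort(key=lambda p: p[0])
            return self.buffer.pop(0)


    q = PushOutQueue(capacity=4, alpha=0.8)
    for k in range(6):
        q.offer(priority=k % 2, payload=f"msg{k}")
    print(q.serve())
    ```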

  17. Creating the brain and interacting with the brain: an integrated approach to understanding the brain.

    PubMed

    Morimoto, Jun; Kawato, Mitsuo

    2015-03-06

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the 'understanding the brain by creating the brain' approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain-machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  18. Creating the brain and interacting with the brain: an integrated approach to understanding the brain

    PubMed Central

    Morimoto, Jun; Kawato, Mitsuo

    2015-01-01

    In the past two decades, brain science and robotics have made gigantic advances in their own fields, and their interactions have generated several interdisciplinary research fields. First, in the ‘understanding the brain by creating the brain’ approach, computational neuroscience models have been applied to many robotics problems. Second, such brain-motivated fields as cognitive robotics and developmental robotics have emerged as interdisciplinary areas among robotics, neuroscience and cognitive science with special emphasis on humanoid robots. Third, in brain–machine interface research, a brain and a robot are mutually connected within a closed loop. In this paper, we review the theoretical backgrounds of these three interdisciplinary fields and their recent progress. Then, we introduce recent efforts to reintegrate these research fields into a coherent perspective and propose a new direction that integrates brain science and robotics where the decoding of information from the brain, robot control based on the decoded information and multimodal feedback to the brain from the robot are carried out in real time and in a closed loop. PMID:25589568

  19. Reward-Modulated Hebbian Plasticity as Leverage for Partially Embodied Control in Compliant Robotics

    PubMed Central

    Burms, Jeroen; Caluwaerts, Ken; Dambre, Joni

    2015-01-01

    In embodied computation (or morphological computation), part of the complexity of motor control is offloaded to the body dynamics. We demonstrate that a simple Hebbian-like learning rule can be used to train systems with (partial) embodiment, and can be extended outside of the scope of traditional neural networks. To this end, we apply the learning rule to optimize the connection weights of recurrent neural networks with different topologies and for various tasks. We then apply this learning rule to a simulated compliant tensegrity robot by optimizing static feedback controllers that directly exploit the dynamics of the robot body. This leads to partially embodied controllers, i.e., hybrid controllers that naturally integrate the computations that are performed by the robot body into a neural network architecture. Our results demonstrate the universal applicability of reward-modulated Hebbian learning. Furthermore, they demonstrate the robustness of systems trained with the learning rule. This study strengthens our belief that compliant robots should or can be seen as computational units, instead of dumb hardware that needs a complex controller. This link between compliant robotics and neural networks is also the main reason for our search for simple universal learning rules for both neural networks and robotics. PMID:26347645
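
    The learning rule discussed above (a reward-modulated Hebbian update driven by exploration noise) can be sketched in a few lines of NumPy. The toy task below learns a static linear feedback mapping and uses the unperturbed response as the reward baseline, which is one common variance-reduction choice; it does not reproduce the paper's exact rule or the tensegrity plant.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 4, 2
    W = np.zeros((n_out, n_in))                 # static feedback weights being trained
    target = rng.normal(size=(n_out, n_in))     # stands in for the unknown optimal mapping
    eta, sigma = 0.1, 0.1                       # learning rate, exploration noise level

    for step in range(5000):
        x = rng.normal(size=n_in)               # "body state" fed back to the controller
        noise = sigma * rng.normal(size=n_out)  # exploration added to the controller output
        y_base = W @ x                          # unperturbed response
        y = y_base + noise                      # perturbed response actually executed
        reward = -np.sum((y - target @ x) ** 2)          # task reward for the executed action
        baseline = -np.sum((y_base - target @ x) ** 2)   # reward of the unperturbed response
        # Reward-modulated Hebbian update: exploration correlated with reward improvement.
        W += eta * (reward - baseline) * np.outer(noise, x)

    print(np.abs(W - target).max())  # should be far smaller than at initialization
    ```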

  20. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix B: ROBSIM programmer's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The purpose of the Robotic Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. The programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With the manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  1. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, appendix B

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelly, J. H.; Depkovich, T. M.

    1984-01-01

    The purpose of the Robotics Simulation (ROBSIM) program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotic systems. ROBSIM is programmed in FORTRAN 77 and implemented on a VAX 11/750 computer using the VMS operating system. This programmer's guide describes the ROBSIM implementation and program logic flow, and the functions and structures of the different subroutines. With this manual and the in-code documentation, an experienced programmer can incorporate additional routines and modify existing ones to add desired capabilities.

  2. Robotics: The next step?

    PubMed

    Broeders, Ivo A M J

    2014-02-01

    Robotic systems were introduced 15 years ago to support complex endoscopic procedures. The technology is increasingly used in gastro-intestinal surgery. In this article, literature on experimental and clinical research is reviewed and ergonomic issues are discussed. The literature review was based on a Medline search using a large variety of search terms, including robot(ic), randomized, rectal, oesophageal, and ergonomics. Review articles on relevant topics are discussed with preference. There is abundant evidence of superiority in performing complex endoscopic surgery tasks when using the robot in an experimental setting. There is little high-level evidence so far on the translation of these merits to clinical practice. Robotic systems may prove helpful in complex gastro-intestinal surgery. Moreover, dedicated computer-based technology integrated in telepresence systems opens the way to integration of planning, diagnostics, and therapy. The first high-tech add-ons, such as near-infrared technology, are under clinical evaluation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Robotics-Centered Outreach Activities: An Integrated Approach

    ERIC Educational Resources Information Center

    Ruiz-del-Solar, Javier

    2010-01-01

    Nowadays, universities are making extensive efforts to attract prospective students to the fields of electrical, electronic, and computer engineering. Thus, outreach is becoming increasingly important, and activities with schoolchildren are being extensively carried out as part of this effort. In this context, robotics is a very attractive and…

  4. A hardware/software environment to support R&D in intelligent machines and mobile robotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.

  5. An integrated dexterous robotic testbed for space applications

    NASA Technical Reports Server (NTRS)

    Li, Larry C.; Nguyen, Hai; Sauer, Edward

    1992-01-01

    An integrated dexterous robotic system was developed as a testbed to evaluate various robotics technologies for advanced space applications. The system configuration consisted of a Utah/MIT Dexterous Hand, a PUMA 562 arm, a stereo vision system, and a multiprocessing computer control system. In addition to these major subsystems, a proximity sensing system was integrated with the Utah/MIT Hand to provide capability for non-contact sensing of a nearby object. A high-speed fiber-optic link was used to transmit digitized proximity sensor signals back to the multiprocessing control system. The hardware system was designed to satisfy the requirements for both teleoperated and autonomous operations. The software system was designed to exploit parallel processing capability, pursue functional modularity, incorporate artificial intelligence for robot control, allow high-level symbolic robot commands, maximize reusable code, minimize compilation requirements, and provide an interactive application development and debugging environment for the end users. An overview is presented of the system hardware and software configurations, and the implementation of subsystem functions is discussed.

  6. Development of hardwares and computer interface for a two-degree-of-freedom robot

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1987-01-01

    The research results that were obtained are reviewed. Then the robot actuator, the selection of the data acquisition system, and the design of the power amplifier will be discussed. The machine design of the robot manipulator will then be presented. After that, the integration of the developed hardware into the open-loop system will also be discussed. Current and future research work is addressed.

  7. Materials science. Materials that couple sensing, actuation, computation, and communication.

    PubMed

    McEvoy, M A; Correll, N

    2015-03-20

    Tightly integrating sensing, actuation, and computation into composites could enable a new generation of truly smart material systems that can change their appearance and shape autonomously. Applications for such materials include airfoils that change their aerodynamic profile, vehicles with camouflage abilities, bridges that detect and repair damage, or robotic skins and prosthetics with a realistic sense of touch. Although integrating sensors and actuators into composites is becoming increasingly common, the opportunities afforded by embedded computation have only been marginally explored. Here, the key challenge is the gap between the continuous physics of materials and the discrete mathematics of computation. Bridging this gap requires a fundamental understanding of the constituents of such robotic materials and the distributed algorithms and controls that make these structures smart. Copyright © 2015, American Association for the Advancement of Science.

  8. Robotics and neurosurgery.

    PubMed

    Nathoo, Narendra; Pesek, Todd; Barnett, Gene H

    2003-12-01

    Ultimately, neurosurgery performed via a robotic interface will serve to improve the standard of a neurosurgeon's skills, thus making a good surgeon a better surgeon. In fact, computer and robotic instrumentation will become allies to the neurosurgeon through the use of these technologies in training, diagnostic, and surgical events. Nonetheless, these technologies are still in an early stage of development, and each device developed will entail its own set of challenges and limitations for use in clinical settings. The future operating room should be regarded as an integrated information system incorporating robotic surgical navigators and telecontrolled micromanipulators, with the capabilities of all principal neurosurgical concepts, sharing information, and under the control of a single person, the neurosurgeon. The eventual integration of robotic technology into mainstream clinical neurosurgery offers the promise of a future of safer, more accurate, and less invasive surgery that will result in improved patient outcome.

  9. Proceedings of the Workshop on Computational Aspects in the Control of Flexible Systems, part 1

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1989-01-01

    Control/Structures Integration program software needs, computer aided control engineering for flexible spacecraft, computer aided design, computational efficiency and capability, modeling and parameter estimation, and control synthesis and optimization software for flexible structures and robots are among the topics discussed.

  10. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation

    PubMed Central

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-01-01

    This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods. PMID:28425946
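
    The abstract does not spell out the allocation rule itself; the sketch below only illustrates the general idea of information-driven measurement selection under a per-iteration budget, assuming each candidate range measurement is scored by the predicted log-determinant reduction of the beacon covariance. The function and variable names (expected_info_gain, allocate_measurements, etc.) are illustrative, not the authors' API.

    ```python
    import numpy as np

    def expected_info_gain(P, beacon_xy, robot_xy, sigma_range=0.1):
        """Predicted information gain (log-det reduction of the 2x2 beacon
        covariance P) for one robot-beacon range measurement, using an
        EKF/SEIF-style linearization of the range model."""
        d = beacon_xy - robot_xy
        r = np.linalg.norm(d)
        if r < 1e-6:
            return 0.0
        H = (d / r).reshape(1, 2)               # Jacobian of range w.r.t. beacon position
        S = H @ P @ H.T + sigma_range ** 2      # innovation covariance (1x1)
        P_post = P - P @ H.T @ np.linalg.inv(S) @ H @ P
        return 0.5 * (np.log(np.linalg.det(P)) - np.log(np.linalg.det(P_post)))

    def allocate_measurements(candidates, budget):
        """Greedy selection: keep the `budget` candidates with the largest
        expected information gain. Each candidate is (gain, beacon_id)."""
        ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
        return [beacon_id for _, beacon_id in ranked[:budget]]

    # Toy usage: three beacons with different prior uncertainties, budget of 2.
    robot = np.array([0.0, 0.0])
    beacons = {1: (np.array([5.0, 0.0]), np.diag([1.0, 1.0])),
               2: (np.array([0.0, 8.0]), np.diag([0.2, 0.2])),
               3: (np.array([3.0, 4.0]), np.diag([2.0, 0.5]))}
    cands = [(expected_info_gain(P, xy, robot), bid) for bid, (xy, P) in beacons.items()]
    print(allocate_measurements(cands, budget=2))
    ```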

  11. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation.

    PubMed

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-04-20

    This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods.

  12. Simulation tools for robotics research and assessment

    NASA Astrophysics Data System (ADS)

    Fields, MaryAnne; Brewer, Ralph; Edge, Harris L.; Pusey, Jason L.; Weller, Ed; Patel, Dilip G.; DiBerardino, Charles A.

    2016-05-01

    The Robotics Collaborative Technology Alliance (RCTA) program focuses on four overlapping technology areas: Perception, Intelligence, Human-Robot Interaction (HRI), and Dexterous Manipulation and Unique Mobility (DMUM). In addition, the RCTA program has a requirement to assess progress of this research in standalone as well as integrated form. Since the research is evolving and the robotic platforms with unique mobility and dexterous manipulation are in the early development stage and very expensive, an alternate approach is needed for efficient assessment. Simulation of robotic systems, platforms, sensors, and algorithms, is an attractive alternative to expensive field-based testing. Simulation can provide insight during development and debugging unavailable by many other means. This paper explores the maturity of robotic simulation systems for applications to real-world problems in robotic systems research. Open source (such as Gazebo and Moby), commercial (Simulink, Actin, LMS), government (ANVEL/VANE), and the RCTA-developed RIVET simulation environments are examined with respect to their application in the robotic research domains of Perception, Intelligence, HRI, and DMUM. Tradeoffs for applications to representative problems from each domain are presented, along with known deficiencies and disadvantages. In particular, no single robotic simulation environment adequately covers the needs of the robotic researcher in all of the domains. Simulation for DMUM poses unique constraints on the development of physics-based computational models of the robot, the environment and objects within the environment, and the interactions between them. Most current robot simulations focus on quasi-static systems, but dynamic robotic motion places an increased emphasis on the accuracy of the computational models. In order to understand the interaction of dynamic multi-body systems, such as limbed robots, with the environment, it may be necessary to build component-level computational models to provide the necessary simulation fidelity for accuracy. However, the Perception domain remains the most problematic for adequate simulation performance due to the often cartoon nature of computer rendering and the inability to model realistic electromagnetic radiation effects, such as multiple reflections, in real-time.

  13. Improving robot arm control for safe and robust haptic cooperation in orthopaedic procedures.

    PubMed

    Cruces, R A Castillo; Wahrburg, J

    2007-12-01

    This paper presents the ongoing results of an effort to achieve the integration of a navigated cooperative robotic arm into computer-assisted orthopaedic surgery. A seamless integration requires the system acting in direct cooperation with the surgeon instead of replacing him. Two technical issues are discussed to improve the haptic operating modes for interactive robot guidance. The concept of virtual fixtures is used to restrict the range of motion of the robot according to pre-operatively defined constraints, and methodologies to assure a robust and accurate motion through singular arm configurations are investigated. A new method for handling singularities is proposed, which is superior to the commonly used damped-least-squares method. It produces no deviations of the end-effector in relation to the virtually constrained path. A solution to assure a good performance of a hands-on robotic arm at singularity configurations is proposed. (c) 2007 John Wiley & Sons, Ltd.
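
    The abstract contrasts the proposed singularity handling with the commonly used damped-least-squares (DLS) method. The sketch below shows only the classic DLS resolution (not the authors' new method), assuming the manipulator Jacobian is known; the damping value and toy arm are illustrative.

    ```python
    import numpy as np

    def dls_joint_velocities(J, x_dot, damping=0.05):
        """Classic damped-least-squares resolution of joint rates from a desired
        end-effector velocity: q_dot = J^T (J J^T + lambda^2 I)^-1 x_dot.
        Near singularities the damping keeps joint rates bounded at the cost of a
        small end-effector deviation, which is the drawback noted in the abstract."""
        m = J.shape[0]
        JJt = J @ J.T
        return J.T @ np.linalg.solve(JJt + (damping ** 2) * np.eye(m), x_dot)

    def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
        """Jacobian of a planar 2-link arm (toy example)."""
        s1, c1 = np.sin(q1), np.cos(q1)
        s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
        return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                         [ l1 * c1 + l2 * c12,  l2 * c12]])

    J = jacobian_2link(0.3, 1e-3)                      # nearly singular (elbow straight)
    x_dot = np.array([0.0, 0.1])
    print(dls_joint_velocities(J, x_dot))              # bounded joint rates
    print(np.linalg.lstsq(J, x_dot, rcond=None)[0])    # undamped solution becomes very large
    ```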

  14. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.

  15. RCTS: A flexible environment for sensor integration and control of robot systems; the distributed processing approach

    NASA Technical Reports Server (NTRS)

    Allard, R.; Mack, B.; Bayoumi, M. M.

    1989-01-01

    Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. In order to reduce this time, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.

  16. SAVA 3: A testbed for integration and control of visual processes

    NASA Technical Reports Server (NTRS)

    Crowley, James L.; Christensen, Henrik

    1994-01-01

    The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12 axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) be continuously operating, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) be capable of supporting diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.

  17. Localization of Mobile Robots Using an Extended Kalman Filter in a LEGO NXT

    ERIC Educational Resources Information Center

    Pinto, M.; Moreira, A. P.; Matos, A.

    2012-01-01

    The inspiration for this paper comes from a successful experiment conducted with students in the "Mobile Robots" course in the fifth year of the integrated Master's program in the Department of Electrical and Computer Engineering, Faculty of Engineering, University of Porto (FEUP), Porto, Portugal. One of the topics in this Mobile Robots…

  18. Onboard Flow Sensing For Downwash Detection and Avoidance On Small Quadrotor Helicopters

    DTIC Science & Technology

    2015-01-01

    …onboard computers, one for flight stabilization and a Linux computer for sensor integration and control calculations. The Linux computer runs Robot…

  19. Synthetic Ion Channels and DNA Logic Gates as Components of Molecular Robots.

    PubMed

    Kawano, Ryuji

    2018-02-19

    A molecular robot is a next-generation biochemical machine that imitates the actions of microorganisms. It is made of biomaterials such as DNA, proteins, and lipids. Three prerequisites have been proposed for the construction of such a robot: sensors, intelligence, and actuators. This Minireview focuses on recent research on synthetic ion channels and DNA computing technologies, which are viewed as potential candidate components of molecular robots. Synthetic ion channels, which are embedded in artificial cell membranes (lipid bilayers), sense ambient ions or chemicals and import them. These artificial sensors are useful components for molecular robots with bodies consisting of a lipid bilayer because they enable the interface between the inside and outside of the molecular robot to function as gates. After the signal molecules arrive inside the molecular robot, they can operate DNA logic gates, which perform computations. These functions will be integrated into the intelligence and sensor sections of molecular robots. Soon, these molecular machines will be able to be assembled to operate as a mass microrobot and play an active role in environmental monitoring and in vivo diagnosis or therapy. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Robotic and artificial intelligence for keyhole neurosurgery: the ROBOCAST project, a multi-modal autonomous path planner.

    PubMed

    De Momi, E; Ferrigno, G

    2010-01-01

    The robot and sensors integration for computer-assisted surgery and therapy (ROBOCAST) project (FP7-ICT-2007-215190) is co-funded by the European Union within the Seventh Framework Programme in the field of information and communication technologies. The ROBOCAST project focuses on robot- and artificial-intelligence-assisted keyhole neurosurgery (tumour biopsy and local drug delivery along straight or turning paths). The goal of this project is to assist surgeons with a robotic system controlled by an intelligent high-level controller (HLC) able to gather and integrate information from the surgeon, from diagnostic images, and from an array of on-field sensors. The HLC integrates pre-operative and intra-operative diagnostics data and measurements, intelligence augmentation, multiple-robot dexterity, and multiple sensory inputs in a closed-loop cooperating scheme including a smart interface for improved haptic immersion and integration. This paper, after the overall architecture description, focuses on the intelligent trajectory planner based on risk estimation and human criticism. The current status of development is reported, and first tests on the planner are shown by using a real image stack and risk descriptor phantom. The advantages of using a fuzzy risk description are given by the possibility of upgrading the knowledge on-field without the intervention of a knowledge engineer.

  1. Autonomous mobile robot research using the HERMIES-III robot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pin, F.G.; Beckerman, M.; Spelt, P.F.

    1989-01-01

    This paper reports on the status and future directions in the research, development and experimental validation of intelligent control techniques for autonomous mobile robots using the HERMIES-III robot at the Center for Engineering Systems Advanced Research (CESAR) at Oak Ridge National Laboratory (ORNL). HERMIES-III is the fourth robot in a series of increasingly more sophisticated and capable experimental test beds developed at CESAR. HERMIES-III comprises a battery-powered, omni-directional wheeled platform with a seven degree-of-freedom manipulator arm, video cameras, sonar range sensors, a laser imaging scanner, and a dual computer system containing up to 128 NCUBE nodes in hypercube configuration. All electronics, sensors, computers, and communication equipment required for autonomous operation of HERMIES-III are located on board, along with sufficient battery power for three to four hours of operation. The paper first provides a more detailed description of the HERMIES-III characteristics, focusing on the new areas of research and demonstration now possible at CESAR with this new test bed. The initial experimental program is then described, with emphasis placed on autonomous performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The paper concludes with a discussion of the integration problems and safety considerations that necessarily arise when setting up an experimental program involving the performance of human-scale tasks by multiple autonomous mobile robots. 10 refs., 3 figs.

  2. Poster - Thurs Eve-12: A needle-positioning robot co-registered with volumetric x-ray micro-computed tomography images for minimally-invasive small-animal interventions.

    PubMed

    Waspe, A C; Holdsworth, D W; Lacefield, J C; Fenster, A

    2008-07-01

    Preclinical research protocols often require the delivery of biological substances to specific targets in small animal disease models. To target biologically relevant locations in mice accurately, the needle positioning error needs to be < 200 μm. If targeting is inaccurate, experimental results can be inconclusive or misleading. We have developed a robotic manipulator that is capable of positioning a needle with a mean error < 100 μm. An apparatus and method were developed for integrating the needle-positioning robot with volumetric micro-computed tomography image guidance for interventions in small animals. Accurate image-to-robot registration is critical for integration as it enables targets identified in the image to be mapped to physical coordinates inside the animal. Registration is accomplished by injecting barium sulphate into needle tracks as the robot withdraws the needle from target points in a tissue-mimicking phantom. Registration accuracy is therefore affected by the positioning error of the robot and is assessed by measuring the point-to-line fiducial and target registration errors (FRE, TRE). Centroid points along cross-sectional slices of the track are determined using region growing segmentation followed by application of a center-of-mass algorithm. The centerline points are registered to needle trajectories in robot coordinates by applying an iterative closest point algorithm between points and lines. Implementing this procedure with four fiducial needle tracks produced a point-to-line FRE and TRE of 246 ± 58 μm and 194 ± 18 μm, respectively. The proposed registration technique produced a TRE < 200 μm, in the presence of robot positioning error, meeting design specification. © 2008 American Association of Physicists in Medicine.
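
    As a rough illustration of the point-to-line error metric used to assess registration, the sketch below computes the RMS perpendicular distance from segmented needle-track centroids to a planned trajectory line, assuming the registration transform has already been applied. The coordinates are made up, and the ICP-based registration itself is not reproduced.

    ```python
    import numpy as np

    def point_to_line_distance(p, line_point, line_dir):
        """Perpendicular distance from point p to the line through line_point
        with direction line_dir."""
        d = line_dir / np.linalg.norm(line_dir)
        v = p - line_point
        return np.linalg.norm(v - np.dot(v, d) * d)

    def point_to_line_rmse(points, line_point, line_dir):
        """Root-mean-square point-to-line distance, one way of summarizing a
        fiducial (FRE) or target (TRE) registration error for a needle track."""
        errs = [point_to_line_distance(p, line_point, line_dir) for p in points]
        return float(np.sqrt(np.mean(np.square(errs))))

    # Toy example: centroids of a segmented needle track (robot coordinates, mm)
    # compared against the planned trajectory along +z through the origin.
    track_centroids = np.array([[0.10, -0.05, 0.0],
                                [0.08,  0.02, 5.0],
                                [-0.12, 0.07, 10.0]])
    print(point_to_line_rmse(track_centroids, np.zeros(3), np.array([0.0, 0.0, 1.0])))
    ```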

  3. Integrated Planning for Telepresence With Time Delays

    NASA Technical Reports Server (NTRS)

    Johnston, Mark; Rabe, Kenneth

    2009-01-01

    A conceptual "intelligent assistant" and an artificial-intelligence computer program that implements the intelligent assistant have been developed to improve control exerted by a human supervisor over a robot that is so distant that communication between the human and the robot involves significant signal-propagation delays. The goal of the effort is not only to help the human supervisor monitor and control the state of the robot, but also to improve the efficiency of the robot by allowing the supervisor to "work ahead". The intelligent assistant is an integrated combination of an artificial-intelligence planner and a monitor of states of both the human supervisor and the remote robot. The novelty of the system lies in the way it uses the planner to reason about the states at both ends of the time delay. The purpose served by the assistant is to provide advice to the human supervisor about current and future activities, derived from a sequence of high-level goals to be achieved.

  4. On discrete control of nonlinear systems with applications to robotics

    NASA Technical Reports Server (NTRS)

    Eslami, Mansour

    1989-01-01

    Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable to interfacing with digital computers. But developing discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task. A robot is also inherently a variable-inertia dynamic system, which involves additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve the fundamental physical characteristics of continuous-time systems, such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general, and discrete models of a typical industrial robot that are developed with full consideration of the principle of conservation of energy, are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods, and how to integrate these issues in order to complete the task of discrete control of a robot manipulator, are also reviewed.
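
    As a hedged illustration (not taken from the paper) of why discrete models should preserve physical invariants such as energy, the sketch below compares explicit Euler with a semi-implicit (symplectic) Euler step on an undamped pendulum: the former drifts in energy over time, while the latter stays close to the initial value.

    ```python
    import math

    def pendulum_energy(theta, omega, g=9.81, l=1.0, m=1.0):
        """Total mechanical energy of a simple pendulum."""
        return 0.5 * m * (l * omega) ** 2 + m * g * l * (1.0 - math.cos(theta))

    def explicit_euler(theta, omega, dt, g=9.81, l=1.0):
        return theta + dt * omega, omega - dt * (g / l) * math.sin(theta)

    def semi_implicit_euler(theta, omega, dt, g=9.81, l=1.0):
        omega_new = omega - dt * (g / l) * math.sin(theta)   # update velocity first
        return theta + dt * omega_new, omega_new             # then position

    def final_energy(step, theta=0.5, omega=0.0, dt=0.01, n=5000):
        for _ in range(n):
            theta, omega = step(theta, omega, dt)
        return pendulum_energy(theta, omega)

    e0 = pendulum_energy(0.5, 0.0)
    print(e0, final_energy(explicit_euler), final_energy(semi_implicit_euler))
    # Explicit Euler gains energy over the run; the symplectic variant stays near e0.
    ```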

  5. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified Cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state-space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach that is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach, as obtained from a software implementation, are also discussed.
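
    The abstract does not list the rule base; the sketch below only shows the general mechanism of such a controller (triangular membership functions, rule evaluation, and centroid defuzzification), with the rules, membership supports, and output levels invented for illustration.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b on support [a, c]."""
        return max(0.0, min((x - a) / (b - a) if x <= b else (c - x) / (c - b), 1.0))

    def fuzzy_speed_command(error):
        """Map a normalized end-point error to a joint speed command using three
        illustrative rules: small error -> slow, medium -> moderate, large -> fast.
        Centroid defuzzification over singleton output levels."""
        memberships = {
            "small":  tri(error, -0.1, 0.0, 0.4),
            "medium": tri(error,  0.2, 0.5, 0.8),
            "large":  tri(error,  0.6, 1.0, 1.1),
        }
        output_singletons = {"small": 0.1, "medium": 0.5, "large": 0.9}  # speed levels
        num = sum(mu * output_singletons[k] for k, mu in memberships.items())
        den = sum(memberships.values())
        return num / den if den > 0 else 0.0

    for e in (0.05, 0.5, 0.95):
        print(e, round(fuzzy_speed_command(e), 3))
    ```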

  6. Computerized Manufacturing Automation. Employment, Education, and the Workplace. Summary.

    ERIC Educational Resources Information Center

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    The application of programmable automation (PA) offers new opportunities to enhance and streamline manufacturing processes. Five PA technologies are examined in this report: computer-aided design, robots, numerically controlled machine tools, flexible manufacturing systems, and computer-integrated manufacturing. Each technology is in a relatively…

  7. A Multimodal Emotion Detection System during Human-Robot Interaction

    PubMed Central

    Alonso-Martín, Fernando; Malfaz, María; Sequeira, João; Gorostiza, Javier F.; Salichs, Miguel A.

    2013-01-01

    In this paper, a multimodal user-emotion detection system for social robots is presented. This system is intended to be used during human–robot interaction, and it is integrated as part of the overall interaction system of the robot: the Robotics Dialog System (RDS). Two modes are used to detect emotions: voice analysis and facial expression analysis. In order to analyze the voice of the user, a new component has been developed: Gender and Emotion Voice Analysis (GEVA), which is written using the Chuck language. For emotion detection in facial expressions, the system Gender and Emotion Facial Analysis (GEFA) has also been developed. This last system integrates two third-party solutions: Sophisticated High-speed Object Recognition Engine (SHORE) and Computer Expression Recognition Toolbox (CERT). Once these new components (GEVA and GEFA) give their results, a decision rule is applied in order to combine the information given by both of them. The result of this rule, the detected emotion, is integrated into the dialog system through communicative acts. Hence, each communicative act gives, among other things, the detected emotion of the user to the RDS so it can adapt its strategy in order to achieve a greater degree of satisfaction during the human–robot dialog. Each of the new components, GEVA and GEFA, can also be used individually. Moreover, they are integrated with the robotic control platform ROS (Robot Operating System). Several experiments with real users were performed to determine the accuracy of each component and to set the final decision rule. The results obtained from applying this decision rule in these experiments show a high success rate in automatic user emotion recognition, improving on the results given by the two information channels (audio and visual) separately. PMID:24240598
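
    The final decision rule is not given in detail in the abstract; a minimal sketch is shown below, assuming each channel (voice and face) returns per-emotion confidence scores that are fused with fixed channel weights. The weights and emotion labels are hypothetical, not the authors' calibration.

    ```python
    def combine_emotions(voice_scores, face_scores, w_voice=0.4, w_face=0.6):
        """Weighted late fusion of two per-emotion confidence dictionaries
        (values in [0, 1]); returns the emotion with the highest combined score.
        Channel weights are illustrative only."""
        emotions = set(voice_scores) | set(face_scores)
        combined = {e: w_voice * voice_scores.get(e, 0.0) +
                       w_face * face_scores.get(e, 0.0) for e in emotions}
        best = max(combined, key=combined.get)
        return best, combined[best]

    voice = {"happy": 0.2, "sad": 0.6, "neutral": 0.2}   # e.g. from the audio channel
    face = {"happy": 0.7, "neutral": 0.3}                # e.g. from the visual channel
    print(combine_emotions(voice, face))                 # -> ('happy', 0.5)
    ```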

  8. System For Research On Multiple-Arm Robots

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad; Tso, Kam S.; Hayward, Vincent

    1991-01-01

    Kali system of computer programs and equipment provides environment for research on distributed programming and distributed control of coordinated-multiple-arm robots. Suitable for telerobotics research involving sensing and execution of low level tasks. Software and configuration of hardware designed flexible so system modified easily to test various concepts in control and programming of robots, including multiple-arm control, redundant-arm control, shared control, traded control, force control, force/position hybrid control, design and integration of sensors, teleoperation, task-space description and control, methods of adaptive control, control of flexible arms, and human factors.

  9. A Phenomenographic Study of the Ways of Understanding Conditional and Repetition Structures in Computer Programming Languages

    ERIC Educational Resources Information Center

    Bucks, Gregory Warren

    2010-01-01

    Computers have become an integral part of how engineers complete their work, allowing them to collect and analyze data, model potential solutions, and aid in production through automation and robotics. In addition, computers are essential elements of the products themselves, from tennis shoes to construction materials. An understanding of how…

  10. Artificial Intelligence and the High School Computer Curriculum.

    ERIC Educational Resources Information Center

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  11. Applications of computer assisted surgery and medical robotics at the ISSSTE, México: preliminary results.

    PubMed

    Mosso, José Luis; Pohl, Mauricio; Jimenez, Juan Ramon; Valdes, Raquel; Yañez, Oscar; Medina, Veronica; Arambula, Fernando; Padilla, Miguel Angel; Marquez, Jorge; Gastelum, Alfonso; Mosso, Alejo; Frausto, Juan

    2007-01-01

    We present the first results of four projects from the second phase of the Mexican project on Computer Assisted Surgery and Medical Robotics, supported by the Mexican National Council of Science and Technology (Consejo Nacional de Ciencia y Tecnología) under grant SALUD-2002-C01-8181. The projects are being developed by three universities (UNAM, UAM, ITESM), and the goal is to set up a laboratory in an ISSSTE hospital serving endoscopic surgeons, urologists, gastrointestinal endoscopists, and neurosurgeons.

  12. Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot

    PubMed Central

    Tidoni, Emmanuele; Gergondet, Pierre; Kheddar, Abderrahmane; Aglioti, Salvatore M.

    2014-01-01

    Advancement in brain computer interfaces (BCI) technology allows people to actively interact in the world through surrogates. Controlling real humanoid robots using BCI as intuitively as we control our body represents a challenge for current research in robotics and neuroscience. In order to successfully interact with the environment the brain integrates multiple sensory cues to form a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may imply a gain with respect to a single modality and ultimately improve the overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet, knowledge about whether audio-visual integration may improve the control of a surrogate is meager. To explore this issue, we provided human footstep sounds as audio feedback to BCI users while controlling a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through BCI-SSVEPs. We found that audio-visual synchrony between footsteps sound and actual humanoid's walk reduces the time required for steering the robot. Thus, auditory feedback congruent with the humanoid actions may improve motor decisions of the BCI's user and help in the feeling of control over it. Our results shed light on the possibility to increase robot's control through the combination of multisensory feedback to a BCI user. PMID:24987350

  13. The Need for Optical Means as an Alternative for Electronic Computing

    NASA Technical Reports Server (NTRS)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The demand for faster computers is growing rapidly to keep pace with the rapid expansion of the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching fundamental limits beyond which devices become unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the extreme limitations imposed by conventional electronics on the speed and complexity of present-day computation. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  14. Precision instrument placement using a 4-DOF robot with integrated fiducials for minimally invasive interventions

    NASA Astrophysics Data System (ADS)

    Stenzel, Roland; Lin, Ralph; Cheng, Peng; Kronreif, Gernot; Kornfeld, Martin; Lindisch, David; Wood, Bradford J.; Viswanathan, Anand; Cleary, Kevin

    2007-03-01

    Minimally invasive procedures are increasingly attractive to patients and medical personnel because they can reduce operative trauma, recovery times, and overall costs. However, during these procedures, the physician has a very limited view of the interventional field and the exact position of surgical instruments. We present an image-guided platform for precision placement of surgical instruments based upon a small four degree-of-freedom robot (B-RobII; ARC Seibersdorf Research GmbH, Vienna, Austria). This platform includes a custom instrument guide with an integrated spiral fiducial pattern as the robot's end-effector, and it uses intra-operative computed tomography (CT) to register the robot to the patient directly before the intervention. The physician can then use a graphical user interface (GUI) to select a path for percutaneous access, and the robot will automatically align the instrument guide along this path. Potential anatomical targets include the liver, kidney, prostate, and spine. This paper describes the robotic platform, workflow, software, and algorithms used by the system. To demonstrate the algorithmic accuracy and suitability of the custom instrument guide, we also present results from experiments as well as estimates of the maximum error between target and instrument tip.

  15. FPGA-based fused smart sensor for dynamic and vibration parameter extraction in industrial robot links.

    PubMed

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA).
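
    A minimal sketch of the signal path described (smoothing of an oversampled position signal followed by finite differences for velocity and acceleration) is shown below, using a plain moving-average FIR as a stand-in for the averaging decimation filters. The filter length and toy signal are illustrative, and the FPGA fixed-point implementation is not reproduced.

    ```python
    import numpy as np

    def moving_average_fir(x, taps=8):
        """Simple FIR low-pass (moving average) as a stand-in for the averaging
        decimation filters described in the abstract."""
        kernel = np.ones(taps) / taps
        return np.convolve(x, kernel, mode="same")

    def finite_differences(position, dt):
        """Central finite differences for velocity and acceleration estimates."""
        velocity = np.gradient(position, dt)
        acceleration = np.gradient(velocity, dt)
        return velocity, acceleration

    # Toy encoder signal: a smooth joint motion plus quantization-like noise.
    dt = 0.001
    t = np.arange(0.0, 1.0, dt)
    position = 0.5 * np.sin(2 * np.pi * t) + 0.002 * np.random.randn(t.size)
    smooth = moving_average_fir(position, taps=16)
    vel, acc = finite_differences(smooth, dt)
    print(vel[:3], acc[:3])
    ```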

  16. FPGA-Based Fused Smart Sensor for Dynamic and Vibration Parameter Extraction in Industrial Robot Links

    PubMed Central

    Rodriguez-Donate, Carlos; Morales-Velazquez, Luis; Osornio-Rios, Roque Alfredo; Herrera-Ruiz, Gilberto; de Jesus Romero-Troncoso, Rene

    2010-01-01

    Intelligent robotics demands the integration of smart sensors that allow the controller to efficiently measure physical quantities. Industrial manipulator robots require a constant monitoring of several parameters such as motion dynamics, inclination, and vibration. This work presents a novel smart sensor to estimate motion dynamics, inclination, and vibration parameters on industrial manipulator robot links based on two primary sensors: an encoder and a triaxial accelerometer. The proposed smart sensor implements a new methodology based on an oversampling technique, averaging decimation filters, FIR filters, finite differences and linear interpolation to estimate the interest parameters, which are computed online utilizing digital hardware signal processing based on field programmable gate arrays (FPGA). PMID:22319345

  17. Use of 3D vision for fine robot motion

    NASA Technical Reports Server (NTRS)

    Lokshin, Anatole; Litwin, Todd

    1989-01-01

    The integration of 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean work space and the robot configuration space (joint angles). If stereo vision is involved, then one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the total work space. Second, the absolute frame, which is usually quite arbitrary, has to be the same with a high degree of precision for both the robot and vision subsystem calibrations. The ongoing work on using computer vision to allow robust fine-motion manipulation in a poorly structured world is described, along with preliminary results and the problems encountered.
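
    A small sketch of the calibration chain described, tying independently calibrated vision and robot frames through a common absolute frame, is given below using 4x4 homogeneous transforms, assuming both calibrations are already available; the numeric transforms are toy values.

    ```python
    import numpy as np

    def make_transform(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def camera_to_robot(T_world_camera, T_world_robot):
        """Map camera-frame points into robot-frame coordinates via the shared
        absolute (world) frame: T_robot_camera = inv(T_world_robot) @ T_world_camera.
        Any error in either calibration propagates directly through this chain,
        which is the difficulty the abstract points out."""
        return np.linalg.inv(T_world_robot) @ T_world_camera

    # Toy calibrations: camera rotated 90 degrees about z, robot translated in x.
    Rz90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    T_wc = make_transform(Rz90, np.array([0.2, 0.0, 1.0]))
    T_wr = make_transform(np.eye(3), np.array([1.0, 0.0, 0.0]))
    p_camera = np.array([0.1, 0.0, 0.5, 1.0])            # homogeneous point in camera frame
    print(camera_to_robot(T_wc, T_wr) @ p_camera)        # same point in robot frame
    ```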

  18. Direct adaptive control of a PUMA 560 industrial robot

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun; Lee, Thomas; Delpech, Michel

    1989-01-01

    The implementation and experimental validation of a new direct adaptive control scheme on a PUMA 560 industrial robot is described. The testbed facility consists of a Unimation PUMA 560 six-jointed robot and controller, and a DEC MicroVAX II computer which hosts the Robot Control C Library software. The control algorithm is implemented on the MicroVAX which acts as a digital controller for the PUMA robot, and the Unimation controller is effectively bypassed and used merely as an I/O device to interface the MicroVAX to the joint motors. The control algorithm for each robot joint consists of an auxiliary signal generated by a constant-gain Proportional plus Integral plus Derivative (PID) controller, and an adaptive position-velocity (PD) feedback controller with adjustable gains. The adaptive independent joint controllers compensate for the inter-joint couplings and achieve accurate trajectory tracking without the need for the complex dynamic model and parameter values of the robot. Extensive experimental results on PUMA joint control are presented to confirm the feasibility of the proposed scheme, in spite of strong interactions between joint motions. Experimental results validate the capabilities of the proposed control scheme. The control scheme is extremely simple and computationally very fast for concurrent processing with high sampling rates.
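
    The exact update law is in the cited work; the sketch below only illustrates the structure described, a fixed-gain PID auxiliary signal plus a PD term whose gains are adapted from a weighted tracking error, with adaptation rates and the weighting chosen arbitrarily for illustration.

    ```python
    class AdaptiveJointController:
        """Per-joint controller: fixed-gain PID auxiliary signal plus a PD term
        whose gains grow with a weighted tracking error (a simple gradient-like
        adaptation law; the published scheme's exact update is not reproduced)."""

        def __init__(self, kp=20.0, ki=5.0, kd=2.0, gamma_p=50.0, gamma_d=5.0, dt=0.001):
            self.kp, self.ki, self.kd = kp, ki, kd          # fixed PID gains
            self.kp_hat, self.kd_hat = 0.0, 0.0             # adaptive PD gains
            self.gamma_p, self.gamma_d = gamma_p, gamma_d   # adaptation rates
            self.dt, self.int_e, self.prev_e = dt, 0.0, 0.0

        def update(self, q_des, qd_des, q, qd):
            e, ed = q_des - q, qd_des - qd
            self.int_e += e * self.dt
            de = (e - self.prev_e) / self.dt
            self.prev_e = e
            r = e + 0.1 * ed                                 # weighted error signal
            self.kp_hat += self.gamma_p * r * e * self.dt    # adapt position gain
            self.kd_hat += self.gamma_d * r * ed * self.dt   # adapt velocity gain
            aux = self.kp * e + self.ki * self.int_e + self.kd * de
            return aux + self.kp_hat * e + self.kd_hat * ed  # joint torque command
    ```

    In use, update() would be called once per control cycle for each joint, with the desired and measured joint position and velocity, each joint running its own independent instance.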

  19. Comparison of Piezoresistive Monofilament Polymer Sensors

    PubMed Central

    Melnykowycz, Mark; Koll, Birgit; Scharf, Dagobert; Clemens, Frank

    2014-01-01

    Flexible polymer monofilament fiber strain sensors have many applications in both wearable computing (clothing, gloves, etc.) and robotics design (large-deformation control). For example, a high-stretch monofilament sensor could be integrated into a robotic arm design, easily stretching over joints or along curved surfaces. As a monofilament, the sensor can be woven into or integrated with textiles for position or physiological monitoring, computer interface control, etc. Commercially available conductive polymer monofilament sensors were tested alongside monofilaments produced from carbon black (CB) mixed with a thermoplastic elastomer (TPE) and extruded in different diameters. It was found that signal strength, drift, and precision characteristics were better for a 0.3 mm diameter CB/TPE monofilament than for thicker (∼2 mm diameter) monofilaments based on the same material or for commercial monofilaments based on natural rubber or silicone elastomer (SE) matrices. PMID:24419161

  20. Social Robotics in Therapy of Apraxia of Speech

    PubMed Central

    Alonso-Martín, Fernando

    2018-01-01

    Apraxia of speech is a motor speech disorder in which messages from the brain to the mouth are disrupted, resulting in an inability for moving lips or tongue to the right place to pronounce sounds correctly. Current therapies for this condition involve a therapist that in one-on-one sessions conducts the exercises. Our aim is to work in the line of robotic therapies in which a robot is able to perform partially or autonomously a therapy session, endowing a social robot with the ability of assisting therapists in apraxia of speech rehabilitation exercises. Therefore, we integrate computer vision and machine learning techniques to detect the mouth pose of the user and, on top of that, our social robot performs autonomously the different steps of the therapy using multimodal interaction. PMID:29713440

  1. Advances in Integrating Autonomy with Acoustic Communications for Intelligent Networks of Marine Robots

    DTIC Science & Technology

    2013-02-01

    [Record excerpt: architecture diagram text listing the vehicles and computers involved — Ocean Explorer (OEX), Hammerhead, Iver2, and Bluefin 21 AUVs (Unicorn, Macrura) with onboard MOOS computers, a NURC AUV (OEX) and a topside MOOS computer, interconnected via an acoustic Micro-Modem channel, 5.0 GHz WiLan WiFi, serial and wired links, GPS, Edgetech acoustics, and a Google Earth display.]

  2. The NASA automation and robotics technology program

    NASA Technical Reports Server (NTRS)

    Holcomb, Lee B.; Montemerlo, Melvin D.

    1986-01-01

    The development and objectives of the NASA automation and robotics technology program are reviewed. The objectives of the program are to utilize AI and robotics to increase the probability of mission success; decrease the cost of ground control; and increase the capability and flexibility of space operations. There is a need for real-time computational capability; an effective man-machine interface; and techniques to validate automated systems. Current programs in the areas of sensing and perception, task planning and reasoning, control execution, operator interface, and system architecture and integration are described. Programs aimed at demonstrating the capabilities of telerobotics and system autonomy are discussed.

  3. Embodied cognition for autonomous interactive robots.

    PubMed

    Hoffman, Guy

    2012-10-01

    In the past, notions of embodiment have been applied to robotics mainly in the realm of very simple robots, and supporting low-level mechanisms such as dynamics and navigation. In contrast, most human-like, interactive, and socially adept robotic systems turn away from embodiment and use amodal, symbolic, and modular approaches to cognition and interaction. At the same time, recent research in Embodied Cognition (EC) is spanning an increasing number of complex cognitive processes, including language, nonverbal communication, learning, and social behavior. This article suggests adopting a modern EC approach for autonomous robots interacting with humans. In particular, we present three core principles from EC that may be applicable to such robots: (a) modal perceptual representation, (b) action/perception and action/cognition integration, and (c) a simulation-based model of top-down perceptual biasing. We describe a computational framework based on these principles, and its implementation on two physical robots. This could provide a new paradigm for embodied human-robot interaction based on recent psychological and neurological findings. Copyright © 2012 Cognitive Science Society, Inc.

  4. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation. Appendix A: ROBSIM user's guide

    NASA Technical Reports Server (NTRS)

    Haley, D. C.; Almand, B. J.; Thomas, M. M.; Krauze, L. D.; Gremban, K. D.; Sanborn, J. C.; Kelley, J. H.; Depkovich, T. M.; Wolfe, W. J.; Nguyen, T.

    1986-01-01

    The purpose of the Robotics Simulation Program is to provide a broad range of computer capabilities to assist in the design, verification, simulation, and study of robotics systems. ROBSIM is a program written in FORTRAN 77 for use on a VAX 11/750 computer under the VMS operating system. This user's guide describes the capabilities of the ROBSIM programs, including the system definition function, the analysis tools function, and the postprocessor function. The options a user may encounter with each of these executables are explained in detail, and the different program prompts presented to the user are included. Some useful suggestions concerning the appropriate answers to be given by the user are provided. An example interactive run is included for each of the main program services, and some of the capabilities are illustrated.

  5. Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions.

    PubMed

    Kassahun, Yohannes; Yu, Bingbin; Tibebu, Abraham Temesgen; Stoyanov, Danail; Giannarou, Stamatia; Metzen, Jan Hendrik; Vander Poorten, Emmanuel

    2016-04-01

    Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery with a focus on surgical robotics (SR). Also, we provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room. The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Explore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive. Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is an increasing interest in using ML for developing tools to understand and model surgical skill and competence or to extract surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices. ML is an expanding field. It is popular as it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, it is believed that ML will also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots endowed with cognitive skills would assist the surgical team also on a cognitive level, such as possibly lowering the mental load of the team. For example, ML could help extract surgical skill, learned through demonstration by human experts, and could transfer it to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art in surgical robotics. Current devices possess no intelligence whatsoever and are merely advanced and expensive instruments.

  6. An approach to multivariable control of manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1987-01-01

    The paper presents simple schemes for multivariable control of multiple-joint robot manipulators in joint and Cartesian coordinates. The joint control scheme consists of two independent multivariable feedforward and feedback controllers. The feedforward controller is the minimal inverse of the linearized model of robot dynamics and contains only proportional-double-derivative (PD2) terms - implying feedforward from the desired position, velocity and acceleration. This controller ensures that the manipulator joint angles track any reference trajectories. The feedback controller is of proportional-integral-derivative (PID) type and is designed to achieve pole placement. This controller reduces any initial tracking error to zero as desired and also ensures that robust steady-state tracking of step-plus-exponential trajectories is achieved by the joint angles. Simple and explicit expressions of computation of the feedforward and feedback gains are obtained based on the linearized model of robot dynamics. This leads to computationally efficient schemes for either on-line gain computation or off-line gain scheduling to account for variations in the linearized robot model due to changes in the operating point. The joint control scheme is extended to direct control of the end-effector motion in Cartesian space. Simulation results are given for illustration.
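
    Restating the structure described, with generic gain symbols (A, B, C, K_p, K_i, K_d) that are not the paper's notation, the joint-space control law combines the PD2 feedforward and PID feedback terms as follows:

    ```latex
    % Feedforward PD^2 terms driven by the desired trajectory (first three terms)
    % plus PID feedback on the tracking error e = q_d - q (last three terms).
    \[
      \tau(t) = A\,\ddot{q}_d(t) + B\,\dot{q}_d(t) + C\,q_d(t)
              + K_p\,e(t) + K_i \int_0^t e(\sigma)\,d\sigma + K_d\,\dot{e}(t),
      \qquad e(t) = q_d(t) - q(t).
    \]
    ```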

  7. Soft Actuators for Small-Scale Robotics.

    PubMed

    Hines, Lindsey; Petersen, Kirstin; Lum, Guo Zhan; Sitti, Metin

    2017-04-01

    This review comprises a detailed survey of ongoing methodologies for soft actuators, highlighting approaches suitable for nanometer- to centimeter-scale robotic applications. Soft robots present a special design challenge in that their actuation and sensing mechanisms are often highly integrated with the robot body and overall functionality. When less than a centimeter, they belong to an even more special subcategory of robots or devices, in that they often lack on-board power, sensing, computation, and control. Soft, active materials are particularly well suited for this task, with a wide range of stimulants and a number of impressive examples, demonstrating large deformations, high motion complexities, and varied multifunctionality. Recent research includes both the development of new materials and composites, as well as novel implementations leveraging the unique properties of soft materials. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. PaR-PaR Laboratory Automation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linshiz, G; Stawski, N; Poust, S

    2013-05-01

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  9. PaR-PaR laboratory automation platform.

    PubMed

    Linshiz, Gregory; Stawski, Nina; Poust, Sean; Bi, Changhao; Keasling, Jay D; Hillson, Nathan J

    2013-05-17

    Labor-intensive multistep biological tasks, such as the construction and cloning of DNA molecules, are prime candidates for laboratory automation. Flexible and biology-friendly operation of robotic equipment is key to its successful integration in biological laboratories, and the efforts required to operate a robot must be much smaller than the alternative manual lab work. To achieve these goals, a simple high-level biology-friendly robot programming language is needed. We have developed and experimentally validated such a language: Programming a Robot (PaR-PaR). The syntax and compiler for the language are based on computer science principles and a deep understanding of biological workflows. PaR-PaR allows researchers to use liquid-handling robots effectively, enabling experiments that would not have been considered previously. After minimal training, a biologist can independently write complicated protocols for a robot within an hour. Adoption of PaR-PaR as a standard cross-platform language would enable hand-written or software-generated robotic protocols to be shared across laboratories.

  10. A robot sets a table: a case for hybrid reasoning with different types of knowledge

    NASA Astrophysics Data System (ADS)

    Mansouri, Masoumeh; Pecora, Federico

    2016-09-01

    An important contribution of AI to Robotics is the model-centred approach, whereby competent robot behaviour stems from automated reasoning in models of the world which can be changed to suit different environments, physical capabilities and tasks. However models need to capture diverse (and often application-dependent) aspects of the robot's environment and capabilities. They must also have good computational properties, as robots need to reason while they act in response to perceived context. In this article, we investigate the use of a meta-CSP-based technique to interleave reasoning in diverse knowledge types. We reify the approach through a robotic waiter case study, for which a particular selection of spatial, temporal, resource and action KR formalisms is made. Using this case study, we discuss general principles pertaining to the selection of appropriate KR formalisms and jointly reasoning about them. The resulting integration is evaluated both formally and experimentally on real and simulated robotic platforms.

  11. Conjugate-Gradient Algorithms For Dynamics Of Manipulators

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1993-01-01

    Algorithms for serial and parallel computation of forward dynamics of multiple-link robotic manipulators by conjugate-gradient method developed. Parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
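
    The brief does not reproduce the algorithm; the sketch below is the standard conjugate-gradient iteration applied to the kind of symmetric positive-definite system that arises in forward dynamics (solving M(q) qdd = tau - bias for the joint accelerations), with a toy mass matrix chosen for illustration.

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
        """Solve A x = b for symmetric positive-definite A by conjugate gradients,
        e.g. M(q) qdd = tau - bias in forward dynamics."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    # Toy joint-space mass matrix (symmetric positive definite) and generalized force.
    M = np.array([[2.5, 0.3, 0.1], [0.3, 1.8, 0.2], [0.1, 0.2, 1.2]])
    tau_minus_bias = np.array([1.0, -0.5, 0.2])
    qdd = conjugate_gradient(M, tau_minus_bias)
    print(qdd, np.allclose(M @ qdd, tau_minus_bias))
    ```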

  12. Decentralized Adaptive Control For Robots

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1989-01-01

    Precise knowledge of dynamics not required. Proposed scheme for control of multijointed robotic manipulator calls for independent control subsystem for each joint, consisting of proportional/integral/derivative feedback controller and position/velocity/acceleration feedforward controller, both with adjustable gains. Independent joint controller compensates for unpredictable effects, gravitation, and dynamic coupling between motions of joints, while forcing joints to track reference trajectories. Scheme amenable to parallel processing in distributed computing system wherein each joint controlled by relatively simple algorithm on dedicated microprocessor.

  13. The flight robotics laboratory

    NASA Technical Reports Server (NTRS)

    Tobbe, Patrick A.; Williamson, Marlin J.; Glaese, John R.

    1988-01-01

    The Flight Robotics Laboratory of the Marshall Space Flight Center is described in detail. This facility, containing an eight degree of freedom manipulator, precision air bearing floor, teleoperated motion base, reconfigurable operator's console, and VAX 11/750 computer system, provides simulation capability to study human/system interactions of remote systems. The facility hardware, software and subsequent integration of these components into a real time man-in-the-loop simulation for the evaluation of spacecraft contact proximity and dynamics are described.

  14. The Role of Audio-Visual Feedback in a Thought-Based Control of a Humanoid Robot: A BCI Study in Healthy and Spinal Cord Injured People.

    PubMed

    Tidoni, Emmanuele; Gergondet, Pierre; Fusco, Gabriele; Kheddar, Abderrahmane; Aglioti, Salvatore M

    2017-06-01

    The efficient control of our body and successful interaction with the environment are possible through the integration of multisensory information. Brain-computer interface (BCI) may allow people with sensorimotor disorders to actively interact in the world. In this study, visual information was paired with auditory feedback to improve the BCI control of a humanoid surrogate. Healthy and spinal cord injured (SCI) people were asked to embody a humanoid robot and complete a pick-and-place task by means of a visual evoked potentials BCI system. Participants observed the remote environment from the robot's perspective through a head mounted display. Human-footsteps and computer-beep sounds were used as synchronous/asynchronous auditory feedback. Healthy participants achieved better placing accuracy when listening to human footstep sounds relative to a computer-generated sound. SCI people demonstrated more difficulty in steering the robot during asynchronous auditory feedback conditions. Importantly, subjective reports highlighted that the BCI mask overlaying the display did not limit the observation of the scenario and the feeling of being in control of the robot. Overall, the data seem to suggest that sensorimotor-related information may improve the control of external devices. Further studies are required to understand how the contribution of residual sensory channels could improve the reliability of BCI systems.

  15. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.

  16. Parametric optimization in virtual prototyping environment of the control device for a robotic system used in thin layers deposition

    NASA Astrophysics Data System (ADS)

    Enescu (Balaş), M. L.; Alexandru, C.

    2016-08-01

    The paper deals with the optimal design of the control system for a 6-DOF robot used in thin layers deposition. The optimization is based on a parametric technique: the design objective is modelled as a numerical function, and the optimal values of the design variables are then established so as to minimize that objective function. The robotic system is a mechatronic product, which integrates the mechanical device and the controlled operating device. The mechanical device of the robot was designed in the CAD (Computer Aided Design) software CATIA, the 3D model then being transferred to the MBS (Multi-Body Systems) environment ADAMS/View. The control system was developed in the concurrent engineering concept, through integration with the MBS mechanical model, using the DFC (Design for Control) software solution EASY5. The angular motions required in the six joints of the robot to obtain the imposed trajectory of the end-effector were established by performing the inverse kinematic analysis. The positioning error in each joint of the robot is used as the design objective, the optimization goal being to minimize its root mean square during simulation, which is a measure of the magnitude of this varying quantity.
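    As a hedged illustration of the stated objective only (not the authors' ADAMS/EASY5 setup), the snippet below computes the root-mean-square positioning error of one joint over a simulated trajectory; an outer optimizer would then adjust controller parameters to minimize this value for each joint. The sample trajectories are invented numbers.

    ```python
    # Minimal sketch of an RMS positioning-error objective for one joint.
    import numpy as np

    def rms_positioning_error(reference_angles, simulated_angles):
        """RMS of the tracking error between reference and simulated joint trajectories."""
        error = np.asarray(reference_angles) - np.asarray(simulated_angles)
        return float(np.sqrt(np.mean(error ** 2)))

    # Hypothetical samples from one joint over a simulated run (radians):
    ref = [0.0, 0.1, 0.2, 0.3, 0.4]
    sim = [0.0, 0.08, 0.21, 0.28, 0.41]
    objective = rms_positioning_error(ref, sim)   # value the optimizer would minimize
    ```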

  17. Visual and tactile interfaces for bi-directional human robot communication

    NASA Astrophysics Data System (ADS)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal due to redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots requires that robots have the ability to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalents of the visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers and to measure the classification accuracy of visual signal interfaces, and it provides an integration example including two robotic platforms.

  18. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
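    The optical-mouse chips compute flow in dedicated hardware; purely as a software illustration of the same motion-cue idea (not the sensor described above), the sketch below estimates dense optical flow between two frames with OpenCV's Farneback method and averages it into a single displacement estimate.

    ```python
    # Illustrative dense optical flow on a synthetic image pair.
    import cv2
    import numpy as np

    def mean_flow(prev_gray, curr_gray):
        """Return the average (dx, dy) image motion between two grayscale frames."""
        # Arguments after None: pyramid scale, levels, window, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        return flow[..., 0].mean(), flow[..., 1].mean()

    # Synthetic example: a textured patch shifted two pixels to the right.
    rng = np.random.default_rng(0)
    frame1 = (rng.random((64, 64)) * 255).astype(np.uint8)
    frame2 = np.roll(frame1, 2, axis=1)
    dx, dy = mean_flow(frame1, frame2)   # dx should be roughly +2 (rightward motion cue)
    ```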

  19. Nonhuman gamblers: lessons from rodents, primates, and robots

    PubMed Central

    Paglieri, Fabio; Addessi, Elsa; De Petrillo, Francesca; Laviola, Giovanni; Mirolli, Marco; Parisi, Domenico; Petrosino, Giancarlo; Ventricelli, Marialba; Zoratto, Francesca; Adriani, Walter

    2014-01-01

    The search for neuronal and psychological underpinnings of pathological gambling in humans would benefit from investigating related phenomena outside our species as well. In this paper, we present a survey of studies in three widely different populations of agents, namely rodents, non-human primates, and robots. Each of these populations offers valuable and complementary insights on the topic, as the literature demonstrates. In addition, we highlight the deep and complex connections between relevant results across these different areas of research (i.e., cognitive and computational neuroscience, neuroethology, cognitive primatology, neuropsychiatry, evolutionary robotics), to make the case for a greater degree of methodological integration in future studies on pathological gambling. PMID:24574984

  20. Robotic Variable Polarity Plasma Arc (VPPA) Welding

    NASA Technical Reports Server (NTRS)

    Jaffery, Waris S.

    1993-01-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer-controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282-inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot and two axes of positioner movement). The robot control system is programmed to maintain Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation as required by the VPPAW process. The combined speeds of the robot and the positioner are integrated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  1. Robotic Variable Polarity Plasma Arc (VPPA) welding

    NASA Astrophysics Data System (ADS)

    Jaffery, Waris S.

    1993-02-01

    The need for automated plasma welding was identified in the early stages of the Space Station Freedom Program (SSFP) because it requires approximately 1.3 miles of welding for assembly. As a result of the Variable Polarity Plasma Arc Welding (VPPAW) process's ability to make virtually defect-free welds in aluminum, it was chosen to fulfill the welding needs. Space Station Freedom will be constructed of 2219 aluminum utilizing the computer-controlled VPPAW process. The 'Node Radial Docking Port', with its saddle-shaped weld path, has a constantly changing surface angle over 360 deg of the 282-inch weld. The automated robotic VPPAW process requires eight axes of motion (six axes of robot and two axes of positioner movement). The robot control system is programmed to maintain Torch Center Point (TCP) orientation perpendicular to the part while the part positioner is tilted and rotated to maintain the vertical-up orientation as required by the VPPAW process. The combined speeds of the robot and the positioner are integrated to maintain a constant speed between the part and the torch. A laser-based vision sensor system has also been integrated to track the seam and map the surface of the profile during welding.

  2. ROBOCAL (Robotic Calorimetry): An automated NDA (Nondestructive assay) calorimetry and gamma isotopic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, J.R.; Bonner, C.A.; Ostenak, C.A.

    1989-01-01

    ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototypical robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multi-drawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface and data-base system are provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices. 10 refs., 10 figs., 4 tabs.

  3. The contaminant analysis automation robot implementation for the automated laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younkin, J.R.; Igou, R.E.; Urenda, T.D.

    1995-12-31

    The Contaminant Analysis Automation (CAA) project defines the automated laboratory as a series of standard laboratory modules (SLM) serviced by a robotic standard support module (SSM). These SLMs are designed to allow plug-and-play integration into automated systems that perform standard analysis methods (SAM). While the SLMs are autonomous in the execution of their particular chemical processing tasks, the SAM concept relies on a high-level task sequence controller (TSC) to coordinate the robotic delivery of materials requisite for SLM operations, initiate an SLM operation with the chemical-method-dependent operating parameters, and coordinate the robotic removal of materials from the SLM when its operation is complete, readying them for transport. The Supervisor and Subsystems (GENISAS) software governs events from the SLMs and the robot. The Intelligent System Operating Environment (ISOE) enables the inter-process communications used by GENISAS. CAA selected the Hewlett-Packard Optimized Robot for Chemical Analysis (ORCA) and its associated Windows-based Methods Development Software (MDS) as the robot SSM. The MDS software is used to teach the robot each SLM position and the required material-port motions. To allow the TSC to command these SLM motions, a hardware and software implementation was required that allowed message passing between different operating systems. This implementation involved the use of a Virtual Memory Extended (VME) rack with a Force CPU-30 computer running VxWorks, a real-time multitasking operating system, and a RadiSys PC-compatible VME computer running MDS. A GENISAS server on the Force computer accepts a transport command from the TSC, a GENISAS supervisor, over Ethernet and notifies software on the RadiSys PC of the pending command through VMEbus shared memory. The command is then delivered to the MDS robot control software using a Windows Dynamic Data Exchange conversation.

  4. Research and applications: Artificial intelligence

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Duda, R. O.; Fikes, R. E.; Hart, P. E.; Nilsson, N. J.; Thorndyke, P. W.; Wilber, B. M.

    1971-01-01

    Research in the field of artificial intelligence is discussed. The focus of recent work has been the design, implementation, and integration of a completely new system for the control of a robot that plans, learns, and carries out tasks autonomously in a real laboratory environment. The computer implementation of low-level and intermediate-level actions; routines for automated vision; and the planning, generalization, and execution mechanisms are reported. A scenario that demonstrates the approximate capabilities of the current version of the entire robot system is presented.

  5. Teaching and implementing autonomous robotic lab walkthroughs in a biotech laboratory through model-based visual tracking

    NASA Astrophysics Data System (ADS)

    Wojtczyk, Martin; Panin, Giorgio; Röder, Thorsten; Lenz, Claus; Nair, Suraj; Heidemann, Rüdiger; Goudar, Chetan; Knoll, Alois

    2010-01-01

    After more than 30 years of using robots for classic industrial automation applications, service robots form a constantly increasing market, although the big breakthrough is still awaited. Our approach to service robots was driven by the idea of supporting lab personnel in a biotechnology laboratory. After initial development in Germany, a mobile robot platform, extended with an industrial manipulator and the necessary sensors for indoor localization and object manipulation, was shipped to Bayer HealthCare in Berkeley, CA, USA, a global player in the sector of biopharmaceutical products located in the San Francisco Bay Area. The goal of the mobile manipulator is to support the off-shift staff by carrying out completely autonomous or guided, remote-controlled lab walkthroughs, which we implement utilizing a recent development of our computer vision group: OpenTL, an integrated framework for model-based visual tracking.

  6. Surgical robotics for patient safety in the perioperative environment: realizing the promise.

    PubMed

    Lai, Fuji; Louw, Deon

    2007-06-01

    Surgery is at a crossroads of complexity. However, there is a potential path toward patient safety. One such course is to leverage computer and robotic assist techniques in the reduction and interception of error in the perioperative environment. This white paper attempts to help pave the road toward realizing that promise by outlining a research agenda. The paper will briefly review the current status of surgical robotics and summarize any conclusions that can be reached to date based on existing research. It will then lay out a roadmap for future research to determine how surgical robots should be optimally designed and integrated into the perioperative workflow and process. Successful movement down this path would involve focused efforts and multiagency collaboration to address the research priorities outlined, thereby realizing the full potential of surgical robotics to augment human capabilities, enhance task performance, extend the reach of surgical care, improve health care quality, and ultimately enhance patient safety.

  7. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-01-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  8. The use of interactive computer vision and robot hand controllers for enhancing manufacturing safety

    NASA Astrophysics Data System (ADS)

    Marzwell, Neville I.; Jacobus, Charles J.; Peurach, Thomas M.; Mitchell, Brian T.

    1994-02-01

    Currently available robotic systems provide limited support for CAD-based model-driven visualization, sensing algorithm development and integration, and automated graphical planning systems. This paper describes ongoing work which provides the functionality necessary to apply advanced robotics to automated manufacturing and assembly operations. An interface has been built which incorporates 6-DOF tactile manipulation, displays for three-dimensional graphical models, and automated tracking functions which depend on automated machine vision. A set of tools for single and multiple focal plane sensor image processing and understanding has been demonstrated which utilizes object recognition models. The resulting tool will enable sensing and planning from computationally simple graphical objects. A synergistic interplay between human and operator vision is created from programmable feedback received from the controller. This approach can be used as the basis for implementing enhanced safety in automated robotics manufacturing, assembly, repair and inspection tasks in both ground and space applications. Thus, an interactive capability has been developed to match the modeled environment to the real task environment for safe and predictable task execution.

  9. An Adaptive Scheme for Robot Localization and Mapping with Dynamically Configurable Inter-Beacon Range Measurements

    PubMed Central

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-01-01

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption. PMID:24776938

  10. An adaptive scheme for robot localization and mapping with dynamically configurable inter-beacon range measurements.

    PubMed

    Torres-González, Arturo; Martinez-de Dios, Jose Ramiro; Ollero, Anibal

    2014-04-25

    This work is motivated by robot-sensor network cooperation techniques where sensor nodes (beacons) are used as landmarks for range-only (RO) simultaneous localization and mapping (SLAM). This paper presents a RO-SLAM scheme that actuates over the measurement gathering process using mechanisms that dynamically modify the rate and variety of measurements that are integrated in the SLAM filter. It includes a measurement gathering module that can be configured to collect direct robot-beacon and inter-beacon measurements with different inter-beacon depth levels and at different rates. It also includes a supervision module that monitors the SLAM performance and dynamically selects the measurement gathering configuration balancing SLAM accuracy and resource consumption. The proposed scheme has been applied to an extended Kalman filter SLAM with auxiliary particle filters for beacon initialization (PF-EKF SLAM) and validated with experiments performed in the CONET Integrated Testbed. It achieved lower map and robot errors (34% and 14%, respectively) than traditional methods with a lower computational burden (16%) and similar beacon energy consumption.
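    The following is not the authors' PF-EKF implementation, only a minimal sketch of the core range-only EKF measurement update such a SLAM filter performs when a robot-beacon (or inter-beacon) distance arrives; the state layout, indices, and noise value are illustrative assumptions.

    ```python
    # Sketch of a single range-measurement EKF update for range-only SLAM.
    import numpy as np

    def range_update(x, P, z, i_robot, i_beacon, sigma_r):
        """Update state x and covariance P with one measured distance z.

        i_robot / i_beacon index the x coordinate of each entity in the state
        (its y coordinate is assumed to sit at the next index).
        """
        dx = x[i_beacon] - x[i_robot]
        dy = x[i_beacon + 1] - x[i_robot + 1]
        dist = np.hypot(dx, dy)                      # predicted range
        H = np.zeros((1, x.size))                    # measurement Jacobian
        H[0, i_robot], H[0, i_robot + 1] = -dx / dist, -dy / dist
        H[0, i_beacon], H[0, i_beacon + 1] = dx / dist, dy / dist
        S = H @ P @ H.T + sigma_r ** 2               # innovation covariance (1x1)
        K = P @ H.T / S                              # Kalman gain
        x = x + (K * (z - dist)).ravel()             # corrected state
        P = (np.eye(x.size) - K @ H) @ P             # corrected covariance
        return x, P
    ```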

  11. Automated platform for designing multiple robot work cells

    NASA Astrophysics Data System (ADS)

    Osman, N. S.; Rahman, M. A. A.; Rahman, A. A. Abdul; Kamsani, S. H.; Bali Mohamad, B. M.; Mohamad, E.; Zaini, Z. A.; Rahman, M. F. Ab; Mohamad Hatta, M. N. H.

    2017-06-01

    Designing multiple-robot work cells is a very knowledge-intensive, intricate, and time-consuming process. This paper elaborates the development of a computer-aided design program for generating multiple-robot work cells that offers a user-friendly interface. The primary purpose of this work is to provide a fast and easy platform that reduces cost and human involvement, with minimal trial-and-error adjustment. The automated platform is constructed based on the variant-shaped configuration concept and its mathematical model. The robot work cell layout, system components, and construction procedure of the automated platform are discussed in this paper; the integration of these items automatically provides the optimum robot work cell design according to the information set by the user. This system is implemented on top of CATIA V5 software and utilises its Part Design, Assembly Design, and Macro tools. The current outcomes of this work provide a basis for future investigation in developing a flexible configuration system for multiple-robot work cells.

  12. Center of excellence for small robots

    NASA Astrophysics Data System (ADS)

    Nguyen, Hoa G.; Carroll, Daniel M.; Laird, Robin T.; Everett, H. R.

    2005-05-01

    The mission of the Unmanned Systems Branch of SPAWAR Systems Center, San Diego (SSC San Diego) is to provide network-integrated robotic solutions for Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) applications, serving and partnering with industry, academia, and other government agencies. We believe the most important criterion for a successful acquisition program is producing a value-added end product that the warfighter needs, uses and appreciates. Through our accomplishments in the laboratory and field, SSC San Diego has been designated the Center of Excellence for Small Robots by the Office of the Secretary of Defense Joint Robotics Program. This paper covers the background, experience, and collaboration efforts by SSC San Diego to serve as the "Impedance-Matching Transformer" between the robotic user and technical communities. Special attention is given to our Unmanned Systems Technology Imperatives for Research, Development, Testing and Evaluation (RDT&E) of Small Robots. Active projects, past efforts, and architectures are provided as success stories for the Unmanned Systems Development Approach.

  13. Toward a practical mobile robotic aid system for people with severe physical disabilities.

    PubMed

    Regalbuto, M A; Krouskop, T A; Cheatham, J B

    1992-01-01

    A simple, relatively inexpensive robotic system that can aid severely disabled persons by providing pick-and-place manipulative abilities to augment the functions of human or trained animal assistants is under development at Rice University and the Baylor College of Medicine. A stand-alone software application program runs on a Macintosh personal computer and provides the user with a selection of interactive windows for commanding the mobile robot via cursor action. A HERO 2000 robot has been modified such that its workspace extends from the floor to tabletop heights, and the robot is interfaced to a Macintosh SE via a wireless communications link for untethered operation. Integrated into the system are hardware and software which allow the user to control household appliances in addition to the robot. A separate Machine Control Interface device converts breath action and head or other three-dimensional motion inputs into cursor signals. Preliminary in-home and laboratory testing has demonstrated the utility of the system to perform useful navigational and manipulative tasks.

  14. Design and Implementation of a Brain Computer Interface System for Controlling a Robotic Claw

    NASA Astrophysics Data System (ADS)

    Angelakis, D.; Zoumis, S.; Asvestas, P.

    2017-11-01

    The aim of this paper is to present the design and implementation of a brain-computer interface (BCI) system that can control a robotic claw. The system is based on the Emotiv Epoc headset, which provides the capability of simultaneous recording of 14 EEG channels, as well as wireless connectivity by means of the Bluetooth protocol. The system is initially trained to decode what the user thinks into properly formatted data. The headset communicates with a personal computer, which runs a dedicated software application implemented under the Processing integrated development environment. The application acquires the data from the headset and issues suitable commands to an Arduino Uno board. The board decodes the received commands and produces corresponding signals to a servo motor that controls the position of the robotic claw. The system was tested successfully on a healthy male subject, aged 28 years. The results are promising, taking into account that no specialized hardware was used. However, tests on a larger number of users are necessary in order to draw solid conclusions regarding the performance of the proposed system.
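    Purely as an illustration of the PC-to-board link described above (the command strings, port name, and baud rate are assumptions, not the paper's protocol), the sketch below maps a decoded BCI intent to a short serial command that a microcontroller sketch would translate into a servo angle.

    ```python
    # Hypothetical PC-side serial command sender for a decoded BCI intent.
    import serial  # pyserial

    COMMANDS = {"open_claw": b"O\n", "close_claw": b"C\n"}  # hypothetical command set

    def send_intent(port, intent):
        """Send the serial command corresponding to a decoded user intent."""
        with serial.Serial(port, 9600, timeout=1) as link:
            link.write(COMMANDS[intent])

    # Example (port name is an assumption): after the BCI decodes a 'close' intent,
    # send_intent("/dev/ttyACM0", "close_claw")
    ```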

  15. Decentralized adaptive control of robot manipulators with robust stabilization design

    NASA Technical Reports Server (NTRS)

    Yuan, Bau-San; Book, Wayne J.

    1988-01-01

    Due to geometric nonlinearities and complex dynamics, a decentralized technique for adaptive control of multilink robot arms is attractive. Lyapunov-function theory for stability analysis provides an approach to robust stabilization. Each joint of the arm is treated as a component subsystem. The adaptive controller is made locally stable with servo signals including proportional and integral gains. This bounds the dynamical interactions with other subsystems. A nonlinear controller which stabilizes the system with uniform boundedness is used to improve the robustness properties of the overall system. As a result, the robot tracks the reference trajectories with convergence. This strategy keeps computation simple and therefore facilitates real-time implementation.

  16. Towards a real-time interface between a biomimetic model of sensorimotor cortex and a robotic arm

    PubMed Central

    Dura-Bernal, Salvador; Chadderdon, George L; Neymotin, Samuel A; Francis, Joseph T; Lytton, William W

    2015-01-01

    Brain-machine interfaces can greatly improve the performance of prosthetics. Utilizing biomimetic neuronal modeling in brain machine interfaces (BMI) offers the possibility of providing naturalistic motor-control algorithms for control of a robotic limb. This will allow finer control of a robot, while also giving us new tools to better understand the brain’s use of electrical signals. However, the biomimetic approach presents challenges in integrating technologies across multiple hardware and software platforms, so that the different components can communicate in real-time. We present the first steps in an ongoing effort to integrate a biomimetic spiking neuronal model of motor learning with a robotic arm. The biomimetic model (BMM) was used to drive a simple kinematic two-joint virtual arm in a motor task requiring trial-and-error convergence on a single target. We utilized the output of this model in real time to drive mirroring motion of a Barrett Technology WAM robotic arm through a user datagram protocol (UDP) interface. The robotic arm sent back information on its joint positions, which was then used by a visualization tool on the remote computer to display a realistic 3D virtual model of the moving robotic arm in real time. This work paves the way towards a full closed-loop biomimetic brain-effector system that can be incorporated in a neural decoder for prosthetic control, to be used as a platform for developing biomimetic learning algorithms for controlling real-time devices. PMID:26709323
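    As a hedged sketch of the kind of UDP exchange described above (the host, port, and two-joint packing are assumptions, not the authors' interface), commanded joint angles are sent to the robot side and the measured joint positions are read back for visualization.

    ```python
    # Illustrative UDP joint-angle exchange between a model process and a robot controller.
    import socket
    import struct

    ROBOT_ADDR = ("192.168.1.50", 9000)   # hypothetical robot-side endpoint

    def exchange_joint_state(sock, shoulder, elbow):
        """Send two commanded joint angles (radians); return the reported positions."""
        sock.sendto(struct.pack("!2d", shoulder, elbow), ROBOT_ADDR)
        data, _ = sock.recvfrom(1024)
        return struct.unpack("!2d", data[:16])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.05)   # keep the loop real-time; skip a cycle on timeout
    # measured = exchange_joint_state(sock, 0.4, 1.1)   # feed to the 3D visualization
    ```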

  17. Robotic Observatory System Design-Specification Considerations for Achieving Long-Term Sustainable Precision Performance

    NASA Astrophysics Data System (ADS)

    Wray, J. D.

    2003-05-01

    The robotic observatory telescope must point precisely at the target object, and then track autonomously to a fraction of the FWHM of the system PSF for durations of ten to twenty minutes or more. It must retain this precision while continuing to function at rates approaching thousands of observations per night for all its years of useful life. These stringent requirements raise new challenges unique to robotic telescope systems design. Critical design considerations are driven by the applicability of the above requirements to all systems of the robotic observatory, including telescope and instrument systems, telescope-dome enclosure systems, combined electrical and electronics systems, environmental (e.g. seeing) control systems and integrated computer control software systems. Traditional telescope design considerations include the effects of differential thermal strain, elastic flexure, plastic flexure and slack or backlash with respect to focal stability, optical alignment and angular pointing and tracking precision. Robotic observatory design must holistically encapsulate these traditional considerations within the overall objective of maximized long-term sustainable precision performance. This overall objective is accomplished through combining appropriate mechanical and dynamical system characteristics with a full-time real-time telescope mount model feedback computer control system. Important design considerations include: identifying and reducing quasi-zero-backlash; increasing size to increase precision; directly encoding axis shaft rotation; pointing and tracking operation via real-time feedback between precision mount model and axis mounted encoders; use of monolithic construction whenever appropriate for sustainable mechanical integrity; accelerating dome motion to eliminate repetitive shock; ducting internal telescope air to outside the dome; and the principal design criterion: maximizing elastic repeatability while minimizing slack, plastic deformation and hysteresis to facilitate long-term repeatably precise pointing and tracking performance.

  18. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  19. Towards Robot Scientists for autonomous scientific discovery

    PubMed Central

    2010-01-01

    We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist. PMID:20119518

  20. Towards Robot Scientists for autonomous scientific discovery.

    PubMed

    Sparkes, Andrew; Aubrey, Wayne; Byrne, Emma; Clare, Amanda; Khan, Muhammed N; Liakata, Maria; Markham, Magdalena; Rowland, Jem; Soldatova, Larisa N; Whelan, Kenneth E; Young, Michael; King, Ross D

    2010-01-04

    We review the main components of autonomous scientific discovery, and how they lead to the concept of a Robot Scientist. This is a system which uses techniques from artificial intelligence to automate all aspects of the scientific discovery process: it generates hypotheses from a computer model of the domain, designs experiments to test these hypotheses, runs the physical experiments using robotic systems, analyses and interprets the resulting data, and repeats the cycle. We describe our two prototype Robot Scientists: Adam and Eve. Adam has recently proven the potential of such systems by identifying twelve genes responsible for catalysing specific reactions in the metabolic pathways of the yeast Saccharomyces cerevisiae. This work has been formally recorded in great detail using logic. We argue that the reporting of science needs to become fully formalised and that Robot Scientists can help achieve this. This will make scientific information more reproducible and reusable, and promote the integration of computers in scientific reasoning. We believe the greater automation of both the physical and intellectual aspects of scientific investigations to be essential to the future of science. Greater automation improves the accuracy and reliability of experiments, increases the pace of discovery and, in common with conventional laboratory automation, removes tedious and repetitive tasks from the human scientist.

  1. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics and a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; and the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high-dimensional, reversible, low-power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772

  2. Tree fruit orchard of the future: An overview

    USDA-ARS?s Scientific Manuscript database

    Mechanization has been prevailing in row crops over the past decades, and now gradually in some fruit crops, with integration of innovative computers, robotics, mechanics, and precision orchard management. This talk will give an overview of challenges facing commercial fruit industries and needs of ...

  3. Modelling brain emergent behaviours through coevolution of neural agents.

    PubMed

    Maniadakis, Michail; Trahanias, Panos

    2006-06-01

    Recently, many research efforts have focused on modelling partial brain areas with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing the integrative performance of substructures. Moreover, implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models for motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.

  4. Progress in the development of shallow-water mapping systems

    USGS Publications Warehouse

    Bergeron, E.; Worley, C.R.; O'Brien, T.

    2007-01-01

    The USGS (US Geological Survey) Coastal and Marine Geology Program has deployed an advanced autonomous shallow-draft robotic vehicle, Iris, for shallow-water mapping in Apalachicola Bay, Florida. The vehicle incorporates a side scan sonar system, seismic-reflection profiler, single-beam echosounder, and global positioning system (GPS) navigation. It is equipped with an onboard microprocessor-based motor controller, delivering signals for speed and steering to hull-mounted brushless direct-current thrusters. An onboard motion sensor in the Sea Robotics vehicle control system enclosure has been integrated in the vehicle to measure heave, pitch, roll, and heading. Three water-tight enclosures are mounted along the vehicle axis for the Edgetech computer and electronics system including the Sea Robotics computer, a control and wireless communications system, and a Thales ZXW real-time kinematic (RTK) GPS receiver. The vehicle has produced high-quality seismic-reflection and side scan sonar data, which will help in developing baseline oyster habitat maps.

  5. Slow Computing Simulation of Bio-plausible Control

    DTIC Science & Technology

    2012-03-01

    [Only briefing-chart residue is recoverable from this record: it notes that neuromorphic chips would become necessary for such information networks, and that small, unstable flying platforms (quad rotors, robotic insects) currently require RTK GPS or Vicon closed-circuit tracking; the remaining chart labels on sensing modalities (visual and IR), FPGA/ASIC/neuromorphic-chip implementations, and network configurations are not reconstructable.]

  6. SpaceWire-Based Control System Architecture for the Lightweight Advanced Robotic Arm Demonstrator [LARAD]

    NASA Astrophysics Data System (ADS)

    Rucinski, Marek; Coates, Adam; Montano, Giuseppe; Allouis, Elie; Jameux, David

    2015-09-01

    The Lightweight Advanced Robotic Arm Demonstrator (LARAD) is a state-of-the-art, two-meter long robotic arm for planetary surface exploration currently being developed by a UK consortium led by Airbus Defence and Space Ltd under contract to the UK Space Agency (CREST-2 programme). LARAD has a modular design, which allows for experimentation with different electronics and control software. The control system architecture includes the on-board computer, control software and firmware, and the communication infrastructure (e.g. data links, switches) connecting on-board computer(s), sensors, actuators and the end-effector. The purpose of the control system is to operate the arm according to pre-defined performance requirements, monitoring its behaviour in real-time and performing safing/recovery actions in case of faults. This paper reports on the results of a recent study about the feasibility of the development and integration of a novel control system architecture for LARAD fully based on the SpaceWire protocol. The current control system architecture is based on the combination of two communication protocols, Ethernet and CAN. The new SpaceWire-based control system will allow for improved monitoring and telecommanding performance thanks to higher communication data rate, allowing for the adoption of advanced control schemes, potentially based on multiple vision sensors, and for the handling of sophisticated end-effectors that require fine control, such as science payloads or robotic hands.

  7. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    PubMed

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

    Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", the applications of surgical robotic technology have expanded as well. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority as compared to traditional techniques prior to assimilation of their use amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors utilizing exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  8. Virtual collaborative environments: programming and controlling robotic devices remotely

    NASA Astrophysics Data System (ADS)

    Davies, Brady R.; McDonald, Michael J., Jr.; Harrigan, Raymond W.

    1995-12-01

    This paper describes a technology for remote sharing of intelligent electro-mechanical devices. An architecture and actual system have been developed and tested, based on the proposed National Information Infrastructure (NII) or Information Highway, to facilitate programming and control of intelligent programmable machines (like robots, machine tools, etc.). Using appropriate geometric models, integrated sensors, video systems, and computing hardware; computer controlled resources owned and operated by different (in a geographic sense as well as legal sense) entities can be individually or simultaneously programmed and controlled from one or more remote locations. Remote programming and control of intelligent machines will create significant opportunities for sharing of expensive capital equipment. Using the technology described in this paper, university researchers, manufacturing entities, automation consultants, design entities, and others can directly access robotic and machining facilities located across the country. Disparate electro-mechanical resources will be shared in a manner similar to the way supercomputers are accessed by multiple users. Using this technology, it will be possible for researchers developing new robot control algorithms to validate models and algorithms right from their university labs without ever owning a robot. Manufacturers will be able to model, simulate, and measure the performance of prospective robots before selecting robot hardware optimally suited for their intended application. Designers will be able to access CNC machining centers across the country to fabricate prototypic parts during product design validation. An existing prototype architecture and system has been developed and proven. Programming and control of a large gantry robot located at Sandia National Laboratories in Albuquerque, New Mexico, was demonstrated from such remote locations as Washington D.C., Washington State, and Southern California.

  9. An architectural approach to create self organizing control systems for practical autonomous robots

    NASA Technical Reports Server (NTRS)

    Greiner, Helen

    1991-01-01

    For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are as follows: (1) Growth problem; (2) Software generation; (3) Interaction with environment; (4) Reliability; and (5) Resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.

  10. Controllability of Complex Dynamic Objects

    NASA Astrophysics Data System (ADS)

    Kalach, G. G.; Kazachek, N. A.; Morozov, A. A.

    2017-01-01

    Quality requirements for mobile robots intended for both specialized and everyday use are increasing in step with the complexity of the technological tasks assigned to the robots. Whether a mobile robot is for ground, aerial, or underwater use, the relevant quality characteristics can be summarized under the common concept of agility. This term denotes the object's (the robot's) ability to react quickly to control actions (change speed and direction), turn in a limited area, etc. When using this approach in integrated assessment of the quality characteristics of an object together with its control system, it seems more constructive to use the term “degree of control”. This paper assesses the degree of control using the example of a mobile robot with a variable-geometry drive wheel axle. We show changes in the degree of control depending on the robot's configuration, with results illustrated by calculation data and by computer and practical experiments. We describe the prospects of using intelligent technology for efficient control of objects with a high degree of controllability.

  11. Quaternions in computer vision and robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pervin, E.; Webb, J.A.

    1982-01-01

    Computer vision and robotics suffer from not having good tools for manipulating three-dimensional objects. Vectors, coordinate geometry, and trigonometry all have deficiencies. Quaternions can be used to solve many of these problems. Many properties of quaternions that are relevant to computer vision and robotics are developed. Examples are given showing how quaternions can be used to simplify derivations in computer vision and robotics.
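    A small illustration in the spirit of the record (not its derivations): rotating a 3D vector with a unit quaternion via q v q*, using plain NumPy; names and the example rotation are my own.

    ```python
    # Quaternion rotation of a 3D vector, quaternions stored as (w, x, y, z).
    import numpy as np

    def quat_mul(a, b):
        """Hamilton product of two quaternions."""
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(q, v):
        """Rotate vector v by unit quaternion q."""
        qv = np.concatenate(([0.0], v))                      # embed v as a pure quaternion
        q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])       # conjugate of q
        return quat_mul(quat_mul(q, qv), q_conj)[1:]

    # A 90-degree rotation about the z-axis maps the x-axis onto the y-axis:
    theta = np.pi / 2
    q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
    print(rotate(q, np.array([1.0, 0.0, 0.0])))   # approximately [0, 1, 0]
    ```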

  12. Robonaut's Flexible Information Technology Infrastructure

    NASA Technical Reports Server (NTRS)

    Askew, Scott; Bluethmann, William; Alder, Ken; Ambrose, Robert

    2003-01-01

    Robonaut, NASA's humanoid robot, is designed to work as both an astronaut assistant and, in certain situations, an astronaut surrogate. This highly dexterous robot performs complex tasks under telepresence control that could previously only be carried out directly by humans. Currently with 47 degrees of freedom (DOF), Robonaut is a state-of-the-art human-size telemanipulator system. While many of Robonaut's embedded components have been custom designed to meet packaging or environmental requirements, the primary computing systems used in Robonaut are currently commercial-off-the-shelf (COTS) products which have some correlation to flight-qualified computer systems. This loose coupling of information technology (IT) resources allows Robonaut to exploit cost-effective solutions while floating the technology base to take advantage of the rapid pace of IT advances. These IT systems utilize a software development environment which is compatible with both COTS hardware and flight-proven computing systems, preserving the majority of software development for a flight system. The ability to use highly integrated and flexible COTS software development tools improves productivity while minimizing redesign for a space flight system. Further, the flexibility of Robonaut's software and communication architecture has allowed it to become a widely used distributed development testbed for integrating new capabilities and furthering experimental research.

  13. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    NASA Astrophysics Data System (ADS)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.

  14. A review on locomotion robophysics: the study of movement at the intersection of robotics, soft matter and dynamical systems.

    PubMed

    Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L; Choset, Howie; Umbanhowar, Paul B; Goldman, Daniel I

    2016-11-01

    Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems-a 'locomotion robophysics'-which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others' research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.

  15. A review on locomotion robophysics: the study of movement at the intersection of robotics, soft matter and dynamical systems

    NASA Astrophysics Data System (ADS)

    Aguilar, Jeffrey; Zhang, Tingnan; Qian, Feifei; Kingsbury, Mark; McInroe, Benjamin; Mazouchova, Nicole; Li, Chen; Maladen, Ryan; Gong, Chaohui; Travers, Matt; Hatton, Ross L.; Choset, Howie; Umbanhowar, Paul B.; Goldman, Daniel I.

    2016-11-01

    Discovery of fundamental principles which govern and limit effective locomotion (self-propulsion) is of intellectual interest and practical importance. Human technology has created robotic moving systems that excel in movement on and within environments of societal interest: paved roads, open air and water. However, such devices cannot yet robustly and efficiently navigate (as animals do) the enormous diversity of natural environments which might be of future interest for autonomous robots; examples include vertical surfaces like trees and cliffs, heterogeneous ground like desert rubble and brush, turbulent flows found near seashores, and deformable/flowable substrates like sand, mud and soil. In this review we argue for the creation of a physics of moving systems—a ‘locomotion robophysics’—which we define as the pursuit of principles of self-generated motion. Robophysics can provide an important intellectual complement to the discipline of robotics, largely the domain of researchers from engineering and computer science. The essential idea is that we must complement the study of complex robots in complex situations with systematic study of simplified robotic devices in controlled laboratory settings and in simplified theoretical models. We must thus use the methods of physics to examine both locomotor successes and failures using parameter space exploration, systematic control, and techniques from dynamical systems. Using examples from our and others’ research, we will discuss how such robophysical studies have begun to aid engineers in the creation of devices that have begun to achieve life-like locomotor abilities on and within complex environments, have inspired interesting physics questions in low dimensional dynamical systems, geometric mechanics and soft matter physics, and have been useful to develop models for biological locomotion in complex terrain. The rapidly decreasing cost of constructing robot models with easy access to significant computational power bodes well for scientists and engineers to engage in a discipline which can readily integrate experiment, theory and computation.

  16. Origami mechanologic.

    PubMed

    Treml, Benjamin; Gillman, Andrew; Buskohl, Philip; Vaia, Richard

    2018-06-18

    Robots autonomously interact with their environment through a continual sense-decide-respond control loop. Most commonly, the decide step occurs in a central processing unit; however, the stiffness mismatch between rigid electronics and the compliant bodies of soft robots can impede integration of these systems. We develop a framework for programmable mechanical computation embedded into the structure of soft robots that can augment conventional digital electronic control schemes. Using an origami waterbomb as an experimental platform, we demonstrate a 1-bit mechanical storage device that writes, erases, and rewrites itself in response to a time-varying environmental signal. Further, we show that mechanical coupling between connected origami units can be used to program the behavior of a mechanical bit, produce logic gates such as AND, OR, and three input majority gates, and transmit signals between mechanologic gates. Embedded mechanologic provides a route to add autonomy and intelligence in soft robots and machines. Copyright © 2018 the Author(s). Published by PNAS.
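
    As a plain Boolean reference for the gates the coupled origami units reproduce, the short sketch below tabulates a three-input majority function alongside AND and OR; the mapping of fold states to 0/1 values is an illustrative assumption, not the paper's encoding.

```python
def and_gate(a, b):
    return a & b

def or_gate(a, b):
    return a | b

def majority3(a, b, c):
    # Output is 1 whenever at least two of the three inputs are 1.
    return (a & b) | (b & c) | (a & c)

# Truth table of the three-input majority behaviour the coupled units emulate.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", majority3(a, b, c))
```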

  17. The Interdependence of Computers, Robots, and People.

    ERIC Educational Resources Information Center

    Ludden, Laverne; And Others

    Computers and robots are becoming increasingly more advanced, with smaller and cheaper computers now doing jobs once reserved for huge multimillion dollar computers and with robots performing feats such as painting cars and using television cameras to simulate vision as they perform factory tasks. Technicians expect computers to become even more…

  18. Importance of nonverbal expression to the emergence of emotive artificial intelligence systems

    NASA Astrophysics Data System (ADS)

    Pioggia, Giovanni; Hanson, David; Dinelli, Serena; Di Francesco, Fabio; Francesconi, R.; De Rossi, Danilo

    2002-07-01

    The nonverbal expression of the emotions, especially in the human face, has rapidly become an area of intense interest in computer science and robotics. Exploring the emotions as a link between external events and behavioural responses, artificial intelligence designers and psychologists are approaching a theoretical understanding of foundational principles which will be key to the physical embodiment of artificial intelligence. In fact, it has been well demonstrated that many important aspects of intelligence are grounded in intimate communication with the physical world, so-called embodied intelligence. It follows naturally, then, that recent advances in emotive artificial intelligence show clear and undeniable broadening in the capacities of biologically-inspired robots to survive and thrive in a social environment. The means by which AI may express its foundling emotions are clearly integral to such capacities. In effect, powerful facial expressions are critical to the development of intelligent, sociable robots. Following a discussion of the importance of the nonverbal expression of emotions in humans and robots, this paper describes methods used in robotically emulating nonverbal expressions using human-like robotic faces. Furthermore, it describes the potentially revolutionary impact of electroactive polymer (EAP) actuators as artificial muscles for such robotic devices.

  19. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Merriam, E. W.; Becker, J. D.

    1973-01-01

    A robot computer problem solving system which represents a robot exploration vehicle in a simulated Mars environment is described. The model exhibits changes and improvements made to a previously designed robot in a city environment. The Martian environment is modeled in Cartesian coordinates; objects are scattered about a plane; arbitrary restrictions on the robot's vision have been removed; and the robot's path contains arbitrary curves. New environmental features, particularly the visual occlusion of objects by other objects, were added to the model. Two different algorithms were developed for computing occlusion. Movement and vision capabilities of the robot were established in the Mars environment, using a LISP/FORTRAN interface for computational efficiency. The graphical display program was redesigned to reflect the change to the Mars-like environment.
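
    The two occlusion algorithms themselves are not described in the abstract; the sketch below shows one conventional way to test occlusion in a planar world, by checking whether a blocking object's bounding circle intersects the sight line from the robot to a target. The names and the circle approximation are illustrative assumptions.

```python
import math

def occluded(robot, target, blocker_center, blocker_radius):
    """Return True if a circular blocking object intersects the sight line
    from the robot to the target in the plane. A generic 2D test; the two
    algorithms actually developed in the paper are not given in the abstract,
    and the bounding-circle approximation is an illustrative assumption."""
    rx, ry = robot
    tx, ty = target
    cx, cy = blocker_center
    dx, dy = tx - rx, ty - ry
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                      # robot sits on the target
        return math.hypot(cx - rx, cy - ry) <= blocker_radius
    # Project the blocker centre onto the sight segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((cx - rx) * dx + (cy - ry) * dy) / seg_len2))
    px, py = rx + t * dx, ry + t * dy
    return math.hypot(cx - px, cy - py) <= blocker_radius

# Example: an object of radius 1 sitting halfway along the sight line occludes.
print(occluded((0.0, 0.0), (10.0, 0.0), (5.0, 0.5), 1.0))   # True
```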

  20. Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation, volume 2, part 1. Appendix A: Software documentation

    NASA Technical Reports Server (NTRS)

    Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.

    1982-01-01

    Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.

  1. Toward cognitive robotics

    NASA Astrophysics Data System (ADS)

    Laird, John E.

    2009-05-01

    Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for control of robots. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support the learning and recalling of prior events and situations as well as facts about the world. Reinforcement learning gives the system the ability to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical to support spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.
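
    Soar's reinforcement-learning extension tunes numeric preferences over proposed operators. The sketch below shows a generic temporal-difference update of that flavour; the epsilon-greedy selection, the learning parameters, and the table layout are illustrative assumptions rather than Soar's actual rule format.

```python
import random
from collections import defaultdict

# Q[state][operator] plays the role of the numeric preference that Soar's
# reinforcement-learning extension tunes; the table layout is illustrative.
Q = defaultdict(lambda: defaultdict(float))
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # illustrative learning parameters

def select_operator(state, operators):
    """Epsilon-greedy choice among the operators proposed in a state."""
    if random.random() < EPSILON:
        return random.choice(operators)
    return max(operators, key=lambda op: Q[state][op])

def td_update(state, op, reward, next_state, next_operators):
    """One temporal-difference step: move the preference for (state, op)
    toward the reward plus the discounted best preference that follows."""
    best_next = max((Q[next_state][o] for o in next_operators), default=0.0)
    Q[state][op] += ALPHA * (reward + GAMMA * best_next - Q[state][op])
```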

  2. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    NASA Astrophysics Data System (ADS)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.
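
    A minimal way to picture the allothetic/idiothetic interplay is dead reckoning corrected by occasional landmark fixes. The blending sketch below is an illustrative simplification, not the paper's neurophysiological model; the gain is arbitrary and the heading is blended linearly, ignoring angle wrap-around.

```python
import math

def integrate_motion(pose, v, omega, dt):
    """Idiothetic step: dead-reckon the pose (x, y, heading) from kinesthetic
    cues (linear and angular velocity). Errors accumulate over time."""
    x, y, th = pose
    return (x + v * dt * math.cos(th),
            y + v * dt * math.sin(th),
            th + omega * dt)

def correct_with_landmark(pose, landmark_pose, gain=0.3):
    """Allothetic step: when a recognised visual landmark yields a drift-free
    (if sometimes ambiguous) pose estimate, blend it in. The gain, and the
    naive linear blend of the heading, are illustrative simplifications."""
    return tuple((1.0 - gain) * p + gain * l
                 for p, l in zip(pose, landmark_pose))

# Typical loop: integrate motion every step, correct whenever a landmark is seen.
pose = (0.0, 0.0, 0.0)
pose = integrate_motion(pose, v=0.2, omega=0.05, dt=0.1)
pose = correct_with_landmark(pose, landmark_pose=(0.021, 0.001, 0.005))
```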

  3. Software development to support sensor control of robot arc welding

    NASA Technical Reports Server (NTRS)

    Silas, F. R., Jr.

    1986-01-01

    The development of software for a Digital Equipment Corporation MINC-23 Laboratory Computer to provide functions of a workcell host computer for Space Shuttle Main Engine (SSME) robotic welding is documented. Routines were written to transfer robot programs between the MINC and an Advanced Robotic Cyro 750 welding robot. Other routines provide advanced program editing features, while additional software allows communication with a remote computer-aided design system. Access to special robot functions was provided to allow advanced control of weld seam tracking and process control for future development programs.

  4. Developments and Control of Biocompatible Conducting Polymer for Intracorporeal Continuum Robots.

    PubMed

    Chikhaoui, Mohamed Taha; Benouhiba, Amine; Rougeot, Patrick; Rabenorosoa, Kanty; Ouisse, Morvan; Andreff, Nicolas

    2018-04-30

    High dexterity is required of robots when it comes to integration in medical applications. Major efforts have been made to increase the dexterity at the distal parts of medical robots. This paper reports on developments toward integrating biocompatible conducting polymers (CP) into the inherently dexterous concentric tube robot paradigm. In the form of tri-layer thin structures, CP micro-actuators produce high strains while requiring less than 1 V for actuation. Fabrication, characterization, and first integrations of such micro-actuators are presented. The integration is validated in a preliminary telescopic soft robot prototype with qualitative and quantitative performance assessment of accurate position control for trajectory tracking scenarios. Further, CP micro-actuators are integrated into a laser steering system in a closed-loop control scheme with displacements up to 5 mm. Our first developments aim toward intracorporeal medical robotics, with miniaturized actuators to be embedded into continuum robots.
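
    The abstract mentions closed-loop position control for trajectory tracking but gives no controller details; the sketch below is a textbook PID loop standing in for that layer. The gains, voltage limit, and first-order plant stand-in are illustrative assumptions, not the paper's controller.

```python
import math

def pid_step(reference, measured, state, dt, kp=2.0, ki=0.5, kd=0.05):
    """One step of a textbook PID loop for closed-loop position control of
    a conducting-polymer micro-actuator. Gains and the voltage limit are
    illustrative assumptions; the abstract gives no controller details."""
    integral, prev_error = state
    error = reference - measured
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    u = max(-1.0, min(1.0, u))        # CP actuators are driven below ~1 V
    return u, (integral, error)

# Track a slow sinusoidal trajectory against a crude first-order plant stand-in.
state, position, dt = (0.0, 0.0), 0.0, 0.01
for k in range(500):
    reference = 0.5 * math.sin(2 * math.pi * 0.5 * k * dt)
    u, state = pid_step(reference, position, state, dt)
    position += 0.02 * u
```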

  5. Evolving and Controlling Perimeter, Rendezvous, and Foraging Behaviors in a Computation-Free Robot Swarm

    DTIC Science & Technology

    2016-04-01

    cheap, disposable swarms of robots that can accomplish these tasks quickly and without much human supervision. While there has been a lot of work...have shown that swarms of robots so dumb that they have no computational power–they can’t even add or subtract, and have no memory–can still collec...behaviors can be achieved using swarms of computation-free robots. Our work starts with the simple robot model proposed in [6] and adds a form of

  6. Integrated mobile-robot design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kortenkamp, D.; Huber, M.; Cohen, C.

    1993-08-01

    Ten mobile robots entered the AAAI '92 Robot Competition, held at last year's national conference. Carmel, the University of Michigan entry, won. The competition consisted of three stages. The first stage required roaming a 22 × 22-meter arena while avoiding static and dynamic obstacles; the second involved searching for and visiting 10 objects in the same arena. The obstacles were at least 1.5 meters apart, while the objects were spaced roughly evenly throughout the arena. Visiting was defined as moving to within two robot diameters of the object. The last stage was a timed race to visit three of the objects located earlier and return home. Since the first stage was primarily a subset of the second-stage requirements, and the third-stage implementation was very similar to that of the second, the authors focus here on the second stage. Carmel (Computer-Aided Robotics for Maintenance, Emergency, and Life support) is based on a commercially available Cybermotion K2A mobile-robot platform. It has a top speed of approximately 800 millimeters per second and moves on three synchronously driven wheels. For sensing, Carmel has a ring of 24 Polaroid sonar sensors and a single black-and-white charge-coupled-device camera mounted on a rotating table. Carmel has three processors: one controls the drive motors, one fires the sonar ring, and the third, a 486-based PC clone, executes all the high-level modules. The 486 also has a frame grabber for acquiring images. All computation and power are contained on-board.

  7. United States Air Force Computer-Aided Acquisition and Logistics Support (CALS) Evolution of Computer Integrated Manufacturing (CIM) Technologies

    DTIC Science & Technology

    1988-11-01

    Manufacturing System 22 4. Similar Parts Based Shape or Manufacturing Process 24 5. Projected Annual Unit Robot Sales and Installed Base Through 1992 30 6. U.S...effort needed to perform personnel, product design, marketing and advertising, and finance tasks of the firm. Level III controls the resource...planning and accounting functions of the firm. Systems at this level support purchasing, accounts payable, accounts receivable, master scheduling and sales

  8. A neural framework for organization and flexible utilization of episodic memory in cumulatively learning baby humanoids.

    PubMed

    Mohan, Vishwanathan; Sandini, Giulio; Morasso, Pietro

    2014-12-01

    Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of the organization of episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration). Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.

  9. Solving Navigational Uncertainty Using Grid Cells on Robots

    PubMed Central

    Milford, Michael J.; Wiles, Janet; Wyeth, Gordon F.

    2010-01-01

    To successfully navigate their habitats, many mammals use a combination of two mechanisms, path integration and calibration using landmarks, which together enable them to estimate their location and orientation, or pose. In large natural environments, both these mechanisms are characterized by uncertainty: the path integration process is subject to the accumulation of error, while landmark calibration is limited by perceptual ambiguity. It remains unclear how animals form coherent spatial representations in the presence of such uncertainty. Navigation research using robots has determined that uncertainty can be effectively addressed by maintaining multiple probabilistic estimates of a robot's pose. Here we show how conjunctive grid cells in dorsocaudal medial entorhinal cortex (dMEC) may maintain multiple estimates of pose using a brain-based robot navigation system known as RatSLAM. Based both on rodent spatially-responsive cells and functional engineering principles, the cells at the core of the RatSLAM computational model have similar characteristics to rodent grid cells, which we demonstrate by replicating the seminal Moser experiments. We apply the RatSLAM model to a new experimental paradigm designed to examine the responses of a robot or animal in the presence of perceptual ambiguity. Our computational approach enables us to observe short-term population coding of multiple location hypotheses, a phenomenon which would not be easily observable in rodent recordings. We present behavioral and neural evidence demonstrating that the conjunctive grid cells maintain and propagate multiple estimates of pose, enabling the correct pose estimate to be resolved over time even without uniquely identifying cues. While recent research has focused on the grid-like firing characteristics, accuracy and representational capacity of grid cells, our results identify a possible critical and unique role for conjunctive grid cells in filtering sensory uncertainty. We anticipate our study to be a starting point for animal experiments that test navigation in perceptually ambiguous environments. PMID:21085643
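
    The key idea of keeping several pose estimates alive under perceptual ambiguity can be sketched with a small set of weighted hypotheses that are reweighted by each ambiguous observation. This is an illustrative stand-in, not the RatSLAM pose-cell dynamics; the landmark model and noise value are assumptions.

```python
import math

# Each hypothesis is [x, y, heading, weight]; several coexist when places look
# alike, echoing the multiple pose estimates the conjunctive cells maintain.
hypotheses = [
    [0.0, 0.0, 0.0, 0.5],
    [5.0, 0.0, math.pi, 0.5],   # a second, visually identical location
]

def observe(hypotheses, expected_range, measured_range, sigma=0.5):
    """Reweight every pose hypothesis by how well it predicts an ambiguous
    range measurement to a landmark, then renormalise. Across repeated,
    slightly different observations the correct hypothesis comes to dominate."""
    for h in hypotheses:
        err = measured_range - expected_range(h)
        h[3] *= math.exp(-0.5 * (err / sigma) ** 2)
    total = sum(h[3] for h in hypotheses) or 1.0
    for h in hypotheses:
        h[3] /= total
    return hypotheses

# Example: a landmark at the origin; the measurement slightly favours hypothesis 0.
observe(hypotheses, lambda h: math.hypot(h[0], h[1]), measured_range=0.4)
```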

  10. Research on the man in the loop control system of the robot arm based on gesture control

    NASA Astrophysics Data System (ADS)

    Xiao, Lifeng; Peng, Jinbao

    2017-03-01

    Taking as its background a complex real-world environment that requires the operator to continuously control and adjust a remote manipulator, this research takes the entire human-in-the-loop system completing a specific mission as its object of study. The paper proposes a man-in-the-loop control system for a robot arm based on gesture control: gesture-based arm control combined with virtual-reality scene feedback enhances the operator's sense of immersion and integration, so that the operator truly becomes part of the whole control loop. The paper explains how to construct such a gesture-controlled man-in-the-loop robot arm control system. The system is a complex human-computer cooperative control system and belongs to the class of human-in-the-loop control problems. The new system addresses the shortcomings of traditional methods, which offer no sense of immersion, rely on unnatural operating levers, require long adjustment times, and use data gloves that are uncomfortable to wear and expensive.

  11. ROBOCAL: An automated NDA (nondestructive analysis) calorimetry and gamma isotopic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurd, J.R.; Powell, W.D.; Ostenak, C.A.

    1989-11-01

    ROBOCAL, which is presently being developed and tested at Los Alamos National Laboratory, is a full-scale, prototype robotic system for remote calorimetric and gamma-ray analysis of special nuclear materials. It integrates a fully automated, multidrawer, vertical stacker-retriever system for staging unmeasured nuclear materials, and a fully automated gantry robot for computer-based selection and transfer of nuclear materials to calorimetric and gamma-ray measurement stations. Since ROBOCAL is designed for minimal operator intervention, a completely programmed user interface is provided to interact with the automated mechanical and assay systems. The assay system is designed to completely integrate calorimetric and gamma-ray data acquisition and to perform state-of-the-art analyses on both homogeneous and heterogeneous distributions of nuclear materials in a wide variety of matrices.

  12. Physics and Robotic Sensing -- the good, the bad, and approaches to making it work

    NASA Astrophysics Data System (ADS)

    Huff, Brian

    2011-03-01

    All of the technological advances that have benefited consumer electronics have direct application to robotics. Technological advances have resulted in the dramatic reduction in size, cost, and weight of computing systems, while simultaneously doubling computational speed every eighteen months. The same manufacturing advancements that have enabled this rapid increase in computational power are now being leveraged to produce small, powerful and cost-effective sensing technologies applicable for use in mobile robotics applications. Despite the increase in computing and sensing resources available to today's robotic systems developers, there are sensing problems typically found in unstructured environments that continue to frustrate the widespread use of robotics and unmanned systems. This talk presents how physics has contributed to the creation of the technologies that are making modern robotics possible. The talk discusses theoretical approaches to robotic sensing that appear to suffer when they are deployed in the real world. Finally the author presents methods being used to make robotic sensing more robust.

  13. Evolutionary multiobjective design of a flexible caudal fin for robotic fish.

    PubMed

    Clark, Anthony J; Tan, Xiaobo; McKinley, Philip K

    2015-11-25

    Robotic fish accomplish swimming by deforming their bodies or other fin-like appendages. As an emerging class of embedded computing system, robotic fish are anticipated to play an important role in environmental monitoring, inspection of underwater structures, tracking of hazardous wastes and oil spills, and the study of live fish behaviors. While integration of flexible materials (into the fins and/or body) holds the promise of improved swimming performance (in terms of both speed and maneuverability) for these robots, such components also introduce significant design challenges due to the complex material mechanics and hydrodynamic interactions. The problem is further exacerbated by the need for the robots to meet multiple objectives (e.g., both speed and energy efficiency). In this paper, we propose an evolutionary multiobjective optimization approach to the design and control of a robotic fish with a flexible caudal fin. Specifically, we use the NSGA-II algorithm to investigate morphological and control parameter values that optimize swimming speed and power usage. Several evolved fin designs are validated experimentally with a small robotic fish, where fins of different stiffness values and sizes are printed with a multi-material 3D printer. Experimental results confirm the effectiveness of the proposed design approach in balancing the two competing objectives.
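
    The design search couples NSGA-II with evaluations of speed and power. The sketch below shows only the Pareto-dominance bookkeeping at the heart of such a search, applied to a random population of (stiffness, length) fin candidates; the surrogate objective formulas are placeholders, not the paper's simulations or experiments.

```python
import random

def evaluate(stiffness, length):
    """Hypothetical surrogate for the two competing objectives: swimming
    speed (maximise) and power usage (minimise). Placeholder formulas only;
    the study evaluated candidates in simulation and on hardware."""
    speed = length * stiffness / (1.0 + stiffness ** 2)   # peaks at moderate stiffness
    power = 0.5 * length * (1.0 + stiffness)
    return speed, power

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better
    in at least one (higher speed, lower power)."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

# Random population of fin designs (stiffness, length), as NSGA-II would hold.
population = [(random.uniform(0.1, 5.0), random.uniform(0.02, 0.08))
              for _ in range(50)]
scores = [evaluate(s, l) for s, l in population]

# First non-dominated front: the designs NSGA-II would rank highest.
front = [population[i] for i, si in enumerate(scores)
         if not any(dominates(sj, si) for j, sj in enumerate(scores) if j != i)]
print(len(front), "non-dominated fin designs")
```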

  14. A robotic system for researching social integration in honeybees.

    PubMed

    Griparić, Karlo; Haus, Tomislav; Miklić, Damjan; Polić, Marsela; Bogdan, Stjepan

    2017-01-01

    In this paper, we present a novel robotic system developed for researching collective social mechanisms in a biohybrid society of robots and honeybees. The potential for distributed coordination, as observed in nature in many different animal species, has caused an increased interest in collective behaviour research in recent years because of its applicability to a broad spectrum of technical systems requiring robust multi-agent control. One of the main problems is understanding the mechanisms driving the emergence of collective behaviour of social animals. With the aim of deepening the knowledge in this field, we have designed a multi-robot system capable of interacting with honeybees within an experimental arena. The final product, stationary autonomous robot units, designed by specifically considering the physical, sensorimotor and behavioral characteristics of the honeybees (Apis mellifera), are equipped with sensing, actuating, computation, and communication capabilities that enable the measurement of relevant environmental states, such as honeybee presence, and adequate response to the measurements by generating heat, vibration and airflow. The coordination among robots in the developed system is established using distributed controllers. The cooperation between the two different types of collective systems is realized by means of a consensus algorithm, enabling the honeybees and the robots to achieve a common objective. Presented results, obtained within the ASSISIbf project, show successful cooperation indicating its potential for future applications.
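
    The abstract names a consensus algorithm without detail; a standard discrete-time average-consensus iteration over the robots' communication graph, as sketched below, conveys the idea. The network topology, local values, and step size are illustrative assumptions, not the project's actual controller.

```python
import numpy as np

# Illustrative setup: each stationary robot holds a local estimate (e.g. of
# bee presence near it); neighbours exchange values over the arena network.
values = np.array([0.2, 0.8, 0.5, 0.9])          # initial local estimates
adjacency = np.array([[0, 1, 0, 1],              # who communicates with whom
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0]], dtype=float)

epsilon = 0.2   # step size; must stay below 1/(max degree) for convergence
degree = adjacency.sum(axis=1)
for _ in range(50):
    # Each robot moves toward the average of its neighbours' values.
    values = values + epsilon * (adjacency @ values - degree * values)

print(values)   # all entries converge to the average of the initial values
```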

  15. Parallel-distributed mobile robot simulator

    NASA Astrophysics Data System (ADS)

    Okada, Hiroyuki; Sekiguchi, Minoru; Watanabe, Nobuo

    1996-06-01

    The aim of this project is to achieve an autonomous learning and growth function based on active interaction with the real world. It should also be able to autonomously acquire knowledge about the context in which jobs take place, and how the jobs are executed. This article describes a parallel distributed movable robot system simulator with an autonomous learning and growth function. The autonomous learning and growth function which we are proposing is characterized by its ability to learn and grow through interaction with the real world. When the movable robot interacts with the real world, the system compares the virtual environment simulation with the interaction result in the real world. The system then improves the virtual environment to match the real-world result more closely. In this way the system learns and grows. It is very important that such a simulation is time-realistic. The parallel distributed movable robot simulator was developed to simulate the space of a movable robot system with an autonomous learning and growth function. The simulator constructs a virtual space faithful to the real world and also integrates the interfaces between the user, the actual movable robot and the virtual movable robot. Using an ultrafast CG (computer graphics) system (FUJITSU AG series), time-realistic 3D CG is displayed.

  16. Hebbian Plasticity Realigns Grid Cell Activity with External Sensory Cues in Continuous Attractor Models

    PubMed Central

    Mulas, Marcello; Waniek, Nicolai; Conradt, Jörg

    2016-01-01

    After the discovery of grid cells, which are an essential component to understand how the mammalian brain encodes spatial information, three main classes of computational models were proposed in order to explain their working principles. Amongst them, the one based on continuous attractor networks (CAN), is promising in terms of biological plausibility and suitable for robotic applications. However, in its current formulation, it is unable to reproduce important electrophysiological findings and cannot be used to perform path integration for long periods of time. In fact, in absence of an appropriate resetting mechanism, the accumulation of errors over time due to the noise intrinsic in velocity estimation and neural computation prevents CAN models to reproduce stable spatial grid patterns. In this paper, we propose an extension of the CAN model using Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach we used as input to the neural simulations both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results might be instrumental for next generation bio-inspired robotic navigation algorithms that take advantage of neural computation in order to cope with complex and dynamic environments. PMID:26924979
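
    The proposed anchoring mechanism is, at its core, a Hebbian association between landmark-driven cells and grid cells. The outer-product sketch below illustrates that kind of update and the corrective input it can feed back into the attractor dynamics; the population sizes, learning rate, and gain are illustrative, not the paper's parameters.

```python
import numpy as np

N_GRID, N_LANDMARK = 100, 8
W = np.zeros((N_GRID, N_LANDMARK))      # landmark-cell -> grid-cell weights
ETA = 0.05                               # Hebbian learning rate (illustrative)

def hebbian_anchor(grid_activity, landmark_activity, W, eta=ETA):
    """Hebbian step: strengthen connections between co-active landmark cells
    and grid cells (outer-product rule), so that a later sighting of the same
    landmark re-injects the stored grid pattern and resets the error that path
    integration has accumulated."""
    W += eta * np.outer(grid_activity, landmark_activity)
    return W

def landmark_correction(landmark_activity, W, gain=0.1):
    """Corrective input added to the attractor dynamics whenever the landmark
    is perceived; the gain is illustrative."""
    return gain * W @ landmark_activity

# Example: associate the current bump with landmark 3, then recall it later.
rng = np.random.default_rng(0)
grid_bump = rng.random(N_GRID)
landmark = np.eye(N_LANDMARK)[3]
W = hebbian_anchor(grid_bump, landmark, W)
correction = landmark_correction(landmark, W)
```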

  17. Robotic surgery

    MedlinePlus

    Robot-assisted surgery; Robotic-assisted laparoscopic surgery; Laparoscopic surgery with robotic assistance ... computer station and directs the movements of a robot. Small surgical tools are attached to the robot's ...

  18. The use of computer graphic simulation in the development of on-orbit tele-robotic systems

    NASA Technical Reports Server (NTRS)

    Fernandez, Ken; Hinman, Elaine

    1987-01-01

    This paper describes the use of computer graphic simulation techniques to resolve critical design and operational issues for robotic systems used for on-orbit operations. These issues are robot motion control, robot path-planning/verification, and robot dynamics. The major design issues in developing effective telerobotic systems are discussed, and the use of ROBOSIM, a NASA-developed computer graphic simulation tool, to address these issues is presented. Simulation plans for the Space Station and the Orbital Maneuvering Vehicle are presented and discussed.

  19. SLAM algorithm applied to robotics assistance for navigation in unknown environments.

    PubMed

    Cheein, Fernando A Auat; Lopez, Natalia; Soria, Carlos M; di Sciascio, Fernando A; Pereira, Fernando Lobo; Carelli, Ricardo

    2010-02-17

    The combination of robotic tools with assistance technology defines a scarcely explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms or user's preference learning from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow the environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low level behaviour-based reactions of the mobile robot are robotic autonomous tasks, whereas the mobile robot navigation inside an environment is commanded by a Muscle-Computer Interface (MCI). In this paper, a sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller to control the mobile robot was implemented. A low level behaviour strategy was also implemented to avoid the robot's collisions with the environment and moving agents. The entire system was tested in a population of seven volunteers: three elderly subjects, two below-elbow amputees and two young normally limbed patients. The experiments were performed within a closed low dynamic environment. Subjects took an average time of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results have shown a consistent reconstruction of the environment. The obtained map was stored inside the Muscle-Computer Interface. The integration of a highly demanding processing algorithm (SLAM) with a MCI and the communication between both in real time have proven to be consistent and successful. The metric map generated by the mobile robot would allow possible future autonomous navigation without direct control of the user, whose function could be relegated to choosing robot destinations. Also, the mobile robot shares the same kinematic model of a motorized wheelchair. This advantage can be exploited for wheelchair autonomous navigation.
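
    To make the EKF-SLAM backbone concrete, the sketch below implements a minimal predict/update pair for a unicycle robot and a single point landmark observed in range and bearing. The paper's filter instead uses line and corner features and updates sequentially, so treat this as a simplified stand-in.

```python
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Predict step of a minimal EKF-SLAM: state x = [xr, yr, th, mx, my]
    (robot pose plus one point landmark, a simplification of the paper's
    line/corner features). v and w are the commanded velocities."""
    xr, yr, th = x[0], x[1], x[2]
    x_pred = x.copy()
    x_pred[0] = xr + v * dt * np.cos(th)
    x_pred[1] = yr + v * dt * np.sin(th)
    x_pred[2] = th + w * dt
    F = np.eye(len(x))
    F[0, 2] = -v * dt * np.sin(th)
    F[1, 2] = v * dt * np.cos(th)
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Update step with a range-bearing observation z = [r, phi] of the
    landmark stored in the state."""
    dx, dy = x[3] - x[0], x[4] - x[1]
    q = dx * dx + dy * dy
    sq = np.sqrt(q)
    z_hat = np.array([sq, np.arctan2(dy, dx) - x[2]])
    H = np.array([
        [-dx / sq, -dy / sq, 0.0,  dx / sq,  dy / sq],
        [ dy / q,  -dx / q, -1.0, -dy / q,   dx / q ],
    ])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P
```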

  20. An Integrated Testbed for Cooperative Perception with Heterogeneous Mobile and Static Sensors

    PubMed Central

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper. PMID:22247679

  1. An integrated testbed for cooperative perception with heterogeneous mobile and static sensors.

    PubMed

    Jiménez-González, Adrián; Martínez-De Dios, José Ramiro; Ollero, Aníbal

    2011-01-01

    Cooperation among devices with different sensing, computing and communication capabilities provides interesting possibilities in a growing number of problems and applications including domotics (domestic robotics), environmental monitoring or intelligent cities, among others. Despite the increasing interest in academic and industrial communities, experimental tools for evaluation and comparison of cooperative algorithms for such heterogeneous technologies are still very scarce. This paper presents a remote testbed with mobile robots and Wireless Sensor Networks (WSN) equipped with a set of low-cost off-the-shelf sensors, commonly used in cooperative perception research and applications, that present high degree of heterogeneity in their technology, sensed magnitudes, features, output bandwidth, interfaces and power consumption, among others. Its open and modular architecture allows tight integration and interoperability between mobile robots and WSN through a bidirectional protocol that enables full interaction. Moreover, the integration of standard tools and interfaces increases usability, allowing an easy extension to new hardware and software components and the reuse of code. Different levels of decentralization are considered, supporting from totally distributed to centralized approaches. Developed for the EU-funded Cooperating Objects Network of Excellence (CONET) and currently available at the School of Engineering of Seville (Spain), the testbed provides full remote control through the Internet. Numerous experiments have been performed, some of which are described in the paper.

  2. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  3. Architecture for reactive planning of robot actions

    NASA Astrophysics Data System (ADS)

    Riekki, Jukka P.; Roening, Juha

    1995-01-01

    In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended to be used primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is suitable especially for applications consisting of physically separated devices and computing resources.
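
    A minimal object-oriented sketch of one PEMM element and its tree of descendants is given below; the class layout is an illustrative assumption and the method bodies are stubs, since the paper does not publish an implementation.

```python
class PEMMElement:
    """Sketch of one planning-executing-monitoring-modelling (PEMM) element.
    Elements form a tree: tasks arrive from the parent, subtasks go to the
    children, and each element keeps a local model. Stubs only."""

    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.local_model = {}        # local knowledge, linked across elements

    def plan(self, task):
        # Decompose the incoming task into one subtask per child (stub).
        return [(child, task) for child in self.children]

    def execute(self, task):
        for child, subtask in self.plan(task):
            child.execute(subtask)
        self.monitor(task)

    def monitor(self, task):
        # Would react to asynchronous events and time constraints (stub).
        self.model(task)

    def model(self, task):
        # Update the local model with the outcome of the task (stub).
        self.local_model[task] = "done"

# Tree-like structure: a mobile-robot root delegating to two subsystems.
root = PEMMElement("navigate", [PEMMElement("base"), PEMMElement("arm")])
root.execute("reach goal")
```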

  4. Patient motion tracking in the presence of measurement errors.

    PubMed

    Haidegger, Tamás; Benyó, Zoltán; Kazanzides, Peter

    2009-01-01

    The primary aim of computer-integrated surgical systems is to provide physicians with superior surgical tools for better patient outcome. Robotic technology is capable of both minimally invasive surgery and microsurgery, offering remarkable advantages for the surgeon and the patient. Current systems allow for sub-millimeter intraoperative spatial positioning; however, certain limitations still remain. Measurement noise and unintended changes in the operating room environment can result in major errors. Positioning errors are a significant danger to patients in procedures involving robots and other automated devices. We have developed a new robotic system at the Johns Hopkins University to support cranial drilling in neurosurgery procedures. The robot provides advanced visualization and safety features. The generic algorithm described in this paper allows for automated compensation of patient motion through optical tracking and Kalman filtering. When applied to the neurosurgery setup, preliminary results show that it is possible to identify patient motion within 700 ms, and apply the appropriate compensation with an average of 1.24 mm positioning error after 2 s of setup time.
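
    The motion-compensation idea rests on filtering noisy optical-tracker measurements before re-registering the robot. The sketch below is a standard constant-velocity Kalman filter for one tracked axis; the noise values and state layout are illustrative assumptions, not the parameters of the reported system.

```python
import numpy as np

def make_cv_filter(dt, q=1e-4, r=0.04):
    """Constant-velocity Kalman filter matrices for one axis of the tracked
    marker position (mm). Process noise q and measurement noise r are
    illustrative, not the values used in the paper."""
    F = np.array([[1.0, dt], [0.0, 1.0]])        # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
    R = np.array([[r]])
    return F, H, Q, R

def kf_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given a noisy optical-tracker measurement z."""
    x = F @ x
    P = F @ P @ F.T + Q
    y = np.array([z]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P   # x[0] is the filtered position used to re-register the robot
```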

  5. Explorer-II: Wireless Self-Powered Visual and NDE Robotic Inspection System for Live Gas Distribution Mains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carnegie Mellon University

    2008-09-30

    Carnegie Mellon University (CMU) under contract from Department of Energy/National Energy Technology Laboratory (DoE/NETL) and co-funding from the Northeast Gas Association (NGA), has completed the overall system design, field-trial and Magnetic Flux Leakage (MFL) sensor evaluation program for the next-generation Explorer-II (X-II) live gas main Non-destructive Evaluation (NDE) and visual inspection robot platform. The design is based on the Explorer-I prototype which was built and field-tested under a prior (also DoE- and NGA co-funded) program, and served as the validation that self-powered robots under wireless control could access and navigate live natural gas distribution mains. The X-II system design (~8 ft. and 66 lbs.) was heavily based on the X-I design, yet was substantially expanded to allow the addition of NDE sensor systems (while retaining its visual inspection capability), making it a modular system, and expanding its ability to operate at pressures up to 750 psig (high-pressure and unpiggable steel-pipe distribution mains). A new electronics architecture and on-board software kernel were added to again improve system performance. A locating sonde system was integrated to allow for absolute position-referencing during inspection (coupled with external differential GPS) and emergency-locating. The power system was upgraded to utilize lithium-based battery-cells for an increase in mission-time. The resulting robot-train system is shown with CAD renderings of the individual modules. The system architecture now relies on a dual set of end camera-modules to house the 32-bit processors (Single-Board Computer or SBC) as well as the imaging and wireless (off-board) and CAN-based (on-board) communication hardware and software systems (as well as the sonde-coil and -electronics). The drive-modules (2 ea.) are still responsible for bracing (and centering) to drive the robot train in push/pull fashion into and through the pipes and obstacles. The steering modules and their arrangement still allow the robot to configure itself to perform any-angle (up to 90 deg) turns in any orientation (incl. vertical), and enable the live launching and recovery of the system using custom fittings and a (to be developed) launch-chamber/-tube. The battery modules are used to power the system by providing power to the robot's bus. The support modules perform the functions of centration for the rest of the train as well as odometry pickups using incremental encoding schemes. The electronics architecture is based on a distributed (8-bit) microprocessor architecture (at least 1 in ea. module) communicating to a (one of two) 32-bit SBC, which manages all video-processing, posture and motion control as well as CAN and wireless communications. The operator controls the entire system from an off-board (laptop) controller, which is in constant wireless communication with the robot train in the pipe. The sensor modules collect data and forward it to the robot operator's computer (via the CAN-wireless communications chain), which then transfers it to a dedicated NDE data-storage and post-processing computer for further (real-time or off-line) analysis. The prototype robot system was built and tested indoors and outdoors, outfitted with a Remote-Field Eddy Current (RFEC) sensor integrated as its main NDE sensor modality. An angled launcher, allowing for live launching and retrieval, was also built to suit custom angled launch-fittings from TDW. The prototype vehicle and launcher systems are shown.
The complete system, including the in-pipe robot train, launcher, integrated NDE-sensor and real-time video and control console and NDE-data collection and -processing and real-time display, was demonstrated to all sponsors prior to proceeding into final field-trials; the individual components and setting for said acceptance demonstration are shown. The launcher-tube was also used to verify that the vehicle system is capable of operating in high-pressure environments, and is safely deployable using proper evacuating/purging techniques for operation in the potentially explosive natural gas environment. The test-setting and environment for safety-certification of the X-II robot platform and the launch and recovery procedures are shown. Field-trials were successfully carried out in a live steel pipeline in Northwestern Pennsylvania. The robot was launched and recovered multiple times, travelling thousands of feet and communicating in real time with video and command-and-control (C&C) data under remote operator control from a laptop, with NDE sensor-data streaming to a second computer for storage, display and post-processing. Representative images of the activities and systems used in the week-long field-trial are shown. CMU also evaluated the ability of the X-II design to integrate an MFL sensor by adding additional drive-/battery-/steering- and support-modules to extend the X-II train.

  6. Novel Robotic Tools for Piping Inspection and Repair

    DTIC Science & Technology

    2015-01-14

    was selected due to its small size, and peripheral capability. The SoM measures 50mm x 44mm. The SoM processor is an ARM Cortex-A8 running at 720MHz...designing an embedded computing system from scratch. The SoM is a single integrated module which contains the processor, RAM, power management, and

  7. Vertical Stream Curricula Integration of Problem-Based Learning Using an Autonomous Vacuum Robot in a Mechatronics Course

    ERIC Educational Resources Information Center

    Chin, Cheng; Yue, Keng

    2011-01-01

    Difficulties in teaching a multi-disciplinary subject such as the mechatronics system design module in Departments of Mechatronics Engineering at Temasek Polytechnic arise from the gap in experience and skill among staff and students who have different backgrounds in mechanical, computer and electrical engineering within the Mechatronics…

  8. A self-assembled nanoscale robotic arm controlled by electric fields

    NASA Astrophysics Data System (ADS)

    Kopperger, Enzo; List, Jonathan; Madhira, Sushi; Rothfischer, Florian; Lamb, Don C.; Simmel, Friedrich C.

    2018-01-01

    The use of dynamic, self-assembled DNA nanostructures in the context of nanorobotics requires fast and reliable actuation mechanisms. We therefore created a 55-nanometer–by–55-nanometer DNA-based molecular platform with an integrated robotic arm of length 25 nanometers, which can be extended to more than 400 nanometers and actuated with externally applied electrical fields. Precise, computer-controlled switching of the arm between arbitrary positions on the platform can be achieved within milliseconds, as demonstrated with single-pair Förster resonance energy transfer experiments and fluorescence microscopy. The arm can be used for electrically driven transport of molecules or nanoparticles over tens of nanometers, which is useful for the control of photonic and plasmonic processes. Application of piconewton forces by the robot arm is demonstrated in force-induced DNA duplex melting experiments.

  9. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  10. Embodied Computation: An Active-Learning Approach to Mobile Robotics Education

    ERIC Educational Resources Information Center

    Riek, L. D.

    2013-01-01

    This paper describes a newly designed upper-level undergraduate and graduate course, Autonomous Mobile Robots. The course employs active, cooperative, problem-based learning and is grounded in the fundamental computational problems in mobile robotics defined by Dudek and Jenkin. Students receive a broad survey of robotics through lectures, weekly…

  11. Design and Integration of a Three Degrees-of-Freedom Robotic Vehicle with Control Moment Gyro for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Testbed

    DTIC Science & Technology

    2006-09-01

    required directional control for each thruster due to their high precision and equivalent power and computer interface requirements to those for the...Universal Serial Bus) ports, LPT (Line Printing Terminal) and KVM (Keyboard-Video-Mouse) interfaces. Additionally, power is supplied to the computer through...of the IDE cable to the Prometheus Development Kit ACC-IDEEXT. Connect a small drive power connector from the desktop ATX power supply to the ACC

  12. Impact of IQ, computer-gaming skills, general dexterity, and laparoscopic experience on performance with the da Vinci surgical system.

    PubMed

    Hagen, Monika E; Wagner, Oliver J; Inan, Ihsan; Morel, Philippe

    2009-09-01

    Due to improved ergonomics and dexterity, robotic surgery is promoted as being easily performed by surgeons with no special skills necessary. We tested this hypothesis by measuring IQ elements, computer gaming skills, general dexterity with chopsticks, and evaluating laparoscopic experience in correlation to performance ability with the da Vinci robot. Thirty-four individuals were tested for robotic dexterity, IQ elements, computer-gaming skills and general dexterity. Eighteen surgically inexperienced and 16 laparoscopically trained surgeons were included. Each individual performed three different tasks with the da Vinci surgical system and their times were recorded. An IQ test (elements: logical thinking, 3D imagination and technical understanding) was completed by each participant. Computer skills were tested with a simple computer game (hand-eye coordination) and general dexterity was evaluated by the ability to use chopsticks. We found no correlation between logical thinking, 3D imagination and robotic skills. Both computer gaming and general dexterity showed a slight but non-significant improvement in performance with the da Vinci robot (p > 0.05). A significant correlation between robotic skills, technical understanding and laparoscopic experience was observed (p < 0.05). The data support the conclusion that there are no significant correlations between robotic performance and logical thinking, 3D understanding, computer gaming skills and general dexterity. A correlation between robotic skills and technical understanding may exist. Laparoscopic experience seems to be the strongest predictor of performance with the da Vinci surgical system. Generally, it appears difficult to determine non-surgical predictors for robotic surgery.

  13. The role of physicality in rich programming environments

    NASA Astrophysics Data System (ADS)

    Liu, Allison S.; Schunn, Christian D.; Flot, Jesse; Shoop, Robin

    2013-12-01

    Computer science proficiency continues to grow in importance, while the number of students entering computer science-related fields declines. Many rich programming environments have been created to motivate student interest and expertise in computer science. In the current study, we investigated whether a recently created environment, Robot Virtual Worlds (RVWs), can be used to teach computer science principles within a robotics context by examining its use in high-school classrooms. We also investigated whether the lack of physicality in these environments impacts student learning by comparing classrooms that used either virtual or physical robots for the RVW curriculum. Results suggest that the RVW environment leads to significant gains in computer science knowledge, that virtual robots lead to faster learning, and that physical robots may have some influence on algorithmic thinking. We discuss the implications of physicality in these programming environments for learning computer science.

  14. Use of symbolic computation in robotics education

    NASA Technical Reports Server (NTRS)

    Vira, Naren; Tunstel, Edward

    1992-01-01

    An application of symbolic computation in robotics education is described. A software package is presented which combines generality, user interaction, and user-friendliness with the systematic usage of symbolic computation and artificial intelligence techniques. The software utilizes MACSYMA, a LISP-based symbolic algebra language, to automatically generate closed-form expressions representing forward and inverse kinematics solutions, the Jacobian transformation matrices, robot pose error-compensation model equations, and the Lagrange dynamics formulation for N degree-of-freedom, open chain robotic manipulators. The goal of such a package is to aid faculty and students in the robotics course by removing the burdensome tasks of mathematical manipulation. The software package has been successfully tested for its accuracy using commercially available robots.
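
    The original package generated its closed-form expressions in MACSYMA; the sketch below reproduces the same idea in SymPy for a 2-DOF planar arm (forward kinematics, the Jacobian, and one inverse-kinematics expression). The specific arm and symbols are an illustrative example, not the package's general N-DOF machinery.

```python
import sympy as sp

# Joint angles and link lengths for a 2-DOF planar arm (illustrative example).
q1, q2, l1, l2 = sp.symbols('q1 q2 l1 l2')

# Closed-form forward kinematics of the end-effector position.
x = l1 * sp.cos(q1) + l2 * sp.cos(q1 + q2)
y = l1 * sp.sin(q1) + l2 * sp.sin(q1 + q2)
p = sp.Matrix([x, y])

# Jacobian transformation matrix with respect to the joint variables.
J = sp.simplify(p.jacobian(sp.Matrix([q1, q2])))
print(J)

# A closed-form inverse kinematics expression for the elbow angle q2 via the
# law of cosines, the kind of result such a package would generate.
xs, ys = sp.symbols('xs ys')
c2 = (xs**2 + ys**2 - l1**2 - l2**2) / (2 * l1 * l2)
q2_solution = sp.acos(c2)
```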

  15. Design of Intelligent Robot as A Tool for Teaching Media Based on Computer Interactive Learning and Computer Assisted Learning to Improve the Skill of University Student

    NASA Astrophysics Data System (ADS)

    Zuhrie, M. S.; Basuki, I.; Asto B, I. G. P.; Anifah, L.

    2018-01-01

    The focus of the research is a teaching module that incorporates manufacturing, mechanical design planning, control through microprocessor technology, and robot maneuverability. Computer interactive learning and computer assisted learning are strategies that emphasize the use of computers and learning aids in teaching and learning activities. This research applied the 4-D research and development model suggested by Thiagarajan et al. (1974), which consists of four stages: Define, Design, Develop, and Disseminate. The research design was applied with the objective of producing a learning tool in the form of intelligent robot modules and kits based on Computer Interactive Learning and Computer Assisted Learning. Data from the Indonesia Robot Contest during the period 2009-2015 show that the developed modules have reached the fourth stage of the development method, dissemination. The modules guide students in producing an intelligent robot tool for teaching based on Computer Interactive Learning and Computer Assisted Learning. Student responses also showed positive feedback on the robotics module and computer-based interactive learning.

  16. Innovation in robotic surgery: the Indian scenario.

    PubMed

    Deshpande, Suresh V

    2015-01-01

    Robotics is the science. In scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm, a robot, in laparoscopy to control the telescope-camera unit electromechanically and then, with a computer interface, by voice control. It took us five long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  17. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments

    PubMed Central

    Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori

    2017-01-01

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant navigation computations such as path planning, localization, obstacle avoidance, and map update by relying completely on the driver robot. The hitchhiker robot therefore performs only visual servoing, saving computation while navigating the path it shares with the driver robot. In the proposed system, the driver robot performs all the heavy navigation computations and updates the hitchhiker with its current localized position and newly detected obstacle positions in the map. The proposed system can robustly recover from the 'driver-lost' scenario that occurs when visual servoing fails. We demonstrate robot hitchhiking in real environments, considering factors such as service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should or should not be allowed. PMID:28809803
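
    A minimal, self-contained Python sketch of the hitchhiker's control loop described above follows: visual servoing on the driver while it is visible, with a fallback for the 'driver-lost' case. All names, gains and the recovery behaviour are illustrative assumptions, not the authors' implementation.

        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass
        class Marker:
            horizontal_offset: float   # normalized image offset of the driver marker
            area: float                # apparent marker area (proxy for distance)

        def visual_servo(marker: Marker,
                         k_v: float = 0.5, k_w: float = 0.8,
                         desired_area: float = 0.05) -> Tuple[float, float]:
            """Proportional follower: keep the marker centred and at a fixed size."""
            v = k_v * (desired_area - marker.area)    # drive forward/back
            w = -k_w * marker.horizontal_offset       # turn toward the marker
            return v, w

        def hitchhiker_step(marker: Optional[Marker]) -> Tuple[float, float]:
            if marker is not None:
                # Normal hitchhiking: skip planning/localization, just servo on the driver.
                return visual_servo(marker)
            # Driver-lost recovery: this sketch simply stops; the paper's system would
            # re-plan from the last pose and obstacle updates received from the driver.
            return 0.0, 0.0

        # Usage: one control tick with the driver visible slightly to the right.
        print(hitchhiker_step(Marker(horizontal_offset=0.2, area=0.03)))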

  18. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments.

    PubMed

    Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori

    2017-08-15

    Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system, the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant navigation computations such as path planning, localization, obstacle avoidance, and map update by relying completely on the driver robot. The hitchhiker robot therefore performs only visual servoing, saving computation while navigating the path it shares with the driver robot. In the proposed system, the driver robot performs all the heavy navigation computations and updates the hitchhiker with its current localized position and newly detected obstacle positions in the map. The proposed system can robustly recover from the 'driver-lost' scenario that occurs when visual servoing fails. We demonstrate robot hitchhiking in real environments, considering factors such as service time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss, through experimental results, the admissible characteristics of the hitchhiker and when hitchhiking should or should not be allowed.

  19. Can Robots and Humans Get Along?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholtz, Jean

    2007-06-01

    Now that robots have moved into the mainstream—as vacuum cleaners, lawn mowers, autonomous vehicles, tour guides, and even pets—it is important to consider how everyday people will interact with them. A robot is really just a computer, but many researchers are beginning to understand that human-robot interactions are much different than human-computer interactions. So while the metrics used to evaluate the human-computer interaction (usability of the software interface in terms of time, accuracy, and user satisfaction) may also be appropriate for human-robot interactions, we need to determine whether there are additional metrics that should be considered.

  20. Open-Box Muscle-Computer Interface: Introduction to Human-Computer Interactions in Bioengineering, Physiology, and Neuroscience Courses

    ERIC Educational Resources Information Center

    Landa-Jiménez, M. A.; González-Gaspar, P.; Pérez-Estudillo, C.; López-Meraz, M. L.; Morgado-Valle, C.; Beltran-Parrazal, L.

    2016-01-01

    A Muscle-Computer Interface (muCI) is a human-machine system that uses electromyographic (EMG) signals to communicate with a computer. Surface EMG (sEMG) signals are currently used to command robotic devices, such as robotic arms and hands, and mobile robots, such as wheelchairs. These signals reflect the motor intention of a user before the…

  1. A One-Year Introductory Robotics Curriculum for Computer Science Upperclassmen

    ERIC Educational Resources Information Center

    Correll, N.; Wing, R.; Coleman, D.

    2013-01-01

    This paper describes a one-year introductory robotics course sequence focusing on computational aspects of robotics for third- and fourth-year students. The key challenges this curriculum addresses are "scalability," i.e., how to teach a robotics class with a limited amount of hardware to a large audience, "student assessment,"…

  2. Analyzing Robotic Kinematics Via Computed Simulations

    NASA Technical Reports Server (NTRS)

    Carnahan, Timothy M.

    1992-01-01

    Computing system assists in evaluation of kinematics of conceptual robot. Displays positions and motions of robotic manipulator within work cell. Also displays interactions between robotic manipulator and other objects. Results of simulation displayed on graphical computer workstation. System includes both off-the-shelf software originally developed for automotive industry and specially developed software. Simulation system also used to design human-equivalent hand, to model optical train in infrared system, and to develop graphical interface for teleoperator simulation system.

  3. Development of the HERMIES III mobile robot research testbed at Oak Ridge National Laboratory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manges, W.W.; Hamel, W.R.; Weisbin, C.R.

    1988-01-01

    The latest robot in the Hostile Environment Robotic Machine Intelligence Experiment Series (HERMIES) is now under development at the Center for Engineering Systems Advanced Research (CESAR) in the Oak Ridge National Laboratory. The HERMIES III robot incorporates a larger than human size 7-degree-of-freedom manipulator mounted on a 2-degree-of-freedom mobile platform including a variety of sensors and computers. The deployment of this robot represents a significant increase in research capabilities for the CESAR laboratory. The initial on-board computer capacity of the robot exceeds that of 20 Vax 11/780s. The navigation and vision algorithms under development make extensive use of the on-board NCUBE hypercube computer while the sensors are interfaced through five VME computers running the OS-9 real-time, multitasking operating system. This paper describes the motivation, key issues, and detailed design trade-offs of implementing the first phase (basic functionality) of the HERMIES III robot. 10 refs., 7 figs.

  4. SafeNet: a methodology for integrating general-purpose unsafe devices in safe-robot rehabilitation systems.

    PubMed

    Vicentini, Federico; Pedrocchi, Nicola; Malosio, Matteo; Molinari Tosatti, Lorenzo

    2014-09-01

    Robot-assisted neurorehabilitation often involves networked systems of sensors ("sensory rooms") and powerful devices in physical interaction with weak users. Safety is unquestionably a primary concern. Some lightweight robot platforms and purpose-built devices include safety properties using redundant sensors or intrinsically safe design (e.g. compliance and backdrivability, limited exchange of energy). Nonetheless, the entire "sensory room" is required to be fail-safe and safely monitored as a system at large. Yet the sensor capabilities and control algorithms used in functional therapies generally require frequent updates or re-configurations, making a safety-grade release of such devices hardly sustainable in cost-effectiveness and development time. As a result, promising integrated platforms for human-in-the-loop therapies have not found clinical application and manufacturing support because global fail-safe properties could not be maintained. In the general context of cross-machinery safety standards, the paper presents a methodology called SafeNet for helping to extend the safety level of Human Robot Interaction (HRI) systems that use unsafe components, including sensors and controllers. SafeNet treats the robotic system as a device at large and applies the principles of functional safety (as in ISO 13849-1) through a set of architectural procedures and implementation rules. The resulting capability of monitoring a network of unsafe devices through redundant computational nodes allows the use of custom sensors and algorithms, usually planned and assembled at therapy planning time rather than at platform design time. A case study is presented with an actual implementation of the proposed methodology: a specific architectural solution applied to an example of robot-assisted upper-limb rehabilitation with online motion tracking. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Dynamic modeling of parallel robots for computed-torque control implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Codourey, A.

    1998-12-01

    In recent years, increased interest in parallel robots has been observed. Their control with modern theory, such as the computed-torque method, has, however, been restrained, essentially due to the difficulty in establishing a simple dynamic model that can be calculated in real time. In this paper, a simple method based on the virtual work principle is proposed for modeling parallel robots. The mass matrix of the robot, needed for decoupling control strategies, does not explicitly appear in the formulation; however, it can be computed separately, based on kinetic energy considerations. The method is applied to the DELTA parallel robot, leading to a very efficient model that has been implemented in a real-time computed-torque control algorithm.
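
    For readers unfamiliar with the control scheme referred to above, the following Python sketch shows a generic computed-torque law in which the mass matrix enters only as a separately computed quantity. The toy 2-DOF terms stand in for the DELTA model derived in the paper; they are not that model.

        import numpy as np

        def mass_matrix(q):
            # Toy 2-DOF positive-definite mass matrix (placeholder for the robot's M(q)).
            c2 = np.cos(q[1])
            return np.array([[2.0 + 0.6 * c2, 0.5 + 0.3 * c2],
                             [0.5 + 0.3 * c2, 0.5]])

        def bias_forces(q, qd):
            # Placeholder Coriolis/centrifugal + gravity vector h(q, qd).
            return np.array([0.1 * qd[0] * qd[1], 9.81 * 0.2 * np.cos(q[0] + q[1])])

        def computed_torque(q, qd, q_des, qd_des, qdd_des, Kp, Kd):
            """tau = M(q) (qdd_des + Kd*(qd_des - qd) + Kp*(q_des - q)) + h(q, qd)."""
            e, edot = q_des - q, qd_des - qd
            v = qdd_des + Kd @ edot + Kp @ e     # decoupled linear error dynamics
            return mass_matrix(q) @ v + bias_forces(q, qd)

        # Usage: one control step toward a fixed setpoint.
        q, qd = np.array([0.1, 0.2]), np.zeros(2)
        tau = computed_torque(q, qd, np.array([0.3, 0.1]), np.zeros(2), np.zeros(2),
                              Kp=np.diag([100.0, 100.0]), Kd=np.diag([20.0, 20.0]))
        print(tau)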

  6. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
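
    As a simplified, hedged illustration of velocity-threshold gaze-event classification (the final stage of the method above), the Python sketch below labels fixations, smooth pursuits and saccades from an angular-velocity trace. The thresholds and synthetic data are illustrative, and the paper's geometric vergence correction for the transverse plane is not reproduced.

        import numpy as np

        def classify_gaze(angular_velocity_deg_s,
                          saccade_thresh=100.0,    # deg/s, illustrative
                          pursuit_thresh=5.0):     # deg/s, illustrative
            """Label each sample as 'fixation', 'pursuit' or 'saccade'."""
            labels = np.full(angular_velocity_deg_s.shape, 'fixation', dtype=object)
            labels[angular_velocity_deg_s > pursuit_thresh] = 'pursuit'
            labels[angular_velocity_deg_s > saccade_thresh] = 'saccade'
            return labels

        def event_onsets(labels):
            """Indices where the gaze event type changes (onsets/offsets)."""
            changes = np.flatnonzero(labels[1:] != labels[:-1]) + 1
            return [(int(i), labels[i]) for i in changes]

        # Usage with a synthetic velocity trace: fixation -> saccade -> smooth pursuit.
        vel = np.concatenate([np.full(50, 1.0), np.full(5, 300.0), np.full(50, 20.0)])
        print(event_onsets(classify_gaze(vel)))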

  7. Robotic NDE inspection of advanced solid rocket motor casings

    NASA Technical Reports Server (NTRS)

    Mcneelege, Glenn E.; Sarantos, Chris

    1994-01-01

    The Advanced Solid Rocket Motor (ASRM) program determined the need to inspect ASRM forgings and segments for potentially catastrophic defects. To minimize costs, an automated eddy current inspection system was designed and manufactured for inspection of ASRM forgings in the initial phases of production. This system utilizes custom manipulators, motion control algorithms, and integrated six-channel eddy current data acquisition and analysis hardware and software. Total system integration is achieved through a personal-computer-based workcell controller. Segment inspection demands the use of a gantry robot for the EMAT/ET inspection system. The EMAT/ET system uses similar mechanical compliance and software logic to accommodate complex part geometries. EMAT provides volumetric inspection capability, while eddy current is limited to surface and near-surface inspection. Each aspect of these systems is applicable to other industries, for example the inspection of pressure vessels, weld inspection, and traditional ultrasonic inspection applications.

  8. Telepresence and telerobotics

    NASA Technical Reports Server (NTRS)

    Garin, John; Matteo, Joseph; Jennings, Von Ayre

    1988-01-01

    The capability for a single operator to simultaneously control complex remote multi degree of freedom robotic arms and associated dextrous end effectors is being developed. An optimal solution within the realm of current technology, can be achieved by recognizing that: (1) machines/computer systems are more effective than humans when the task is routine and specified, and (2) humans process complex data sets and deal with the unpredictable better than machines. These observations lead naturally to a philosophy in which the human's role becomes a higher level function associated with planning, teaching, initiating, monitoring, and intervening when the machine gets into trouble, while the machine performs the codifiable tasks with deliberate efficiency. This concept forms the basis for the integration of man and telerobotics, i.e., robotics with the operator in the control loop. The concept of integration of the human in the loop and maximizing the feed-forward and feed-back data flow is referred to as telepresence.

  9. Combined Feature Based and Shape Based Visual Tracker for Robot Navigation

    NASA Technical Reports Server (NTRS)

    Deans, J.; Kunz, C.; Sargent, R.; Park, E.; Pedersen, L.

    2005-01-01

    We have developed a combined feature based and shape based visual tracking system designed to enable a planetary rover to visually track and servo to specific points chosen by a user with centimeter precision. The feature based tracker uses invariant feature detection and matching across a stereo pair, as well as matching pairs before and after robot movement in order to compute an incremental 6-DOF motion at each tracker update. This tracking method is subject to drift over time, which can be compensated by the shape based method. The shape based tracking method consists of 3D model registration, which recovers 6-DOF motion given sufficient shape and proper initialization. By integrating complementary algorithms, the combined tracker leverages the efficiency and robustness of feature based methods with the precision and accuracy of model registration. In this paper, we present the algorithms and their integration into a combined visual tracking system.

  10. [Impact of digital technology on clinical practices: perspectives from surgery].

    PubMed

    Zhang, Y; Liu, X J

    2016-04-09

    Digital medical technologies, or computer-aided medical procedures, refer to imaging, 3D reconstruction, virtual design, 3D printing, navigation-guided surgery and robot-assisted surgery techniques. These techniques are integrated into conventional surgical procedures to create new clinical protocols known as "digital surgical techniques". Conventional health care is characterized by subjective experience, whereas digital medical technologies bring quantifiable information, transferable data, repeatable methods and predictable outcomes into clinical practice. Integrated into clinical practice, digital techniques facilitate surgical care by improving outcomes and reducing risks. Digital techniques are becoming increasingly popular in trauma surgery, orthopedics, neurosurgery, plastic and reconstructive surgery, imaging and anatomic sciences. Robot-assisted surgery is also evolving and being applied in general surgery, cardiovascular surgery and orthopedic surgery. The rapid development of digital medical technologies is changing healthcare and clinical practices. It is therefore important for all clinicians to purposefully adapt to these technologies and improve their clinical outcomes.

  11. Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1997-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in, and relationship to, its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders makes the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  12. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a mere remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the process of formation, reformation and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  13. Robotic Challenges: Robots Bring New Life to Gifted Classes, Teach Students Hands-On Problem Solving, Computer Skills.

    ERIC Educational Resources Information Center

    Smith, Ruth Baynard

    1994-01-01

    Intermediate level academically talented students learn essential elements of computer programming by working with robots at enrichment workshops at Dwight-Englewood School in Englewood, New Jersey. The children combine creative thinking and problem-solving skills to program the robots' microcomputers to perform a variety of movements. (JDD)

  14. Portable control device for networked mobile robots

    DOEpatents

    Feddema, John T.; Byrne, Raymond H.; Bryan, Jon R.; Harrington, John J.; Gladwell, T. Scott

    2002-01-01

    A handheld control device provides a way for controlling one or multiple mobile robotic vehicles by incorporating a handheld computer with a radio board. The device and software use a personal data organizer as the handheld computer with an additional microprocessor and communication device on a radio board for use in controlling one robot or multiple networked robots.

  15. Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    PubMed Central

    Gentili, Rodolphe J.; Papaxanthis, Charalambos; Ebadzadeh, Mehdi; Eskiizmirliler, Selim; Ouanezar, Sofiane; Darlot, Christian

    2009-01-01

    Background Several authors have suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor prediction of movements. Furthermore, some studies propose that the cerebellum computes the inverse dynamics (internal inverse model) whereas others suggest that it computes sensorimotor predictions (internal forward model). Methodology/Principal Findings This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions, without calculating an explicit inverse computation. Using supervised learning, the model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This is achieved using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture is similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. Conclusions/Significance This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field. PMID:19384420
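
    As a small worked illustration of the physical quantity the model learns to represent, the Python snippet below evaluates the gravitational torque on a single rigid link, tau_g = m * g * l_c * cos(theta). The parameters are illustrative, and this is only the physics term, not the paper's cerebellar network or the McKibben-actuated arm.

        import numpy as np

        def gravity_torque(theta_rad, m=1.5, l_c=0.25, g=9.81):
            """tau_g = m * g * l_c * cos(theta): shoulder torque needed to hold the arm."""
            return m * g * l_c * np.cos(theta_rad)

        # The torque varies strongly with arm elevation, which is why vertical pointing
        # movements from different start positions probe the internal gravity model.
        for deg in (0, 30, 60, 90):
            print(f"theta = {deg:3d} deg -> tau_g = {gravity_torque(np.radians(deg)):.2f} N*m")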

  16. Robot Geometry and the High School Curriculum.

    ERIC Educational Resources Information Center

    Meyer, Walter

    1988-01-01

    Description of the field of robotics and its possible use in high school computational geometry classes emphasizes motion planning exercises and computer graphics displays. Eleven geometrical problems based on robotics are presented along with the correct solutions and explanations. (LRW)

  17. Remote hardware-reconfigurable robotic camera

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time, low-level computer vision processing. The architecture can be reprogrammed remotely for application-specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration makes a hardware upgrade as easy as a software upgrade. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and allows a software/hardware object to be downloaded from the host computer into its internal context memory. System advantages are small size, low power consumption, and a library of hardware/software functionalities that can be exchanged at run time. The system has been validated with an edge detection and a motion processing architecture, which are presented in the paper. Target applications are in robotics, mobile robotics, and vision-based quality control.

  18. [The advancement of robotic surgery--successes, failures, challenges].

    PubMed

    Haidegger, Tamás

    2010-10-10

    Computer-integrated robotic surgery systems appeared more than twenty years ago, and since then hundreds of different prototypes have been developed. Only a fraction of them have been commercialized, mostly to support neurosurgical and orthopaedic procedures. Unquestionably, the most successful one is the da Vinci surgical system, primarily deployed in urology and general laparoscopic surgery. It is developed and marketed by Intuitive Surgical Inc. (Sunnyvale, CA, USA), the only profitable company in the segment. The da Vinci made robotic surgery known and acknowledged throughout the world, and the results delivered convinced most former critics of the technology. Its success derived from a well-chosen business development strategy, the proficiency of the developers, appropriate timing, and a good deal of luck. This article presents the most important features of the da Vinci system and the history of its development, along with its medical, economic and financial aspects, and seeks to answer why this particular system became successful.

  19. An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence

    NASA Technical Reports Server (NTRS)

    Lindley, Craig A.

    1993-01-01

    This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.

  20. HERMIES-3: A step toward autonomous mobility, manipulation, and perception

    NASA Technical Reports Server (NTRS)

    Weisbin, C. R.; Burks, B. L.; Einstein, J. R.; Feezell, R. R.; Manges, W. W.; Thompson, D. H.

    1989-01-01

    HERMIES-III is an autonomous robot comprised of a seven degree-of-freedom (DOF) manipulator designed for human scale tasks, a laser range finder, a sonar array, an omni-directional wheel-driven chassis, multiple cameras, and a dual computer system containing a 16-node hypercube expandable to 128 nodes. The current experimental program involves performance of human-scale tasks (e.g., valve manipulation, use of tools), integration of a dexterous manipulator and platform motion in geometrically complex environments, and effective use of multiple cooperating robots (HERMIES-IIB and HERMIES-III). The environment in which the robots operate has been designed to include multiple valves, pipes, meters, obstacles on the floor, valves occluded from view, and multiple paths of differing navigation complexity. The ongoing research program supports the development of autonomous capability for HERMIES-IIB and III to perform complex navigation and manipulation under time constraints, while dealing with imprecise sensory information.

  1. From medical images to minimally invasive intervention: Computer assistance for robotic surgery.

    PubMed

    Lee, Su-Lin; Lerotic, Mirna; Vitiello, Valentina; Giannarou, Stamatia; Kwok, Ka-Wai; Visentini-Scarzanella, Marco; Yang, Guang-Zhong

    2010-01-01

    Minimally invasive surgery has been established as an important way forward in surgery for reducing patient trauma and hospitalization with improved prognosis. The introduction of robotic assistance enhances the manual dexterity and accuracy of instrument manipulation. Further development of the field in using pre- and intra-operative imaging guidance requires the integration of the general anatomy of the patient with clear pathologic indications and geometrical information for preoperative planning and intra-operative manipulation. It also requires effective visualization and the recreation of haptic and tactile sensing with dynamic active constraints to improve consistency and safety of the surgical procedures. This paper describes key technical considerations of tissue deformation tracking, 3D reconstruction, subject-specific modeling, image guidance and augmented reality for robotic assisted minimally invasive surgery. It highlights the importance of adapting preoperative surgical planning according to intra-operative data and illustrates how dynamic information such as tissue deformation can be incorporated into the surgical navigation framework. Some of the recent trends are discussed in terms of instrument design and the usage of dynamic active constraints and human-robot perceptual docking for robotic assisted minimally invasive surgery. Copyright 2009 Elsevier Ltd. All rights reserved.

  2. Design of an integrated master-slave robotic system for minimally invasive surgery.

    PubMed

    Li, Jianmin; Zhou, Ningxin; Wang, Shuxin; Gao, Yuanqian; Liu, Dongchun

    2012-03-01

    Minimally invasive surgery (MIS) robots are commonly used in hospitals and medical centres. However, currently available robotic systems are very complicated and huge, greatly raising system costs and the requirements of operating rooms. These disadvantages have become the major impediments to the expansion of MIS robots. An integrated MIS robotic system is proposed based on the analysis of advantages and disadvantages of different MIS robots. In the proposed system, the master manipulators, slave manipulators, image display device and control system have been designed as a whole. Modular design is adopted for the control system for easy maintenance and upgrade. The kinematic relations between the master and the slave are also investigated and embedded in software to realize intuitive movements of hand and instrument. Finally, animal experiments were designed to test the effectiveness of the robot. The robot realizes natural hand-eye movements between the master and the slave to facilitate MIS operations. The experimental results show that the robot can realize similar functions to those of current commercialized robots. The integrated design simplifies the robotic system and facilitates use of the robot. Compared with the commercialized robots, the proposed MIS robot achieves similar functions and features but with a smaller size and less weight. Copyright © 2011 John Wiley & Sons, Ltd.
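
    The kinematic master-slave relation itself is not given in the abstract; as a generic, hedged sketch of how intuitive hand-instrument motion mapping is commonly realized in teleoperation, the Python snippet below applies a clutched, scaled master increment to the slave. The scale factor and clutch logic are assumptions, not the authors' design.

        import numpy as np

        def map_master_to_slave(slave_pos, master_pos, master_pos_prev,
                                clutch_engaged, scale=0.4):
            """Apply the scaled master increment to the slave while the clutch is engaged."""
            if not clutch_engaged:
                return np.asarray(slave_pos)        # master moves freely, slave holds
            delta = np.asarray(master_pos) - np.asarray(master_pos_prev)
            return np.asarray(slave_pos) + scale * delta

        # Usage: the master hand moves 10 mm along x; the instrument tip moves 4 mm.
        tip = map_master_to_slave([0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.0, 0.0], True)
        print(tip)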

  3. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.

    PubMed

    Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos

    2018-03-25

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed for correcting several sources of visual distortion in images in a single computing step. The goal of this system/algorithm is to compute the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) of the University Carlos III of Madrid.

  4. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    PubMed Central

    2018-01-01

    New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computation, which decreases the performance of control algorithms. In this paper, a new approach is proposed for correcting several sources of visual distortion in images in a single computing step. The goal of this system/algorithm is to compute the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application improved. The resulting algorithm has been tested experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) of the University Carlos III of Madrid. PMID:29587392

  5. Some foundational aspects of quantum computers and quantum robots.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.; Physics

    1998-01-01

    This paper addresses foundational issues related to quantum computing. The need for a universally valid theory such as quantum mechanics to describe to some extent its own validation is noted. This includes quantum mechanical descriptions of systems that do theoretical calculations (i.e. quantum computers) and systems that perform experiments. Quantum robots interacting with an environment are a small first step in this direction. Quantum robots are described here as mobile quantum systems with on-board quantum computers that interact with environments. Included are discussions on the carrying out of tasks and the division of tasks into computation and action phases. Specific models based on quantum Turing machines are described. Differences and similarities between quantum robots plus environments and quantum computers are discussed.

  6. An instrumented glove for grasp specification in virtual-reality-based point-and-direct telerobotics.

    PubMed

    Yun, M H; Cannon, D; Freivalds, A; Thomas, G

    1997-10-01

    Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping "flavors" has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that a human may direct another. Phrases such as "put that there" cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools such that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task, as is required in manual telemanipulation, hand posture and force are specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand posture flavors for an object throughout the task in handling the real workpiece or item of interest. In the Computer Integrated Manufacturing (CIM) Laboratory at Penn State, hand posture and force data were collected for manipulating bricks and other items that require varying amounts of force at multiple pressure points. The feasibility of measuring desired grasp characteristics was demonstrated for a modified Cyberglove impregnated with Force-Sensitive Resistor (FSR) pressure sensors in the fingertips. A joint/force model relating the parameters of finger articulation and pressure to various lifting tasks was validated for the instrumented "wired" glove. Operators using such a modified glove may ultimately be able to configure robot grasping tasks in environments involving hazardous waste remediation, flexible manufacturing, space operations and other flexible robotics applications. In each case, the VR-PAD approach will finesse the computational and delay problems of real-time multiple-degree-of-freedom force-feedback telemanipulation.

  7. Neural associative memories for the integration of language, vision and action in an autonomous agent.

    PubMed

    Markert, H; Kaufmann, U; Kara Kayikci, Z; Palm, G

    2009-03-01

    Language understanding is a long-standing problem in computer science, yet the human brain is capable of processing complex languages with seemingly no difficulty. This paper presents a model for language understanding using biologically plausible neural networks composed of associative memories. The model is able to deal with ambiguities at the single-word and grammatical levels. The language system is embedded in a robot in order to demonstrate correct semantic understanding of the input sentences by letting the robot perform the corresponding actions. For that purpose, a simple neural action planning system has been combined with neural networks for visual object recognition and visual attention control mechanisms.

  8. Autonomous stair-climbing with miniature jumping robots.

    PubMed

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote control vehicle with fancy sensors. With the help of a computationally more powerful entity such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission or that of an observer to localize it in the world, control commands can be computed and relayed to the inept robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  9. Becoming Earth Independent: Human-Automation-Robotics Integration Challenges for Future Space Exploration

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.

    2016-01-01

    Future exploration missions will require NASA to integrate more automation and robotics in order to accomplish mission objectives. This presentation describes the future challenges facing human operators (astronauts and ground controllers) as the amount of automation and robotics in spaceflight operations increases. It describes how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies, and outlines future human-automation-robotic integration challenges.

  10. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical phases of developing a robot computer problem solving system are outlined. Robot intelligence, conversion of the programming language SAIL to run under the TENEX monitor, and the use of the network to run several cooperating jobs at different sites are discussed.

  11. SLAM algorithm applied to robotics assistance for navigation in unknown environments

    PubMed Central

    2010-01-01

    Background The combination of robotic tools with assistance technology defines a little-explored area of applications and advantages for disabled or elderly people in their daily tasks. Autonomous motorized wheelchair navigation inside an environment, behaviour-based control of orthopaedic arms and learning a user's preferences from a friendly interface are some examples of this new field. In this paper, a Simultaneous Localization and Mapping (SLAM) algorithm is implemented to allow environmental learning by a mobile robot while its navigation is governed by electromyographic signals. The entire system is part autonomous and part user-decision dependent (semi-autonomous). The environmental learning executed by the SLAM algorithm and the low-level behaviour-based reactions of the mobile robot are autonomous robotic tasks, whereas navigation of the mobile robot inside an environment is commanded by a Muscle-Computer Interface (MCI). Methods A sequential Extended Kalman Filter (EKF) feature-based SLAM algorithm is implemented. The features correspond to lines and corners (concave and convex) of the environment. From the SLAM architecture, a global metric map of the environment is derived. The electromyographic signals that command the robot's movements can be adapted to the patient's disabilities. For mobile robot navigation purposes, five commands were obtained from the MCI: turn to the left, turn to the right, stop, start and exit. A kinematic controller for the mobile robot was implemented, and a low-level behaviour strategy was added to avoid collisions with the environment and moving agents. Results The entire system was tested with seven volunteers: three elderly participants, two below-elbow amputees and two young normally limbed participants. The experiments were performed within a closed, low-dynamic environment. Subjects took an average of 35 minutes to navigate the environment and to learn how to use the MCI. The SLAM results showed a consistent reconstruction of the environment, and the obtained map was stored inside the Muscle-Computer Interface. Conclusions The integration of a highly demanding processing algorithm (SLAM) with an MCI, and real-time communication between the two, proved consistent and successful. The metric map generated by the mobile robot would allow future autonomous navigation without direct control by the user, whose role could be reduced to choosing robot destinations. Moreover, the mobile robot shares the same kinematic model as a motorized wheelchair, an advantage that can be exploited for autonomous wheelchair navigation. PMID:20163735
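
    A compact Python sketch of one predict/update cycle of a feature-based EKF-SLAM filter, in the spirit of the Methods above, follows. For brevity the feature is a single 2D point landmark observed as range and bearing; the paper's line and corner features and the EMG command interface are not reproduced.

        import numpy as np

        def wrap(a):
            return (a + np.pi) % (2 * np.pi) - np.pi

        def predict(x, P, v, w, dt, Q):
            """Unicycle motion model on the robot part of the state [x, y, th, lx, ly]."""
            th = x[2]
            x = x.copy()
            x[0] += v * dt * np.cos(th)
            x[1] += v * dt * np.sin(th)
            x[2] = wrap(th + w * dt)
            F = np.eye(5)
            F[0, 2] = -v * dt * np.sin(th)
            F[1, 2] = v * dt * np.cos(th)
            G = np.zeros((5, 5))
            G[:3, :3] = np.eye(3)               # process noise acts on the robot only
            return x, F @ P @ F.T + G @ Q @ G.T

        def update(x, P, z, R):
            """Range-bearing observation of the single landmark (lx, ly)."""
            dx, dy = x[3] - x[0], x[4] - x[1]
            q = dx * dx + dy * dy
            z_hat = np.array([np.sqrt(q), wrap(np.arctan2(dy, dx) - x[2])])
            H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0, dx / np.sqrt(q), dy / np.sqrt(q)],
                          [dy / q, -dx / q, -1, -dy / q, dx / q]])
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            innov = z - z_hat
            innov[1] = wrap(innov[1])
            x = x + K @ innov
            x[2] = wrap(x[2])
            return x, (np.eye(5) - K @ H) @ P

        # Usage: state = [robot x, y, heading, landmark x, landmark y].
        x = np.array([0.0, 0.0, 0.0, 2.0, 1.0])
        P = np.diag([0.01, 0.01, 0.01, 1.0, 1.0])
        Q = np.diag([0.02, 0.02, 0.01, 0.0, 0.0])
        x, P = predict(x, P, v=0.5, w=0.1, dt=0.1, Q=Q)
        x, P = update(x, P, z=np.array([2.18, 0.45]), R=np.diag([0.05, 0.02]))
        print(np.round(x, 3))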

  12. System for robot-assisted real-time laparoscopic ultrasound elastography

    NASA Astrophysics Data System (ADS)

    Billings, Seth; Deshmukh, Nishikant; Kang, Hyun Jae; Taylor, Russell; Boctor, Emad M.

    2012-02-01

    Surgical robots provide many advantages for surgery, including minimal invasiveness, precise motion, high dexterity, and crisp stereovision. One limitation of current robotic procedures, compared to open surgery, is the loss of haptic information for purposes such as palpation, which can be very important in minimally invasive tumor resection. Numerous studies have reported the use of real-time ultrasound elastography, in conjunction with conventional B-mode ultrasound, to differentiate malignant from benign lesions. Several groups (including our own) have reported integration of ultrasound with the da Vinci robot, and ultrasound elastography is a very promising image guidance method for robot-assisted procedures that will further enable the role of robots in interventions where precise knowledge of sub-surface anatomical features is crucial. We present a novel robot-assisted real-time ultrasound elastography system for minimally invasive robot-assisted interventions. Our system combines a da Vinci surgical robot with a non-clinical experimental software interface, a robotically articulated laparoscopic ultrasound probe, and our GPU-based elastography system. Elasticity and B-mode ultrasound images are displayed as picture-in-picture overlays in the da Vinci console. Our system minimizes dependence on human performance factors by incorporating computer-assisted motion control that automatically generates the tissue palpation required for elastography imaging, while leaving high-level control in the hands of the user. In addition to ensuring consistent strain imaging, the elastography assistance mode avoids the cognitive burden of tedious manual palpation. Preliminary tests of the system with an elasticity phantom demonstrate the ability to differentiate simulated lesions of varied stiffness and to clearly delineate lesion boundaries.
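
    As a simplified illustration of the core computation behind quasi-static elastography (not the paper's GPU pipeline or da Vinci integration), the Python sketch below estimates axial displacement between pre- and post-compression RF lines by windowed normalized cross-correlation and takes its gradient as strain; the window sizes and synthetic data are illustrative.

        import numpy as np

        def axial_displacement(pre, post, win=64, search=24, step=32):
            """Lag (in samples) maximizing normalized cross-correlation, per window."""
            lags = []
            for start in range(search, len(pre) - win - search, step):
                ref = pre[start:start + win]
                ref = ref - ref.mean()
                best, best_lag = -np.inf, 0
                for lag in range(-search, search + 1):
                    seg = post[start + lag:start + lag + win]
                    seg = seg - seg.mean()
                    c = np.dot(ref, seg) / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
                    if c > best:
                        best, best_lag = c, lag
                lags.append(best_lag)
            return np.array(lags, dtype=float)

        # Synthetic example: an RF line under roughly 1% uniform axial compression.
        rng = np.random.default_rng(1)
        depth = np.arange(2048)
        pre = rng.normal(size=depth.size)
        post = np.interp(depth * 0.99, depth, pre)   # compressed echo pattern
        disp = axial_displacement(pre, post)
        strain = np.gradient(disp) / 32.0            # displacement gradient per window step
        print(f"mean estimated strain ~ {strain.mean():.4f} (true value ~ 0.01)")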

  13. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Research results worldwide demonstrate the effectiveness of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using morphological operations over grayscale images as an example. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
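
    As a plain software reference for the grayscale morphological operations realized in the reconfigurable environment described above, the Python sketch below implements 3x3 erosion, dilation and opening in NumPy; it is a check on results, not the FPGA architecture itself.

        import numpy as np

        def _neighborhood_stack(img):
            """Stack the 3x3 neighborhood of every pixel (edge-replicated borders)."""
            p = np.pad(img, 1, mode='edge')
            return np.stack([p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                             for dy in range(3) for dx in range(3)])

        def erode(img):
            return _neighborhood_stack(img).min(axis=0)   # local minimum

        def dilate(img):
            return _neighborhood_stack(img).max(axis=0)   # local maximum

        def opening(img):
            return dilate(erode(img))                     # removes small bright speckle

        # Usage on a small synthetic grayscale image with one bright outlier pixel.
        img = np.full((8, 8), 50, dtype=np.uint8)
        img[4, 4] = 255
        print(opening(img)[4, 4])   # the isolated bright pixel is suppressed -> 50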

  14. Task-level robot programming: Integral part of evolution from teleoperation to autonomy

    NASA Technical Reports Server (NTRS)

    Reynolds, James C.

    1987-01-01

    An explanation is presented of task-level robot programming and of how it differs from the usual interpretation of task planning for robotics. Most importantly, it is argued that the physical and mathematical basis of task-level robot programming provides inherently greater reliability than efforts to apply better known concepts from artificial intelligence (AI) to autonomous robotics. Finally, an architecture is presented that allows the integration of task-level robot programming within an evolutionary, redundant, and multi-modal framework that spans teleoperation to autonomy.

  15. Computing Dynamics Of A Robot Of 6+n Degrees Of Freedom

    NASA Technical Reports Server (NTRS)

    Quiocho, Leslie J.; Bailey, Robert W.

    1995-01-01

    Improved formulation speeds and simplifies computation of dynamics of robot arm of n rotational degrees of freedom mounted on platform having three translational and three rotational degrees of freedom. Intended for use in dynamical modeling of robotic manipulators attached to such moving bases as spacecraft, aircraft, vessel, or land vehicle. Such modeling important part of simulation and control of robotic motions.

  16. Simulation of Robot Kinematics Using Interactive Computer Graphics.

    ERIC Educational Resources Information Center

    Leu, M. C.; Mahajan, R.

    1984-01-01

    Development of a robot simulation program based on geometric transformation softwares available in most computer graphics systems and program features are described. The program can be extended to simulate robots coordinating with external devices (such as tools, fixtures, conveyors) using geometric transformations to describe the…
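
    The simulation rests on composing geometric transformations; the Python sketch below illustrates the kind of computation involved, expressing a 2-link arm's end-effector pose in the frame of an external device such as a conveyor. Frames, link lengths and angles are illustrative, not the program's data.

        import numpy as np

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            T = np.eye(4)
            T[:2, :2] = [[c, -s], [s, c]]
            return T

        def trans(x, y, z):
            T = np.eye(4)
            T[:3, 3] = [x, y, z]
            return T

        # World -> robot base, and world -> conveyor (external device) frames.
        T_world_base = trans(0.0, 0.0, 0.5)
        T_world_conveyor = trans(1.0, 0.2, 0.3) @ rot_z(np.pi / 2)

        # Robot base -> end effector for a planar 2-link arm (lengths 0.4 m and 0.3 m).
        q1, q2 = np.radians(30), np.radians(45)
        T_base_ee = rot_z(q1) @ trans(0.4, 0, 0) @ rot_z(q2) @ trans(0.3, 0, 0)

        # End-effector pose expressed in the conveyor frame, as a display or
        # coordination check in the simulated work cell would need it.
        T_conveyor_ee = np.linalg.inv(T_world_conveyor) @ T_world_base @ T_base_ee
        print(np.round(T_conveyor_ee[:3, 3], 3))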

  17. Neurobionics and the brain-computer interface: current applications and future horizons.

    PubMed

    Rosenfeld, Jeffrey V; Wong, Yan Tat

    2017-05-01

    The brain-computer interface (BCI) is an exciting advance in neuroscience and engineering. In a motor BCI, electrical recordings from the motor cortex of paralysed humans are decoded by a computer and used to drive robotic arms or to restore movement in a paralysed hand by stimulating the muscles in the forearm. Simultaneously integrating a BCI with the sensory cortex will further enhance dexterity and fine control. BCIs are also being developed to: provide ambulation for paraplegic patients through controlling robotic exoskeletons; restore vision in people with acquired blindness; detect and control epileptic seizures; and improve control of movement disorders and memory enhancement. High-fidelity connectivity with small groups of neurons requires microelectrode placement in the cerebral cortex. Electrodes placed on the cortical surface are less invasive but produce inferior fidelity. Scalp surface recording using electroencephalography is much less precise. BCI technology is still in an early phase of development and awaits further technical improvements and larger multicentre clinical trials before wider clinical application and impact on the care of people with disabilities. There are also many ethical challenges to explore as this technology evolves.

  18. Anthropomorphic Telemanipulation System in Terminus Control Mode

    NASA Technical Reports Server (NTRS)

    Jau, Bruno M.; Lewis, M. Anthony; Bejczy, Antal K.

    1994-01-01

    This paper describes a prototype anthropomorphic kinesthetic telepresence system that is being developed at JPL. It utilizes dexterous terminus devices in the form of an exoskeleton force-sensing master glove worn by the operator and a replica four finger anthropomorphic slave hand. The newly developed master glove is integrated with our previously developed non-anthropomorphic six degree of freedom (DOF) universal force-reflecting hand controller (FRHC). The mechanical hand and forearm are mounted to an industrial robot (PUMA 560), replacing its standard forearm. The notion of 'terminus control mode' refers to the fact that only the terminus devices (glove and robot hand) are of anthropomorphic nature, and the master and slave arms are non-anthropomorphic. The system is currently being evaluated, focusing on tool handling and astronaut equivalent task executions. The evaluation revealed the system's potential for tool handling but it also became evident that hand tool manipulations and space operations require a dual arm robot. This paper describes the system's principal components, its control and computing architecture, discusses findings of the tool handling evaluation, and explains why common tool handling and EVA space tasks require dual arm robots.

  19. Vision robot with rotational camera for searching ID tags

    NASA Astrophysics Data System (ADS)

    Kimura, Nobutaka; Moriya, Toshio

    2008-02-01

    We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
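
    The blur-compensation relation described above can be sketched directly; the expression below is a standard geometric derivation under stated assumptions (perpendicular shelf distance d, lateral target offset delta_x), not necessarily the exact formula used by the authors:

        # Pan the camera so the line of sight to the barcode stays on the target
        # while the robot translates at velocity v past a shelf at distance d.
        def pan_rate(v, d, delta_x):
            """Angular velocity (rad/s) keeping a target at lateral offset delta_x
            and perpendicular distance d centred while the robot moves at v (m/s)."""
            # theta = atan(delta_x / d)  ->  d(theta)/dt = -v * d / (d**2 + delta_x**2)
            return -v * d / (d ** 2 + delta_x ** 2)

        # Looking straight at the shelf (delta_x = 0) this reduces to omega = -v / d.
        print(pan_rate(v=0.5, d=1.2, delta_x=0.0))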

  20. Prejudice, segregation and immigration laws —integration of the robot into the laboratory society

    PubMed Central

    Fraley, Jr., Norman E.

    1994-01-01

    This paper addresses some serious issues about personnel morale, fears and hopes associated with and attributed to the laboratory robot. The introduction of the robot into the laboratory is examined from a managerial perspective. Human-rights and robot-rights issues are identified and addressed. Real-world examples of how the integration of two high-throughput robots affected the routine of a major industrial food laboratory are discussed. PMID:18925001

  1. Augmented reality to the rescue of the minimally invasive surgeon. The usefulness of the interposition of stereoscopic images in the Da Vinci™ robotic console.

    PubMed

    Volonté, Francesco; Buchs, Nicolas C; Pugin, François; Spaltenstein, Joël; Schiltz, Boris; Jung, Minoa; Hagen, Monika; Ratib, Osman; Morel, Philippe

    2013-09-01

    Computerized management of medical information and 3D imaging has become the norm in everyday medical practice. Surgeons exploit these emerging technologies and bring information previously confined to the radiology rooms into the operating theatre. The paper reports the authors' experience with integrating stereoscopic 3D-rendered images into the da Vinci surgeon console. Volume-rendered images were obtained from a standard computed tomography dataset using the OsiriX DICOM workstation. A custom OsiriX plugin was created that permitted the 3D-rendered images to be displayed in the da Vinci surgeon console and to appear stereoscopic. These rendered images were displayed in the robotic console using the TilePro multi-input display. The upper part of the screen shows the real endoscopic surgical field and the bottom shows the stereoscopic 3D-rendered images. These are controlled by a 3D joystick installed on the console and are updated in real time. Five patients underwent a robotic augmented reality-enhanced procedure. The surgeon was able to switch between the classical endoscopic view and a combined virtual view during the procedure. Subjectively, the addition of the rendered images was considered an undeniable help during the dissection phase. With the rapid evolution of robotics, computer-aided surgery is receiving increasing interest. This paper details the authors' experience with 3D-rendered images projected inside the surgical console. The use of this intra-operative mixed reality technology was considered very useful by the surgeon and represents a step toward computer-aided surgery, a field expected to progress very quickly over the next few years. Copyright © 2012 John Wiley & Sons, Ltd.

  2. Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spelt, P.F.; Harvey, H.W.

    1998-08-01

    This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.

  3. Current state of computer navigation and robotics in unicompartmental and total knee arthroplasty: a systematic review with meta-analysis.

    PubMed

    van der List, Jelle P; Chawla, Harshvardhan; Joskowicz, Leo; Pearle, Andrew D

    2016-11-01

    Recently, there has been growing interest in surgical variables that are intraoperatively controlled by orthopaedic surgeons, including lower leg alignment, component positioning and soft tissue balancing. Since tighter control over these factors is associated with improved outcomes of unicompartmental knee arthroplasty and total knee arthroplasty (TKA), several computer navigation and robotic-assisted systems have been developed. Although mechanical axis accuracy and component positioning have been shown to improve with computer navigation, no superiority in functional outcomes has yet been shown. This could be explained by the fact that many differences exist between the number and type of surgical variables these systems control. Most systems control lower leg alignment and component positioning, while some in addition control soft tissue balancing. Finally, robotic-assisted systems have the additional advantage of improving surgical precision. A systematic search in PubMed, Embase and Cochrane Library resulted in 40 comparative studies and three registries on computer navigation reporting outcomes of 474,197 patients, and 21 basic science and clinical studies on robotic-assisted knee arthroplasty. Twenty-eight of these comparative computer navigation studies reported Knee Society Total scores in 3504 patients. Stratifying by type of surgical variables, no significant differences were noted in outcomes between computer-navigated TKA controlling for alignment and component positioning versus conventional TKA (p = 0.63). However, significantly better outcomes were noted following computer-navigated TKA that also controlled for soft tissue balancing versus conventional TKA (mean difference 4.84, 95% Confidence Interval 1.61-8.07, p = 0.003). A literature review of robotic systems showed that these systems can, similarly to computer navigation, reliably improve lower leg alignment, component positioning and soft tissue balancing. Furthermore, two studies comparing robotic-assisted with computer-navigated surgery reported superiority of robotic-assisted surgery in controlling these factors. Manually controlling all these surgical variables can be difficult for the orthopaedic surgeon. Findings in this study suggest that computer navigation or robotic assistance may help manage these multiple variables and could improve outcomes. Future studies assessing the role of soft tissue balancing in knee arthroplasty and long-term follow-up studies assessing the role of computer-navigated and robotic-assisted knee arthroplasty are needed.

  4. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated in the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps that represent how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities, that is, audio information, visual information and the robot's motor status, are considered, whereas previous research has not considered all of them. Firstly, we introduce a 2-D density map on which the value denotes how much the robot pays attention to each spatial location. Then we model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot can direct its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model considering motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
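
    A minimal numerical sketch of the fusion idea (the Bayesian-network density and neuron model of the paper are not reproduced; the grid size, weights, and the rightward motion bias below are illustrative assumptions):

        # Fuse audio and visual saliency maps, weight them by a motion-dependent
        # attention density, and feed the result to leaky integrate-and-fire units;
        # the first unit to fire marks the attended location.
        import numpy as np

        H, W = 20, 30
        visual = np.zeros((H, W)); visual[10, 5] = 1.0   # salient object on the left
        audio = np.zeros((H, W)); audio[:, 25] = 1.0     # sound source on the right

        # Stand-in attention density: the robot is turning right, so attention is
        # biased toward the right of the field.
        density = np.tile(np.linspace(0.2, 1.0, W), (H, 1))

        drive = density * (0.5 * visual + 0.5 * audio)   # integrated input current

        v, tau, threshold = np.zeros((H, W)), 10.0, 0.4  # leaky integrate-and-fire
        for t in range(200):
            v += (-v + drive) / tau
            fired = np.argwhere(v >= threshold)
            if len(fired):
                print("attend (row, col):", tuple(int(i) for i in fired[0]), "at step", t)
                break

    With these numbers the sound source wins over the visual spot because the motion bias and the audio map reinforce each other, which is the kind of interplay the abstract describes.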

  5. Coordinated Research in Robotics and Integrated Manufacturing.

    DTIC Science & Technology

    1983-07-31

    of three research divisions: Robot Systems, Management Systems, and Integrated Design and Manufacturing, and involves about 40 faculty spanning the...keystone of their program. A relatively smaller level of effort is being supported within the Management Systems Division. This is the first annual... [remainder of the record is residue of an organizational diagram listing topics such as design databases, robot-based manufacturing, human factors, CAD cells, production planning, and integration via local networks]

  6. On Integral Invariants for Effective 3-D Motion Trajectory Matching and Recognition.

    PubMed

    Shao, Zhanpeng; Li, Youfu

    2016-02-01

    Motion trajectories tracked from the motions of humans, robots, and moving objects can provide an important clue for motion analysis, classification, and recognition. This paper defines some new integral invariants for a 3-D motion trajectory. Based on two typical kernel functions, we design two integral invariants, the distance and area integral invariants. The area integral invariants are estimated based on the blurred segment of a noisy discrete curve to avoid the computation of high-order derivatives. Such integral invariants for a motion trajectory enjoy some desirable properties, such as computational locality, uniqueness of representation, and noise insensitivity. Moreover, our formulation allows the analysis of motion trajectories at a range of scales by varying the scale of the kernel function. The features of motion trajectories can thus be perceived at multiscale levels in a coarse-to-fine manner. Finally, we define a distance function to measure trajectory similarity and use it to find similar trajectories. Through the experiments, we examine the robustness and effectiveness of the proposed integral invariants and find that they can capture the motion cues in trajectory matching and sign recognition satisfactorily.
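
    As a rough illustration of what a distance-type integral invariant looks like (the paper's exact kernels and estimators are not reproduced; the Gaussian kernel and its scale below are assumptions), one can compute, for each trajectory point, a kernel-weighted sum of pairwise distances; because pairwise distances are preserved by rigid motions, the resulting signature is rotation- and translation-invariant:

        import numpy as np

        def distance_integral_invariant(traj, sigma=5.0):
            """traj: (N, 3) array of 3-D trajectory samples; returns an (N,) signature."""
            n = len(traj)
            idx = np.arange(n)
            kernel = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
            dists = np.linalg.norm(traj[:, None, :] - traj[None, :, :], axis=-1)
            return (kernel * dists).sum(axis=1) / kernel.sum(axis=1)

        # Sanity check: a rigidly transformed copy of the trajectory yields the
        # same signature.
        t = np.linspace(0, 2 * np.pi, 100)
        traj = np.c_[np.cos(t), np.sin(2 * t), t]
        R, _ = np.linalg.qr(np.random.randn(3, 3))       # random orthogonal matrix
        moved = traj @ R.T + np.array([1.0, -2.0, 3.0])
        print(np.allclose(distance_integral_invariant(traj),
                          distance_integral_invariant(moved)))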

  7. Human-Robot Interaction in High Vulnerability Domains

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2016-01-01

    Future NASA missions will require successful integration of the human with highly complex systems. Highly complex systems are likely to involve humans, automation, and some level of robotic assistance. These complex environments will require successful integration of the human with automation, with robots, and with human-automation-robot teams to accomplish mission-critical goals. Many challenges exist for the human performing in these types of operational environments with these kinds of systems. Systems must be designed to optimally integrate various levels of inputs and outputs based on the roles and responsibilities of the human, the automation, and the robots, from direct manual control, to shared human-robotic control, to no active human control (i.e., human supervisory control). It is assumed that the human will remain involved at some level. Technologies that vary based on contextual demands and on operator characteristics (workload, situation awareness) will be needed when the human integrates into these systems. Predictive models that estimate the impact of the technologies on system performance and on the human operator are also needed to meet the challenges associated with such future complex human-automation-robot systems in extreme environments.

  8. Unification and Enhancement of Planetary Robotic Vision Ground Processing: The EC FP7 Project PRoVisG

    NASA Astrophysics Data System (ADS)

    Paar, G.

    2009-04-01

    At present, mainly the US has realized planetary space missions with an essential robotics background. Joining institutions, companies and universities from different established groups in Europe and two relevant players from the US, the EC FP7 Project PRoVisG started in autumn 2008 to demonstrate the European ability to realize high-level processing of robotic vision image products from the surface of planetary bodies. PRoVisG will build a unified European framework for Robotic Vision Ground Processing. State-of-the-art computer vision technology will be collected inside and outside Europe to better exploit the image data gathered during past, present and future robotic space missions to the Moon and the planets. This will lead to a significant enhancement of the scientific, technological and educational outcome of such missions. We report on the main PRoVisG objectives and the development status: - Past, present and future planetary robotic mission profiles are analysed in terms of existing solutions and requirements for vision processing. - The generic processing chain is based on unified vision sensor descriptions and processing interfaces. Processing components available at the PRoVisG Consortium Partners will be completed by and combined with modules collected within the international computer vision community in the form of Announcements of Opportunity (AOs). - A Web GIS is being developed to integrate the processing results obtained with data from planetary surfaces into the global planetary context. - Towards the end of the 39-month project period, PRoVisG will address the public by means of a final robotic field test in representative terrain. European taxpayers will be able to monitor the imaging and vision processing in a Mars-like environment, thus getting an insight into the complexity and methods of processing, the potential and decision making of scientific exploitation of such data, and not least the elegance and beauty of the resulting image products and their visualization. - The educational aspect is addressed by two summer schools towards the end of the project, presenting robotic vision to the students who are the future providers of European science and technology, inside and outside the space domain.

  9. Proceedings from an International Conference on Computers and Philosophy, i-C&P 2006 held 3-5 May 2006 in Laval, France

    DTIC Science & Technology

    2008-10-20

    embedded intelligence and cultural adaptations to the onslaught of robots in society. This volume constitutes a key contribution to the body of... Robotics, CNRS/Toulouse University, France; Nathalie COLINEAU, Language & Multi-modality, CSIRO, Australia; Roberto CORDESCHI, Computation & Communication...Intelligence, SONY CSL - Paris; Nik KASABOV, Computer and Information Sciences, Auckland University, New Zealand; Oussama KHATIB, Robotics & Artificial

  10. Going Green Robots

    ERIC Educational Resources Information Center

    Nelson, Jacqueline M.

    2011-01-01

    In looking at the interesting shapes and sizes of old computer parts, creating robots quickly came to the author's mind. In this article, she describes how computer parts can be used creatively. Students will surely enjoy creating their very own robots while learning about the importance of recycling in society. (Contains 1 online resource.)

  11. Sensor Control of Robot Arc Welding

    NASA Technical Reports Server (NTRS)

    Sias, F. R., Jr.

    1983-01-01

    The potential for using computer vision as sensory feedback for robot gas-tungsten arc welding is investigated. The basic parameters that must be controlled while directing the movement of an arc welding torch are defined. The actions of a human welder are examined to aid in determining the sensory information that would permit a robot to make reproducible high strength welds. Special constraints imposed by both robot hardware and software are considered. Several sensory modalities that would potentially improve weld quality are examined. Special emphasis is directed to the use of computer vision for controlling gas-tungsten arc welding. Vendors of available automated seam tracking arc welding systems and of computer vision systems are surveyed. An assessment is made of the state of the art and the problems that must be solved in order to apply computer vision to robot controlled arc welding on the Space Shuttle Main Engine.

  12. Integrating obstacle avoidance, global path planning, visual cue detection, and landmark triangulation in a mobile robot

    NASA Astrophysics Data System (ADS)

    Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.

    1993-05-01

    This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
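
    The record does not detail how the competition robot triangulated its position; a minimal sketch of the underlying geometry, assuming two landmarks at known map positions and global bearing measurements to each, is:

        # Intersect the two sight lines from the robot to the landmarks to recover
        # the robot's position.
        import numpy as np

        def triangulate(l1, l2, bearing1, bearing2):
            """l1, l2: known 2-D landmark positions; bearing1/2: global bearings
            (radians) from the robot to each landmark. Returns the robot position."""
            d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
            d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
            # p + r1*d1 = l1 and p + r2*d2 = l2  =>  r1*d1 - r2*d2 = l1 - l2
            A = np.column_stack([d1, -d2])
            r1, _ = np.linalg.solve(A, np.asarray(l1) - np.asarray(l2))
            return np.asarray(l1) - r1 * d1

        # A robot at (5, 5) sees landmark (0, 10) at 135 deg and (10, 10) at 45 deg.
        print(triangulate((0.0, 10.0), (10.0, 10.0),
                          np.deg2rad(135), np.deg2rad(45)))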

  13. Effect of Robotics on Elementary Preservice Teachers' Self-Efficacy, Science Learning, and Computational Thinking

    NASA Astrophysics Data System (ADS)

    Jaipal-Jamani, Kamini; Angeli, Charoula

    2017-04-01

    The current impetus for increasing STEM in K-12 education calls for an examination of how preservice teachers are being prepared to teach STEM. This paper reports on a study that examined elementary preservice teachers' ( n = 21) self-efficacy, understanding of science concepts, and computational thinking as they engaged with robotics in a science methods course. Data collection methods included pretests and posttests on science content, prequestionnaires and postquestionnaires for interest and self-efficacy, and four programming assignments. Statistical results showed that preservice teachers' interest and self-efficacy with robotics increased. There was a statistically significant difference between preknowledge and postknowledge scores, and preservice teachers did show gains in learning how to write algorithms and debug programs over repeated programming tasks. The findings suggest that the robotics activity was an effective instructional strategy to enhance interest in robotics, increase self-efficacy to teach with robotics, develop understandings of science concepts, and promote the development of computational thinking skills. Study findings contribute quantitative evidence to the STEM literature on how robotics develops preservice teachers' self-efficacy, science knowledge, and computational thinking skills in higher education science classroom contexts.

  14. Robust tuning of robot control systems

    NASA Technical Reports Server (NTRS)

    Minis, I.; Uebel, M.

    1992-01-01

    The computed torque control problem is examined for a robot arm with flexible, geared joint drive systems, which are typical of many industrial robots. The standard computed torque algorithm is not directly applicable to this class of manipulators because of the dynamics introduced by the joint drive system. The proposed approach combines a computed torque algorithm with a torque controller at each joint. Three such control schemes are proposed. The first scheme uses the joint torque control system currently implemented on the robot arm and a novel form of the computed torque algorithm. The other two use the standard computed torque algorithm and a novel model-following torque control system. Standard tasks and performance indices are used to evaluate the performance of the controllers. Both numerical simulations and experiments are used in the evaluation. The study shows that all three proposed systems lead to improved tracking performance over a conventional PD controller.
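
    For reference, the standard computed-torque law that the schemes above build on has the form tau = M(q)(qdd_des + Kd*edot + Kp*e) + C(q, qd)*qd + g(q); a minimal one-joint Python sketch (the inertia, friction and gravity terms and the gains below are illustrative placeholders, not the paper's model of the geared drive):

        import numpy as np

        def M(q):  return np.array([[2.5]])                      # inertia (kg*m^2)
        def C(q, qd): return np.array([[0.1 * qd[0]]])           # velocity-dependent term
        def g(q):  return np.array([9.81 * 1.2 * np.sin(q[0])])  # gravity torque

        Kp, Kd = np.diag([100.0]), np.diag([20.0])               # assumed PD gains

        def computed_torque(q, qd, q_des, qd_des, qdd_des):
            """tau = M(q)(qdd_des + Kd*edot + Kp*e) + C(q, qd)*qd + g(q)"""
            e, edot = q_des - q, qd_des - qd
            return M(q) @ (qdd_des + Kd @ edot + Kp @ e) + C(q, qd) @ qd + g(q)

        print(computed_torque(q=np.array([0.1]), qd=np.array([0.0]),
                              q_des=np.array([0.5]), qd_des=np.array([0.0]),
                              qdd_des=np.array([0.0])))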

  15. [Surgical robotics, short state of the art and prospects].

    PubMed

    Gravez, P

    2003-11-01

    State-of-the-art robotized systems developed for surgery are either remotely controlled manipulators that duplicate gestures made by the surgeon (endoscopic surgery applications) or automated robots that execute trajectories defined relative to pre-operative medical imaging (neurosurgery and orthopaedic surgery). This generation of systems primarily applies existing robotics technologies (remote handling systems and so-called "industrial robots") to current surgical practices. It has helped validate the huge potential of surgical robotics, but it suffers from several drawbacks, mainly high costs, excessive dimensions and some lack of user-friendliness. Nevertheless, technological progress lets us anticipate the appearance in the near future of miniaturised surgical robots able to assist the gesture of the surgeon and to enhance his perception of the operation at hand. Thanks to many in-the-body articulated links, these systems will have the capability to perform complex minimally invasive gestures without obstructing the operating theatre. They will also combine the facility of manual piloting with the accuracy and increased safety of computer control, guiding the gestures of the surgeon without impinging on his freedom of action. Lastly, they will allow the surgeon to feel the mechanical properties of the tissues he is operating on through a genuine "remote palpation" function. Most probably, such technological evolutions will lead the way to redesigned surgical procedures taking place inside new operating rooms featuring better integration of all equipment and favouring cooperative work by multidisciplinary and sometimes geographically distributed medical staff.

  16. A path planning method for robot end effector motion using the curvature theory of the ruled surfaces

    NASA Astrophysics Data System (ADS)

    Güler, Fatma; Kasap, Emin

    Using the curvature theory of ruled surfaces, a technique for robot trajectory planning is presented. This technique enables the calculation of the robot's next path. The positional variation of the Tool Center Point (TCP), the linear velocity and the angular velocity are required in the work area of the robot. In some circumstances a planned path may not be physically achievable, and a re-computation of the robot trajectory might be necessary; this technique is suitable for such re-computation. We obtain different robot trajectories, which change depending on the Darboux angle function, and define a family of trajectory ruled surfaces with a common trajectory curve using the rotation trihedron. The motion of the robot end effector is also illustrated with examples.
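
    A trajectory ruled surface of the kind used above can be written as X(t, v) = alpha(t) + v*r(t), where alpha(t) is the TCP path and r(t) the ruling (tool-axis) direction; the short sketch below generates such a surface and the TCP linear velocity for an assumed helical path (the paper's Darboux-frame construction is not reproduced):

        import numpy as np

        t = np.linspace(0.0, 2 * np.pi, 200)
        alpha = np.c_[np.cos(t), np.sin(t), 0.2 * t]                    # TCP trajectory curve
        r = np.c_[np.zeros_like(t), np.zeros_like(t), np.ones_like(t)]  # ruling direction

        v = np.linspace(0.0, 0.1, 10)                                   # ruling parameter
        surface = alpha[:, None, :] + v[None, :, None] * r[:, None, :]  # X(t, v), shape (200, 10, 3)

        tcp_velocity = np.gradient(alpha, t, axis=0)                    # linear velocity of the TCP
        print(surface.shape, np.linalg.norm(tcp_velocity, axis=1).mean())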

  17. Task-specific ankle robotics gait training after stroke: a randomized pilot study.

    PubMed

    Forrester, Larry W; Roy, Anindo; Hafer-Macko, Charlene; Krebs, Hermano I; Macko, Richard F

    2016-06-02

    An unsettled question in the use of robotics for post-stroke gait rehabilitation is whether task-specific locomotor training is more effective than targeting individual joint impairments to improve walking function. The paretic ankle is implicated in gait instability and fall risk, but is difficult to therapeutically isolate and refractory to recovery. We hypothesize that in chronic stroke, treadmill-integrated ankle robotics training is more effective at improving gait function than robotics focused on paretic ankle impairments. Participants with chronic hemiparetic gait were randomized to six weeks of either treadmill-integrated ankle robotics (n = 14) or dose-matched seated ankle robotics (n = 12) videogame training. Selected gait measures were collected at baseline, post-training, and six-week retention. Friedman and Wilcoxon signed-rank tests evaluated within-group differences across time, and Fisher's exact test evaluated between-group differences. Six weeks post-training, treadmill robotics proved more effective than seated robotics at increasing walking velocity, paretic single support, paretic push-off impulse, and active dorsiflexion range of motion. Treadmill robotics durably improved dorsiflexion swing angle during gait, leading 6 of 7 participants who initially required ankle braces to discard them, while their unassisted paretic heel-first contacts increased from 44% to 99.6%; there was no change in assistive device usage (0/9) following seated robotics. Treadmill-integrated, but not seated, ankle robotics training durably improves gait biomechanics, reversing foot drop, restoring walking propulsion, and establishing safer foot landing in chronic stroke, which may reduce reliance on assistive devices. These findings support a task-specific approach integrating adaptive ankle robotics with locomotor training to optimize mobility recovery. NCT01337960. https://clinicaltrials.gov/ct2/show/NCT01337960?term=NCT01337960&rank=1.

  18. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken, in relation to various studies of cognition and robotics, were formulated. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  19. Control Robotics Programming Technology. Technology Learning Activity. Teacher Edition.

    ERIC Educational Resources Information Center

    Oklahoma State Dept. of Vocational and Technical Education, Stillwater. Curriculum and Instructional Materials Center.

    This Technology Learning Activity (TLA) for control robotics programming technology in grades 6-10 is designed to teach students to construct and program computer-controlled devices using a LEGO DACTA set and computer interface and to help them understand how control technology and robotics affect them and their lifestyle. The suggested time for…

  20. Development of an Integrated Robotic Radioisotope Identification and Location System

    DTIC Science & Technology

    2009-05-05

    Development of an Integrated Robotic Radioisotope...system within a robotic base in order to inspect an area for either radioisotopes that could be used for a radiological dispersal device (RDD) or are...classified as Special Nuclear Material (SNM). In operation, at a given location in the room, the robot rotates about its circumference searching for

  1. An efficient formulation of robot arm dynamics for control and computer simulation

    NASA Astrophysics Data System (ADS)

    Lee, C. S. G.; Nigam, R.

    This paper describes an efficient formulation of the dynamic equations of motion of industrial robots based on the Lagrange formulation of d'Alembert's principle. This formulation, as applied to a PUMA robot arm, results in a set of closed form second order differential equations with cross product terms. They are not as efficient in computation as those formulated by the Newton-Euler method, but provide a better analytical model for control analysis and computer simulation. Computational complexities of this dynamic model together with other models are tabulated for discussion.

  2. Precise computer controlled positioning of robot end effectors using force sensors

    NASA Technical Reports Server (NTRS)

    Shieh, L. S.; Mcinnis, B. C.; Wang, J. C.

    1988-01-01

    A thorough study of combined position/force control using sensory feedback was performed for a one-dimensional manipulator model, which may account for the spacecraft docking problem or be extended to the multi-joint robot manipulator problem. The additional degree of freedom introduced by the compliant force sensor is included in the system dynamics in the design of precise position control. State feedback based on the pole placement method, with integral control, is used to design the position controller. A simple constant-gain force controller is used as an example to illustrate the dependence of the stability and steady-state accuracy of the overall position/force control upon the design of the inner position controller. Supportive simulation results are also provided.
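
    The pole-placement-with-integral-control idea can be sketched on a stand-in plant (a single mass; the paper's compliant-sensor model adds a further state, and the mass, poles, and step size below are assumptions):

        import numpy as np
        from scipy.signal import place_poles

        m = 2.0                                   # mass of the 1-D "manipulator" (kg)
        # Augmented state: [position, velocity, integral of position error].
        A = np.array([[0.0, 1.0, 0.0],
                      [0.0, 0.0, 0.0],
                      [-1.0, 0.0, 0.0]])          # third row: z_dot = -x (reference r added below)
        B = np.array([[0.0], [1.0 / m], [0.0]])

        K = place_poles(A, B, [-2.0, -3.0, -4.0]).gain_matrix   # (1, 3) feedback gains

        def control(x, v, z):
            """u = -K [x, v, z]; z accumulates the integral of (r - x)."""
            return float(-K @ np.array([x, v, z]))

        # One Euler step of the closed loop as a usage example.
        x, v, z, r, dt = 0.0, 0.0, 0.0, 1.0, 0.01
        u = control(x, v, z)
        x, v, z = x + dt * v, v + dt * (u / m), z + dt * (r - x)
        print(K, u)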

  3. The Emergence of Compositional Communication in a Synthetic Ethology Framework

    DTIC Science & Technology

    2005-08-12

    "Integrating Language and Cognition: A Cognitive Robotics Approach", invited contribution to IEEE Computational Intelligence Magazine. The first two...papers address the main topic of investigation of the research proposal. In particular, we have introduced a simple structured meaning-signal mapping...Cavalli-Sforza (1982) to investigate analytically the evolution of structured communication codes. Let x ∈ [0,1] be the proportion of individuals in a

  4. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  5. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is susceptible to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  6. Metalevel programming in robotics: Some issues

    NASA Technical Reports Server (NTRS)

    Kumarn, A.; Parameswaran, N.

    1987-01-01

    Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are implemented usually in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, researchers discuss a framework in which controls can be described to improve efficiency. They have divided controls into: (1) in-code and (2) metalevel and discussed them with reference to selection of rules and dataflow. Researchers illustrated the merit of Logic Programming by modelling the motion of a robot from one point to another avoiding obstacles.

  7. Virtual- and real-world operation of mobile robotic manipulators: integrated simulation, visualization, and control environment

    NASA Astrophysics Data System (ADS)

    Chen, ChuXin; Trivedi, Mohan M.

    1992-03-01

    This research is focused on enhancing the overall productivity of an integrated human-robot system. A simulation, animation, visualization, and interactive control (SAVIC) environment has been developed for the design and operation of an integrated robotic manipulator system. This unique system possesses the abilities for multisensor simulation, kinematics and locomotion animation, dynamic motion and manipulation animation, transformation between real and virtual modes within the same graphics system, ease in exchanging software modules and hardware devices between real and virtual world operations, and interfacing with a real robotic system. This paper describes a working system and illustrates the concepts by presenting the simulation, animation, and control methodologies for a unique mobile robot with articulated tracks, a manipulator, and sensory modules.

  8. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1992-01-01

    The theory of intelligent machines proposes a hierarchical organization for the functions of an autonomous robot based on the principle of increasing precision with decreasing intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed. The authors present a computer architecture that implements the lower two levels of the intelligent machine. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Execution-level controllers for motion and vision systems are briefly addressed, as well as the Petri net transducer software used to implement coordination-level functions. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  9. Method of mobile robot indoor navigation by artificial landmarks with use of computer vision

    NASA Astrophysics Data System (ADS)

    Glibin, E. S.; Shevtsov, A. A.; Enik, O. A.

    2018-05-01

    The article describes an algorithm for mobile robot indoor navigation based on the use of visual odometry. The results of an experiment identifying errors in the calculated distance traveled caused by wheel slip are presented. It is shown that the use of computer vision allows one to correct erroneous coordinates of the robot with the help of artificial landmarks. A control system utilizing the proposed method has been realized on the basis of an Arduino Mega 2560 controller and a Raspberry Pi 3 single-board computer. The results of an experiment on mobile robot navigation using this control system are presented.
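
    A compressed sketch of the correction idea (not the authors' implementation; the class, velocities, and landmark values are invented for illustration): dead-reckoned pose drifts with slip, and whenever an artificial landmark with a known map position is recognized, the position estimate is re-anchored from that observation.

        import math

        class OdometryWithLandmarks:
            def __init__(self, x=0.0, y=0.0, theta=0.0):
                self.x, self.y, self.theta = x, y, theta

            def predict(self, v, omega, dt):
                """Dead reckoning from wheel odometry (subject to slip errors)."""
                self.x += v * math.cos(self.theta) * dt
                self.y += v * math.sin(self.theta) * dt
                self.theta += omega * dt

            def correct(self, landmark_xy, rng, bearing):
                """Re-anchor the position using a landmark with a known map position;
                `bearing` is measured in the robot frame, `rng` is the measured range."""
                ang = self.theta + bearing                 # bearing in the map frame
                self.x = landmark_xy[0] - rng * math.cos(ang)
                self.y = landmark_xy[1] - rng * math.sin(ang)

        robot = OdometryWithLandmarks()
        for _ in range(100):
            robot.predict(v=0.22, omega=0.0, dt=0.1)       # slip makes this estimate drift
        robot.correct(landmark_xy=(2.0, 0.0), rng=0.5, bearing=0.0)
        print(robot.x, robot.y, robot.theta)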

  10. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    DTIC Science & Technology

    2016-07-27

    synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces...providing a test environment where the human control of a robot agent can be experimentally validated. Final Report: Brain Computer Interfaces for Enhanced Interactions with Mobile Robot

  11. Robotics development for the enhancement of space endeavors

    NASA Astrophysics Data System (ADS)

    Mauceri, A. J.; Clarke, Margaret M.

    Telerobotics and robotics development activities to support NASA's goal of increasing opportunities in space commercialization and exploration are described. The Rockwell International activities center on using robotics to improve efficiency and safety in three related areas: remote control of autonomous systems, automated nondestructive evaluation of aspects of vehicle integrity, and the use of robotics in space vehicle ground reprocessing operations. In the first area, autonomous robotic control, Rockwell is using the control architecture NASREM as the foundation for the high-level command of robotic tasks. In the second area, we have demonstrated the use of nondestructive evaluation (using acoustic excitation and laser sensors) to evaluate the integrity of space vehicle surface material bonds, using Orbiter 102 as the test case. In the third area, Rockwell is building an automated version of the present manual tool used for Space Shuttle surface tile re-waterproofing. The tool will be integrated into an orbiter processing robot being developed by a KSC-led team.

  12. Determining of a robot workspace using the integration of a CAD system with a virtual control system

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

    The paper presents a method for determining the workspace of an industrial robot using an approach that consists in integrating a 3D model of the robot with a virtual control system. The robot model, together with its work environment and prepared for motion simulation, was created in the “Motion Simulation” module of the Siemens PLM NX software. In this model, components of the “link” type were created to map the geometrical form of particular elements of the robot, and components of the “joint” type to map the way the “link” components cooperate. The paper proposes a solution in which the control process of the virtual robot is similar to the control of a real robot using the manual control panel (teach pendant). For this purpose, the control application “JOINT” was created, which allows manipulation of the virtual robot in accordance with its internal control system. A set of procedures stored in an .xlsx file is the element integrating the 3D robot model working in the CAD/CAE class system with the elaborated control application.
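
    Independently of the CAD/virtual-control integration above, a robot workspace can be approximated by brute-force sampling of the joint ranges through forward kinematics; a planar two-link sketch with assumed link lengths and joint limits (not the industrial robot of the paper):

        import numpy as np

        L1, L2 = 0.7, 0.5                          # illustrative link lengths (m)
        J1 = np.linspace(-np.pi, np.pi, 90)        # joint 1 range (rad)
        J2 = np.linspace(-2.0, 2.0, 60)            # joint 2 range (rad)

        points = []
        for q1 in J1:
            for q2 in J2:
                x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
                y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
                points.append((x, y))

        radii = np.linalg.norm(np.array(points), axis=1)   # reachable TCP radii
        print("workspace annulus: %.2f m to %.2f m" % (radii.min(), radii.max()))

    The resulting point cloud is the kind of workspace estimate that a virtual control system can sweep out by driving the simulated robot through its joint ranges.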

  13. Soft Dielectric Elastomer Oscillators Driving Bioinspired Robots.

    PubMed

    Henke, E-F Markus; Schlatter, Samuel; Anderson, Iain A

    2017-12-01

    Entirely soft robots with animal-like behavior and integrated artificial nervous systems will open up totally new perspectives and applications. To produce them, we must integrate control and actuation in the same soft structure. Soft actuators (e.g., pneumatic and hydraulic) exist but electronics are hard and stiff and remotely located. We present novel soft, electronics-free dielectric elastomer oscillators, which are able to drive bioinspired robots. As a demonstrator, we present a robot that mimics the crawling motion of the caterpillar, with an integrated artificial nervous system, soft actuators and without any conventional stiff electronic parts. Supplied with an external DC voltage, the robot autonomously generates all signals that are necessary to drive its dielectric elastomer actuators, and it translates an in-plane electromechanical oscillation into a crawling locomotion movement. Therefore, all functional and supporting parts are made of polymer materials and carbon. Besides the basic design of this first electronic-free, biomimetic robot, we present prospects to control the general behavior of such robots. The absence of conventional stiff electronics and the exclusive use of polymeric materials will provide a large step toward real animal-like robots, compliant human machine interfaces, and a new class of distributed, neuron-like internal control for robotic systems.

  14. A High School Level Course On Robot Design And Construction

    NASA Astrophysics Data System (ADS)

    Sadler, Paul M.; Crandall, Jack L.

    1984-02-01

    The Robotics Design and Construction Class at Sehome High School was developed to offer gifted and/or highly motivated students an in-depth introduction to a modern engineering topic. The course includes instruction in basic electronics, digital and radio electronics, construction skills, robotics literacy, construction of the HERO 1 Heathkit Robot, computer/ robot programming, and voice synthesis. A key element which leads to the success of the course is the involvement of various community assets including manpower and financial assistance. The instructors included a physics/electronics teacher, a computer science teacher, two retired engineers, and an electronics technician.

  15. Stochastic Evolutionary Algorithms for Planning Robot Paths

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang; Aghazarian, Hrand; Huntsberger, Terrance; Terrile, Richard

    2006-01-01

    A computer program implements stochastic evolutionary algorithms for planning and optimizing collision-free paths for robots and their jointed limbs. Stochastic evolutionary algorithms can be made to produce acceptably close approximations to exact, optimal solutions for path-planning problems while often demanding much less computation than do the exhaustive-search and deterministic inverse-kinematics algorithms that have been used previously for this purpose. Hence, the present software is better suited for application aboard robots having limited computing capabilities. The stochastic aspect lies in the use of simulated annealing to (1) prevent trapping of an optimization algorithm in local minima of an energy-like error measure by which the fitness of a trial solution is evaluated while (2) ensuring that the entire multidimensional configuration and parameter space of the path-planning problem is sampled efficiently with respect to both robot joint angles and computation time. Simulated annealing is an established technique for avoiding local minima in multidimensional optimization problems, but has not, until now, been applied to planning collision-free robot paths by use of low-power computers.
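
    A compact sketch of the simulated-annealing idea on a toy 2-D path (the NASA program itself is not reproduced; the obstacle, penalty weight, and cooling schedule are assumptions):

        import math, random

        random.seed(1)
        start, goal = (0.0, 0.0), (1.0, 0.0)
        obstacle_c, obstacle_r = (0.5, 0.0), 0.2            # circular obstacle
        N = 8                                                # free waypoints

        def energy(waypoints):
            pts = [start] + waypoints + [goal]
            length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
            # Penalize waypoints that fall inside the obstacle.
            penalty = sum(max(0.0, obstacle_r - math.dist(p, obstacle_c)) for p in pts)
            return length + 50.0 * penalty

        path = [(start[0] + (i + 1) / (N + 1), 0.0) for i in range(N)]  # straight-line start
        T = 1.0
        best_e = energy(path)
        while T > 1e-3:
            cand = [(x + random.gauss(0, 0.05), y + random.gauss(0, 0.05)) for x, y in path]
            dE = energy(cand) - energy(path)
            if dE < 0 or random.random() < math.exp(-dE / T):   # Metropolis acceptance
                path = cand
                best_e = min(best_e, energy(path))
            T *= 0.995                                          # cooling schedule
        print("best path energy:", round(best_e, 3))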

  16. Human-Robot Teams for Unknown and Uncertain Environments

    NASA Technical Reports Server (NTRS)

    Fong, Terry

    2015-01-01

    Human-robot interaction is the study of interactions between humans and robots; it is often referred to as HRI by researchers. It is a multidisciplinary field with contributions from human-computer interaction and artificial intelligence.

  17. The Use of Robotics to Promote Computing to Pre-College Students with Visual Impairments

    ERIC Educational Resources Information Center

    Ludi, Stephanie; Reichlmayr, Tom

    2011-01-01

    This article describes an outreach program to broaden participation in computing to include more students with visual impairments. The precollege workshops target students in grades 7-12 and engage students with robotics programming. The use of robotics at the precollege level has become popular in part due to the availability of Lego Mindstorm…

  18. Neural-Dynamic-Method-Based Dual-Arm CMG Scheme With Time-Varying Constraints Applied to Humanoid Robots.

    PubMed

    Zhang, Zhijun; Li, Zhijun; Zhang, Yunong; Luo, Yamei; Li, Yuanqing

    2015-12-01

    We propose a dual-arm cyclic-motion-generation (DACMG) scheme by a neural-dynamic method, which can remedy the joint-angle-drift phenomenon of a humanoid robot. In particular, according to a neural-dynamic design method, first, a cyclic-motion performance index is exploited and applied. This cyclic-motion performance index is then integrated into a quadratic programming (QP)-type scheme with time-varying constraints, called the time-varying-constrained DACMG (TVC-DACMG) scheme. The scheme includes the kinematic motion equations of two arms and the time-varying joint limits. The scheme can not only generate the cyclic motion of two arms for a humanoid robot but also control the arms to move to the desired position. In addition, the scheme considers the physical limit avoidance. To solve the QP problem, a recurrent neural network is presented and used to obtain the optimal solutions. Computer simulations and physical experiments demonstrate the effectiveness and the accuracy of such a TVC-DACMG scheme and the neural network solver.

  19. Nurses' Needs for Care Robots in Integrated Nursing Care Services.

    PubMed

    Lee, Jai-Yon; Song, Young Ae; Jung, Ji Young; Kim, Hyun Jeong; Kim, Bo Ram; Do, Hyun-Kyung; Lim, Jae-Young

    2018-05-13

    To determine the need for care robots among nurses and to suggest how robotic care should be prioritized in integrated nursing care services. Korea is expected to be a super-aged society by 2030. To solve care issues with elderly inpatients caused by reliance on informal caregivers, the government introduced 'integrated nursing care services'; these are comprehensive care systems staffed by professionally trained nurses. To assist them, a care robot development project has been launched. The study applied a cross-sectional survey. In 2016, we conducted a multi-center survey involving 302 registered nurses in five hospitals, including three tertiary and two secondary hospitals, in Korea. The questionnaire consisted of general characteristics of nurses and their views on, and extent of agreement about, issues associated with robotic care. Trial center nurses and those with ≥10 years of experience reported positively on the prospects for robotic care. The top three desired primary roles for care robots were 'measuring/monitoring', 'mobility/activity' and 'safety care'. 'Reduction in workload', especially in terms of 'other nursing services', which were categorized as non-value-added nursing activities, was the most valued feature. The nurses approved of the aid by care robots but were concerned about device malfunction and interruption of rapport with patients. Care robots are expected to be effective in integrated nursing care services, particularly in 'measuring/monitoring'. Such robots should decrease nurses' workload and minimize non-value-added nursing activities efficiently. No matter how excellent care robots are, they must co-operate with and be controlled by nurses. This article is protected by copyright. All rights reserved.

  20. Mamdani Fuzzy System for Indoor Autonomous Mobile Robot

    NASA Astrophysics Data System (ADS)

    Khan, M. K. A. Ahamed; Rashid, Razif; Elamvazuthi, I.

    2011-06-01

    Several control algorithms for autonomous mobile robot navigation have been proposed in the literature. Recently, the employment of non-analytical methods of computing, such as fuzzy logic, evolutionary computation, and neural networks, has demonstrated the utility and potential of these paradigms for intelligent control of mobile robot navigation. In this paper, a Mamdani fuzzy system for an autonomous mobile robot is developed. The paper begins with a discussion of the conventional controller, followed by a detailed description of the fuzzy logic controller.
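
    A one-input, one-output Mamdani sketch (the paper's actual rule base, membership functions, and navigation inputs are not given here; the distances and speeds below are assumptions):

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership with feet a, c and peak b (a < b < c)."""
            return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

        speed = np.linspace(0.0, 1.0, 201)                  # output universe (m/s)
        slow = tri(speed, -0.01, 0.0, 0.5)                  # "slow" output set
        fast = tri(speed, 0.5, 1.0, 1.01)                   # "fast" output set

        def mamdani_speed(distance):
            """Map an obstacle distance (m) to a crisp speed command (m/s)."""
            near = tri(np.array(distance), -0.01, 0.0, 1.5)    # fuzzify the input
            far = tri(np.array(distance), 0.5, 2.0, 2.01)
            # Rules: IF near THEN slow; IF far THEN fast (min implication, max aggregation).
            agg = np.maximum(np.minimum(near, slow), np.minimum(far, fast))
            return float((speed * agg).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

        for d in (0.2, 1.0, 1.8):
            print(f"obstacle at {d:.1f} m -> speed {mamdani_speed(d):.2f} m/s")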

  1. Integrating sensorimotor systems in a robot model of cricket behavior

    NASA Astrophysics Data System (ADS)

    Webb, Barbara H.; Harrison, Reid R.

    2000-10-01

    The mechanisms by which animals manage sensorimotor integration and coordination of different behaviors can be investigated in robot models. In previous work the first author has built a robot that localizes sound based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound-localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.

  2. Morphological computation of multi-gaited robot locomotion based on free vibration.

    PubMed

    Reis, Murat; Yu, Xiaoxiang; Maheshwari, Nandan; Iida, Fumiya

    2013-01-01

    In recent years, there has been increasing interest in the study of gait patterns in both animals and robots, because it allows us to systematically investigate the underlying mechanisms of energetics, dexterity, and autonomy of adaptive systems. In particular, for morphological computation research, the control of dynamic legged robots and their gait transitions provides additional insights into the guiding principles from a synthetic viewpoint for the emergence of sensible self-organizing behaviors in more-degrees-of-freedom systems. This article presents a novel approach to the study of gait patterns, which makes use of the intrinsic mechanical dynamics of robotic systems. Each of the robots consists of a U-shaped elastic beam and exploits free vibration to generate different locomotion patterns. We developed a simplified physics model of these robots, and through experiments in simulation and real-world robotic platforms, we show three distinctive mechanisms for generating different gait patterns in these robots.

  3. Building adaptive connectionist-based controllers: review of experiments in human-robot interaction, collective robotics, and computational neuroscience

    NASA Astrophysics Data System (ADS)

    Billard, Aude

    2000-10-01

    This paper summarizes a number of experiments in biologically inspired robotics. The common feature of all experiments is the use of artificial neural networks as the building blocks for the controllers. The experiments speak in favor of using a connectionist approach for designing adaptive and flexible robot controllers, and for modeling neurological processes. I present 1) DRAMA, a novel connectionist architecture with general properties for learning time series and extracting spatio-temporal regularities in multi-modal and highly noisy data; 2) Robota, a doll-shaped robot, which imitates and learns a proto-language; 3) an experiment in collective robotics, where a group of 4 to 15 Khepera robots dynamically learn the topography of an environment whose features change frequently; 4) an abstract, computational model of the primate ability to learn by imitation; and 5) a model for the control of locomotor gaits in a quadruped legged robot.

  4. Put Your Robot In, Put Your Robot Out: Sequencing through Programming Robots in Early Childhood

    ERIC Educational Resources Information Center

    Kazakoff, Elizabeth R.; Bers, Marina Umaschi

    2014-01-01

    This article examines the impact of programming robots on sequencing ability in early childhood. Thirty-four children (ages 4.5-6.5 years) participated in computer programming activities with a developmentally appropriate tool, CHERP, specifically designed to program a robot's behaviors. The children learned to build and program robots over three…

  5. A novel modification of the Turing test for artificial intelligence and robotics in healthcare.

    PubMed

    Ashrafian, Hutan; Darzi, Ara; Athanasiou, Thanos

    2015-03-01

    The increasing demands of delivering higher-quality global healthcare have resulted in a corresponding expansion in the development of computer-based and robotic healthcare tools that rely on artificially intelligent technologies. The Turing test was designed to assess artificial intelligence (AI) in computer technology. It remains an important qualitative tool for testing the next generation of medical diagnostics and medical robotics. The authors develop quantifiable, meta-analytical techniques for evaluating diagnostic accuracy within the Turing test paradigm, and modify the Turing test to offer quantifiable diagnostic precision and statistical effect-size robustness in the assessment of AI for computer-based and robotic healthcare technologies. Modification of the Turing test to offer robust diagnostic scores for AI can contribute to enhancing and refining the next generation of digital diagnostic technologies and healthcare robotics. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Applied estimation for hybrid dynamical systems using perceptional information

    NASA Astrophysics Data System (ADS)

    Plotnik, Aaron M.

    This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
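
    A rough sketch of the core idea above (fusing an uncertain, discrete mode observation with continuous sensor data when tracking a hybrid system) is given below. It is not the dissertation's mode-observed Gaussian Particle Filter; it is a plain particle filter over a hybrid state in which both a noisy position measurement and an imperfect visual mode cue reweight the particles. The two-mode motion model, the classifier confusion matrix, and all parameters are invented for illustration.

```python
# Hedged sketch: particle filter over a hybrid state (discrete mode, 1-D position)
# in which an uncertain, discrete mode observation reweights particles alongside
# a noisy continuous measurement. All models below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 500                                   # number of particles
MODE_VEL = {0: 0.0, 1: 1.0}               # mode 0 = drifting, mode 1 = swimming
P_SWITCH = 0.05                           # per-step mode transition probability
CONFUSION = np.array([[0.8, 0.2],         # p(observed mode | true mode) for an
                      [0.3, 0.7]])        # imperfect visual mode classifier

modes = rng.integers(0, 2, N)             # particle modes
xs = rng.normal(0.0, 1.0, N)              # particle positions
ws = np.full(N, 1.0 / N)                  # particle weights

def step(z_pos, z_mode, dt=1.0, meas_sigma=0.5):
    """One predict/update cycle given a position measurement and a mode cue."""
    # Predict: particles may switch mode, then move according to their mode.
    switch = rng.random(N) < P_SWITCH
    modes[switch] = 1 - modes[switch]
    vel = np.where(modes == 1, MODE_VEL[1], MODE_VEL[0])
    xs[:] = xs + vel * dt + rng.normal(0.0, 0.1, N)
    # Update: continuous likelihood (Gaussian) times discrete mode likelihood.
    ws[:] = (ws * np.exp(-0.5 * ((z_pos - xs) / meas_sigma) ** 2)
                * CONFUSION[modes, z_mode])
    ws[:] /= ws.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(ws ** 2) < N / 2:
        idx = rng.choice(N, N, p=ws)
        modes[:], xs[:], ws[:] = modes[idx], xs[idx], 1.0 / N
    return float(xs @ ws), float(ws @ (modes == 1))   # position estimate, P(mode = 1)

print(step(z_pos=0.8, z_mode=1))
```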

  7. A brain-controlled lower-limb exoskeleton for human gait training.

    PubMed

    Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua

    2017-10-01

    Brain-computer interfaces offer a novel approach to translating human intentions into movement commands for robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with the human in the loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton where subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results showed the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.

  8. Operant conditioning: a minimal components requirement in artificial spiking neurons designed for bio-inspired robot's controller

    PubMed Central

    Cyr, André; Boukadoum, Mounir; Thériault, Frédéric

    2014-01-01

    In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and from recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and its originality stems from the integration of habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors. PMID:25120464

  9. A brain-controlled lower-limb exoskeleton for human gait training

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Chen, Weihai; Pei, Zhongcai; Wang, Jianhua

    2017-10-01

    Brain-computer interfaces offer a novel approach to translating human intentions into movement commands for robotic systems. This paper describes an electroencephalogram-based brain-controlled lower-limb exoskeleton for gait training, as a proof of concept towards rehabilitation with the human in the loop. Instead of using conventional single electroencephalography correlates, e.g., evoked P300 or spontaneous motor imagery, we propose a novel framework integrating two asynchronous signal modalities, i.e., sensorimotor rhythms (SMRs) and movement-related cortical potentials (MRCPs). We executed experiments in a biologically inspired and customized lower-limb exoskeleton where subjects (N = 6) actively controlled the robot using their brain signals. Each subject performed three consecutive sessions composed of offline training, online visual feedback testing, and online robot-control recordings. Post hoc evaluations were conducted, including mental workload assessment, feature analysis, and statistical tests. An average robot-control accuracy of 80.16% ± 5.44% was obtained with the SMR-based method, while estimation using the MRCP-based method yielded an average performance of 68.62% ± 8.55%. The experimental results showed the feasibility of the proposed framework, with all subjects successfully controlling the exoskeleton. The current paradigm could be further extended to paraplegic patients in clinical trials.

  10. Operant conditioning: a minimal components requirement in artificial spiking neurons designed for bio-inspired robot's controller.

    PubMed

    Cyr, André; Boukadoum, Mounir; Thériault, Frédéric

    2014-01-01

    In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and from recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and its originality stems from the integration of habituation and spike-timing-dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors.

  11. Towards a synergy framework across neuroscience and robotics: Lessons learned and open questions. Reply to comments on: "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands"

    NASA Astrophysics Data System (ADS)

    Santello, Marco; Bianchi, Matteo; Gabiccini, Marco; Ricciardi, Emiliano; Salvietti, Gionata; Prattichizzo, Domenico; Ernst, Marc; Moscatelli, Alessandro; Jorntell, Henrik; Kappers, Astrid M. L.; Kyriakopoulos, Kostas; Schaeffer, Alin Abu; Castellini, Claudio; Bicchi, Antonio

    2016-07-01

    We would like to thank all commentators for their insightful commentaries. Thanks to their diverse and complementary expertise in neuroscience and robotics, the commentators have provided us with the opportunity to further discuss the state of the art and the gaps in the integration of neuroscience and robotics reviewed in our article. We organized our reply in two sections that capture the main points of all commentaries [1-9]: (1) advantages and limitations of the synergy approach in neuroscience and robotics, and (2) learning and the role of sensory feedback in biological and robotic synergies.

  12. Using Robotics and Game Design to Enhance Children's Self-Efficacy, STEM Attitudes, and Computational Thinking Skills

    ERIC Educational Resources Information Center

    Leonard, Jacqueline; Buss, Alan; Gamboa, Ruben; Mitchell, Monica; Fashola, Olatokunbo S.; Hubert, Tarcia; Almughyirah, Sultan

    2016-01-01

    This paper describes the findings of a pilot study that used robotics and game design to develop middle school students' computational thinking strategies. One hundred and twenty-four students engaged in LEGO® EV3 robotics and created games using Scalable Game Design software. The results of the study revealed students' pre-post self-efficacy…

  13. The Development of a Robot-Based Learning Companion: A User-Centered Design Approach

    ERIC Educational Resources Information Center

    Hsieh, Yi-Zeng; Su, Mu-Chun; Chen, Sherry Y.; Chen, Gow-Dong

    2015-01-01

    A computer-vision-based method is widely employed to support the development of a variety of applications. In this vein, this study uses a computer-vision-based method to develop a playful learning system, which is a robot-based learning companion named RobotTell. Unlike existing playful learning systems, a user-centered design (UCD) approach is…

  14. Quantum robots plus environments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benioff, P.

    1998-07-23

    A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far (oracles, databases, and quantum registers) are seen to be special cases of the environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.

  15. Design and analysis of a tendon-based computed tomography-compatible robot with remote center of motion for lung biopsy.

    PubMed

    Yang, Yunpeng; Jiang, Shan; Yang, Zhiyong; Yuan, Wei; Dou, Huaisu; Wang, Wei; Zhang, Daguang; Bian, Yuan

    2017-04-01

    Nowadays, biopsy is a decisive method of lung cancer diagnosis, whereas lung biopsy is time-consuming, complex and inaccurate. So a computed tomography-compatible robot for rapid and precise lung biopsy is developed in this article. According to the actual operation process, the robot is divided into two modules: a 4-degree-of-freedom position module, which locates the puncture point and accommodates almost all patient positions, and a 3-degree-of-freedom tendon-based orientation module with a remote center of motion, which is compact and computed tomography-compatible and orientates and inserts the needle automatically inside the computed tomography bore. The workspace of the robot surrounds the patient's thorax, and the needle tip forms a cone under the patient's skin. A new error model of the robot based on screw theory is proposed in view of structure error and actuation error, which are regarded as screw motions. Simulation is carried out to verify the precision of the error model, contrasted with compensation via inverse kinematics. The results of an insertion experiment on a specific phantom prove the feasibility of the robot, with a mean error of 1.373 mm in the laboratory environment, which is accurate enough to replace manual operation.
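
    The error model above treats structure and actuation errors as screw motions. As background, the sketch below shows the standard SE(3) exponential map that turns a twist (screw axis plus translation component) into a rigid-body transform; it is the textbook formula, not the authors' specific error model or its parameters.

```python
# Hedged sketch of a screw motion: the SE(3) exponential of a twist (omega, v),
# scaled by theta, as used in screw-theory kinematic error models. Standard
# textbook formula (unit rotation axis assumed); the example values are invented.
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(omega, v, theta):
    """SE(3) exponential of the twist (omega, v) scaled by theta; ||omega|| = 1."""
    W = hat(omega)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W   # Rodrigues
    G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

# A small actuation error modelled as a screw about the z axis:
print(exp_twist(omega=np.array([0.0, 0.0, 1.0]),
                v=np.array([0.0, 0.0, 0.01]), theta=0.02))
```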

  16. Adaptive Language Games with Robots

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    2010-11-01

    This paper surveys recent research into language evolution using computer simulations and robotic experiments. This field has made tremendous progress in the past decade, going from simple simulations of lexicon formation with animal-like cybernetic robots to sophisticated grammatical experiments with humanoid robots.

  17. Localization Methods for a Mobile Robot in Urban Environments

    DTIC Science & Technology

    2004-10-04

    [Fragmented DTIC record; the surviving text cites Brown and Hwang, Introduction to Random Signals and Applied Kalman Filtering.] An extended Kalman filter integrates the sensor data and keeps track of the uncertainty associated with it; a second method is based on ... [Fig. 4: a diagram of the extended Kalman filter combining odometry error estimates with compass/GPS corrections to produce a corrected odometry pose.]
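
    As a minimal illustration of the extended Kalman filter mentioned in this record, the sketch below fuses unicycle odometry with a GPS-like position fix; the motion and measurement models and all noise values are generic textbook assumptions, not those of the cited report.

```python
# Hedged EKF sketch: unicycle odometry prediction plus a linear position update.
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Unicycle motion model: state x = [px, py, heading]."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],     # Jacobian of the motion model
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Linear(ized) measurement update, e.g. a GPS position fix."""
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
Q = np.diag([0.01, 0.01, 0.005])
x, P = ekf_predict(x, P, v=1.0, w=0.1, dt=0.1, Q=Q)
H_gps = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, P = ekf_update(x, P, z=np.array([0.11, 0.01]), H=H_gps, R=np.eye(2) * 0.5)
print(x)
```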

  18. Acquiring neural signals for developing a perception and cognition model

    NASA Astrophysics Data System (ADS)

    Li, Wei; Li, Yunyi; Chen, Genshe; Shen, Dan; Blasch, Erik; Pham, Khanh; Lynch, Robert

    2012-06-01

    The understanding of how humans process information, determine salience, and combine seemingly unrelated information is essential to automated processing of large amounts of information that is partially relevant, or of unknown relevance. Recent neurological science research in human perception, and in information science regarding context-based modeling, provides us with a theoretical basis for using a bottom-up approach for automating the management of large amounts of information in ways directly useful for human operators. However, integration of human intelligence into a game-theoretic framework for dynamic and adaptive decision support needs a perception and cognition model. For the purpose of cognitive modeling, we present a brain-computer-interface (BCI) based humanoid robot system to acquire brainwaves during human mental activities of imagining a humanoid robot-walking behavior. We use the neural signals to investigate relationships between complex humanoid robot behaviors and human mental activities for developing the perception and cognition model. The BCI system consists of a data acquisition unit with an electroencephalograph (EEG), a humanoid robot, and a charge-coupled device (CCD) camera. An EEG electrode cap acquires brainwaves from the skin surface of the scalp. The humanoid robot has 20 degrees of freedom (DOFs): 12 DOFs located on the hips, knees, and ankles for humanoid robot walking, 6 DOFs on the shoulders and arms for arm motion, and 2 DOFs for head yaw and pitch motion. The CCD camera takes video clips of the human subject's hand postures to identify mental activities that are correlated to the robot-walking behaviors.

  19. First Experiences with the New Senhance® Telerobotic System in Visceral Surgery.

    PubMed

    Stephan, Dietmar; Sälzer, Heike; Willeke, Frank

    2018-02-01

    Until recently, robotic-assisted surgery had been exclusively connected to the name DaVinci®. In 2016, a second robotic system, the Senhance®, became available. To introduce the new robotic system into clinical routine, detailed team training and an integration program were useful. Within the first 6 months, 116 cases were performed with this system. The integration program was intended to start with simple and well-standardized clinical cases. We chose inguinal hernia repair using the TAPP (transabdominal preperitoneal) technique as the starting procedure. Subsequently, we added upper gastrointestinal surgery and cholecystectomies, and colorectal procedures have since also been included. Initial experience with the Senhance system as the first installation in Germany shows that it is suitable for surgery in general and for visceral surgery in particular. The application is safe, owing to the quick and unproblematic changeover to conventional laparoscopy, and easy to integrate due to the very short system integration times (docking times). Since it is a laparoscopy-based system, following an integration program will enable experienced laparoscopic surgeons to very quickly manage more complex procedures. Due to lower costs, it is more feasible to introduce robotic surgery starting with simple and standardized procedures. After the establishment of this second robotic system, future studies will have to specifically look at differences in surgical results and basic conditions of different robotic-assisted systems. This paper documents the decision-making process of a hospital towards the integration of a robotic system and the selection criteria used, while also demonstrating the planning and execution process during the introduction of the system into clinical routine.

  20. ARIES NDA Robot operators' manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheer, N.L.; Nelson, D.C.

    1998-05-01

    The ARIES NDA Robot is an automation device for servicing the material movements for a suite of non-destructive assay (NDA) instruments. This suite of instruments includes a calorimeter, a gamma isotopic system, a segmented gamma scanner (SGS), and a neutron coincidence counter (NCC). Objects moved by the robot include sample cans, standard cans, and instrument plugs. The robot computer has an RS-232 connection with the NDA Host computer, which coordinates robot movements and instrument measurements. The instruments are expected to perform measurements under the direction of the Host without operator intervention. This user's manual describes system startup, using the main menu, manual operation, and error recovery.

  1. Integrating planning perception and action for informed object search.

    PubMed

    Manso, Luis J; Gutierrez, Marco A; Bustos, Pablo; Bachiller, Pilar

    2018-05-01

    This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first one uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which is the most likely container having the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because it is needed as a step to fulfil the mission of the robot. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of the objects found in each object container. The paper provides quantitative results comparing the efficiency of the proposed method and two base approaches.
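
    A minimal sketch of the deliberative step described above (ranking containers by semantic distance between the target object and the object types previously seen in each container) might look like the following; the embed() function is a stand-in assumption for any label-embedding model and is not part of the paper.

```python
# Hedged sketch: rank object containers by semantic similarity to the target.
# embed() is a placeholder seeded from the label; swap in real word vectors.
import zlib
import numpy as np

def embed(label: str) -> np.ndarray:
    """Placeholder embedding; deterministic per label, but semantically meaningless."""
    rng = np.random.default_rng(zlib.crc32(label.encode()))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

def semantic_similarity(a: str, b: str) -> float:
    """Cosine similarity between two label embeddings."""
    return float(embed(a) @ embed(b))

def rank_containers(target: str, containers: dict[str, list[str]]) -> list[str]:
    """Score each container by the best similarity between the target and the
    object types previously observed in it; return containers best-first."""
    scores = {name: max(semantic_similarity(target, obj) for obj in seen)
              for name, seen in containers.items() if seen}
    return sorted(scores, key=scores.get, reverse=True)

seen_contents = {"kitchen_shelf": ["mug", "plate"],
                 "office_desk": ["stapler", "keyboard"],
                 "toolbox": ["screwdriver", "pliers"]}
print(rank_containers("hammer", seen_contents))   # guesses, best-first
```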

  2. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    PubMed

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Robot-performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  3. Real-time multiple human perception with color-depth cameras on a mobile robot.

    PubMed

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, non-upright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude that, by incorporating depth information and using modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.
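
    As a simplified illustration of the preprocessing step described above, the sketch below removes the ground and ceiling planes from a depth point cloud by robust height thresholding so that candidate clusters separate; the paper's actual plane removal is more sophisticated, and the axis convention and margins here are assumptions.

```python
# Hedged sketch: strip the ground and ceiling from a point cloud by height
# thresholds (planes assumed roughly horizontal; z assumed vertical).
import numpy as np

def remove_ground_and_ceiling(points: np.ndarray,
                              ground_margin: float = 0.05,
                              ceiling_margin: float = 0.05) -> np.ndarray:
    """points: (N, 3) array. Returns points strictly between the estimated
    ground and ceiling surfaces."""
    z = points[:, 2]
    ground_z = np.percentile(z, 1)      # robust estimate of the lowest surface
    ceiling_z = np.percentile(z, 99)    # robust estimate of the highest surface
    keep = (z > ground_z + ground_margin) & (z < ceiling_z - ceiling_margin)
    return points[keep]

cloud = np.random.default_rng(1).uniform([-2, -2, 0], [2, 2, 2.5], size=(5000, 3))
print(remove_ground_and_ceiling(cloud).shape)   # fewer points, planes removed
```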

  4. Conjugate Gradient Algorithms For Manipulator Simulation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Scheid, Robert E.

    1991-01-01

    Report discusses applicability of conjugate-gradient algorithms to computation of forward dynamics of robotic manipulators. Rapid computation of forward dynamics essential to teleoperation and other advanced robotic applications. Part of continuing effort to find algorithms meeting requirements for increased computational efficiency and speed. Method used for iterative solution of systems of linear equations.
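
    For reference, a standard conjugate-gradient solve of A x = b with symmetric positive-definite A, the kind of iterative linear solve such forward-dynamics formulations rely on, is sketched below; this is the textbook algorithm, not the report's specific variant, and the small inertia-matrix example is invented.

```python
# Textbook conjugate-gradient solver for A x = b, A symmetric positive definite.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next conjugate direction
        rs_old = rs_new
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # e.g. a small joint-space inertia matrix
tau = np.array([1.0, 2.0])               # joint torques
print(conjugate_gradient(M, tau))        # joint accelerations M^{-1} tau
```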

  5. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    PubMed Central

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a great need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. It aims to use current trends in robotics and, at the same time, techniques that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. PMID:24732101
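
    A minimal sketch of the first of the two methods above, contour detection, is shown below as a threshold-then-contour pipeline in OpenCV; the preprocessing, parameters, and area filter are illustrative assumptions rather than the authors' pipeline.

```python
# Hedged sketch: threshold an image, extract contours, keep large candidates.
import cv2
import numpy as np

def detect_object_contours(bgr_image: np.ndarray, min_area: float = 500.0):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    result = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = result[0] if len(result) == 2 else result[1]   # OpenCV 4.x vs 3.x
    boxes = [cv2.boundingRect(c) for c in contours
             if cv2.contourArea(c) >= min_area]
    return boxes   # list of (x, y, w, h) candidate object regions

# Synthetic test image with one bright square "object":
img = np.zeros((240, 320, 3), dtype=np.uint8)
img[80:160, 120:200] = 255
print(detect_object_contours(img))
```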

  6. Haptic/graphic rehabilitation: integrating a robot into a virtual environment library and applying it to stroke therapy.

    PubMed

    Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily

    2011-08-08

    Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary development overhead each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that robot cannot be traded for another without recoding the program. However, recent efforts in the open-source community have proposed a wrapper-class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
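
    The wrapper-class idea can be illustrated in miniature: application code targets one abstract robot interface, and each concrete robot implements it, so devices can be swapped without recoding. The actual work wraps robots inside the C++-based H3DAPI; the Python sketch below only illustrates the pattern, and all class and method names are invented.

```python
# Hedged sketch of the wrapper-class pattern for interchangeable haptic robots.
from abc import ABC, abstractmethod

class HapticRobot(ABC):
    """Interface the virtual-environment code programs against."""

    @abstractmethod
    def read_end_effector_position(self) -> tuple[float, float, float]: ...

    @abstractmethod
    def command_force(self, fx: float, fy: float, fz: float) -> None: ...

class SimulatedRobot(HapticRobot):
    """Stand-in device; a real wrapper would call the vendor's driver here."""
    def __init__(self):
        self._pos = (0.1, 0.0, 0.0)

    def read_end_effector_position(self):
        return self._pos

    def command_force(self, fx, fy, fz):
        print(f"applying force ({fx}, {fy}, {fz}) N")

def run_therapy_step(robot: HapticRobot):
    """Application code that never needs to know which robot is attached."""
    x, y, z = robot.read_end_effector_position()
    robot.command_force(-0.5 * x, -0.5 * y, -0.5 * z)   # simple spring field

run_therapy_step(SimulatedRobot())   # swap in any other HapticRobot unchanged
```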

  7. A multimodal interface for real-time soldier-robot teaming

    NASA Astrophysics Data System (ADS)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools toward robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and the robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture-recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g., response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  8. Neuro-prosthetic interplay. Comment on "Hand synergies: Integration of robotics and neuroscience for understanding the control of biological and artificial hands" by M. Santello et al.

    NASA Astrophysics Data System (ADS)

    Schieber, Marc H.

    2016-07-01

    Control of the human hand has been both difficult to understand scientifically and difficult to emulate technologically. The article by Santello and colleagues in the current issue of Physics of Life Reviews[1] highlights the accelerating pace of interaction between the neuroscience of controlling body movement and the engineering of robotic hands that can be used either autonomously or as part of a motor neuroprosthesis, an artificial body part that moves under control from a human subject's own nervous system. Motor neuroprostheses typically involve a brain-computer interface (BCI) that takes signals from the subject's nervous system or muscles, interprets those signals through a decoding algorithm, and then applies the resulting output to control the artificial device.

  9. Supersmart Robots: The Next Generation of Robots Has Evolutionary Capabilities

    ERIC Educational Resources Information Center

    Simkins, Michael

    2008-01-01

    Robots that can learn new behaviors. Robots that can reproduce themselves. Science fiction? Not anymore. Roboticists at Cornell's Computational Synthesis Lab have developed just such engineered creatures that offer interesting implications for education. The team, headed by Hod Lipson, was intrigued by the question, "How can you get robots to be…

  10. Using Robotics and Game Design to Enhance Children's Self-Efficacy, STEM Attitudes, and Computational Thinking Skills

    NASA Astrophysics Data System (ADS)

    Leonard, Jacqueline; Buss, Alan; Gamboa, Ruben; Mitchell, Monica; Fashola, Olatokunbo S.; Hubert, Tarcia; Almughyirah, Sultan

    2016-12-01

    This paper describes the findings of a pilot study that used robotics and game design to develop middle school students' computational thinking strategies. One hundred and twenty-four students engaged in LEGO® EV3 robotics and created games using Scalable Game Design software. The results of the study revealed students' pre-post self-efficacy scores on the construct of computer use declined significantly, while the constructs of videogaming and computer gaming remained unchanged. When these constructs were analyzed by type of learning environment, self-efficacy on videogaming increased significantly in the combined robotics/gaming environment compared with the gaming-only context. Student attitudes toward STEM, however, did not change significantly as a result of the study. Finally, children's computational thinking (CT) strategies varied by method of instruction as students who participated in holistic game development (i.e., Project First) had higher CT ratings. This study contributes to the STEM education literature on the use of robotics and game design to influence self-efficacy in technology and CT, while informing the research team about the adaptations needed to ensure project fidelity during the remaining years of the study.

  11. Experiences in Developing an Experimental Robotics Course Program for Undergraduate Education

    ERIC Educational Resources Information Center

    Jung, Seul

    2013-01-01

    An interdisciplinary undergraduate-level robotics course offers students the chance to integrate their engineering knowledge learned throughout their college years by building a robotic system. Robotics is thus a core course in system and control-related engineering education. This paper summarizes the experience of developing robotics courses…

  12. Sustainable Cooperative Robotic Technologies for Human and Robotic Outpost Infrastructure Construction and Maintenance

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric

    2004-01-01

    Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes the use of resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motion and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.

  13. Justification of Filter Selection for Robot Balancing in Conditions of Limited Computational Resources

    NASA Astrophysics Data System (ADS)

    Momot, M. V.; Politsinskaia, E. V.; Sushko, A. V.; Semerenko, I. A.

    2016-08-01

    The paper considers the problem of selecting a mathematical filter for balancing a wheeled robot under conditions of limited computational resources. A solution based on a complementary filter is proposed.
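
    A one-axis complementary filter of the kind the paper argues for is sketched below: the gyro rate is integrated for short-term accuracy while the accelerometer tilt angle corrects long-term drift. The gain, axis convention, and sample data are illustrative assumptions.

```python
# Hedged sketch: one-axis complementary filter for tilt estimation on a
# balancing robot with little CPU; gain and sensor values are illustrative.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """angle: previous tilt estimate (rad); gyro_rate in rad/s; accel in m/s^2."""
    accel_angle = math.atan2(accel_x, accel_z)          # gravity-based tilt
    # Blend the integrated gyro (fast, drifting) with the accel angle (slow, stable).
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
for gyro, ax, az in [(0.10, 0.5, 9.7), (0.12, 0.6, 9.6), (0.08, 0.4, 9.8)]:
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(angle)   # updated tilt estimate in radians
```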

  14. Natural Tasking of Robots Based on Human Interaction Cues

    DTIC Science & Technology

    2005-06-01

    [Fragmented DTIC record; the surviving text lists project personnel at MIT, Yale, and ITA Software (including Matthew Marjanovic and Brian Scassellati) together with reference fragments on humanoid robotics, among them work on vision from wearable platforms and on simulated muscles for a humanoid robot at the MIT AI Lab.]

  15. Robot-Arm Dynamic Control by Computer

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.; Tarn, Tzyh J.; Chen, Yilong J.

    1987-01-01

    Feedforward and feedback schemes linearize responses to control inputs. Method for control of robot arm based on computed nonlinear feedback and state transformations to linearize system and decouple robot end-effector motions along each of the Cartesian axes, augmented with optimal scheme for correction of errors in workspace. Major new feature of control method: optimal error-correction loop operates directly on task level and not on joint-servocontrol level.
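
    The flavor of computed nonlinear feedback can be shown on a single joint: the model cancels the nonlinear terms so the tracking error obeys a linear second-order equation. The pendulum-like model, gains, and simulation below are illustrative assumptions, not the article's multi-axis, task-level formulation.

```python
# Hedged sketch: single-joint computed-torque (feedback-linearizing) control.
import math

# Assumed 1-DOF arm model: I*qdd + b*qd + m*g*l*sin(q) = tau  (all values invented)
I, b, m, g, l = 0.05, 0.02, 1.0, 9.81, 0.3
Kp, Kd = 100.0, 20.0

def computed_torque(q, qd, q_des, qd_des, qdd_des):
    """tau = I*v + b*qd + m*g*l*sin(q), with v a linear PD outer loop."""
    v = qdd_des + Kd * (qd_des - qd) + Kp * (q_des - q)   # linearized error dynamics
    return I * v + b * qd + m * g * l * math.sin(q)       # model inversion

# Track a fixed setpoint with forward-Euler integration of the same model.
q, qd, dt = 0.0, 0.0, 0.001
for _ in range(2000):
    tau = computed_torque(q, qd, q_des=1.0, qd_des=0.0, qdd_des=0.0)
    qdd = (tau - b * qd - m * g * l * math.sin(q)) / I
    qd += qdd * dt
    q += qd * dt
print(round(q, 3))   # converges near the 1.0 rad setpoint
```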

  16. Coordinated Control Of Mobile Robotic Manipulators

    NASA Technical Reports Server (NTRS)

    Seraji, Homayoun

    1995-01-01

    Computationally efficient scheme developed for on-line coordinated control of both manipulation and mobility of robots that include manipulator arms mounted on mobile bases. Applicable to variety of mobile robotic manipulators, including robots that move along tracks (typically, painting and welding robots), robots mounted on gantries and capable of moving in all three dimensions, wheeled robots, and compound robots (consisting of robots mounted on other robots). Theoretical basis discussed in several prior articles in NASA Tech Briefs, including "Increasing the Dexterity of Redundant Robots" (NPO-17801), "Redundant Robot Can Avoid Obstacles" (NPO-17852), "Configuration-Control Scheme Copes With Singularities" (NPO-18556), "More Uses for Configuration Control of Robots" (NPO-18607/NPO-18608).

  17. 3D printing of soft robotic systems

    NASA Astrophysics Data System (ADS)

    Wallin, T. J.; Pikul, J.; Shepherd, R. F.

    2018-06-01

    Soft robots are capable of mimicking the complex motion of animals. Soft robotic systems are defined by their compliance, which allows for continuous and often responsive localized deformation. These features make soft robots especially interesting for integration with human tissues, for example, the implementation of biomedical devices, and for robotic performance in harsh or uncertain environments, for example, exploration in confined spaces or locomotion on uneven terrain. Advances in soft materials and additive manufacturing technologies have enabled the design of soft robots with sophisticated capabilities, such as jumping, complex 3D movements, gripping and releasing. In this Review, we examine the essential soft material properties for different elements of soft robots, highlighting the most relevant polymer systems. Advantages and limitations of different additive manufacturing processes, including 3D printing, fused deposition modelling, direct ink writing, selective laser sintering, inkjet printing and stereolithography, are discussed, and the different techniques are investigated for their application in soft robotic fabrication. Finally, we explore integrated robotic systems and give an outlook for the future of the field and remaining challenges.

  18. Robotic technology in surgery: past, present, and future.

    PubMed

    Camarillo, David B; Krummel, Thomas M; Salisbury, J Kenneth

    2004-10-01

    It has been nearly 20 years since the first appearance of robotics in the operating room. In that time, much progress has been made in integrating robotic technologies with surgical instrumentation, as evidenced by the many thousands of successful robot-assisted cases. However, to build on past success and to fully leverage the potential of surgical robotics in the future, it is essential to maximize a shared understanding and communication among surgeons, engineers, entrepreneurs, and healthcare administrators. This article provides an introduction to medical robotic technologies, develops a possible taxonomy, reviews the evolution of a surgical robot, and discusses future prospects for innovation. Robotic surgery has demonstrated some clear benefits. It remains to be seen where these benefits will outweigh the associated costs over the long term. In the future, surgical robots should be smaller, less expensive, easier to operate, and should seamlessly integrate emerging technologies from a number of different fields. Such advances will enable continued progress in surgical instrumentation and, ultimately, surgical care.

  19. Self-localization for an autonomous mobile robot based on an omni-directional vision system

    NASA Astrophysics Data System (ADS)

    Chiang, Shu-Yin; Lin, Kuang-Yu; Chia, Tsorng-Lin

    2013-12-01

    In this study, we designed an autonomous mobile robot based on the rules of the Federation of International Robot-soccer Association (FIRA) RoboSot category, integrating the techniques of computer vision, real-time image processing, dynamic target tracking, wireless communication, self-localization, motion control, path planning, and control strategy to achieve the contest goal. The self-localization scheme of the mobile robot is based on algorithms that use features in the images from its omni-directional vision system. In previous works, we used the image colors of the field goals as reference points, combining either dual-circle or trilateration positioning of the reference points to achieve self-localization of the autonomous mobile robot. However, because the image of the game field is easily affected by ambient light, positioning systems based exclusively on color-model algorithms cause errors. To reduce environmental effects and achieve self-localization of the robot, the proposed algorithm detects the corners of field lines using the omni-directional vision system. Particularly in the mid-size league of the RoboCup soccer competition, self-localization algorithms based on extracting white lines from the soccer field have become increasingly popular. Moreover, white lines are less influenced by light than the color model of the goals. Therefore, we propose an algorithm that transforms the omni-directional image into an unwrapped image, enhancing feature extraction. The process is described as follows: First, radial scan-lines were used to process the omni-directional images, reducing the computational load and improving system efficiency. The lines were radially arranged around the center of the omni-directional camera image, resulting in a shorter computational time compared with the traditional Cartesian coordinate system. However, the omni-directional image is distorted, which makes it difficult to recognize the position of the robot; therefore, image transformation was required to implement self-localization. Second, we used an approach to transform the omni-directional images into panoramic images, so the distortion of the white lines can be corrected through the transformation. The interest points that form the corners of the landmarks were then located using the features from accelerated segment test (FAST) algorithm, in which a circle of sixteen pixels surrounding the corner candidate is considered; FAST is a high-speed feature detector suited to real-time frame rates. Finally, the dual-circle, trilateration, and cross-ratio projection algorithms were implemented, choosing among the corners obtained from the FAST algorithm and localizing the position of the robot. The results demonstrate that the proposed algorithm is accurate, exhibiting a 2-cm position error on a soccer field measuring 600 cm x 400 cm.
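
    A minimal sketch of the unwrapping step (mapping the omni-directional image to a panoramic image by sampling along radial scan lines) is given below; the mirror center, radius range, and nearest-neighbour sampling are illustrative assumptions, since a real system would calibrate these.

```python
# Hedged sketch: polar-to-panoramic unwrapping of an omni-directional image.
import numpy as np

def unwrap_omni(image: np.ndarray, center, r_min: int, r_max: int,
                out_width: int = 720) -> np.ndarray:
    """Map an omni-directional image (H, W[, C]) to a panorama of shape
    (r_max - r_min, out_width) by sampling along radial scan lines."""
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, out_width, endpoint=False)
    radii = np.arange(r_min, r_max)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip((cx + rr * np.cos(tt)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((cy + rr * np.sin(tt)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]     # nearest-neighbour sampling

omni = np.zeros((480, 480), dtype=np.uint8)
omni[240, 300:400] = 255                       # a radial white "field line"
panorama = unwrap_omni(omni, center=(240, 240), r_min=60, r_max=200)
print(panorama.shape)                          # (140, 720)
```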

  20. Moving NASA Beyond Low Earth Orbit: Future Human-Automation-Robotic Integration Challenges

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica

    2016-01-01

    This presentation will provide an overview of current human spaceflight operations. It will also describe how future exploration missions will have to adapt and evolve in order to deal with more complex missions and communication latencies. Additionally, there are many implications regarding advanced automation and robotics, and this presentation will outline future human-automation-robotic integration challenges.

  1. A comprehensive overview of the applications of artificial life.

    PubMed

    Kim, Kyung-Joong; Cho, Sung-Bae

    2006-01-01

    We review the applications of artificial life (ALife), the creation of synthetic life on computers to study, simulate, and understand living systems. The definition and features of ALife are shown through application studies. ALife application fields treated include robot control, robot manufacturing, practical robots, computer graphics, natural phenomenon modeling, entertainment, games, music, economics, the Internet, information processing, industrial design, simulation software, electronics, security, data mining, and telecommunications. In order to show the status of ALife application research, this review primarily features a survey of about 180 ALife application articles rather than a selected representation of a few articles. Evolutionary computation is the most popular method for designing such applications, but recently swarm intelligence, artificial immune networks, and agent-based modeling have also produced results. Applications were initially restricted to robotics and computer graphics, but presently many different applications in engineering areas are of interest.

  2. The universal robot

    NASA Technical Reports Server (NTRS)

    Moravec, Hans

    1993-01-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  3. Brain computer interface for operating a robot

    NASA Astrophysics Data System (ADS)

    Nisar, Humaira; Balasubramaniam, Hari Chand; Malik, Aamir Saeed

    2013-10-01

    A Brain-Computer Interface (BCI) is a hardware/software-based system that translates the electroencephalogram (EEG) signals produced by brain activity to control computers and other external devices. In this paper, we present a non-invasive BCI system that reads the EEG signals from trained brain activity using a neuro-signal acquisition headset and translates them into computer-readable form to control the motion of a robot. The robot performs the actions that are instructed to it in real time. We used cognitive states such as Push and Pull to control the motion of the robot. The sensitivity and specificity of the system are above 90 percent. Subjective results show a mixed trend in the difficulty level of the training activities. The quantitative EEG data analysis complements the subjective results. This technology may become very useful for the rehabilitation of disabled and elderly people.

  4. The universal robot

    NASA Astrophysics Data System (ADS)

    Moravec, Hans

    1993-12-01

    Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today's best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in the next decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data - act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates, and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended some of its inherited limitations and so transformed itself into something quite new.

  5. Real-time robot deliberation by compilation and monitoring of anytime algorithms

    NASA Technical Reports Server (NTRS)

    Zilberstein, Shlomo

    1994-01-01

    Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.
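
    An anytime algorithm in miniature is sketched below: an iterative estimate whose quality improves with computation time, plus a monitor that stops deliberation at a deadline and returns the best result so far. This only illustrates the concept; the report's contribution is the compilation and monitoring model itself, and the example computation is arbitrary.

```python
# Hedged sketch: an interruptible, gradually improving computation plus a
# simple deadline monitor that trades result quality for deliberation time.
import time

def anytime_pi(terms_per_step: int = 1000):
    """Leibniz series for pi: each step refines the current estimate."""
    total, sign, k = 0.0, 1.0, 0
    while True:
        for _ in range(terms_per_step):
            total += sign / (2 * k + 1)
            sign, k = -sign, k + 1
        yield 4.0 * total              # current (interruptible) result

def deliberate(deadline_s: float):
    """Monitor: run the anytime algorithm until the time budget is spent."""
    start, best = time.monotonic(), None
    for estimate in anytime_pi():
        best = estimate
        if time.monotonic() - start >= deadline_s:
            return best

print(deliberate(deadline_s=0.05))     # a larger budget yields a better estimate
```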

  6. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
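
    As an example of the kind of scheduling-theory test such an analysis builds on, the sketch below applies the classic Liu and Layland utilization bound for rate-monotonic scheduling of periodic hard-real-time tasks; the task set is invented, and the paper's method is more general than this single sufficient test.

```python
# Hedged sketch: rate-monotonic schedulability check via the Liu & Layland bound.
def rate_monotonic_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period) tuples.
    Returns (utilization, bound, passes_sufficient_test)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)          # sufficient (not necessary) bound
    return utilization, bound, utilization <= bound

# e.g. a control loop, a telemetry task, and a vision task (times in ms):
tasks = [(2.0, 10.0), (5.0, 50.0), (20.0, 200.0)]
print(rate_monotonic_schedulable(tasks))      # (0.4, ~0.78, True)
```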

  7. Machine intelligence and robotics: Report of the NASA study group. Executive summary

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A brief overview of applications of machine intelligence and robotics in the space program is given. These include space exploration robots; global service robots that collect data from Earth orbit for public-service use on soil conditions, sea states, global crop conditions, weather, geology, disasters, etc.; space industrialization and processing technologies; and the construction of large structures in space. Program options for research, advanced development, and implementation of machine intelligence and robot technology for use in program planning are discussed. A vigorous and long-range program to incorporate and keep pace with state-of-the-art developments in computer technology, in both spaceborne and ground-based computer systems, is recommended.

  8. Web Environment for Programming and Control of a Mobile Robot in a Remote Laboratory

    ERIC Educational Resources Information Center

    dos Santos Lopes, Maísa Soares; Gomes, Iago Pacheco; Trindade, Roque M. P.; da Silva, Alzira F.; de C. Lima, Antonio C.

    2017-01-01

    Remote robotics laboratories have been used successfully for engineering education. However, few of them use mobile robots to teach computer science. This article describes a mobile robot Control and Programming Environment (CPE) and its pedagogical applications. The system comprises a remote laboratory for robotics, an online programming tool,…

  9. An EEG/EOG-based hybrid brain-neural computer interaction (BNCI) system to control an exoskeleton for the paralyzed hand.

    PubMed

    Soekadar, Surjo R; Witkowski, Matthias; Vitiello, Nicola; Birbaumer, Niels

    2015-06-01

    The loss of hand function can result in severe physical and psychosocial impairment. Thus, compensation of a lost hand function using assistive robotics that can be operated in daily life is very desirable. However, versatile, intuitive, and reliable control of assistive robotics is still an unsolved challenge. Here, we introduce a novel brain/neural-computer interaction (BNCI) system that integrates electroencephalography (EEG) and electrooculography (EOG) to improve control of assistive robotics in daily life environments. To evaluate the applicability and performance of this hybrid approach, five healthy volunteers (HV) (four men, average age 26.5 ± 3.8 years) and a 34-year-old patient with complete finger paralysis due to a brachial plexus injury (BPI) used EEG (condition 1) and EEG/EOG (condition 2) to control grasping motions of a hand exoskeleton. All participants were able to control the BNCI system (BNCI control performance HV: 70.24 ± 16.71%, BPI: 65.93 ± 24.27%), but inclusion of EOG significantly improved performance across all participants (HV: 80.65 ± 11.28, BPI: 76.03 ± 18.32%). This suggests that hybrid BNCI systems can achieve substantially better control over assistive devices, e.g., a hand exoskeleton, than systems using brain signals alone and thus may increase applicability of brain-controlled assistive devices in daily life environments.

  10. A Multi-Sensorial Simultaneous Localization and Mapping (SLAM) System for Low-Cost Micro Aerial Vehicles in GPS-Denied Environments

    PubMed Central

    López, Elena; García, Sergio; Barea, Rafael; Bergasa, Luis M.; Molinos, Eduardo J.; Arroyo, Roberto; Romera, Eduardo; Pardo, Samuel

    2017-01-01

    One of the main challenges of aerial robot navigation in indoor or GPS-denied environments is position estimation using only the available onboard sensors. This paper presents a Simultaneous Localization and Mapping (SLAM) system that remotely calculates the pose and environment map of different low-cost commercial aerial platforms, whose onboard computing capacity is usually limited. The proposed system adapts to the sensory configuration of the aerial robot by integrating different state-of-the-art SLAM methods based on vision, laser and/or inertial measurements using an Extended Kalman Filter (EKF). To do this, a minimum onboard sensory configuration is assumed, consisting of a monocular camera, an Inertial Measurement Unit (IMU) and an altimeter. This makes it possible to improve the results of well-known monocular visual SLAM methods (LSD-SLAM and ORB-SLAM are tested and compared in this work) by solving scale ambiguity and providing additional information to the EKF. When payload and computational capabilities permit, a 2D laser sensor can be easily incorporated into the SLAM system, obtaining a local 2.5D map and a footprint estimation of the robot position that improves the 6D pose estimation through the EKF. We present some experimental results with two different commercial platforms, and validate the system by applying it to their position control. PMID:28397758

  11. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    PubMed Central

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  12. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.

  13. Integrating autonomous distributed control into a human-centric C4ISR environment

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-05-01

    This paper considers incorporating autonomy into human-centric Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance (C4ISR) environments. Specifically, it focuses on identifying ways that current autonomy technologies can augment human control and on the challenges presented by additive autonomy. Three approaches to this challenge are considered, stemming from prior work in two converging areas. The first augments what humans currently do with automation. The second treats humans as actors within a cyber-physical system-of-systems, an approach stemming from robotic distributed computing. A third approach combines elements of both.

  14. Off-line robot programming and graphical verification of path planning

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory L.

    1989-01-01

    The objective of this project was to develop or specify an integrated environment for off-line programming, graphical path verification, and debugging for robotic systems. Two alternatives were compared. The first was the integration of the ASEA Off-line Programming package with ROBSIM, a robotic simulation program. The second alternative was the purchase of the commercial product IGRIP. The needs of the RADL (Robotics Applications Development Laboratory) were explored and the alternatives were evaluated based on these needs. As a result, IGRIP was proposed as the best solution to the problem.

  15. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework had to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated with respect to overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.
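
    The real-time framework itself is not described in enough detail here to reproduce. The following sketch only illustrates the general client/server pattern of command round-trips over TCP with latency measurement; the port, command name, and message format are made up and this is not the KUKA control interface.

    ```python
    import json, socket, threading, time

    HOST, PORT = "127.0.0.1", 50510   # illustrative loopback address, not from the paper

    def server():
        """Central server: receives a command, 'dispatches' it, echoes an acknowledgement."""
        with socket.create_server((HOST, PORT)) as srv:
            conn, _ = srv.accept()
            with conn:
                while data := conn.recv(4096):
                    cmd = json.loads(data.decode())
                    # a real framework would forward cmd to the robot controller here
                    conn.sendall(json.dumps({"ack": cmd["id"]}).encode())

    threading.Thread(target=server, daemon=True).start()
    time.sleep(0.2)                                       # let the server start listening

    with socket.create_connection((HOST, PORT)) as cli:
        latencies = []
        for i in range(100):
            t0 = time.perf_counter()
            cli.sendall(json.dumps({"id": i, "cmd": "set_probe_pose"}).encode())
            cli.recv(4096)                                # wait for the acknowledgement
            latencies.append((time.perf_counter() - t0) * 1000.0)
        print(f"mean round-trip latency: {sum(latencies) / len(latencies):.2f} ms")
    ```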

  16. Exploring TeleRobotics: A Radio-Controlled Robot

    ERIC Educational Resources Information Center

    Deal, Walter F., III; Hsiung, Steve C.

    2007-01-01

    Robotics is a rich and exciting multidisciplinary area to study and learn about electronics and control technology. The interest in robotic devices and systems provides the technology teacher with an excellent opportunity to make many concrete connections between electronics, control technology, and computers and science, engineering, and…

  17. RCTA capstone assessment

    NASA Astrophysics Data System (ADS)

    Lennon, Craig; Bodt, Barry; Childers, Marshal; Dean, Robert; Oh, Jean; DiBerardino, Chip; Keegan, Terence

    2015-05-01

    The Army Research Laboratory's Robotics Collaborative Technology Alliance (RCTA) is a program intended to change robots from tools that soldiers use into teammates with which soldiers can work. This requires the integration of fundamental and applied research in perception, artificial intelligence, and human-robot interaction. In October of 2014, the RCTA assessed progress towards integrating this research. This assessment was designed to evaluate the robot's performance when it used new capabilities to perform selected aspects of a mission. The assessed capabilities included the ability of the robot to: navigate semantically outdoors with respect to structures and landmarks, identify doors in the facades of buildings, and identify and track persons emerging from those doors. We present details of the mission-based vignettes that constituted the assessment, and evaluations of the robot's performance in these vignettes.

  18. Distributed communications and control network for robotic mining

    NASA Technical Reports Server (NTRS)

    Schiffbauer, William H.

    1989-01-01

    The application of robotics to coal mining machines is one approach pursued to increase productivity while providing enhanced safety for the coal miner. Toward that end, a network composed of microcontrollers, computers, expert systems, real-time operating systems, and a variety of programming languages is being integrated to act as the backbone for intelligent machine operation. Actual mining machines, including a few customized ones, have been given telerobotic semiautonomous capabilities by applying the described network. Control devices, intelligent sensors and computers onboard these machines are showing promise of achieving improved mining productivity and safety benefits. Current research using these machines involves navigation, multiple machine interaction, machine diagnostics, mineral detection, and graphical machine representation. Guidance sensors and systems employed include: sonar, laser rangers, gyroscopes, magnetometers, clinometers, and accelerometers. Information on the network of hardware/software and its implementation on mining machines is presented. Anticipated coal production operations using the network are discussed. A parallelism is also drawn between the direction of present-day underground coal mining research and how the lunar soil (regolith) may be mined. A conceptual lunar mining operation that employs a distributed communication and control network is detailed.

  19. The Evolution of Software and Its Impact on Complex System Design in Robotic Spacecraft Embedded Systems

    NASA Technical Reports Server (NTRS)

    Butler, Roy

    2013-01-01

    The growth in computer hardware performance, coupled with reduced energy requirements, has led to a rapid expansion of the resources available to software systems, driving them towards greater logical abstraction, flexibility, and complexity. This shift in focus from compacting functionality into a limited field towards developing layered, multi-state architectures in a grand field has both driven and been driven by the history of embedded processor design in the robotic spacecraft industry. The combinatorial growth of interprocess conditions is accompanied by benefits (concurrent development, situational autonomy, and evolution of goals) and drawbacks (late integration, non-deterministic interactions, and multifaceted anomalies) in achieving mission success, as illustrated by the case of the Mars Reconnaissance Orbiter. Approaches to optimizing the benefits while mitigating the drawbacks have taken the form of the formalization of requirements, modular design practices, extensive system simulation, and spacecraft data trend analysis. The growth of hardware capability and software complexity can be expected to continue, with future directions including stackable commodity subsystems, computer-generated algorithms, runtime reconfigurable processors, and greater autonomy.

  20. Optimization and Control of Cyber-Physical Vehicle Systems

    PubMed Central

    Bradley, Justin M.; Atkins, Ella M.

    2015-01-01

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined. PMID:26378541

  1. Optimization and Control of Cyber-Physical Vehicle Systems.

    PubMed

    Bradley, Justin M; Atkins, Ella M

    2015-09-11

    A cyber-physical system (CPS) is composed of tightly-integrated computation, communication and physical elements. Medical devices, buildings, mobile devices, robots, transportation and energy systems can benefit from CPS co-design and optimization techniques. Cyber-physical vehicle systems (CPVSs) are rapidly advancing due to progress in real-time computing, control and artificial intelligence. Multidisciplinary or multi-objective design optimization maximizes CPS efficiency, capability and safety, while online regulation enables the vehicle to be responsive to disturbances, modeling errors and uncertainties. CPVS optimization occurs at design-time and at run-time. This paper surveys the run-time cooperative optimization or co-optimization of cyber and physical systems, which have historically been considered separately. A run-time CPVS is also cooperatively regulated or co-regulated when cyber and physical resources are utilized in a manner that is responsive to both cyber and physical system requirements. This paper surveys research that considers both cyber and physical resources in co-optimization and co-regulation schemes with applications to mobile robotic and vehicle systems. Time-varying sampling patterns, sensor scheduling, anytime control, feedback scheduling, task and motion planning and resource sharing are examined.

  2. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    PubMed Central

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion-tracking-based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we conducted a user study that investigated whether robot-produced iconic gestures are comprehensible and are integrated with speech. Outcomes for robot-performed gestures were compared directly to those for gestures produced by a human actor, using a within-participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  3. Integrated HTA-FMEA/FMECA methodology for the evaluation of robotic system in urology and general surgery.

    PubMed

    Frosini, Francesco; Miniati, Roberto; Grillone, Saverio; Dori, Fabrizio; Gentili, Guido Biffi; Belardinelli, Andrea

    2016-11-14

    The following study proposes and tests an integrated methodology involving Health Technology Assessment (HTA) and Failure Modes, Effects and Criticality Analysis (FMECA) for the assessment of specific aspects of robotic surgery involving safety, process and technology. The integrated methodology consists of the application of specific techniques from HTA combined with typical reliability-engineering models such as FMEA/FMECA. The study also included on-site data collection and interviews with medical personnel. The total number of robotic procedures included in the analysis was 44: 28 for urology and 16 for general surgery. The main outcomes refer to the comparative evaluation between robotic, laparoscopic and open surgery; risk analysis and mitigation interventions come from the FMECA application. The small sample size available for the study represents an important bias, especially for the reliability of the clinical outcomes. Despite this, the study seems to confirm a better trend for robotic surgical times in comparison with the open technique, as well as confirming the clinical benefits of robotics in urology. A more complex situation is observed for general surgery, where the directly measured clinical benefit of robotics is the lowest blood transfusion rate.

  4. An adaptive inverse kinematics algorithm for robot manipulators

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
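
    The MRAC derivation of the paper is not reproduced here. A closely related, simpler idea, offered purely as an illustrative sketch, is to solve inverse kinematics online with a Jacobian estimate that is corrected from measured end-effector motion (a Broyden-style secant update), which likewise needs no a priori kinematic model when Cartesian sensing is available; the gains, damping, and toy 2-link plant below are assumptions.

    ```python
    import numpy as np

    def fk_2link(q, l1=1.0, l2=0.8):
        """Toy 2-link forward kinematics, used only to generate 'measurements'."""
        return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                         l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

    def ik_step(J_hat, x_meas, x_des, alpha=0.3, damping=0.05):
        """Damped least-squares joint correction using the current Jacobian estimate."""
        e = x_des - x_meas
        return alpha * J_hat.T @ np.linalg.solve(J_hat @ J_hat.T + damping * np.eye(len(e)), e)

    def broyden_update(J_hat, dq, dx_meas):
        """Secant correction so that J_hat @ dq better matches the measured motion dx_meas."""
        return J_hat + np.outer(dx_meas - J_hat @ dq, dq) / (dq @ dq + 1e-9)

    q, J_hat = np.array([0.3, 0.5]), np.eye(2)     # deliberately wrong initial Jacobian
    x_des = np.array([1.0, 1.0])
    for _ in range(300):
        x = fk_2link(q)
        dq = ik_step(J_hat, x, x_des)
        J_hat = broyden_update(J_hat, dq, fk_2link(q + dq) - x)
        q = q + dq
    print(np.round(fk_2link(q), 3))                # should approach x_des for this toy case
    ```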

  5. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    The development of a computer problem solving system is reported that considers physical problems faced by an artificial robot moving around in a complex environment. Fundamental interaction constraints with a real environment are simulated for the robot by visual scan and creation of an internal environmental model. The programming system used in constructing the problem solving system for the simulated robot and its simulated world environment is outlined together with the task that the system is capable of performing. A very general framework for understanding the relationship between an observed behavior and an adequate description of that behavior is included.

  6. Analysis hierarchical model for discrete event systems

    NASA Astrophysics Data System (ADS)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed as a network spanning from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net ++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event systems are a pragmatic tool for modelling industrial systems. To highlight auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and the sections are analysed successively. Simulating the proposed robotic system using timed Petri nets offers the opportunity to view the robot's timing. By applying the model to robotic transport of goods and measuring transmission times on the spot, graphs are obtained showing the average time for the transport activity for individual sets of finished products.
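
    The Visual Object Net ++ model itself is not shown in the record. As a minimal, hypothetical illustration of the timed place/transition style of modelling described above, the sketch below simulates a tiny net in which a robot picks up a part, transports it, and returns to idle; the places, transitions, and delays are invented for the example.

    ```python
    # Minimal timed place/transition net: a part is loaded, transported, and unloaded.
    # Places hold token counts; a transition fires when all its input places are marked,
    # consuming tokens immediately and producing output tokens after a fixed delay.
    import heapq

    places = {"part_ready": 1, "robot_idle": 1, "in_transport": 0, "done": 0}
    transitions = {
        "start_transport": {"in": ["part_ready", "robot_idle"], "out": ["in_transport"], "delay": 2.0},
        "finish_transport": {"in": ["in_transport"], "out": ["done", "robot_idle"], "delay": 3.0},
    }

    def enabled(name):
        return all(places[p] > 0 for p in transitions[name]["in"])

    t_now, events = 0.0, []          # event queue of (completion_time, transition_name)
    while True:
        for name, tr in transitions.items():
            if enabled(name):
                for p in tr["in"]:
                    places[p] -= 1                       # consume input tokens now
                heapq.heappush(events, (t_now + tr["delay"], name))
        if not events:
            break
        t_now, name = heapq.heappop(events)
        for p in transitions[name]["out"]:
            places[p] += 1                               # produce output tokens after the delay
        print(f"t={t_now:.1f}: {name} completed, marking={places}")
    ```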

  7. Exploring the Possibility of Using Humanoid Robots as Instructional Tools for Teaching a Second Language in Primary School

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Lee, Jih-Hsien; Chao, Po-Yao; Wang, Chin-Yeh; Chen, Gwo-Dong

    2010-01-01

    As robot technologies develop, many researchers have tried to use robots to support education. Studies have shown that robots can help students develop problem-solving abilities and learn computer programming, mathematics, and science. However, few studies discuss the use of robots to facilitate the teaching of second languages. We discuss whether…

  8. Big Robots for Little Kids: Investigating the Role of Scale in Early Childhood Robotics Kits

    NASA Astrophysics Data System (ADS)

    Vizner, Miki Z.

    Couch fort and refrigerator box constructions are staples of early childhood play in American culture. Can this large-scale fantasy type of play be leveraged to facilitate computational thinking? This thesis looks at the ways Kindergarteners (age 5-6) use two variations of the KIBO robotics platform in their play and learning. The first is the standard KIBO kit developed by the DevTech research group at Tufts University and commercialized by KinderLab Robotics. The second, created by the author, is 100 times bigger and can be ridden by children and adults. Specifically, this study addresses the research question "How are children's experiences with big-KIBO different from KIBO?" To do so, this thesis presents two analytical tools that were assembled conceptually from the literature and the author's experiences with KIBO, examined using the data collected in this study, refined, and used as frameworks for understanding the data. They are a developmental model of programming with KIBO and an operationalization of Bers's (2018) powerful ideas of computational thinking when using KIBO. Vignettes from the data are presented and analyzed using these frameworks, and content and structural play themes are extracted from additional vignettes with each robot. In this study there are no clear differences in the ways children engage in computational thinking or develop their ability to program, but there appear to be differences in the ways children play with the robots, suggesting that a larger robot offers new opportunities and pathways for children to engage in computational thinking tasks. This study makes a case for the importance of thinking developmentally about computational thinking. Connections to literature and theory as well as suggestions for future work, both for children and designers, are discussed.

  9. An Interdisciplinary Field Robotics Program for Undergraduate Computer Science and Engineering Education

    ERIC Educational Resources Information Center

    Kitts, Christopher; Quinn, Neil

    2004-01-01

    Santa Clara University's Robotic Systems Laboratory conducts an aggressive robotic development and operations program in which interdisciplinary teams of undergraduate students build and deploy a wide range of robotic systems, ranging from underwater vehicles to spacecraft. These year-long projects expose students to the breadth of and…

  10. Robotics for Computer Scientists: What's the Big Idea?

    ERIC Educational Resources Information Center

    Touretzky, David S.

    2013-01-01

    Modern robots, like today's smartphones, are complex devices with intricate software systems. Introductory robot programming courses must evolve to reflect this reality, by teaching students to make use of the sophisticated tools their robots provide rather than reimplementing basic algorithms. This paper focuses on teaching with Tekkotsu, an open…

  11. Robot, computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1973-01-01

    The TENEX computer system, the ARPA network, and computer language design technology were applied to support the complex system programs. By combining the pragmatic and theoretical aspects of robot development, an approach is created which is grounded in realism, but which also has at its disposal the power that comes from looking at complex problems from an abstract analytical point of view.

  12. A Visual Tool for Computer Supported Learning: The Robot Motion Planning Example

    ERIC Educational Resources Information Center

    Elnagar, Ashraf; Lulu, Leena

    2007-01-01

    We introduce an effective computer aided learning visual tool (CALVT) to teach graph-based applications. We present the robot motion planning problem as an example of such applications. The proposed tool can be used to simulate and/or further to implement practical systems in different areas of computer science such as graphics, computational…

  13. Approximate analytical solutions to the double-stance dynamics of the lossy spring-loaded inverted pendulum.

    PubMed

    Shahbazi, Mohammad; Saranlı, Uluç; Babuška, Robert; Lopes, Gabriel A D

    2016-12-05

    This paper introduces approximate time-domain solutions to the otherwise non-integrable double-stance dynamics of the 'bipedal' spring-loaded inverted pendulum (B-SLIP) in the presence of non-negligible damping. We first introduce an auxiliary system whose behavior under certain conditions is approximately equivalent to the B-SLIP in double-stance. Then, we derive approximate solutions to the dynamics of the new system following two different methods: (i) updated-momentum approach that can deal with both the lossy and lossless B-SLIP models, and (ii) perturbation-based approach following which we only derive a solution to the lossless case. The prediction performance of each method is characterized via a comprehensive numerical analysis. The derived representations are computationally very efficient compared to numerical integrations, and, hence, are suitable for online planning, increasing the autonomy of walking robots. Two application examples of walking gait control are presented. The proposed solutions can serve as instrumental tools in various fields such as control in legged robotics and human motion understanding in biomechanics.
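
    The closed-form approximations themselves are not reproduced here. For orientation, the lossy single-leg SLIP stance dynamics that they approximate (polar coordinates about the toe, with linear leg damping) can be integrated numerically as a reference; the mass, stiffness, damping, and touchdown state below are illustrative values, not those used in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Lossy SLIP stance dynamics in polar coordinates about the toe:
    #   r  : leg length,  th : leg angle from the vertical
    #   m r'' = m r th'^2 - m g cos(th) + k (r0 - r) - c r'
    #   m r^2 th'' = m g r sin(th) - 2 m r r' th'
    m, k, c, r0, g = 80.0, 2.0e4, 80.0, 1.0, 9.81      # illustrative parameters

    def slip_stance(t, s):
        r, dr, th, dth = s
        ddr = r * dth**2 - g * np.cos(th) + (k * (r0 - r) - c * dr) / m
        ddth = (g * np.sin(th) - 2.0 * dr * dth) / r
        return [dr, ddr, dth, ddth]

    # touchdown: leg at rest length with the toe ahead of the body, which moves forward/down
    s0 = [r0, -0.4, -0.3, 1.5]
    liftoff = lambda t, s: s[0] - r0                    # leg back at rest length
    liftoff.terminal, liftoff.direction = True, 1
    sol = solve_ivp(slip_stance, (0.0, 1.0), s0, events=liftoff, max_step=1e-3)
    print(f"stance ended at t = {sol.t[-1]*1e3:.0f} ms, "
          f"leg angle = {np.degrees(sol.y[2, -1]):.1f} deg")
    ```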

  14. Swarming Robot Design, Construction and Software Implementation

    NASA Technical Reports Server (NTRS)

    Stolleis, Karl A.

    2014-01-01

    In this paper is presented an overview of the hardware design, construction overview, software design and software implementation for a small, low-cost robot to be used for swarming robot development. In addition to the work done on the robot, a full simulation of the robotic system was developed using Robot Operating System (ROS) and its associated simulation. The eventual use of the robots will be exploration of evolving behaviors via genetic algorithms and builds on the work done at the University of New Mexico Biological Computation Lab.

  15. The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics.

    PubMed

    Chinellato, Eris; Del Pobil, Angel P

    2009-06-01

    The topic of vision-based grasping is being widely studied in humans and in other primates using various techniques and with different goals. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view on the subject. A detailed description of the principal sensorimotor processes and the brain areas involved is provided following a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.

  16. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration

    PubMed Central

    Losada, Diego P.; Fernández, Joaquín L.; Paz, Enrique; Sanz, Rafael

    2017-01-01

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead. PMID:28467381
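
    The frame layout used on these robots is not given in the record. Purely as an illustration of the pattern, the sketch below uses the python-can package (an assumption, not something the article names; version 4 API) with its hardware-free virtual interface to pass a made-up odometry frame between a sensor node and a controller node.

    ```python
    import struct
    import can  # python-can; the 'virtual' interface needs no CAN hardware

    # Two nodes on the same virtual channel: a sensor node publishing wheel odometry
    # frames and a controller node consuming them. The ID and encoding are illustrative,
    # not those used on the Rato/BellBot/WatchBot robots.
    ODOM_ID = 0x101

    sensor_node = can.Bus(interface="virtual", channel="robot_bus")
    control_node = can.Bus(interface="virtual", channel="robot_bus")

    # sensor node: pack left/right wheel ticks as two little-endian int32s (8 data bytes)
    payload = struct.pack("<ii", 1234, 1301)
    sensor_node.send(can.Message(arbitration_id=ODOM_ID, data=payload, is_extended_id=False))

    # controller node: poll the bus and decode the frames it is interested in
    msg = control_node.recv(timeout=1.0)
    if msg is not None and msg.arbitration_id == ODOM_ID:
        left, right = struct.unpack("<ii", msg.data)
        print(f"odometry frame: left={left} ticks, right={right} ticks")

    sensor_node.shutdown()
    control_node.shutdown()
    ```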

  17. Distributed and Modular CAN-Based Architecture for Hardware Control and Sensor Data Integration.

    PubMed

    Losada, Diego P; Fernández, Joaquín L; Paz, Enrique; Sanz, Rafael

    2017-05-03

    In this article, we present a CAN-based (Controller Area Network) distributed system to integrate sensors, actuators and hardware controllers in a mobile robot platform. With this work, we provide a robust, simple, flexible and open system to make hardware elements or subsystems communicate, that can be applied to different robots or mobile platforms. Hardware modules can be connected to or disconnected from the CAN bus while the system is working. It has been tested in our mobile robot Rato, based on a RWI (Real World Interface) mobile platform, to replace the old sensor and motor controllers. It has also been used in the design of two new robots: BellBot and WatchBot. Currently, our hardware integration architecture supports different sensors, actuators and control subsystems, such as motor controllers and inertial measurement units. The integration architecture was tested and compared with other solutions through a performance analysis of relevant parameters such as transmission efficiency and bandwidth usage. The results conclude that the proposed solution implements a lightweight communication protocol for mobile robot applications that avoids transmission delays and overhead.

  18. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Artificial intelligence concepts are applied to robotics. Artificial neural networks, expert systems and laser imaging techniques for autonomous space robots are being studied. A computer graphics laser range finder simulator developed by Wu has been used by Weiland and Norwood to study use of artificial neural networks for path planning and obstacle avoidance. Interest is expressed in applications of CLIPS, NETS, and Fuzzy Control. These applications are applied to robot navigation.

  19. A Guide for Developing Human-Robot Interaction Experiments in the Robotic Interactive Visualization and Experimentation Technology (RIVET) Simulation

    DTIC Science & Technology

    2016-05-01

    US Army Research Laboratory report ARL-TR-7683 (May 2016): a guide for developing human-robot interaction experiments in the Robotic Interactive Visualization and Experimentation Technology (RIVET) simulation. Only fragments of the report text are indexed in this record; they cite Kunkler (2006) on the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback) and Davies, "A review of robotics in surgery," Proceedings of the Institution of Mechanical Engineers, Part H.

  20. A conceptual cognitive architecture for robots to learn behaviors from demonstrations in robotic aid area.

    PubMed

    Tan, Huan; Liang, Chen

    2011-01-01

    This paper proposes a conceptual hybrid cognitive architecture for cognitive robots to learn behaviors from demonstrations in robotic aid situations. Unlike current cognitive architectures, this architecture concentrates on the requirements of safety, interaction, and non-centralized processing in robotic aid situations. Imitation learning technologies for cognitive robots have been integrated into this architecture for rapidly transferring knowledge and skills between human teachers and robots.

  1. An Integrated Framework for Human-Robot Collaborative Manipulation.

    PubMed

    Sheng, Weihua; Thobbi, Anand; Gu, Ye

    2015-10-01

    This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
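
    The actual controllers and motion predictor are not specified in enough detail here to reproduce. The fragment below is only a hypothetical sketch of the confidence-gated switch between proactive and reactive behavior that the abstract describes, with made-up signal names, gain, and threshold.

    ```python
    def blended_command(table_tilt, predicted_human_vel, confidence,
                        conf_threshold=0.7, k_tilt=2.0):
        """Illustrative leader/follower switch for a joint table-lifting task.
        The reactive term levels the table; the proactive term moves with the
        predicted partner motion. Gains, threshold and signal names are assumptions."""
        reactive = -k_tilt * table_tilt          # drive the table tilt toward zero
        proactive = predicted_human_vel          # anticipate the predicted human motion
        if confidence >= conf_threshold:
            return proactive + reactive          # robot leads (proactive + reactive)
        return reactive                          # robot follows, only reacting

    print(blended_command(table_tilt=0.05, predicted_human_vel=0.10, confidence=0.9))
    print(blended_command(table_tilt=0.05, predicted_human_vel=0.10, confidence=0.3))
    ```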

  2. Optimization of power utilization in multimobile robot foraging behavior inspired by honeybees system.

    PubMed

    Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari

    2014-01-01

    Deploying large numbers of mobile robots that can interact with each other produces swarm-intelligent behavior. However, mobile robots normally run on a finite energy resource supplied by a finite battery, and this limitation has traditionally required human intervention to recharge the batteries. Sharing information among the mobile robots is one potential way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system, inspired by the foraging of honeybees, applied to a multi-mobile-robot scenario. This integrated approach caters for both the working and foraging stages, with known or unknown power station locations. A swarm of honeybee-inspired mobile robots is simulated to explore and identify the power station for battery recharging, and the mobile robots share the location of the power station with each other. The results show that the mobile robots consume less energy and less time when they cooperate with each other during the foraging process, and that optimizing the foraging behavior results in the mobile robots spending more time doing real work.

  3. Optimization of Power Utilization in Multimobile Robot Foraging Behavior Inspired by Honeybees System

    PubMed Central

    Ahmad, Faisul Arif; Ramli, Abd Rahman; Samsudin, Khairulmizam; Hashim, Shaiful Jahari

    2014-01-01

    Deploying large numbers of mobile robots that can interact with each other produces swarm-intelligent behavior. However, mobile robots normally run on a finite energy resource supplied by a finite battery, and this limitation has traditionally required human intervention to recharge the batteries. Sharing information among the mobile robots is one potential way to overcome the limitations of previous recharging systems. A new approach is proposed based on an integrated intelligent system, inspired by the foraging of honeybees, applied to a multi-mobile-robot scenario. This integrated approach caters for both the working and foraging stages, with known or unknown power station locations. A swarm of honeybee-inspired mobile robots is simulated to explore and identify the power station for battery recharging, and the mobile robots share the location of the power station with each other. The results show that the mobile robots consume less energy and less time when they cooperate with each other during the foraging process, and that optimizing the foraging behavior results in the mobile robots spending more time doing real work. PMID:24949491

  4. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments.

    PubMed

    Monroy, Javier; Hernandez-Bennets, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-06-23

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment.

  5. GADEN: A 3D Gas Dispersion Simulator for Mobile Robot Olfaction in Realistic Environments

    PubMed Central

    Hernandez-Bennetts, Victor; Fan, Han; Lilienthal, Achim; Gonzalez-Jimenez, Javier

    2017-01-01

    This work presents a simulation framework developed under the widely used Robot Operating System (ROS) to enable the validation of robotics systems and gas sensing algorithms under realistic environments. The framework is rooted in the principles of computational fluid dynamics and filament dispersion theory, modeling wind flow and gas dispersion in 3D real-world scenarios (i.e., accounting for walls, furniture, etc.). Moreover, it integrates the simulation of different environmental sensors, such as metal oxide gas sensors, photo ionization detectors, or anemometers. We illustrate the potential and applicability of the proposed tool by presenting a simulation case in a complex and realistic office-like environment where gas leaks of different chemicals occur simultaneously. Furthermore, we accomplish quantitative and qualitative validation by comparing our simulated results against real-world data recorded inside a wind tunnel where methane was released under different wind flow profiles. Based on these results, we conclude that our simulation framework can provide a good approximation to real world measurements when advective airflows are present in the environment. PMID:28644375

  6. Virtual Sensor for Kinematic Estimation of Flexible Links in Parallel Robots

    PubMed Central

    Cabanes, Itziar; Mancisidor, Aitziber; Pinto, Charles

    2017-01-01

    The control of flexible-link parallel manipulators is still an open area of research, endpoint trajectory tracking being one of the main challenges in this type of robot. The flexibility and deformations of the limbs make the estimation of the Tool Centre Point (TCP) position a challenging task. Authors have proposed different approaches to estimate this deformation and deduce the location of the TCP. However, most of these approaches require expensive measurement systems or the use of integration methods with high computational cost. This work presents a novel approach based on a virtual sensor which, according to simulation results, can precisely estimate not only the deformation of the flexible links in control applications (less than 2% error) but also its derivatives (less than 6% error in velocity and 13% error in acceleration). The validity of the proposed Virtual Sensor is tested in a Delta Robot, where the position of the TCP is estimated based on the Virtual Sensor measurements with less than 0.03% error in comparison with the flexible approach developed in ADAMS Multibody Software. PMID:28832510

  7. Improved Collision-Detection Method for Robotic Manipulator

    NASA Technical Reports Server (NTRS)

    Leger, Chris

    2003-01-01

    An improved method has been devised for the computational prediction of a collision between (1) a robotic manipulator and (2) another part of the robot or an external object in the vicinity of the robot. The method is intended to be used to test commanded manipulator trajectories in advance so that execution of the commands can be stopped before damage is done. The method involves utilization of both (1) mathematical models of the robot and its environment constructed manually prior to operation and (2) similar models constructed automatically from sensory data acquired during operation. The representation of objects in this method is simpler and more efficient (with respect to both computation time and computer memory), relative to the representations used in most prior methods. The present method was developed especially for use on a robotic land vehicle (rover) equipped with a manipulator arm and a vision system that includes stereoscopic electronic cameras. In this method, objects are represented and collisions detected by use of a previously developed technique known in the art as the method of oriented bounding boxes (OBBs). As the name of this technique indicates, an object is represented approximately, for computational purposes, by a box that encloses its outer boundary. Because many parts of a robotic manipulator are cylindrical, the OBB method has been extended in this method to enable the approximate representation of cylindrical parts by use of octagonal or other multiple-OBB assemblies denoted oriented bounding prisms (OBPs), as in the example of Figure 1. Unlike prior methods, the OBB/OBP method does not require any divisions or transcendental functions; this feature leads to greater robustness and numerical accuracy. The OBB/OBP method was selected for incorporation into the present method because it offers the best compromise between accuracy on the one hand and computational efficiency (and thus computational speed) on the other hand.
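
    The report does not give the exact formulation, but the underlying OBB test it extends is the standard separating-axis test for oriented bounding boxes; a minimal sketch is below, and an OBP (for example, an octagonal prism around a cylindrical link) would simply be tested as the union of its member OBBs, reporting a collision if any member overlaps.

    ```python
    import numpy as np

    def obb_overlap(c_a, R_a, h_a, c_b, R_b, h_b, eps=1e-9):
        """Separating-axis test for two oriented bounding boxes.
        c_*: center (3,), R_*: columns are the box axes (3x3), h_*: half-extents (3,).
        Returns True if the boxes intersect."""
        t = c_b - c_a
        axes = [R_a[:, i] for i in range(3)] + [R_b[:, j] for j in range(3)]
        axes += [np.cross(R_a[:, i], R_b[:, j]) for i in range(3) for j in range(3)]
        for axis in axes:
            n = np.linalg.norm(axis)
            if n < eps:                       # near-parallel edge pair, degenerate axis
                continue
            axis = axis / n
            ra = sum(h_a[i] * abs(axis @ R_a[:, i]) for i in range(3))
            rb = sum(h_b[j] * abs(axis @ R_b[:, j]) for j in range(3))
            if abs(axis @ t) > ra + rb:       # found a separating axis
                return False
        return True                           # no separating axis: the boxes overlap

    if __name__ == "__main__":
        I = np.eye(3)
        half = np.full(3, 0.5)                # unit cubes
        print(obb_overlap(np.zeros(3), I, half, np.array([1.5, 0.0, 0.0]), I, half))  # False
        print(obb_overlap(np.zeros(3), I, half, np.array([0.8, 0.0, 0.0]), I, half))  # True
    ```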

  8. Satellite-map position estimation for the Mars rover

    NASA Technical Reports Server (NTRS)

    Hayashi, Akira; Dean, Thomas

    1989-01-01

    A method for locating the Mars rover using an elevation map generated from satellite data is described. In exploring its environment, the rover is assumed to generate a local rover-centered elevation map that can be used to extract information about the relative position and orientation of landmarks corresponding to local maxima. These landmarks are integrated into a stochastic map which is then matched with the satellite map to obtain an estimate of the robot's current location. The landmarks are not explicitly represented in the satellite map. The results of the matching algorithm correspond to a probabilistic assessment of whether or not the robot is located within a given region of the satellite map. By assigning a probabilistic interpretation to the information stored in the satellite map, researchers are able to provide a precise characterization of the results computed by the matching algorithm.

  9. Route planning in a four-dimensional environment

    NASA Technical Reports Server (NTRS)

    Slack, M. G.; Miller, D. P.

    1987-01-01

    Robots must be able to function in the real world. The real world involves processes and agents that move independently of the actions of the robot, sometimes in an unpredictable manner. A real-time integrated route planning and spatial representation system for planning routes through dynamic domains is presented. The system will find the safest, most efficient route through space-time as described by a set of user-defined evaluation functions. Because the route planning algorithm is highly parallel and can run on an SIMD machine in O(p) time (p is the length of a path), the system will find real-time paths through unpredictable domains when used in an incremental mode. Spatial representation, an SIMD algorithm for route planning in a dynamic domain, and results from an implementation on a traditional computer architecture are discussed.
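
    The SIMD algorithm itself is not reproduced in the record. As a rough sketch of the idea, the code below runs a dynamic-programming sweep over a time-expanded 2D grid in which every cell is relaxed at once per time step (vectorized here, standing in for one parallel update on a SIMD machine), with a user-supplied, time-varying cost map playing the role of the evaluation functions; the grid size, costs, and moving obstacle are invented for the example.

    ```python
    import numpy as np

    def plan_space_time(cost, start, horizon):
        """Cost-to-come over a time-expanded grid. cost[t] is the traversal-cost
        map at time step t (np.inf where blocked). Each iteration relaxes all
        cells simultaneously against their 4-neighbours and the 'wait' action."""
        H, W = cost[0].shape
        g = np.full((horizon + 1, H, W), np.inf)
        g[0][start] = 0.0
        for t in range(horizon):
            best = g[t].copy()                                      # wait in place
            best[1:, :]  = np.minimum(best[1:, :],  g[t][:-1, :])   # arrive from above
            best[:-1, :] = np.minimum(best[:-1, :], g[t][1:, :])    # arrive from below
            best[:, 1:]  = np.minimum(best[:, 1:],  g[t][:, :-1])   # arrive from the left
            best[:, :-1] = np.minimum(best[:, :-1], g[t][:, 1:])    # arrive from the right
            g[t + 1] = best + cost[t + 1]                           # pay the cell cost at t+1
        return g

    if __name__ == "__main__":
        T, H, W = 40, 15, 15
        cost = np.ones((T + 1, H, W))
        for t in range(T + 1):                 # an obstacle sweeping along row 7 over time
            cost[t, 7, max(0, t - 2):t + 3] = np.inf
        g = plan_space_time(cost, start=(14, 0), horizon=T)
        print("best cost to reach cell (0, 14):", g[:, 0, 14].min())
    ```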

  10. Generic robot architecture

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID

    2010-09-21

    The present invention provides methods, computer readable media, and apparatuses for a generic robot architecture providing a framework that is easily portable to a variety of robot platforms and is configured to provide hardware abstractions, abstractions for generic robot attributes, environment abstractions, and robot behaviors. The generic robot architecture includes a hardware abstraction level and a robot abstraction level. The hardware abstraction level is configured for developing hardware abstractions that define, monitor, and control hardware modules available on a robot platform. The robot abstraction level is configured for defining robot attributes and provides a software framework for building robot behaviors from the robot attributes. Each of the robot attributes includes hardware information from at least one hardware abstraction. In addition, each robot attribute is configured to substantially isolate the robot behaviors from the at least one hardware abstraction.
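
    The patent text describes the framework only at the level above. The sketch below is a loose, hypothetical rendering of the two levels (hardware abstractions feeding robot attributes, with behaviors built only from attributes); all class and method names are invented, not taken from the patent.

    ```python
    from abc import ABC, abstractmethod

    class HardwareAbstraction(ABC):
        """Hardware abstraction level: wraps one hardware module on a platform."""
        @abstractmethod
        def read(self) -> dict: ...

    class RangeSensorHAL(HardwareAbstraction):
        def __init__(self, driver):
            self.driver = driver                              # platform-specific driver object
        def read(self):
            return {"ranges": self.driver.get_ranges()}       # hypothetical driver call

    class ObstacleProximity:
        """Robot abstraction level: derives an attribute from hardware abstractions,
        isolating behaviors from the specific hardware underneath."""
        def __init__(self, *hals):
            self.hals = hals
        def value(self):
            ranges = [r for hal in self.hals for r in hal.read()["ranges"]]
            return min(ranges) if ranges else float("inf")

    def guarded_motion_behavior(proximity, stop_dist=0.5):
        """A behavior built only from robot attributes, never from raw hardware."""
        return "STOP" if proximity.value() < stop_dist else "GO"

    class FakeDriver:                                          # stand-in driver for the example
        def get_ranges(self):
            return [2.1, 0.4, 3.3]

    prox = ObstacleProximity(RangeSensorHAL(FakeDriver()))
    print(guarded_motion_behavior(prox))                       # -> "STOP" (0.4 m < 0.5 m)
    ```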

  11. An intelligent approach to welding robot selection

    NASA Astrophysics Data System (ADS)

    Milano, J.; Mauk, S. D.; Flitter, L.; Morris, R.

    1993-10-01

    In a shipyard where multiple stationary and mobile workcells are employed in the fabrication of components of complex sub-assemblies, efficient operation requires an intelligent method of scheduling jobs and selecting workcells based on optimum throughput and cost. Achieving this global solution requires the successful organization of resource availability, process requirements, and process constraints. The Off-line Planner (OLP) of the Programmable Automated Weld System (PAWS) is capable of advanced modeling of weld processes and environments as well as the generation of complete weld procedures. These capabilities involve the integration of advanced Computer Aided Design (CAD), path planning, and obstacle detection and avoidance techniques as well as the synthesis of complex design and process information. These existing capabilities provide the basis of the functionality required for the successful implementation of an intelligent weld robot selector and material flow planner. Current efforts are focused on robot selection via the dynamic routing of components to the appropriate work cells. It is proposed that this problem is a variant of the “Traveling Salesman Problem” (TSP), which has been proven to belong to a larger set of optimization problems termed nondeterministic polynomial complete (NP-complete). In this paper, a heuristic approach utilizing recurrent neural networks is explored as a rapid means of producing a near-optimal, if not optimal, weld robot selection.

  12. The TJO-OAdM robotic observatory: OpenROCS and dome control

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Francisco, Xavier; Ribas, Ignasi; Casteels, Kevin; Martín, Jonatan

    2010-07-01

    The Telescope Joan Oró at the Montsec Astronomical Observatory (TJO - OAdM) is a small-class observatory operating under completely unattended control. There are key problems to solve, in both hardware and software, when robotic control is envisaged. We present OpenROCS (ROCS stands for Robotic Observatory Control System), an open-source platform developed for the robotic control of the TJO - OAdM and similar astronomical observatories. It is a complex software architecture, composed of several applications for hardware control, event handling, environment monitoring, target scheduling, the image reduction pipeline, etc. The code is developed in Java, C++, Python and Perl. The software infrastructure used is based on the Internet Communications Engine (Ice), an object-oriented middleware that provides object-oriented remote procedure call, grid computing, and publish/subscribe functionality. We also describe the subsystem in charge of dome control: several hardware and software elements developed specially to protect the system at this identified single point of failure. It integrates redundant control and a rain detector signal for alarm triggering, and it responds autonomously in case communication with any of the control elements is lost (watchdog functionality). The self-developed control software suite (OpenROCS) and the dome control system have proven to be highly reliable.

  13. Comparison of human and humanoid robot control of upright stance.

    PubMed

    Peterka, Robert J

    2009-01-01

    There is considerable recent interest in developing humanoid robots. An important substrate for many motor actions in both humans and biped robots is the ability to maintain a statically or dynamically stable posture. Given the success of the human design, one would expect there are lessons to be learned in formulating a postural control mechanism for robots. In this study we limit ourselves to considering the problem of maintaining upright stance. Human stance control is compared to a suggested method for robot stance control called zero moment point (ZMP) compensation. Results from experimental and modeling studies suggest there are two important subsystems that account for the low- and mid-frequency (DC to approximately 1Hz) dynamic characteristics of human stance control. These subsystems are (1) a "sensory integration" mechanism whereby orientation information from multiple sensory systems encoding body kinematics (i.e. position, velocity) is flexibly combined to provide an overall estimate of body orientation while allowing adjustments (sensory re-weighting) that compensate for changing environmental conditions and (2) an "effort control" mechanism that uses kinetic-related (i.e., force-related) sensory information to reduce the mean deviation of body orientation from upright. Functionally, ZMP compensation is directly analogous to how humans appear to use kinetic feedback to modify the main sensory integration feedback loop controlling body orientation. However, a flexible sensory integration mechanism is missing from robot control leaving the robot vulnerable to instability in conditions where humans are able to maintain stance. We suggest the addition of a simple form of sensory integration to improve robot stance control. We also investigate how the biological constraint of feedback time delay influences the human stance control design. The human system may serve as a guide for improved robot control, but should not be directly copied because the constraints on robot and human control are different.
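
    Peterka's model is not reproduced here. The toy function below only illustrates the flavor of reliability-based sensory re-weighting discussed above, combining two noisy tilt estimates with inverse-variance weights; the names and numbers are illustrative assumptions.

    ```python
    def reweighted_tilt(theta_vestibular, theta_proprioceptive,
                        var_vestibular, var_proprioceptive):
        """Inverse-variance weighting of two body-tilt cues: when one channel
        becomes unreliable (larger variance), its weight automatically shrinks."""
        w_v = 1.0 / var_vestibular
        w_p = 1.0 / var_proprioceptive
        return (w_v * theta_vestibular + w_p * theta_proprioceptive) / (w_v + w_p)

    # firm surface: proprioception is trusted; sway-referenced surface: it is down-weighted
    print(reweighted_tilt(1.2, 0.8, var_vestibular=4.0, var_proprioceptive=0.5))
    print(reweighted_tilt(1.2, 0.8, var_vestibular=4.0, var_proprioceptive=16.0))
    ```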

  14. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    DTIC Science & Technology

    2009-09-01

    Thesis on gradient-based adaptive multi-robot control (Thesis Supervisor: Daniela Rus, Professor of Electrical Engineering and Computer…). Only fragments of the text are indexed in this record: the controllers are theoretically proven and implemented on multi-robot platforms, including deployment of a group of three flying robots with downward-facing cameras to monitor an environment on the ground; the dynamics are often nonlinear and coupled through a network which changes over time; and implementing multi-robot controllers requires maintaining mul…

  15. Avoiding Local Optima with Interactive Evolutionary Robotics

    DTIC Science & Technology

    2012-07-09

    Only fragments of the report text are indexed in this record. The main bottleneck in evolutionary robotics has traditionally been the time required to evolve robot controllers; however, with the continued acceleration in computational resources, the… The excerpt also illustrates how the task environment shapes evolution: placing the target object at the top of a flight of stairs selects for climbing, while suspending the robot and the target object above the ground and creating rungs between the two will…

  16. An anatomy of industrial robots and their controls

    NASA Astrophysics Data System (ADS)

    Luh, J. Y. S.

    1983-02-01

    The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are related to continuous process controls, the use of transfer conveyor methods, and the employment of programmable automation for the low-volume batch production of discrete parts. Industrial robots, which are defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, the robots perform tasks of arc welding, paint spraying, or foundry operation. One may assign a robot to perform a variety of job assignments simply by changing the appropriate computer program. The present investigation is concerned with an evaluation of the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, special tooling, or even automatic tooling.

  17. An integrated design and fabrication strategy for entirely soft, autonomous robots.

    PubMed

    Wehner, Michael; Truby, Ryan L; Fitzgerald, Daniel J; Mosadegh, Bobak; Whitesides, George M; Lewis, Jennifer A; Wood, Robert J

    2016-08-25

    Soft robots possess many attributes that are difficult, if not impossible, to achieve with conventional robots composed of rigid materials. Yet, despite recent advances, soft robots must still be tethered to hard robotic control systems and power sources. New strategies for creating completely soft robots, including soft analogues of these crucial components, are needed to realize their full potential. Here we report the untethered operation of a robot composed solely of soft materials. The robot is controlled with microfluidic logic that autonomously regulates fluid flow and, hence, catalytic decomposition of an on-board monopropellant fuel supply. Gas generated from the fuel decomposition inflates fluidic networks downstream of the reaction sites, resulting in actuation. The body and microfluidic logic of the robot are fabricated using moulding and soft lithography, respectively, and the pneumatic actuator networks, on-board fuel reservoirs and catalytic reaction chambers needed for movement are patterned within the body via a multi-material, embedded 3D printing technique. The fluidic and elastomeric architectures required for function span several orders of magnitude from the microscale to the macroscale. Our integrated design and rapid fabrication approach enables the programmable assembly of multiple materials within this architecture, laying the foundation for completely soft, autonomous robots.

  18. TRICCS: A proposed teleoperator/robot integrated command and control system for space applications

    NASA Technical Reports Server (NTRS)

    Will, R. W.

    1985-01-01

    Robotic systems will play an increasingly important role in space operations. An integrated command and control system based on the requirements of space-related applications and incorporating features necessary for the evolution of advanced goal-directed robotic systems is described. These features include: interaction with a world model or domain knowledge base, sensor feedback, multiple-arm capability and concurrent operations. The system makes maximum use of manual interaction at all levels for debug, monitoring, and operational reliability. It is shown that the robotic command and control system may most advantageously be implemented as packages and tasks in Ada.

  19. Integrated Planning for Telepresence with Time Delays

    NASA Technical Reports Server (NTRS)

    Johnston, Mark D.; Rabe, Kenneth J.

    2006-01-01

    Integrated planning and execution of teleoperations in space with time delays is shown. The topics include: 1) The Problem; 2) Future Robot Surgery? 3) Approach Overview; 4) Robonaut; 5) Normal Planning and Execution; 6) Planner Context; 7) Implementation; 8) Use of JSHOP2; 9) Monitoring and Testing GUI; 10) Normal sequence: first the supervisor acts; 11) then the robot; 12) Robot might be late; 13) Supervisor can work ahead; 14) Deviations from Plan; 15) Robot State Change Example; 16) Accomplished goals skipped in replan; 17) Planning continuity; 18) Supervisor Deviation From Plan; 19) Intentional Deviation; and 20) Infeasible states.

  20. A telemedicine system for enabling teaching activities.

    PubMed

    Masero, V; Sanchez, F M; Uson, J

    2000-01-01

    In order to improve the distance teaching of minimally invasive surgery techniques, an integrated system has been developed. It comprises a telecommunications system, a server, a workstation, some medical peripherals and several computer applications developed in the Minimally Invasive Surgery Centre. The latest peripherals, such as robotized teleoperating systems for telesurgery and virtual reality peripherals, have been added. The visualization of the zone to be treated, along with the teacher's explanations, enables the student to understand the procedures of the operation much better.

  1. Decentralized adaptive control

    NASA Technical Reports Server (NTRS)

    Oh, B. J.; Jamshidi, M.; Seraji, H.

    1988-01-01

    A decentralized adaptive control scheme is proposed to stabilize and track nonlinear, interconnected subsystems with unknown parameters. The adaptation of the controller gains is derived using model reference adaptive control theory based on Lyapunov's direct method. The adaptive gains consist of sigma, proportional, and integral combinations of the measured and reference values of the corresponding subsystem. The proposed control is applied to the joint control of a two-link robot manipulator, and the performance observed in computer simulation agrees with the theoretical predictions.
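
    The adaptation law summarized above can be illustrated with a per-joint sketch in Python. This is a minimal, hypothetical reading of a sigma-plus-proportional-plus-integral gain adaptation acting on a weighted tracking error; the class name, the numeric gains, and the error weighting are illustrative assumptions, not the controller derived in the paper.

    ```python
    class DecentralizedAdaptiveJoint:
        """Sketch of one decentralized adaptive joint controller.

        The feedback gain is adapted through a sigma (leakage) term plus
        proportional and integral terms of a weighted tracking error, loosely
        mirroring the adaptation law summarized in the abstract. All numeric
        gains and the error weighting are illustrative assumptions.
        """

        def __init__(self, k0=1.0, sigma=0.05, delta_p=2.0, delta_i=0.5, dt=1e-3):
            self.k = k0          # current adaptive feedback gain
            self.integral = 0.0  # running integral of the weighted error
            self.sigma = sigma   # leakage keeps the integral (and gain) bounded
            self.delta_p = delta_p
            self.delta_i = delta_i
            self.dt = dt

        def update(self, q, qd, q_ref, qd_ref):
            # Weighted tracking error of this joint only (no coupling terms).
            e = (q_ref - q) + 0.1 * (qd_ref - qd)
            # Sigma-modified proportional-plus-integral gain adaptation.
            self.integral += (e - self.sigma * self.integral) * self.dt
            self.k += (self.delta_p * e + self.delta_i * self.integral) * self.dt
            # Control torque produced by the adapted gain acting on the error.
            return self.k * e
    ```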

  2. Emergent Intelligent Behavior through Integrated Investigation of Embodied Natural Language, Reasoning, Learning, Computer Vision, and Robotic Manipulation

    DTIC Science & Technology

    2011-10-11

    developed a method for determining the structure (component logs and their 3D placement) of a LINCOLN LOG assembly from a single image from an uncalibrated...small a class of components. Moreover, we focus on determining the precise pose and structure of an assembly, including the 3D pose of each...medial axes are parallel to the work surface. Thus valid structures have logs on (Fig. 1 caption: The 3D geometric shape parameters of LINCOLN LOGS.)

  3. Experiences with the JPL telerobot testbed: Issues and insights

    NASA Technical Reports Server (NTRS)

    Stone, Henry W.; Balaram, Bob; Beahan, John

    1989-01-01

    The Jet Propulsion Laboratory's (JPL) Telerobot Testbed is an integrated robotic testbed used to develop, implement, and evaluate the performance of advanced concepts in autonomous, tele-autonomous, and tele-operated control of robotic manipulators. Using the Telerobot Testbed, researchers demonstrated several of the capabilities and technological advances in the control and integration of robotic systems which have been under development at JPL for several years. In particular, the Telerobot Testbed was recently employed to perform a nearly completely automated, end-to-end, satellite grapple and repair sequence. The task of integrating existing as well as new concepts in robot control into the Telerobot Testbed has been a very difficult and time-consuming one. Now that researchers have completed the first major milestone (i.e., the end-to-end demonstration), it is important to reflect on these experiences and to collect the knowledge that has been gained so that improvements can be made to the existing system. These experiences are also believed to be of value to others in the robotics community. Therefore, the primary objective here is to use the Telerobot Testbed as a case study to identify real problems and technological gaps which exist in the areas of robotics and, in particular, systems integration. Such problems have surely hindered the development of what could reasonably be called an intelligent robot. In addition to identifying such problems, researchers briefly discuss what approaches have been taken to resolve them or, in several cases, to circumvent them until better approaches can be developed.

  4. Robot Manipulations: A Synergy of Visualization, Computation and Action for Spatial Instruction

    ERIC Educational Resources Information Center

    Verner, Igor M.

    2004-01-01

    This article considers the use of a learning environment, RoboCell, where manipulations of objects are performed by robot operations specified through the learner's application of mathematical and spatial reasoning. A curriculum is proposed relating to robot kinematics and point-to-point motion, rotation of objects, and robotic assembly of spatial…

  5. Experiential Learning of Electronics Subject Matter in Middle School Robotics Courses

    ERIC Educational Resources Information Center

    Rihtaršic, David; Avsec, Stanislav; Kocijancic, Slavko

    2016-01-01

    The purpose of this paper is to investigate whether the experiential learning of electronics subject matter is effective in the middle school open learning of robotics. Electronics is often ignored in robotics courses. Since robotics courses are typically comprised of computer-related subjects, and mechanical and electrical engineering, these…

  6. Robotics: Instructional Manual. The North Dakota High Technology Mobile Laboratory Project.

    ERIC Educational Resources Information Center

    Auer, Herbert J.

    This instructional manual contains 20 learning activity packets for use in a workshop on robotics. The lessons cover the following topics: safety considerations in robotics; introduction to technology-level and coordinate-systems categories; the teach pendant (a hand-held computer, usually attached to the robot controller, with which the operator…

  7. The Snackbot: Documenting the Design of a Robot for Long-term Human-Robot Interaction

    DTIC Science & Technology

    2009-03-01

    distributed robots. Proceedings of the Computer Supported Cooperative Work Conference’02. NY: ACM Press. [18] Kanda, T., Takayuki, H., Eaton, D., and...humanoid robots. Proceedings of HRI’06. New York, NY: ACM Press, 351-352. [23] Nabe, S., Kanda, T., Hiraki, K., Ishiguro, H., Kogure, K., and Hagita

  8. Adaptive Tracking Control for Robots With an Interneural Computing Scheme.

    PubMed

    Tsai, Feng-Sheng; Hsu, Sheng-Yi; Shih, Mau-Hsiang

    2018-04-01

    Adaptive tracking control of mobile robots requires the ability to follow a trajectory generated by a moving target. The conventional analysis of adaptive tracking uses energy minimization to study the convergence and robustness of the tracking error when the mobile robot follows a desired trajectory. However, in the case that the moving target generates trajectories with uncertainties, a common Lyapunov-like function for energy minimization may be extremely difficult to determine. Here, to solve the adaptive tracking problem with uncertainties, we wish to implement an interneural computing scheme in the design of a mobile robot for behavior-based navigation. The behavior-based navigation adopts an adaptive plan of behavior patterns learning from the uncertainties of the environment. The characteristic feature of the interneural computing scheme is the use of neural path pruning with rewards and punishment interacting with the environment. On this basis, the mobile robot can be exploited to change its coupling weights in paths of neural connections systematically, which can then inhibit or enhance the effect of flow elimination in the dynamics of the evolutionary neural network. Such dynamical flow translation ultimately leads to robust sensory-to-motor transformations adapting to the uncertainties of the environment. A simulation result shows that the mobile robot with the interneural computing scheme can perform fault-tolerant behavior of tracking by maintaining suitable behavior patterns at high frequency levels.

  9. Social robots as embedded reinforcers of social behavior in children with autism.

    PubMed

    Kim, Elizabeth S; Berkovits, Lauren D; Bernier, Emily P; Leyzberg, Dan; Shic, Frederick; Paul, Rhea; Scassellati, Brian

    2013-05-01

    In this study we examined the social behaviors of 4- to 12-year-old children with autism spectrum disorders (ASD; N = 24) during three triadic interactions with an adult confederate and an interaction partner, where the interaction partner varied randomly among (1) another adult human, (2) a touchscreen computer game, and (3) a social dinosaur robot. Children spoke more in general, and directed more speech to the adult confederate, when the interaction partner was a robot, as compared to a human or computer game interaction partner. Children spoke as much to the robot as to the adult interaction partner. This study provides the largest demonstration of social human-robot interaction in children with autism to date. Our findings suggest that social robots may be developed into useful tools for social skills and communication therapies, specifically by embedding social interaction into intrinsic reinforcers and motivators.

  10. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  11. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problems of the inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm that overcomes the recurrence problem of the Newton-Euler equations of motion and achieves the time lower bound of O(log2 n) has also been developed.
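
    The O(log2 n) bound comes from turning the first-order recurrence in the Newton-Euler propagation into a parallel prefix (scan) over affine maps. The sketch below is a hypothetical illustration of that idea, not the authors' p-fold algorithm: it applies recursive doubling to a scalar recurrence x[i] = a[i]*x[i-1] + b[i]. With one processor per element, each doubling sweep runs fully in parallel, so about log2 n sweeps suffice.

    ```python
    import numpy as np

    def scan_linear_recurrence(a, b, x_init=0.0):
        """Recursive-doubling (Hillis-Steele) scan for x[i] = a[i]*x[i-1] + b[i].

        Each element carries an affine map (A, B); one doubling sweep composes
        it with the map 'step' positions earlier. After ceil(log2 n) sweeps,
        element i holds the composition of maps 0..i, so x[i] = A[i]*x_init + B[i].
        The inner loop is written sequentially here, but it has no dependencies
        within a sweep, which is what permits O(log2 n) parallel time.
        """
        A = np.asarray(a, dtype=float).copy()
        B = np.asarray(b, dtype=float).copy()
        n = len(A)
        step = 1
        while step < n:
            newA, newB = A.copy(), B.copy()
            for i in range(step, n):
                newA[i] = A[i] * A[i - step]
                newB[i] = A[i] * B[i - step] + B[i]
            A, B = newA, newB
            step *= 2
        return A * x_init + B

    # Example: for a=[2, 3], b=[1, 1], x_init=1 the sequential recurrence gives
    # x0 = 2*1 + 1 = 3 and x1 = 3*3 + 1 = 10; the scan returns [3., 10.].
    ```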

  12. The Human-Robot Interaction Operating System

    NASA Technical Reports Server (NTRS)

    Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda

    2006-01-01

    In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.

  13. Computers in Science Fiction.

    ERIC Educational Resources Information Center

    Kurland, Michael

    1984-01-01

    Science fiction writers' perceptions of the "thinking machine" are examined through a review of Baum's Oz books, Heinlein's "Beyond This Horizon," science fiction magazine articles, and works about robots including Asimov's "I, Robot." The future of computers in science fiction is discussed and suggested readings are listed. (MBR)

  14. Speed control for a mobile robot

    NASA Astrophysics Data System (ADS)

    Kolli, Kaylan C.; Mallikarjun, Sreeram; Kola, Krishnamohan; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space, and defense. The purpose of this paper is to describe exploratory research on the design of a speed control for a modular autonomous mobile robot controller. Speed control of the traction motor is essential for safe operation of a mobile robot: autonomous operation requires the vehicle to run safely, without runaway or collision. A mobile robot test-bed has been constructed on a golf cart base. The computer-controlled speed control has been implemented and works with guidance provided by a vision system and obstacle avoidance provided by ultrasonic sensor systems. A 486 computer supervises the speed control through a 3-axis motion controller, and the traction motor is driven via the computer by an EV-1 speed control. Testing of the system was done both in the lab and on an outdoor course with positive results. The design is a prototype, and suggestions for improvements are given. The autonomous speed controller is applicable to any computer-controlled electric-drive mobile vehicle.

  15. Three degree-of-freedom force feedback control for robotic mating of umbilical lines

    NASA Technical Reports Server (NTRS)

    Fullmer, R. Rees

    1988-01-01

    The use of robotic manipulators for the mating and demating of umbilical fuel lines to the Space Shuttle Vehicle prior to launch is investigated. Force feedback control is necessary to minimize the contact forces which develop during mating. The objective is to develop and demonstrate a working robotic force control system. Initial experimental force control tests with an ASEA IRB-90 industrial robot using the system's Adaptive Control capabilities indicated that control stability would be a primary problem. An investigation of the ASEA system showed a 0.280 second software delay between force input commands and the output of command voltages to the servo system. This computational delay was identified as the primary cause of the instability. Tests on a second path into the ASEA's control computer using the MicroVax II supervisory computer showed that the time delay would be comparable, offering no stability improvement. An alternative approach was therefore developed in which the digital control system of the robot was disconnected and an analog electronic force controller was used to drive the robot's servo system directly, allowing the robot to use force feedback control while in rigid contact with a moving three-degree-of-freedom target. Tests of this approach indicated adequate force feedback control even under worst-case conditions. A strategy for combining this analog force control with the robot's digitally controlled vision system was also developed: the system switches between the digital controller when using vision control and the analog controller when using force control, depending on whether or not the mating plates are in contact.

  16. Biologically inspired robots elicit a robust fear response in zebrafish

    NASA Astrophysics Data System (ADS)

    Ladu, Fabrizio; Bartolini, Tiziana; Panitz, Sarah G.; Butail, Sachit; Macrì, Simone; Porfiri, Maurizio

    2015-03-01

    We investigate the behavioral response of zebrafish to three fear-evoking stimuli. In a binary choice test, zebrafish are exposed to a live allopatric predator, a biologically-inspired robot, and a computer-animated image of the live predator. A target tracking algorithm is developed to score zebrafish behavior. Unlike computer-animated images, the robotic and live predator elicit a robust avoidance response. Importantly, the robotic stimulus elicits more consistent inter-individual responses than the live predator. Results from this effort are expected to aid in hypothesis-driven studies on zebrafish fear response, by offering a valuable approach to maximize data-throughput and minimize animal subjects.

  17. Microbiorobots for Manipulation and Sensing

    DTIC Science & Technology

    2016-04-19

    information represent enormous potential that can be harnessed and integrated into microscale robotics and biosensor systems. The objective of the proposed program is to develop a platform that integrates bacteria with...applicable in microscale assembly systems and biosensors that require autonomous coordination of bacteria.

  18. Vertical stream curricula integration of problem-based learning using an autonomous vacuum robot in a mechatronics course

    NASA Astrophysics Data System (ADS)

    Chin, Cheng; Yue, Keng

    2011-10-01

    Difficulties in teaching a multi-disciplinary subject such as the mechatronics system design module in the Department of Mechatronics Engineering at Temasek Polytechnic arise from the gap in experience and skill among staff and students, who have different backgrounds in mechanical, computer, and electrical engineering. The department piloted a new vertical stream curricula model (VSCAM) to enhance student learning in mechatronics system design through the integration of educational activities from the first to the second year of the course. In this case study, a problem-based learning (PBL) method built around an autonomous vacuum robot in the mechatronics system design module was proposed to give students hands-on experience in mechatronics system design. The PBL activities consist of seminar sessions, weekly work, and a project presentation, providing a holistic assessment of teamwork and individual contributions. At the end of VSCAM, an integrative evaluation was conducted using confidence logs, attitude surveys, and questionnaires. The activities were well received by the participating staff and students. Hence, PBL can serve as an effective pedagogical framework for teaching multidisciplinary subjects in mechatronics engineering education if adequate guidance and support are given to staff and students.

  19. Topics in Chemical Instrumentation. Robots in the Laboratory--An Overview.

    ERIC Educational Resources Information Center

    Strimaitis, Janet R.

    1990-01-01

    Discussed are applications of robotics in the chemistry laboratory. Highlighted are issues of precision, accuracy, and system integration. Emphasized are the potential benefits of the use of robots to automate laboratory procedures. (CW)

  20. Real-time optical flow estimation on a GPU for a skid-steered mobile robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-04-01

    Accurate egomotion estimation is required for mobile robot navigation, and egomotion is often estimated using optical flow algorithms. Most modern algorithms for accurate optical flow estimation require substantial memory and processor speed, resources that the simple single-board computers controlling the motion of a robot usually do not provide. On the other hand, most modern single-board computers are equipped with an embedded GPU that can be used in parallel with the CPU to improve the performance of the optical flow estimation algorithm. This paper presents a new Z-flow algorithm for efficient computation of optical flow using an embedded GPU. The algorithm is based on phase-correlation optical flow estimation and provides real-time performance on a low-cost embedded GPU. A layered optical flow model is used, and layer segmentation is performed with a graph-cut algorithm using a time-derivative-based energy function. This approach makes the algorithm both fast and robust in low-light and low-texture conditions. The implementation of the algorithm for a Raspberry Pi Model B computer is discussed. For evaluation, the computer was mounted on a Hercules skid-steered mobile robot equipped with a monocular camera, and the evaluation was performed using hardware-in-the-loop simulation and experiments with the Hercules robot. The algorithm was also evaluated on the KITTI Optical Flow 2015 dataset. The resulting endpoint error of the optical flow calculated with the developed algorithm was low enough for navigation of the robot along the desired trajectory.
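
    As a rough illustration of the phase-correlation core of such an approach, the sketch below (Python/NumPy on the CPU, not the authors' GPU Z-flow implementation; the function name and tolerance are assumptions) recovers the translation between two frames from the peak of the normalized cross-power spectrum.

    ```python
    import numpy as np

    def phase_correlation_shift(prev, curr):
        """Estimate the integer (dy, dx) translation of 'curr' relative to
        'prev' via phase correlation. Both inputs are 2-D float arrays of the
        same shape; windowing, sub-pixel refinement, and the layered model
        are omitted for brevity."""
        F1 = np.fft.fft2(prev)
        F2 = np.fft.fft2(curr)
        cross_power = np.conj(F1) * F2
        cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
        correlation = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(correlation), correlation.shape)
        # Wrap peak coordinates to signed shifts (FFT output is circular).
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, prev.shape))
    ```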

  1. Advanced computer graphic techniques for laser range finder (LRF) simulation

    NASA Astrophysics Data System (ADS)

    Bedkowski, Janusz; Jankowski, Stanislaw

    2008-11-01

    This paper shows advanced computer graphics techniques for laser range finder (LRF) simulation. The LRF is a common sensor for unmanned ground vehicles, autonomous mobile robots, and security applications. Because the cost of the measurement system is extremely high, a simulation tool was designed. The simulation makes it possible to exercise algorithms such as obstacle avoidance [1], SLAM for robot localization [2], detection of vegetation and water obstacles in the surroundings of the robot chassis [3], and LRF measurement in a crowd of people [1]. An Axis Aligned Bounding Box (AABB) technique and an alternative technique based on CUDA (NVIDIA Compute Unified Device Architecture) are presented.
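
    A minimal sketch of the geometric core of an AABB-based LRF simulator is shown below (Python, illustrative only and not the authors' implementation; the function name and the zero-direction guard are assumptions): each simulated laser beam is cast as a ray, and the nearest AABB hit distance along the ray yields the simulated range reading.

    ```python
    import numpy as np

    def ray_aabb_intersect(origin, direction, box_min, box_max):
        """Slab-method ray/AABB intersection. Returns the distance along the
        (unit) ray direction to the nearest hit, or None if the beam misses
        the box."""
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        # Avoid division by zero for beam components parallel to a slab.
        inv_dir = 1.0 / np.where(direction == 0.0, 1e-12, direction)
        t1 = (np.asarray(box_min, dtype=float) - origin) * inv_dir
        t2 = (np.asarray(box_max, dtype=float) - origin) * inv_dir
        t_near = np.max(np.minimum(t1, t2))
        t_far = np.min(np.maximum(t1, t2))
        if t_near > t_far or t_far < 0.0:
            return None          # beam misses the box entirely
        return max(t_near, 0.0)  # range reading along the beam
    ```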

  2. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S.

    2014-01-01

    This paper presents a surgical master-slave tele-operation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, where the slave robot resides in the scanner bore and the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are presented to validate the tracking performance of the integrated system, which has a position tracking error of 0.318 mm and a sine-wave force tracking error of 2.227 N. PMID:25126446

  3. Teleoperation System with Hybrid Pneumatic-Piezoelectric Actuation for MRI-Guided Needle Insertion with Haptic Feedback.

    PubMed

    Shang, Weijian; Su, Hao; Li, Gang; Fischer, Gregory S

    2013-01-01

    This paper presents a surgical master-slave tele-operation system for percutaneous interventional procedures under continuous magnetic resonance imaging (MRI) guidance. The system consists of a piezoelectrically actuated slave robot for needle placement with an integrated fiber optic force sensor utilizing the Fabry-Perot interferometry (FPI) sensing principle. The sensor flexure is optimized and embedded in the slave robot for measuring needle insertion force. A novel, compact opto-mechanical FPI sensor interface is integrated into an MRI robot control system. By leveraging the complementary features of pneumatic and piezoelectric actuation, a pneumatically actuated haptic master robot is also developed to render the force associated with needle placement interventions to the clinician. An aluminum load cell is implemented and calibrated to close the impedance control loop of the master robot. A force-position control algorithm is developed to control the hybrid actuated system. Teleoperated needle insertion is demonstrated under live MR imaging, where the slave robot resides in the scanner bore and the user manipulates the master beside the patient outside the bore. Force and position tracking results of the master-slave robot are presented to validate the tracking performance of the integrated system, which has a position tracking error of 0.318 mm and a sine-wave force tracking error of 2.227 N.
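
    As a rough illustration of how a force-position loop can close around a position-controlled actuator, the sketch below shows one step of a generic admittance-style law in Python. This is a hypothetical example, not the controller reported in the paper; the function name, gains, and virtual mass-damper-spring parameters are assumptions.

    ```python
    def admittance_step(f_meas, f_ref, x_cmd, v_cmd, dt,
                        mass=1.0, damping=20.0, stiffness=0.0):
        """One semi-implicit Euler step of a virtual mass-damper-spring law,
            x_ddot = (f_meas - f_ref - damping*v - stiffness*x) / mass,
        mapping the measured-versus-desired force mismatch into updated
        position and velocity setpoints for the underlying position servo."""
        a = (f_meas - f_ref - damping * v_cmd - stiffness * x_cmd) / mass
        v_cmd += a * dt
        x_cmd += v_cmd * dt
        return x_cmd, v_cmd   # new setpoints handed to the position controller
    ```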

  4. Robot-assisted general surgery.

    PubMed

    Hazey, Jeffrey W; Melvin, W Scott

    2004-06-01

    With the initiation of laparoscopic techniques in general surgery, we have seen a significant expansion of minimally invasive techniques in the last 16 years. More recently, robotic-assisted laparoscopy has moved into the general surgeon's armamentarium to address some of the shortcomings of laparoscopic surgery. AESOP (Computer Motion, Goleta, CA) addressed the issue of visualization as a robotic camera holder. With the introduction of the ZEUS robotic surgical system (Computer Motion), the ability to remotely operate laparoscopic instruments became a reality. US Food and Drug Administration approval in July 2000 of the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, CA) further defined the ability of a robotic-assist device to address limitations in laparoscopy. This includes a significant improvement in instrument dexterity, dampening of natural hand tremors, three-dimensional visualization, ergonomics, and camera stability. As experience with robotic technology increased and its applications to advanced laparoscopic procedures have become more understood, more procedures have been performed with robotic assistance. Numerous studies have shown equivalent or improved patient outcomes when robotic-assist devices are used. Initially, robotic-assisted laparoscopic cholecystectomy was deemed safe, and now robotics has been shown to be safe in foregut procedures, including Nissen fundoplication, Heller myotomy, gastric banding procedures, and Roux-en-Y gastric bypass. These techniques have been extrapolated to solid-organ procedures (splenectomy, adrenalectomy, and pancreatic surgery) as well as robotic-assisted laparoscopic colectomy. In this chapter, we review the evolution of robotic technology and its applications in general surgical procedures.

  5. A Unified Approach to Motion Control of Mobile Robots

    NASA Technical Reports Server (NTRS)

    Seraji, H.

    1994-01-01

    This paper presents a simple on-line approach for motion control of mobile robots made up of a manipulator arm mounted on a mobile base. The proposed approach is equally applicable to nonholonomic mobile robots, such as rover-mounted manipulators and to holonomic mobile robots such as tracked robots or compound manipulators. The computational efficiency of the proposed control scheme makes it particularly suitable for real-time implementation.

  6. Augmented-reality integrated robotics in neurosurgery: are we there yet?

    PubMed

    Madhavan, Karthik; Kolcun, John Paul G; Chieng, Lee Onn; Wang, Michael Y

    2017-05-01

    Surgical robots have captured the interest, if not the widespread acceptance, of spinal neurosurgeons. But successful innovation, scientific or commercial, requires the majority to adopt a new practice. "Faster, better, cheaper" products should in theory conquer the market, but often fail. The psychology of change is complex, and the "follow the leader" mentality, common in the field today, lends little trust to the process of disseminating new technology. Beyond product quality, timing has proven to be a key factor in the inception, design, and execution of new technologies. Although the first robotic surgery was performed in 1985, scant progress was seen until the era of minimally invasive surgery. This movement increased neurosurgeons' dependence on navigation and fluoroscopy, intensifying the drive for enhanced precision. Outside the field of medicine, various technology companies have made great progress in popularizing co-robots ("cobots"), augmented reality, and processor chips. This has helped to ease practicing surgeons into familiarity with and acceptance of these technologies. The adoption among neurosurgeons in training is a "follow the leader" phenomenon, wherein new surgeons tend to adopt the technology used during residency. In neurosurgery today, robots are limited to computers functioning between the surgeon and patient. Their functions are confined to establishing a trajectory for navigation, with task execution solely in the surgeon's hands. In this review, the authors discuss significant untapped technologies waiting to be used for more meaningful applications. They explore the history and current manifestations of various modern technologies, and project what innovations may lie ahead.

  7. Human arm joints reconstruction algorithm in rehabilitation therapies assisted by end-effector robotic devices.

    PubMed

    Bertomeu-Motos, Arturo; Blanco, Andrea; Badesa, Francisco J; Barios, Juan A; Zollo, Loredana; Garcia-Aracil, Nicolas

    2018-02-20

    End-effector robots are commonly used in robot-assisted neuro-rehabilitation therapies for the upper limbs, where the patient's hand can be easily attached to a splint. Nevertheless, they are not able to estimate and control the kinematic configuration of the upper limb during the therapy. However, the Range of Motion (ROM), together with the clinical assessment scales, offers the therapist a comprehensive assessment. Our aim is to present a robust and stable kinematic reconstruction algorithm to accurately measure the upper limb joints using only an accelerometer placed on the upper arm. The proposed algorithm is based on the inverse of the augmented Jacobian, as in the algorithm of Papaleo et al. (Med Biol Eng Comput 53(9):815-28, 2015). However, the estimation of the elbow joint location is performed through the computation of the rotation measured by the accelerometer during the arm movement, making the algorithm more robust against shoulder movements. Furthermore, we present a method to compute the initial configuration of the upper limb necessary to start the integration method, a protocol to manually measure the upper arm and forearm lengths, and a shoulder position estimation. An optoelectronic system was used to test the accuracy of the proposed algorithm while healthy subjects performed upper limb movements holding the end effector of a seven Degrees of Freedom (DoF) robot. In addition, the previous and the proposed algorithms were studied during a neuro-rehabilitation therapy assisted by the 'PUPArm' planar robot with three post-stroke patients. The proposed algorithm achieves a Root Mean Square Error (RMSE) of 2.13 cm in the elbow joint location and 1.89 cm in the wrist joint location, with high correlation. These errors lead to an RMSE of about 3.5 degrees (mean of the seven joints), with high correlation in all joints, with respect to the real upper limb configuration acquired through the optoelectronic system. The estimation of the upper limb joints through both algorithms reveals an instability in the previous algorithm when shoulder movements appear, due to the inevitable trunk compensation in post-stroke patients. The proposed algorithm is able to accurately estimate the human upper limb joints during a neuro-rehabilitation therapy assisted by end-effector robots. In addition, the implemented protocol can be followed in a clinical environment without optoelectronic systems, using only one accelerometer attached to the upper arm. Thus, the ROM can be fully determined and could become an objective parameter for a comprehensive assessment.
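
    To make the kinematic-reconstruction idea concrete, the sketch below (Python; an illustrative, hypothetical step and not the authors' augmented-Jacobian algorithm) shows how an elbow flexion angle can be recovered from an estimated shoulder position, the wrist position reported by the end-effector robot, and the manually measured segment lengths.

    ```python
    import numpy as np

    def elbow_flexion_angle(shoulder, wrist, upper_arm_len, forearm_len):
        """Law-of-cosines estimate of elbow flexion from the shoulder-elbow-
        wrist triangle: 'shoulder' and 'wrist' are 3-D positions, and the two
        segment lengths are the manually measured upper-arm and forearm
        lengths. Returns flexion in radians (0 = fully extended arm)."""
        d = np.linalg.norm(np.asarray(wrist, float) - np.asarray(shoulder, float))
        cos_elbow = (upper_arm_len**2 + forearm_len**2 - d**2) / (
            2.0 * upper_arm_len * forearm_len)
        cos_elbow = np.clip(cos_elbow, -1.0, 1.0)  # guard against measurement noise
        # Interior elbow angle is arccos(cos_elbow); flexion is its supplement.
        return np.pi - np.arccos(cos_elbow)
    ```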

  8. Robotics, Artificial Intelligence, Computer Simulation: Future Applications in Special Education.

    ERIC Educational Resources Information Center

    Moore, Gwendolyn B.; And Others

    1986-01-01

    Describes possible applications of new technologies to special education. Discusses results of a study designed to explore the use of robotics, artificial intelligence, and computer simulations to aid people with handicapping conditions. Presents several scenarios in which specific technological advances may contribute to special education…

  9. Computer graphics testbed to simulate and test vision systems for space applications

    NASA Technical Reports Server (NTRS)

    Cheatham, John B.

    1991-01-01

    Research activity has shifted from computer graphics and vision systems to the broader scope of applying concepts of artificial intelligence to robotics. Specifically, the research is directed toward developing Artificial Neural Networks, Expert Systems, and Laser Imaging Techniques for Autonomous Space Robots.

  10. Computer-assisted hip and knee arthroplasty. Navigation and active robotic systems: an evidence-based analysis.

    PubMed

    2004-01-01

    The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. NAVIGATION-ASSISTED ARTHROPLASTY: Five studies were identified that examined navigation-assisted arthroplasty.A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1)A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2)A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3)Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that "the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. 
Longer follow-up is required to determine the respective advantages and disadvantages of both techniques."(4;5) ROBOTIC-ASSISTED ARTHROPLASTY: Four studies were identified that examined robotic-assisted arthroplasty.A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6)Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6)Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6)Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6)An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7)An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8)An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9).

  11. Computer-Assisted Hip and Knee Arthroplasty. Navigation and Active Robotic Systems

    PubMed Central

    2004-01-01

    Executive Summary Objective The Medical Advisory Secretariat undertook a review of the evidence on the effectiveness and cost-effectiveness of computer assisted hip and knee arthroplasty. The two computer assisted arthroplasty systems that are the topics of this review are (1) navigation and (2) robotic-assisted hip and knee arthroplasty. The Technology Computer-assisted arthroplasty consists of navigation and robotic systems. Surgical navigation is a visualization system that provides positional information about surgical tools or implants relative to a target bone on a computer display. Most of the navigation-assisted arthroplasty devices that are the subject of this review are licensed by Health Canada. Robotic systems are active robots that mill bone according to information from a computer-assisted navigation system. The robotic-assisted arthroplasty devices that are the subject of this review are not currently licensed by Health Canada. Review Strategy The Cochrane and International Network of Agencies for Health Technology Assessment databases did not identify any health technology assessments on navigation or robotic-assisted hip or knee arthroplasty. The MEDLINE and EMBASE databases were searched for articles published between January 1, 1996 and November 30, 2003. This search produced 367 studies, of which 9 met the inclusion criteria. Summary of Findings Navigation-Assisted Arthroplasty Five studies were identified that examined navigation-assisted arthroplasty. A Level 1 evidence study from Germany found a statistically significant difference in alignment and angular deviation between navigation-assisted and free-hand total knee arthroplasty in favour of navigation-assisted surgery. However, the endpoints in this study were short-term. To date, the long-term effects (need for revision, implant longevity, pain, functional performance) are unknown.(1) A Level 2 evidence short-term study found that navigation-assisted total knee arthroplasty was significantly better than a non-navigated procedure for one of five postoperative measured angles.(2) A Level 2 evidence short-term study found no statistically significant difference in the variation of the abduction angle between navigation-assisted and conventional total hip arthroplasty.(3) Level 3 evidence observational studies of navigation-assisted total knee arthroplasty and unicompartmental knee arthroplasty have been conducted. Two studies reported that “the follow-up of the navigated prostheses is currently too short to know if clinical outcome or survival rates are improved. Longer follow-up is required to determine the respective advantages and disadvantages of both techniques.”(4;5) Robotic-Assisted Arthroplasty Four studies were identified that examined robotic-assisted arthroplasty. 
A Level 1 evidence study revealed that there was no statistically significant difference between functional hip scores at 24 months post implantation between patients who underwent robotic-assisted primary hip arthroplasty and those that were treated with manual implantation.(6) Robotic-assisted arthroplasty had advantages in terms of preoperative planning and the accuracy of the intraoperative procedure.(6) Patients who underwent robotic-assisted hip arthroplasty had a higher dislocation rate and more revisions.(6) Robotic-assisted arthroplasty may prove effective with certain prostheses (e.g., anatomic) because their use may result in less muscle detachment.(6) An observational study (Level 3 evidence) found that the incidence of severe embolic events during hip relocation was lower with robotic arthroplasty than with manual surgery.(7) An observational study (Level 3 evidence) found that there was no significant difference in gait analyses of patients who underwent robotic-assisted total hip arthroplasty using robotic surgery compared to patients who were treated with conventional cementless total hip arthroplasty.(8) An observational study (Level 3 evidence) compared outcomes of total knee arthroplasty between patients undergoing robotic surgery and patients who were historical controls. Brief, qualitative results suggested that there was much broader variation of angles after manual total knee arthroplasty compared to the robotic technique and that there was no difference in knee functional scores or implant position at the 3 and 6 month follow-up.(9) PMID:23074452

  12. Sensing And Force-Reflecting Exoskeleton

    NASA Technical Reports Server (NTRS)

    Eberman, Brian; Fontana, Richard; Marcus, Beth

    1993-01-01

    Sensing and force-reflecting exoskeleton (SAFiRE) provides control signals to robot hand and force feedback from robot hand to human operator. Operator makes robot hand touch objects gently and manipulates them finely without exerting excessive forces. Device attaches to operator's hand; comfortable and lightweight. Includes finger exoskeleton, cable mechanical transmission, two dc servomotors, partial thumb exoskeleton, harness, amplifier box, two computer circuit boards, and software. Transduces motion of index finger and thumb. Video monitor of associated computer displays image corresponding to motion.

  13. Robotics in neurosurgery: which tools for what?

    PubMed

    Benabid, A L; Hoffmann, D; Seigneuret, E; Chabardes, S

    2006-01-01

    Robots are the tools for taking advantage of the skills of computers in achieving complicated tasks. This has been made possible owing to the "numerical image explosion", which allows us to easily obtain spatial coordinates, three-dimensional reconstruction, and multimodality imaging, including digital subtraction angiography (DSA), computed tomography (CT), magnetic resonance imaging (MRI) and magnetoencephalography (MEG), with high resolution in space, time, and tissue density. Neurosurgical robots currently available at the operating level are described. Future evolutions, indications, and ethical aspects are examined.

  14. Rapid Human-Computer Interactive Conceptual Design of Mobile and Manipulative Robot Systems

    DTIC Science & Technology

    2015-05-19

    algorithm based on Age-Fitness Pareto Optimization (AFPO) ([9]) with an additional user preference objective and a neural network-based user model, we...greater than 40, which is about 5 times further than any robot traveled in our experiments. The algorithm uses a client-server computational...architecture. The client here is an interactive program which takes a pair of controllers as input, simulates two copies of the robot with

  15. Singularity now: using the ventricular assist device as a model for future human-robotic physiology.

    PubMed

    Martin, Archer K

    2016-04-01

    In our 21st century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today's world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named "IshBot") can best be studied in ventricular assist devices - VAD.

  16. Singularity now: using the ventricular assist device as a model for future human-robotic physiology

    PubMed Central

    Martin, Archer K.

    2016-01-01

    In our 21st century world, human-robotic interactions are far more complicated than Asimov predicted in 1942. The future of human-robotic interactions includes human-robotic machine hybrids with an integrated physiology, working together to achieve an enhanced level of baseline human physiological performance. This achievement can be described as a biological Singularity. I argue that this time of Singularity cannot be met by current biological technologies, and that human-robotic physiology must be integrated for the Singularity to occur. In order to conquer the challenges we face regarding human-robotic physiology, we first need to identify a working model in today’s world. Once identified, this model can form the basis for the study, creation, expansion, and optimization of human-robotic hybrid physiology. In this paper, I present and defend the line of argument that currently this kind of model (proposed to be named “IshBot”) can best be studied in ventricular assist devices – VAD. PMID:28913480

  17. RoMPS concept review automatic control of space robot, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.

  18. Learning to Program with Personal Robots: Influences on Student Motivation

    ERIC Educational Resources Information Center

    McGill, Monica M.

    2012-01-01

    One of the goals of using robots in introductory programming courses is to increase motivation among learners. There have been several types of robots that have been used extensively in the classroom to teach a variety of computer science concepts. A more recently introduced robot designed to teach programming to novice students is the Institute…

  19. Image Mapping and Visual Attention on the Sensory Ego-Sphere

    NASA Technical Reports Server (NTRS)

    Fleming, Katherine Achim; Peters, Richard Alan, II

    2012-01-01

    The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and that will enable it to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities form a stochastic network. Then given a task, the robot finds a path in the network that leads from its current state to the goal. The SES provides a short-term memory for the cognitive functions of the robot, association of sensory and motor data via spatio-temporal coincidence, direction of the attention of the robot, navigation through spatial localization with respect to known or discovered landmarks, and structured data sharing between the robot and human team members, the individuals in multi-robot teams, or with a C3 center.

  20. Human exploration and settlement of Mars - The roles of humans and robots

    NASA Technical Reports Server (NTRS)

    Duke, Michael B.

    1991-01-01

    The scientific objectives and strategies for human settlement on Mars are examined in the context of the Space Exploration Initiative (SEI). An integrated strategy for humans and robots in the exploration and settlement of Mars is examined. Such an effort would feature robotic, telerobotic, and human-supervised robotic phases.

  1. Dancing Robots: Integrating Art, Music, and Robotics in Singapore's Early Childhood Centers

    ERIC Educational Resources Information Center

    Sullivan, Amanda; Bers, Marina Umaschi

    2018-01-01

    In recent years, Singapore has increased its national emphasis on technology and engineering in early childhood education. Their newest initiative, the Playmaker Programme, has focused on teaching robotics and coding in preschool settings. Robotics offers a playful and collaborative way for children to engage with foundational technology and…

  2. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  3. VBOT: Motivating computational and complex systems fluencies with constructionist virtual/physical robotics

    NASA Astrophysics Data System (ADS)

    Berland, Matthew W.

    As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. The primary research questions are: (1) What are the relative affordances of virtual and physical constructionist robotics systems towards computational and complex systems fluencies? (2) What can middle school students learn using computational/complex systems learning environments in a collaborative setting? (3) In what ways are these environments and activities effective in teaching students computational and complex systems fluencies?

  4. Teachers' perceptions of the benefits and the challenges of integrating educational robots into primary/elementary curricula

    NASA Astrophysics Data System (ADS)

    Khanlari, Ahmad

    2016-05-01

    Twenty-first century education systems should create an environment wherein students encounter critical learning components (such as problem-solving, teamwork, and communication skills) and embrace lifelong learning. A review of literature demonstrates that new technologies, in general, and robotics, in particular, are well suited for this aim. This study aims to contribute to the literature by studying teachers' perceptions of the effects of using robotics on students' lifelong learning skills. This study also seeks to better understand teachers' perceptions of the barriers of using robotics and the support they need. Eleven primary/elementary teachers from Newfoundland and Labrador English Schools District participated in this study. The results of this study revealed that robotics is perceived by teachers to have positive effects on students' lifelong learning skills. Furthermore, the participants indicated a number of barriers to integrate robotics into their teaching activities and expressed the support they need.

  5. Automation of Shuttle Tile Inspection - Engineering methodology for Space Station

    NASA Technical Reports Server (NTRS)

    Wiskerchen, M. J.; Mollakarimi, C.

    1987-01-01

    The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.

  6. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot taking on an expressive cartoon-like embodiment. The robot is affordable, durable, and portable so that it can be used in various settings including schools, clinics, and the home, thus enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy in which the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses the stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  7. Robotics: Using Technology to Teach New Technologies.

    ERIC Educational Resources Information Center

    Cohen, Karen C.; Meyer, Carol D.

    1984-01-01

    Discusses the development of industrial robotics training materials, considering the need for such materials, preliminary curriculum design, the Piagetian approach followed, and the uses of computer assisted instruction. A list of robotics curriculum courses (with content and audience indicated) is included. (JN)

  8. Real World Robotics.

    ERIC Educational Resources Information Center

    Clark, Lisa J.

    2002-01-01

    Introduces a project for elementary school students in which students build a robot by following instructions and then write a computer program to run their robot by using LabView graphical development software. Uses ROBOLAB curriculum which is designed for grade levels K-12. (YDS)

  9. Robotics, Artificial Intelligence, Computer Simulation: Future Applications in Special Education.

    ERIC Educational Resources Information Center

    Moore, Gwendolyn B.; And Others

    The report describes three advanced technologies--robotics, artificial intelligence, and computer simulation--and identifies the ways in which they might contribute to special education. A hybrid methodology was employed to identify existing technology and forecast future needs. Following this framework, each of the technologies is defined,…

  10. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.

    1972-01-01

    Continuing research is reported in a program aimed at the development of a robot computer problem solving system. The motivation and results are described of a theoretical investigation concerning the general properties of behavioral systems. Some of the important issues which a general theory of behavioral organization should encompass are outlined and discussed.

  11. Potential of a suite of robot/computer-assisted motivating systems for personalized, home-based, stroke rehabilitation.

    PubMed

    Johnson, Michelle J; Feng, Xin; Johnson, Laura M; Winters, Jack M

    2007-03-01

    There is a need to improve semi-autonomous stroke therapy in home environments often characterized by low supervision of clinical experts and low extrinsic motivation. Our distributed device approach to this problem consists of an integrated suite of low-cost robotic/computer-assistive technologies driven by a novel universal access software framework called UniTherapy. Our design strategy for personalizing the therapy, providing extrinsic motivation and outcome assessment is presented and evaluated. Three studies were conducted to evaluate the potential of the suite. A conventional force-reflecting joystick, a modified joystick therapy platform (TheraJoy), and a steering wheel platform (TheraDrive) were tested separately with the UniTherapy software. Stroke subjects with hemiparesis and able-bodied subjects completed tracking activities with the devices in different positions. We quantify motor performance across subject groups and across device platforms, and muscle activation across devices at two positions in the arm workspace. Trends in the assessment metrics were consistent across devices, with able-bodied and high-functioning stroke subjects being significantly more accurate and quicker in their motor performance than low-functioning subjects. Muscle activation patterns were different for shoulder and elbow across different devices and locations. The Robot/CAMR suite has potential for stroke rehabilitation. By manipulating hardware and software variables, we can create personalized therapy environments that engage patients, address their therapy needs, and track their progress. A larger longitudinal study is still needed to evaluate these systems in under-supervised environments such as the home.

  12. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  13. Efficient Symbolic Task Planning for Multiple Mobile Robots

    DTIC Science & Technology

    2016-12-13

    Efficient Symbolic Task Planning for Multiple Mobile Robots. Yuqian Jiang, December 13, 2016. Abstract: Symbolic task planning enables a robot to make ... high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot ... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of ...

  14. Scaling effects in spiral capsule robots.

    PubMed

    Liang, Liang; Hu, Rong; Chen, Bai; Tang, Yong; Xu, Yan

    2017-04-01

    Spiral capsule robots can be applied to human gastrointestinal tracts and blood vessels. Because of significant variations in the sizes of the inner diameters of the intestines as well as blood vessels, research to date has been unable to meet the requirements for medical applications. By applying the fluid dynamic equations, using the computational fluid dynamics method, to robot axial lengths ranging from 10^-5 to 10^-2 m, the operational performance indicators (axial driving force, load torque, and maximum fluid pressure on the pipe wall) of the spiral capsule robot and the fluid turbulent intensity around the robot spiral surfaces were numerically calculated in a straight rigid pipe filled with fluid. The reasonableness and validity of the calculation method adopted in this study were verified by the consistency between the values calculated with the computational fluid dynamics method and experimental values from the relevant literature. The results show that the greater the fluid turbulent intensity, the greater the impact of the fluid turbulence on the driving performance of the spiral capsule robot and the higher the energy consumption of the robot. For robots of the same size, the axial driving force, the load torque, and the maximum fluid pressure on the pipe wall of the outer spiral robot were larger than those of the inner spiral robot. Depending on the requirements of the operating environment, an appropriate kind of spiral capsule robot can be chosen. This study provides a theoretical foundation for spiral capsule robots.

  15. Energy Efficient Legged Robotics at Sandia Labs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buerger, Steve

    Sandia is developing energy efficient actuation and drive train technologies to dramatically improve the charge life of legged robots. The work is supported by DARPA, and Sandia will demonstrate an energy efficient bipedal robot at the technology exposition section of the DARPA Robotics Challenge Finals in June, 2015. This video, the first in a series, describes early development and initial integration of the Sandia Transmission Efficient Prototype Promoting Research (STEPPR) robot.

  16. A Dynamic Non Energy Storing Guidance Constraint with Motion Redirection for Robot Assisted Surgery

    DTIC Science & Technology

    2016-12-01

    Abstract— Haptically enabled hands-on or tele-operated surgical robotic systems provide a unique opportunity to integrate pre- and intra- ... robot-assisted surgical systems aim at improving and extending human capabilities, by exploiting the advantages of robotic systems while keeping the ... move during the operation. Robot-assisted beating heart surgery is an example of procedures that can benefit from dynamic constraints. Their ...

  17. Energy Efficient Legged Robotics at Sandia Labs

    ScienceCinema

    Buerger, Steve

    2018-05-07

    Sandia is developing energy efficient actuation and drive train technologies to dramatically improve the charge life of legged robots. The work is supported by DARPA, and Sandia will demonstrate an energy efficient bipedal robot at the technology exposition section of the DARPA Robotics Challenge Finals in June, 2015. This video, the first in a series, describes early development and initial integration of the Sandia Transmission Efficient Prototype Promoting Research (STEPPR) robot.

  18. An integrated collision prediction and avoidance scheme for mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1993-01-01

    A formulation that makes possible the integration of collision prediction and avoidance stages for mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the distance between the robot and a moving obstacle is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. A feedback control is developed and local asymptotic stability is proved if the velocity of the moving obstacle is bounded. Furthermore, a solution to the problem of inverse dynamics for the mobile robot is given. Simulation results verify the value of the proposed strategy.

  19. A plant-inspired robot with soft differential bending capabilities.

    PubMed

    Sadeghi, A; Mondini, A; Del Dottore, E; Mattoli, V; Beccai, L; Taccola, S; Lucarotti, C; Totaro, M; Mazzolai, B

    2016-12-20

    We present the design and development of a plant-inspired robot, named Plantoid, with sensorized robotic roots. Natural roots have a multi-sensing capability and show a soft bending behaviour to follow or escape from various environmental parameters (i.e., tropisms). Analogously, we implement soft bending capabilities in our robotic roots by designing and integrating soft spring-based actuation (SSBA) systems using helical springs to transmit the motor power in a compliant manner. Each robotic tip integrates four different sensors, including customised flexible touch and innovative humidity sensors together with commercial gravity and temperature sensors. We show how the embedded sensing capabilities together with a root-inspired control algorithm lead to the implementation of tropic behaviours. Future applications for such plant-inspired technologies include soil monitoring and exploration, useful for agriculture and environmental fields.

  20. Robotic gaming prototype for upper limb exercise: Effects of age and embodiment on user preferences and movement.

    PubMed

    Eizicovits, Danny; Edan, Yael; Tabak, Iris; Levy-Tzedek, Shelly

    2018-01-01

    Effective human-robot interactions in rehabilitation necessitate an understanding of how these should be tailored to the needs of the human. We report on a robotic system developed as a partner on a 3-D everyday task, using a gamified approach. The objectives were to: (1) design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) evaluate how age affects the response to such a robotic system; and (3) identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. 62 healthy participants, young (<30 yo) and old (>60 yo), played a 3D tic-tac-toe game against an embodied (a robotic arm) and a non-embodied (a computer-controlled lighting system) partner. To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. The timing of the participants' movement was primed by the response time of the system: participants moved more slowly when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The slower response time of the robot, compared with the computer-controlled system, affected only the young group's motivation to continue playing. We demonstrated the feasibility of the system to encourage the performance of repetitive 3D functional movements, and track these movements. Young and old participants preferred to interact with the robot, compared with the non-embodied system. We contribute to the growing knowledge concerning personalized human-robot interactions by (1) demonstrating the priming of the human movement by the robotic movement - an important design feature, and (2) identifying response-speed as a design variable, the importance of which depends on the age of the user.

  1. 75 FR 54914 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Robotics...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-09

    ...; Esys Integration Corporation, Auburn Hills, MI; JADI, Inc., Troy, MI; Mobile Robots Inc., Amherst, NH... Alto, CA; Robot Worx, Marion, OH; RPU Technology, Inc., Needham, MA; Scientific Systems Company, Inc...

  2. Hierarchical Modelling Of Mobile, Seeing Robots

    NASA Astrophysics Data System (ADS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-03-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  3. Hierarchical modelling of mobile, seeing robots

    NASA Technical Reports Server (NTRS)

    Luh, Cheng-Jye; Zeigler, Bernard P.

    1990-01-01

    This paper describes the implementation of a hierarchical robot simulation which supports the design of robots with vision and mobility. A seeing robot applies a classification expert system for visual identification of laboratory objects. The visual data acquisition algorithm used by the robot vision system has been developed to exploit multiple viewing distances and perspectives. Several different simulations have been run testing the visual logic in a laboratory environment. Much work remains to integrate the vision system with the rest of the robot system.

  4. What can Robots Do? Towards Theoretical Analysis

    NASA Technical Reports Server (NTRS)

    Nogueira, Monica

    1997-01-01

    Robots have become more and more sophisticated. Every robot has its limits. If we face a task that existing robots cannot solve, then, before we start improving these robots, it is important to check whether it is possible, in principle, to design a robot for this task. For that, it is necessary to describe what exactly robots can, in principle, do. A similar problem - to describe what exactly computers can do - was solved as early as 1936 by Turing. In this paper, we describe a framework within which we can, hopefully, formalize and answer the question of what exactly robots can do.

  5. Visual environment recognition for robot path planning using template matched filters

    NASA Astrophysics Data System (ADS)

    Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto

    2017-08-01

    A visual approach to environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path from among multiple possible routes. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
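
    For illustration, the core of a potential field path planner (attractive pull toward the goal plus repulsion inside an obstacle's influence radius) can be sketched as below. The obstacle positions, gains, and step size are assumptions; the paper's template-matched obstacle detection and pseudo-bacterial evolutionary tuning of the field parameters are not reproduced.

      # Plain artificial potential field sketch (illustrative assumptions throughout).
      import numpy as np

      goal = np.array([9.0, 9.0])
      obstacles = [np.array([4.0, 4.5]), np.array([6.0, 7.0])]   # assumed obstacle centres
      k_att, k_rep, rho0 = 1.0, 2.0, 1.5                         # assumed gains / influence radius

      def force(q):
          f = k_att * (goal - q)                                 # attractive pull toward the goal
          for obs in obstacles:
              rho = np.linalg.norm(q - obs)
              if rho < rho0:                                     # repulsion only inside rho0
                  f += k_rep * (1/rho - 1/rho0) / rho**2 * (q - obs) / rho
          return f

      q = np.array([0.0, 0.0])
      for _ in range(500):                                       # gradient-descent steps
          q = q + 0.02 * force(q)
          if np.linalg.norm(goal - q) < 0.1:
              break
      print("final position:", np.round(q, 2))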

  6. An integrated gait rehabilitation training based on Functional Electrical Stimulation cycling and overground robotic exoskeleton in complete spinal cord injury patients: Preliminary results.

    PubMed

    Mazzoleni, S; Battini, E; Rustici, A; Stampacchia, G

    2017-07-01

    The aim of this study is to investigate the effects of an integrated gait rehabilitation training based on Functional Electrical Stimulation (FES)-cycling and an overground robotic exoskeleton on spasticity and patient-robot interaction in a group of seven complete spinal cord injury patients. They underwent a robot-assisted rehabilitation training based on two phases: n=20 sessions of FES-cycling followed by n=20 sessions of robot-assisted gait training based on an overground robotic exoskeleton. The following clinical outcome measures were used: Modified Ashworth Scale (MAS), Numerical Rating Scale (NRS) on spasticity, Penn Spasm Frequency Scale (PSFS), Spinal Cord Independence Measure Scale (SCIM), NRS on pain and International Spinal Cord Injury Pain Data Set (ISCI). Clinical outcome measures were assessed before (T0) and after (T1) the FES-cycling training and after (T2) the powered overground gait training. The ability to walk when using the exoskeleton was assessed by means of the 10 Meter Walk Test (10MWT), 6 Minute Walk Test (6MWT), Timed Up and Go test (TUG), standing time, walking time and number of steps. Statistically significant changes were found on the MAS score, NRS-spasticity, 6MWT, TUG, standing time and number of steps. The preliminary results of this study show that an integrated gait rehabilitation training based on FES-cycling and an overground robotic exoskeleton in complete SCI patients can provide a significant reduction of spasticity and improvements in terms of patient-robot interaction.

  7. Precision in robotic rectal surgery using the da Vinci Xi system and integrated table motion, a technical note.

    PubMed

    Panteleimonitis, Sofoklis; Harper, Mick; Hall, Stuart; Figueiredo, Nuno; Qureshi, Tahseen; Parvaiz, Amjad

    2017-09-15

    Robotic rectal surgery is becoming increasingly more popular among colorectal surgeons. However, time spent on robotic platform docking, arm clashing and undocking of the platform during the procedure are factors that surgeons often find cumbersome and time consuming. The newest surgical platform, the da Vinci Xi, coupled with integrated table motion can help to overcome these problems. This technical note aims to describe a standardised operative technique of single docking robotic rectal surgery using the da Vinci Xi system and integrated table motion. A stepwise approach of the da Vinci docking process and surgical technique is described accompanied by an intra-operative video that demonstrates this technique. We also present data collected from a prospectively maintained database. 33 consecutive rectal cancer patients (24 male, 9 female) received robotic rectal surgery with the da Vinci Xi during the preparation of this technical note. 29 (88%) patients had anterior resections, and four (12%) had abdominoperineal excisions. There were no conversions, no anastomotic leaks and no mortality. Median operation time was 331 (249-372) min, blood loss 20 (20-45) mls and length of stay 6.5 (4-8) days. 30-day readmission rate and re-operation rates were 3% (n = 1). This standardised technique of single docking robotic rectal surgery with the da Vinci Xi is safe, feasible and reproducible. The technological advances of the new robotic system facilitate the totally robotic single docking approach.

  8. Event detection and localization for small mobile robots using reservoir computing.

    PubMed

    Antonelo, E A; Schrauwen, B; Stroobandt, D

    2008-08-01

    Reservoir Computing (RC) techniques use a fixed (usually randomly created) recurrent neural network, or more generally any dynamic system, which operates at the edge of stability, where only a linear static readout output layer is trained by standard linear regression methods. In this work, RC is used for detecting complex events in autonomous robot navigation. This can be extended to robot localization tasks based solely on data from a few low-range, high-noise sensors. The robot thus builds an implicit map of the environment (after learning) that is used for efficient localization by simply processing the input stream of distance sensors. These techniques are demonstrated both in a simple simulation environment and in the physically realistic Webots simulation of the commercially available e-puck robot, using several complex and even dynamic environments.
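
    As a minimal illustration of the RC recipe (a fixed random recurrent reservoir whose only trained part is the linear readout), the sketch below fits an echo-state readout by ridge regression on a toy signal. The reservoir size, scaling, and toy "event" target are assumptions; the paper's robot sensor streams and localization task are not reproduced.

      # Echo-state-network sketch of Reservoir Computing (toy data, illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      n_in, n_res, T = 1, 100, 500
      W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
      W = rng.uniform(-0.5, 0.5, (n_res, n_res))
      W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius below 1 (edge of stability)

      u = np.sin(np.linspace(0, 20 * np.pi, T))[:, None] # toy input stream
      y = (u > 0.5).astype(float)                        # toy "event" target to detect

      x = np.zeros(n_res)
      states = np.zeros((T, n_res))
      for t in range(T):                                 # run the fixed, untrained reservoir
          x = np.tanh(W @ x + W_in @ u[t])
          states[t] = x

      reg = 1e-6                                         # ridge regularisation
      W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ y)
      pred = states @ W_out                              # only this linear readout was trained
      print("training MSE:", float(np.mean((pred - y) ** 2)))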

  9. Wearable computer for mobile augmented-reality-based controlling of an intelligent robot

    NASA Astrophysics Data System (ADS)

    Turunen, Tuukka; Roening, Juha; Ahola, Sami; Pyssysalo, Tino

    2000-10-01

    An intelligent robot can be utilized to perform tasks that are either hazardous or unpleasant for humans. Such tasks include working in disaster areas or conditions that are, for example, too hot. An intelligent robot can work on its own to some extent, but in some cases the aid of humans will be needed. This requires means for controlling the robot from somewhere else, i.e. teleoperation. Mobile augmented reality can be utilized as a user interface to the environment, as it enhances the user's perception of the situation compared to other interfacing methods and allows the user to perform other tasks while controlling the intelligent robot. Augmented reality is a method that combines virtual objects into the user's perception of the real world. As computer technology evolves, it is possible to build very small devices that have sufficient capabilities for augmented reality applications. We have evaluated the existing wearable computers and mobile augmented reality systems to build a prototype of a future mobile terminal- the CyPhone. A wearable computer with sufficient system resources for applications, wireless communication media with sufficient throughput and enough interfaces for peripherals has been built at the University of Oulu. It is self-sustained in energy, with enough operating time for the applications to be useful, and uses accurate positioning systems.

  10. Effects of computing time delay on real-time control systems

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Cui, Xianzhong

    1988-01-01

    The reliability of a real-time digital control system depends not only on the reliability of the hardware and software used, but also on the speed in executing control algorithms. The latter is due to the negative effects of computing time delay on control system performance. For a given sampling interval, the effects of computing time delay are classified into the delay problem and the loss problem. Analysis of these two problems is presented as a means of evaluating real-time control systems. As an example, both the self-tuning predicted (STP) control and Proportional-Integral-Derivative (PID) control are applied to the problem of tracking robot trajectories, and their respective effects of computing time delay on control performance are comparatively evaluated. For this example, the STP (PID) controller is shown to outperform the PID (STP) controller in coping with the delay (loss) problem.
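
    To make the delay problem concrete, the toy sketch below runs a discrete PID loop on a first-order plant and compares tracking error with and without an artificial computing delay. The plant, gains, and sampling interval are illustrative assumptions, not the paper's robot-trajectory setup or its STP controller.

      # Effect of computing time delay on a toy PID loop (illustrative assumptions).
      import numpy as np

      def run(delay_samples, T=600, dt=0.01, Kp=2.0, Ki=5.0, Kd=0.05):
          """Track a unit step with a discrete PID whose output is applied late."""
          y, integ, prev_e = 0.0, 0.0, 0.0
          u_buf = [0.0] * delay_samples          # computing delay modelled as a FIFO buffer
          err = []
          for _ in range(T):
              e = 1.0 - y                        # unit step reference
              integ += e * dt
              u = Kp * e + Ki * integ + Kd * (e - prev_e) / dt
              prev_e = e
              u_buf.append(u)
              u_applied = u_buf.pop(0)           # control computed 'delay_samples' steps ago
              y += dt * (-y + u_applied)         # first-order plant: dy/dt = -y + u
              err.append(abs(e))
          return float(np.mean(err))

      print("mean |error| with no computing delay  :", run(0))
      print("mean |error| with 0.2 s computing delay:", run(20))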

  11. Computer vision-based classification of hand grip variations in neurorehabilitation.

    PubMed

    Zariffa, José; Steeves, John D

    2011-01-01

    The complexity of hand function is such that most existing upper limb rehabilitation robotic devices use only simplified hand interfaces. This is in contrast to the importance of the hand in regaining function after neurological injury. Computer vision technology has been used to identify hand posture in the field of Human Computer Interaction, but this approach has not been translated to the rehabilitation context. We describe a computer vision-based classifier that can be used to discriminate rehabilitation-relevant hand postures, and could be integrated into a virtual reality-based upper limb rehabilitation system. The proposed system was tested on a set of video recordings from able-bodied individuals performing cylindrical grasps, lateral key grips, and tip-to-tip pinches. The overall classification success rate was 91.2%, and was above 98% for 6 out of the 10 subjects. © 2011 IEEE

  12. A computer architecture for intelligent machines

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Saridis, G. N.

    1991-01-01

    The Theory of Intelligent Machines proposes a hierarchical organization for the functions of an autonomous robot based on the Principle of Increasing Precision With Decreasing Intelligence. An analytic formulation of this theory using information-theoretic measures of uncertainty for each level of the intelligent machine has been developed in recent years. A computer architecture that implements the lower two levels of the intelligent machine is presented. The architecture supports an event-driven programming paradigm that is independent of the underlying computer architecture and operating system. Details of Execution Level controllers for motion and vision systems are addressed, as well as the Petri net transducer software used to implement Coordination Level functions. Extensions to UNIX and VxWorks operating systems which enable the development of a heterogeneous, distributed application are described. A case study illustrates how this computer architecture integrates real-time and higher-level control of manipulator and vision systems.

  13. From Speech to Emotional Interaction: EmotiRob Project

    NASA Astrophysics Data System (ADS)

    Le Tallec, Marc; Saint-Aimé, Sébastien; Jost, Céline; Villaneau, Jeanne; Antoine, Jean-Yves; Letellier-Zarshenas, Sabine; Le-Pévédic, Brigitte; Duhaut, Dominique

    This article presents research work done in the domain of nonverbal emotional interaction for the EmotiRob project. It is a component of the MAPH project, the objective of which is to give comfort to vulnerable children and/or those undergoing long-term hospitalisation through the help of an emotional robot companion. It is important to note that we are not trying to reproduce human emotion and behavior, but trying to make a robot emotionally expressive. This paper presents the different hypotheses we have used, from understanding to emotional reaction. We begin the article with a presentation of the MAPH and EmotiRob projects. Then, we briefly describe the speech understanding system, the iGrace computational model of emotions, and the integration of dynamic behavior. We conclude with a description of the architecture of Emi, as well as improvements to be made to its next generation.

  14. Sensor fusion III: 3-D perception and recognition; Proceedings of the Meeting, Boston, MA, Nov. 5-8, 1990

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1991-01-01

    The volume on data fusion from multiple sources discusses fusing multiple views, temporal analysis and 3D motion interpretation, sensor fusion and eye-to-hand coordination, and integration in human shape perception. Attention is given to surface reconstruction, statistical methods in sensor fusion, fusing sensor data with environmental knowledge, computational models for sensor fusion, and evaluation and selection of sensor fusion techniques. Topics addressed include the structure of a scene from two and three projections, optical flow techniques for moving target detection, tactical sensor-based exploration in a robotic environment, and the fusion of human and machine skills for remote robotic operations. Also discussed are K-nearest-neighbor concepts for sensor fusion, surface reconstruction with discontinuities, a sensor-knowledge-command fusion paradigm for man-machine systems, coordinating sensing and local navigation, and terrain map matching using multisensing techniques for applications to autonomous vehicle navigation.

  15. An iterative learning control method with application for CNC machine tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, D.I.; Kim, S.

    1996-01-01

    A proportional, integral, and derivative (PID) type iterative learning controller is proposed for precise tracking control of industrial robots and computer numerical controller (CNC) machine tools performing repetitive tasks. The convergence of the output error by the proposed learning controller is guaranteed under a certain condition even when the system parameters are not known exactly and unknown external disturbances exist. As the proposed learning controller is repeatedly applied to the industrial robot or the CNC machine tool with the path-dependent repetitive task, the distance difference between the desired path and the actual tracked or machined path, which is one of the most significant factors in the evaluation of control performance, is progressively reduced. The experimental results demonstrate that the proposed learning controller can improve machining accuracy when the CNC machine tool performs repetitive machining tasks.
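
    A minimal sketch of the PID-type iterative learning idea: the same tracking task is repeated, and the proportional, integral, and derivative terms of each trial's error correct the next trial's feedforward input. The first-order plant, learning gains, and sinusoidal reference below are assumptions for illustration, not the paper's industrial robot or CNC machine-tool model.

      # PID-type iterative learning control (ILC) on a toy repetitive task.
      import numpy as np

      dt, T = 0.1, 100
      t = np.arange(T) * dt
      ref = np.sin(2 * np.pi * t / t[-1])            # repetitive desired trajectory
      Lp, Li, Ld = 2.0, 0.05, 0.02                   # PID-type learning gains (assumed)

      def run_trial(u):
          """One repetition of the task on a toy first-order plant dy/dt = -y + u."""
          y = np.zeros(T)
          for k in range(T - 1):
              y[k + 1] = y[k] + dt * (-y[k] + u[k])
          return y

      u = np.zeros(T)                                # feedforward input refined trial by trial
      for trial in range(30):
          e = ref - run_trial(u)
          # learning update, shifted one sample to respect the plant's input-output delay
          u[:-1] += Lp * e[1:] + Li * dt * np.cumsum(e)[1:] + Ld * np.diff(e) / dt

      print("max |error| before learning :", float(np.max(np.abs(ref - run_trial(np.zeros(T))))))
      print("max |error| after 30 trials :", float(np.max(np.abs(ref - run_trial(u)))))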

  16. MRI-guided robotics at the U of Houston: evolving methodologies for interventions and surgeries.

    PubMed

    Tsekos, Nikolaos V

    2009-01-01

    Currently, we witness the rapid evolution of minimally invasive surgeries (MIS) and image guided interventions (IGI), which offer improved patient management and cost effectiveness. It is well recognized that sustaining and expanding this paradigm shift would require new computational methodology that integrates sensing with multimodal imaging, actively controlled robotic manipulators, the patient and the operator. Such an approach would include (1) assessing in real time the tissue deformation secondary to the procedure and physiologic motion, (2) monitoring the tool(s) in 3D, and (3) updating on the fly information about the pathophysiology of the targeted tissue. With those capabilities, real-time image guidance may facilitate a paradigm shift and methodological leap from "keyhole" visualization (i.e. endoscopy or laparoscopy) to one that uses a volumetric and informationally rich perception of the Area of Operation (AoO). This capability may eventually enable IGI and MIS of a wider range and higher level of complexity.

  17. Robotic Design Studio: Exploring the Big Ideas of Engineering in a Liberal Arts Environment.

    ERIC Educational Resources Information Center

    Turbak, Franklyn; Berg, Robbie

    2002-01-01

    Suggests that it is important to introduce liberal arts students to the essence of engineering. Describes Robotic Design Studio, a course in which students learn how to design, assemble, and program robots made out of LEGO parts, sensors, motors, and small embedded computers. Represents an alternative vision of how robot design can be used to…

  18. Digital redesign of the control system for the Robotics Research Corporation model K-1607 robot

    NASA Technical Reports Server (NTRS)

    Carroll, Robert L.

    1989-01-01

    The analog control system for positioning each link of the Robotics Research Corporation Model K-1607 robot manipulator was redesigned for computer control. In order to accomplish the redesign, a linearized model of the dynamic behavior of the robot was developed. The parameters of the model were determined by examination of the input-output data collected in closed-loop operation of the analog control system. The robot manipulator possesses seven degrees of freedom in its motion. The analog control system installed by the manufacturer of the robot attempts to control the positioning of each link without feedback from other links. Constraints on the design of a digital control system include: the robot cannot be disassembled for measurement of parameters; the digital control system should avoid filtering operations if possible, because of limited computer capability; and criteria for judging control system performance are lacking. The resulting design employs sampled-data position and velocity feedback. The design criterion is that the control system gain margin and phase margin, measured at the same frequencies, be the same as those provided by the analog control system.

  19. Cartesian control of redundant robots

    NASA Technical Reports Server (NTRS)

    Colbaugh, R.; Glass, K.

    1989-01-01

    A Cartesian-space position/force controller is presented for redundant robots. The proposed control structure partitions the control problem into a nonredundant position/force trajectory tracking problem and a redundant mapping problem between Cartesian control input F is a set member of the set R(sup m) and robot actuator torque T is a set member of the set R(sup n) (for redundant robots, m is less than n). The underdetermined nature of the F yields T map is exploited so that the robot redundancy is utilized to improve the dynamic response of the robot. This dynamically optimal F yields T map is implemented locally (in time) so that it is computationally efficient for on-line control; however, it is shown that the map possesses globally optimal characteristics. Additionally, it is demonstrated that the dynamically optimal F yields T map can be modified so that the robot redundancy is used to simultaneously improve the dynamic response and realize any specified kinematic performance objective (e.g., manipulability maximization or obstacle avoidance). Computer simulation results are given for a four degree of freedom planar redundant robot under Cartesian control, and demonstrate that position/force trajectory tracking and effective redundancy utilization can be achieved simultaneously with the proposed controller.
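
    The heart of the underdetermined F-to-T map can be illustrated kinematically: the Cartesian wrench is mapped through the Jacobian transpose, and torques in the null space of that map are used for a secondary objective. The sketch below uses a plain pseudo-inverse projection on an assumed 4-DOF planar arm; it is not the dynamically optimal, locally implemented map developed in the paper.

      # Redundant F -> T mapping with a null-space secondary objective (illustrative).
      import numpy as np

      def planar_jacobian(q, lengths):
          """2 x n position Jacobian of an n-link planar arm (here n = 4 > m = 2)."""
          n = len(q)
          J = np.zeros((2, n))
          theta = np.cumsum(q)                       # absolute link angles
          for i in range(n):
              J[0, i] = -np.sum(lengths[i:] * np.sin(theta[i:]))
              J[1, i] = np.sum(lengths[i:] * np.cos(theta[i:]))
          return J

      q = np.array([0.3, 0.5, -0.4, 0.2])            # joint angles [rad] (assumed)
      L = np.array([0.4, 0.3, 0.3, 0.2])             # link lengths [m] (assumed)
      F = np.array([5.0, -2.0])                      # desired Cartesian force [N]
      tau0 = -1.0 * q                                # secondary objective: pull toward zero posture

      J = planar_jacobian(q, L)
      JT_pinv = np.linalg.pinv(J.T)                  # pseudo-inverse of the n x m map T = J^T F
      N_proj = np.eye(len(q)) - J.T @ JT_pinv        # projector onto the redundant torque directions
      tau = J.T @ F + N_proj @ tau0

      print("joint torques T        :", np.round(tau, 3))
      print("wrench recovered from T:", np.round(JT_pinv @ tau, 3))   # equals F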

  20. Computational Simulation on Facial Expressions and Experimental Tensile Strength for Silicone Rubber as Artificial Skin

    NASA Astrophysics Data System (ADS)

    Amijoyo Mochtar, Andi

    2018-02-01

    Applications of robotics have become important to human life in recent years. Many robot specifications have been improved and enriched with advances in technology. One of these is the humanoid robot with facial expressions that closely approach natural human facial expressions. The purpose of this research is to perform computations on facial expressions and to conduct tensile strength testing of silicone rubber as artificial skin. Facial expressions were computed by specifying the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. A robot's facial expression is determined by the direction and magnitude of the external force applied at the driven point. The robot's facial expressions mirror human facial expressions, with the facial muscle structure arranged according to human facial anatomy. For developing facial expression robots, the Facial Action Coding System (FACS) approach is used to follow human expressions. The tensile test is conducted to check the force that the artificial skin can bear, for application to future robot facial expressions. Combining the computational and experimental results can yield reliable and sustainable robot facial expressions using silicone rubber as artificial skin.

  1. Solution of Inverse Kinematics for 6R Robot Manipulators With Offset Wrist Based on Geometric Algebra.

    PubMed

    Fu, Zhongtao; Yang, Wenyu; Yang, Zhen

    2013-08-01

    In this paper, we present an efficient method based on geometric algebra for computing the solutions to the inverse kinematics problem (IKP) of 6R robot manipulators with an offset wrist. Because the inverse kinematics problem is difficult to solve when the kinematics equations of these manipulators are complex, highly nonlinear, coupled, and admit multiple solutions, we apply the theory of geometric algebra to model the kinematics of 6R robot manipulators simply and generate closed-form kinematics equations, reformulate the problem as a generalized eigenvalue problem using a symbolic elimination technique, and then obtain the 16 solutions. Finally, a spray-painting robot of this manipulator type is used as an implementation example to demonstrate the effectiveness and real-time performance of the method. The experimental results show that this method has a large advantage over classical methods in geometric intuition, computation, and real-time performance, and that it can be directly extended to all serial robot manipulators and fully automated, providing a new tool for the analysis and application of general robot manipulators.

  2. Implementing a Robotics Curriculum in an Early Childhood Montessori Classroom

    ERIC Educational Resources Information Center

    Elkin, Mollie; Sullivan, Amanda; Bers, Marina Umaschi

    2014-01-01

    This paper explores how robotics can be used as a new educational tool in a Montessori early education classroom. It presents a case study of one early educator's experience of designing and implementing a robotics curriculum integrated with a social science unit in her mixed-age classroom. This teacher had no prior experience using robotics in…

  3. Gastrointestinal robot-assisted surgery. A current perspective.

    PubMed

    Lunca, Sorinel; Bouras, George; Stanescu, Alexandru Calin

    2005-12-01

    Minimally invasive techniques have revolutionized operative surgery. Computer aided surgery and robotic surgical systems strive to improve further on currently available minimally invasive surgery and open new horizons. Only several centers are currently using surgical robots and publishing data. In gastrointestinal surgery, robotic surgery is applied to a wide range of procedures, but is still in its infancy. Cholecystectomy, Nissen fundoplication and Heller myotomy are among the most frequently performed operations. The ZEUS (Computer Motion, Goleta, CA) and the da Vinci (Intuitive Surgical, Mountain View, CA) surgical systems are today the most advanced robotic systems used in gastrointestinal surgery. Most studies reported that robotic gastrointestinal surgery is feasible and safe, provides improved dexterity, better visualization, reduced fatigue and high levels of precision when compared to conventional laparoscopic surgery. Its main drawbacks are the absence of force feedback and extremely high costs. At this moment there are no reports to clearly demonstrate the superiority of robotics over conventional laparoscopic surgery. Further research and more prospective randomized trials are needed to better define the optimal application of this new technology in gastrointestinal surgery.

  4. Watching elderly and disabled person's physical condition by remotely controlled monorail robot

    NASA Astrophysics Data System (ADS)

    Nagasaka, Yasunori; Matsumoto, Yoshinori; Fukaya, Yasutoshi; Takahashi, Tomoichi; Takeshita, Toru

    2001-10-01

    We are developing a nursing system using robots and cameras. The cameras are mounted on a remote-controlled monorail robot which moves inside a room and watches the elderly. Elderly people at home or in nursing homes need attention at all times, which places a constant demand on care staff; the purpose of our system is to help those staff and to improve this situation. A host computer controls the monorail robot to move in front of the elderly person using images taken by cameras on the ceiling. A CCD camera mounted on the monorail robot takes pictures of their facial expressions and movements. The robot sends the images to a host computer that checks whether anything unusual is happening. We propose a simple calibration method for positioning the monorail robot to track the movements of the elderly and keep their faces at the center of the camera view. We built a small experimental system and evaluated our camera calibration method and image processing algorithm.

  5. Mobile robot exploration and navigation of indoor spaces using sonar and vision

    NASA Technical Reports Server (NTRS)

    Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth

    1994-01-01

    Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.

  6. The Maiden Voyage of a Kinematics Robot

    NASA Astrophysics Data System (ADS)

    Greenwolfe, Matthew L.

    2015-04-01

    In a Montessori preschool classroom, students work independently on tasks that absorb their attention in part because the apparatus are carefully designed to make mistakes directly observable and limit exploration to one aspect or dimension. Control of error inheres in the apparatus itself, so that teacher intervention can be minimal.1 Inspired by this example, I created a robotic kinematics apparatus that also shapes the inquiry experience. Students program the robot by drawing kinematic graphs on a computer and then observe its motion. Exploration is at once limited to constant velocity and constant acceleration motion, yet open to complex multi-segment examples difficult to achieve in the lab in other ways. The robot precisely and reliably produces the motion described by the students' graphs, so that the apparatus itself provides immediate visual feedback about whether their understanding is correct as they are free to explore within the hard-coded limits. In particular, the kinematic robot enables hands-on study of multi-segment constant velocity situations, which lays a far stronger foundation for the study of accelerated motion. When correction is anonymous—just between one group of lab partners and their robot—students using the kinematic robot tend to flow right back to work because they view the correction as an integral part of the inquiry learning process. By contrast, when correction occurs by the teacher and/or in public (e.g., returning a graded assignment or pointing out student misconceptions during class), students all too often treat the event as the endpoint to inquiry. Furthermore, quantitative evidence shows a large gain from pre-test to post-test scores using the Test of Understanding Graphs in Kinematics (TUG-K).
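
    The programming-by-graph idea amounts to integrating a multi-segment constant-velocity profile into the position the robot drives. A minimal sketch, with made-up segment values rather than the classroom apparatus, is:

      # Integrate a multi-segment constant-velocity graph into a position profile.
      segments = [(2.0, 0.3), (1.0, 0.0), (3.0, -0.1)]   # (duration s, velocity m/s), assumed
      dt, x = 0.05, 0.0
      for duration, v in segments:
          for _ in range(int(duration / dt)):
              x += v * dt                                # constant-velocity kinematics: dx = v*dt
      print("final position [m]:", round(x, 3))          # 2*0.3 + 1*0.0 + 3*(-0.1) = 0.3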

  7. Blending of brain-machine interface and vision-guided autonomous robotics improves neuroprosthetic arm performance during grasping.

    PubMed

    Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L

    2016-03-18

    Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92 % of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. NCT01364480 and NCT01894802 .
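
    One common way to realize such blending (a sketch of the general idea, not the paper's specific arbitration policy) is to weight the BMI-decoded velocity against an autonomous reach-to-grasp velocity as the hand nears the object; the gains and assistance radius below are assumed.

      # Shared-control blending of BMI intent with autonomous grasp assistance (illustrative).
      import numpy as np

      def blended_command(v_bmi, hand_pos, grasp_pos, assist_radius=0.15, k_auto=2.0):
          to_grasp = grasp_pos - hand_pos
          dist = np.linalg.norm(to_grasp)
          v_auto = k_auto * to_grasp                              # autonomous pull toward the grasp pose
          alpha = np.clip(1.0 - dist / assist_radius, 0.0, 1.0)   # more assistance near the object
          return (1.0 - alpha) * v_bmi + alpha * v_auto

      v_bmi = np.array([0.05, 0.00, -0.02])                       # decoded user intent [m/s]
      hand = np.array([0.40, 0.10, 0.30])                         # current hand position [m]
      obj = np.array([0.45, 0.08, 0.25])                          # identified grasp target [m]
      print("blended velocity command:", np.round(blended_command(v_bmi, hand, obj), 3))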

  8. Experiments in cooperative-arm object manipulation with a two-armed free-flying robot. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Koningstein, Ross

    1990-01-01

    Developing computed-torque controllers for complex manipulator systems using current techniques and tools is difficult because those techniques address issues pertinent to simulation, as opposed to control. A new formulation of computed-torque (CT) control that leads to an automated computed-torque robot controller program is presented. This automated tool is used for simulations and experimental demonstrations of endpoint and object control from a free-flying robot. A new computed-torque formulation states the multibody control problem in an elegant, homogeneous, and practical form. A recursive dynamics algorithm is presented that numerically evaluates kinematics and dynamics terms for multibody systems given a topological description. Manipulators may be free-flying, and may have closed-chain constraints. With the exception of object squeeze-force control, the algorithm does not deal with actuator redundancy. The algorithm is used to implement an automated 2D computed-torque dynamics and control package that allows joint, endpoint, orientation, momentum, and object squeeze-force control. This package obviates the need for hand-derivation of kinematics and dynamics, and is used for both simulation and experimental control. Endpoint control experiments are performed on a laboratory robot that has two arms to manipulate payloads, and uses an air bearing to achieve very-low drag characteristics. Simulations and experimental data for endpoint and object controllers are presented for the experimental robot - a complex dynamic system. There is a rather wide set of conditions under which CT endpoint controllers can neglect robot base accelerations (but not motions) and achieve performance comparable to controllers that include base accelerations in the model. The regime over which this simplification holds is explored by simulation and experiment.
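
    For reference, the basic computed-torque law for a fixed-base arm has the form tau = M(q)(qdd_des + Kd*edot + Kp*e) + C(q, qd)*qd + g(q). The sketch below evaluates it for a toy two-joint case with assumed, configuration-frozen dynamics terms; it illustrates only the CT structure, not the thesis's recursive algorithm for free-flying, closed-chain systems.

      # Computed-torque (CT) control law for a toy 2-joint arm (assumed dynamics terms).
      import numpy as np

      def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd):
          e, edot = q_des - q, qd_des - qd
          return M @ (qdd_des + Kd @ edot + Kp @ e) + C @ qd + g

      M = np.array([[2.0, 0.3], [0.3, 1.0]])     # inertia matrix at the current q (assumed)
      C = np.array([[0.0, -0.1], [0.1, 0.0]])    # Coriolis/centrifugal matrix (assumed)
      g = np.array([4.0, 1.5])                   # gravity torque vector (assumed)
      Kp, Kd = np.diag([100.0, 100.0]), np.diag([20.0, 20.0])

      q, qd = np.array([0.1, -0.2]), np.array([0.0, 0.0])
      q_des, qd_des, qdd_des = np.array([0.5, 0.3]), np.zeros(2), np.zeros(2)
      print("joint torques:", computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g, Kp, Kd))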

  9. Improving therapeutic outcomes in autism spectrum disorders: Enhancing social communication and sensory processing through the use of interactive robots.

    PubMed

    Sartorato, Felippe; Przybylowski, Leon; Sarko, Diana K

    2017-07-01

    For children with autism spectrum disorders (ASDs), social robots are increasingly utilized as therapeutic tools in order to enhance social skills and communication. Robots have been shown to generate a number of social and behavioral benefits in children with ASD including heightened engagement, increased attention, and decreased social anxiety. Although social robots appear to be effective social reinforcement tools in assistive therapies, the perceptual mechanism underlying these benefits remains unknown. To date, social robot studies have primarily relied on expertise in fields such as engineering and clinical psychology, with measures of social robot efficacy principally limited to qualitative observational assessments of children's interactions with robots. In this review, we examine a range of socially interactive robots that currently have the most widespread use as well as the utility of these robots and their therapeutic effects. In addition, given that social interactions rely on audiovisual communication, we discuss how enhanced sensory processing and integration of robotic social cues may underlie the perceptual and behavioral benefits that social robots confer. Although overall multisensory processing (including audiovisual integration) is impaired in individuals with ASD, social robot interactions may provide therapeutic benefits by allowing audiovisual social cues to be experienced through a simplified version of a human interaction. By applying systems neuroscience tools to identify, analyze, and extend the multisensory perceptual substrates that may underlie the therapeutic benefits of social robots, future studies have the potential to strengthen the clinical utility of social robots for individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Robotic neurorehabilitation: a computational motor learning perspective

    PubMed Central

    Huang, Vincent S; Krakauer, John W

    2009-01-01

    Conventional neurorehabilitation appears to have little impact on impairment over and above that of spontaneous biological recovery. Robotic neurorehabilitation has the potential for a greater impact on impairment due to easy deployment, its applicability across a wide range of motor impairments, its high measurement reliability, and the capacity to deliver high dosage and high intensity training protocols. We first describe current knowledge of the natural history of arm recovery after stroke and of outcome prediction in individual patients. Rehabilitation strategies and outcome measures for impairment versus function are compared. The topics of dosage, intensity, and time of rehabilitation are then discussed. Robots are particularly suitable for both rigorous testing and application of motor learning principles to neurorehabilitation. Computational motor control and learning principles derived from studies in healthy subjects are introduced in the context of robotic neurorehabilitation. Particular attention is paid to the idea of context, task generalization and training schedule. The assumptions that underlie the choice of both the movement trajectory programmed into the robot and the degree of active participation required by subjects are examined. We consider rehabilitation as a general learning problem, and examine it from the perspective of theoretical learning frameworks such as supervised and unsupervised learning. We discuss the limitations of current robotic neurorehabilitation paradigms and suggest new research directions from the perspective of computational motor learning. PMID:19243614

  11. External force/velocity control for an autonomous rehabilitation robot

    NASA Astrophysics Data System (ADS)

    Saekow, Peerayuth; Neranon, Paramin; Smithmaitrie, Pruittikorn

    2018-01-01

    Stroke is a primary cause of death and the leading cause of permanent disability in adults. Many stroke survivors live with varying levels of disability and need rehabilitation activities on a daily basis. Several studies have reported that using rehabilitation robotic devices leads to better improvement outcomes in upper-limb stroke patients than conventional therapy, in which nurses or therapists actively help patients with exercise-based rehabilitation. This research focuses on the development of an autonomous robotic trainer designed to guide a stroke patient through an upper-limb rehabilitation task. The robotic device was designed and developed to automate the reaching exercise mentioned above. The robotic system is made up of a four-wheel omni-directional mobile robot, an ATI Gamma multi-axis force/torque sensor used to measure contact force, and a microcontroller real-time operating system. Proportional plus integral control was adopted to control the overall performance and stability of the autonomous assistive robot, and external force control was implemented to establish the behavioral control strategy for the robot force and velocity control scheme. The experimental results indicated that the performance of the robot force and velocity control was satisfactorily stable and can be considered acceptable. The gains of the proportional-integral (PI) velocity control algorithm were estimated using the Ziegler-Nichols method, yielding optimized proportional and integral gains of 0.45 and 0.11, respectively. The PI external force control gains were tuned experimentally by trial and error in a set of experiments in which a human participant moved the robot along a constrained circular path while attempting to minimize the radial force. Performance was analyzed using the root mean square error (E_RMS) of the radial forces: the lower the variation in radial forces, the better the performance of the system. The best performance, as measured by the E_RMS of the radial force, was observed with proportional and integral gains of Kp = 0.7 and Ki = 0.75, respectively.
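
    The control structure described above can be sketched as two nested PI loops: an outer loop turns the measured radial force error into a velocity set-point, and an inner loop drives the wheel velocity toward it. Only the gain values are taken from the text; the loop arrangement and the toy contact model are simplifying assumptions.

      # Nested PI external force / velocity control sketch (toy contact model).
      dt, steps = 0.01, 5000
      Kp_f, Ki_f = 0.7, 0.75        # external force PI gains reported above
      Kp_v, Ki_v = 0.45, 0.11       # velocity PI gains reported above

      f_des, v, f_int, v_int = 0.0, 0.0, 0.0, 0.0
      for _ in range(steps):
          f_meas = 2.0 * v + 1.0                 # assumed contact model: radial force vs. velocity
          e_f = f_des - f_meas
          f_int += e_f * dt
          v_ref = Kp_f * e_f + Ki_f * f_int      # outer loop: force error -> velocity set-point
          e_v = v_ref - v
          v_int += e_v * dt
          v += dt * (Kp_v * e_v + Ki_v * v_int)  # inner loop drives the velocity
      print("final radial force [N]:", round(2.0 * v + 1.0, 4))   # approaches the 0 N set-point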

  12. ARTIE: An Integrated Environment for the Development of Affective Robot Tutors

    PubMed Central

    Imbernón Cuadrado, Luis-Eduardo; Manjarrés Riesco, Ángeles; De La Paz López, Félix

    2016-01-01

    Over the last decade robotics has attracted a great deal of interest from teachers and researchers as a valuable educational tool from preschool to highschool levels. The implementation of social-support behaviors in robot tutors, in particular in the emotional dimension, can make a significant contribution to learning efficiency. With the aim of contributing to the rising field of affective robot tutors we have developed ARTIE (Affective Robot Tutor Integrated Environment). We offer an architectural pattern which integrates any given educational software for primary school children with a component whose function is to identify the emotional state of the students who are interacting with the software, and with the driver of a robot tutor which provides personalized emotional pedagogical support to the students. In order to support the development of affective robot tutors according to the proposed architecture, we also provide a methodology which incorporates a technique for eliciting pedagogical knowledge from teachers, and a generic development platform. This platform contains a component for identiying emotional states by analysing keyboard and mouse interaction data, and a generic affective pedagogical support component which specifies the affective educational interventions (including facial expressions, body language, tone of voice,…) in terms of BML (a Behavior Model Language for virtual agent specification) files which are translated into actions of a robot tutor. The platform and the methodology are both adapted to primary school students. Finally, we illustrate the use of this platform to build a prototype implementation of the architecture, in which the educational software is instantiated with Scratch and the robot tutor with NAO. We also report on a user experiment we carried out to orient the development of the platform and of the prototype. We conclude from our work that, in the case of primary school students, it is possible to identify, without using intrusive and expensive identification methods, the emotions which most affect the character of educational interventions. Our work also demonstrates the feasibility of a general-purpose architecture of decoupled components, in which a wide range of educational software and robot tutors can be integrated and then used according to different educational criteria. PMID:27536230

  13. ARTIE: An Integrated Environment for the Development of Affective Robot Tutors.

    PubMed

    Imbernón Cuadrado, Luis-Eduardo; Manjarrés Riesco, Ángeles; De La Paz López, Félix

    2016-01-01

    Over the last decade robotics has attracted a great deal of interest from teachers and researchers as a valuable educational tool from preschool to high school levels. The implementation of social-support behaviors in robot tutors, in particular in the emotional dimension, can make a significant contribution to learning efficiency. With the aim of contributing to the growing field of affective robot tutors, we have developed ARTIE (Affective Robot Tutor Integrated Environment). We offer an architectural pattern which integrates any given educational software for primary school children with a component whose function is to identify the emotional state of the students who are interacting with the software, and with the driver of a robot tutor which provides personalized emotional pedagogical support to the students. In order to support the development of affective robot tutors according to the proposed architecture, we also provide a methodology which incorporates a technique for eliciting pedagogical knowledge from teachers, and a generic development platform. This platform contains a component for identifying emotional states by analysing keyboard and mouse interaction data, and a generic affective pedagogical support component which specifies the affective educational interventions (including facial expressions, body language, tone of voice, …) in terms of BML (Behavior Markup Language, a language for virtual agent specification) files which are translated into actions of a robot tutor. The platform and the methodology are both adapted to primary school students. Finally, we illustrate the use of this platform to build a prototype implementation of the architecture, in which the educational software is instantiated with Scratch and the robot tutor with NAO. We also report on a user experiment we carried out to orient the development of the platform and of the prototype. We conclude from our work that, in the case of primary school students, it is possible to identify, without using intrusive and expensive identification methods, the emotions which most affect the character of educational interventions. Our work also demonstrates the feasibility of a general-purpose architecture of decoupled components, in which a wide range of educational software and robot tutors can be integrated and then used according to different educational criteria.

  14. A strategy planner for NASA robotics applications

    NASA Technical Reports Server (NTRS)

    Brodd, S. S.

    1985-01-01

    Automatic strategy or task planning is an important element of robotics systems. A strategy planner under development at Goddard Space Flight Center automatically produces robot plans for assembly, disassembly, or repair of NASA spacecraft from computer aided design descriptions of the individual parts of the spacecraft.

  15. Development of intelligent robots - Achievements and issues

    NASA Astrophysics Data System (ADS)

    Nitzan, D.

    1985-03-01

    A flexible, intelligent robot is regarded as a general purpose machine system that may include effectors, sensors, computers, and auxiliary equipment and, like a human, can perform a variety of tasks under unpredictable conditions. Development of intelligent robots is essential for increasing the growth rate of today's robot population in industry and elsewhere. Robotics research and development topics include manipulation, end effectors, mobility, sensing (noncontact and contact), adaptive control, robot programming languages, and manufacturing process planning. Past achievements and current issues related to each of these topics are described briefly.

  16. Dynamic Modelling Of A SCARA Robot

    NASA Astrophysics Data System (ADS)

    Turiel, J. Perez; Calleja, R. Grossi; Diez, V. Gutierrez

    1987-10-01

    This paper describes a method for modelling industrial robots that takes a dynamic approach to manipulation-system motion generation, obtaining the complete dynamic model of the mechanical part of the robot and taking into account the dynamic effect of the actuators acting at the joints. For a four-degree-of-freedom SCARA robot, we obtain the dynamic model for the basic (minimal) configuration, that is, the three degrees of freedom that place the robot end effector at a desired point, using the Lagrange method to obtain the dynamic equations in matrix form. The manipulator is treated as a set of rigid bodies interconnected by joints forming simple kinematic pairs. The state-space model is then obtained for the actuators that move the robot joints by combining the models of the individual actuators, namely two DC permanent-magnet servomotors and an electrohydraulic actuator. Finally, a computer simulation program written in FORTRAN computes the matrices of the complete model.
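    For reference, the Lagrange formulation mentioned above leads, for an n-degree-of-freedom manipulator, to the standard matrix form of the dynamic equations; the notation below is generic rather than the authors' specific SCARA model.

```latex
% Generic matrix form of the manipulator dynamics obtained via the Lagrange method
\begin{equation}
  M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau
\end{equation}
% where $q \in \mathbb{R}^{n}$ collects the joint variables ($n = 3$ for the basic
% SCARA configuration), $M(q)$ is the inertia matrix, $C(q,\dot{q})\dot{q}$ gathers
% the Coriolis and centrifugal terms, $g(q)$ is the gravity vector, and $\tau$ is
% the vector of generalized actuator torques/forces.
```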

  17. Artificial consciousness, artificial emotions, and autonomous robots.

    PubMed

    Cardon, Alain

    2006-12-01

    For today's robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, regarded by human beings as their defining property. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and consciousness facts. Producing such artificial brains enables intentional and genuinely adaptive behavior in autonomous robots. The system managing the robot's behavior consists of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts; the second realizes the representation of the first using morphologies in a dynamic geometrical way. The robot's body is itself treated as the morphologic apprehension of its material substrate. The model rests strictly on the notion of massive multi-agent organizations with morphologic control.

  18. Effect of motor dynamics on nonlinear feedback robot arm control

    NASA Technical Reports Server (NTRS)

    Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping

    1991-01-01

    A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further evaluated experimentally on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory-tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields markedly improved performance.
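    The following sketch, in generic notation rather than the authors' exact formulation, indicates how coupling rigid-body dynamics with first-order motor dynamics yields a third-order model and where the linearizing feedback enters.

```latex
% Schematic structure of the combined model described above (generic notation).
% Rigid-body manipulator dynamics:
\begin{equation}
  M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = \tau
\end{equation}
% First-order electrical dynamics of the joint motors, with torque constant $K_t$:
\begin{equation}
  L\,\frac{di}{dt} + R\,i + K_b\,\dot{q} = u, \qquad \tau = K_t\, i
\end{equation}
% Eliminating the armature current $i$ yields a third-order model from the voltage
% input $u$ to the joint position $q$; a nonlinear feedback of the form
% $u = \alpha(x) + \beta(x)\,v$ then linearizes and decouples the input-output
% behavior, with the new input $v$ defined directly in task space.
```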

  19. Hazardous Environment Robotics

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Jet Propulsion Laboratory (JPL) developed video overlay calibration and demonstration techniques for ground-based telerobotics. Through a technology sharing agreement with JPL, Deneb Robotics added this as an option to its robotics software, TELEGRIP. The software is used for remotely operating robots in nuclear and hazardous environments in industries including automotive and medical. The option allows the operator to utilize video to calibrate 3-D computer models with the actual environment, and thus plan and optimize robot trajectories before the program is automatically generated.

  20. Robot vibration control using inertial damping forces

    NASA Technical Reports Server (NTRS)

    Lee, Soo Han; Book, Wayne J.

    1991-01-01

    This paper concerns the suppression of the vibration of a large flexible robot by the inertial forces of a small robot located at the tip of the large robot. A controller that generates damping forces for the large robot is designed based on a two-time-scale model. The controller does not need to calculate the quasi-steady variables and is computationally efficient. Simulation results show the effectiveness of the inertial forces and of the controller designed.

  1. Robot vibration control using inertial damping forces

    NASA Technical Reports Server (NTRS)

    Lee, Soo Han; Book, Wayne J.

    1989-01-01

    The suppression of the vibration of a large flexible robot by the inertial forces of a small robot located at the tip of the large robot is examined. A controller that generates damping forces for the large robot is designed based on a two-time-scale model. The controller does not need to calculate the quasi-steady-state variables and is computationally efficient. Simulation results show the effectiveness of the inertial forces and of the controller designed.

  2. Soft Robotic Manipulator for Improving Dexterity in Minimally Invasive Surgery.

    PubMed

    Diodato, Alessandro; Brancadoro, Margherita; De Rossi, Giacomo; Abidi, Haider; Dall'Alba, Diego; Muradore, Riccardo; Ciuti, Gastone; Fiorini, Paolo; Menciassi, Arianna; Cianchetti, Matteo

    2018-02-01

    Combining the strengths of surgical robotics and minimally invasive surgery (MIS) holds the potential to revolutionize surgical interventions. The MIS advantages for patients are obvious, but the use of instrumentation suitable for MIS often translates into limiting the surgeon's capabilities (e.g., reduced dexterity and maneuverability and demanding navigation around organs). To overcome these shortcomings, the application of soft robotics technologies and approaches can be beneficial. Devices based on soft materials are already demonstrating several advantages in application areas where dexterity and safe interaction are needed. In this article, the authors demonstrate that soft robotics can be used synergistically with traditional rigid tools to improve the capabilities of the robotic system without affecting the usability of the robotic platform. A bioinspired soft manipulator equipped with a miniaturized camera has been integrated with the Endoscopic Camera Manipulator arm of the da Vinci Research Kit from both hardware and software viewpoints. Usability of the integrated system has been evaluated with nonexpert users through a standard protocol to highlight difficulties in controlling the soft manipulator. This is the first time that an endoscopic tool based on soft materials has been integrated into a surgical robot. The soft endoscopic camera can be easily operated through the da Vinci Research Kit master console, increasing the workspace and dexterity without limiting intuitive and user-friendly operation.

  3. Energy Efficient Legged Robotics at Sandia Labs, Part 2

    ScienceCinema

    Buerger, Steve; Mazumdar, Ani; Spencer, Steve

    2018-01-16

    Sandia is developing energy efficient actuation and drive train technologies to dramatically improve the charge life of legged robots. The work is supported by DARPA, and Sandia will demonstrate an energy efficient bipedal robot at the technology exposition section of the DARPA Robotics Challenge Finals in June, 2015. This video, the second in a series, describes the continued development and integration of the Sandia Transmission Efficient Prototype Promoting Research (STEPPR) robot.

  4. Energy Efficient Legged Robotics at Sandia Labs, Part 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buerger, Steve; Mazumdar, Ani; Spencer, Steve

    Sandia is developing energy efficient actuation and drive train technologies to dramatically improve the charge life of legged robots. The work is supported by DARPA, and Sandia will demonstrate an energy efficient bipedal robot at the technology exposition section of the DARPA Robotics Challenge Finals in June, 2015. This video, the second in a series, describes the continued development and integration of the Sandia Transmission Efficient Prototype Promoting Research (STEPPR) robot.

  5. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot

    PubMed Central

    Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
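    As a minimal time-domain sketch in the spirit of the IC model, the code below simulates a single-link inverted-pendulum stance model stabilized by delayed PD-type feedback on a weighted sensory error, with added torque noise. All parameter values are illustrative assumptions and are not the estimates reported in the study.

```python
import numpy as np

# Illustrative inverted-pendulum stance simulation with delayed PD-type feedback
# and torque noise, loosely inspired by the IC model described above. Parameters
# are illustrative only, not the study's estimates.

m, h, g = 70.0, 1.0, 9.81            # body mass [kg], CoM height [m], gravity
J = m * h**2                          # point-mass moment of inertia about the ankle
Kp, Kd, delay = 1200.0, 300.0, 0.1    # feedback gains [Nm/rad, Nms/rad], lumped delay [s]
dt, T = 0.001, 60.0                   # integration step and duration [s]
noise_std = 0.5                       # torque noise [Nm], mimicking added noise

steps = int(T / dt)
d = int(delay / dt)
theta = np.zeros(steps)               # body sway angle [rad]
omega = np.zeros(steps)               # body sway velocity [rad/s]
# Illustrative support-surface tilt sequence (slow square wave of +/- 0.02 rad)
surface = 0.02 * np.sign(np.sin(2 * np.pi * 0.1 * np.arange(steps) * dt))

for k in range(1, steps):
    kd = max(k - d, 0)                                   # delayed sensory sample
    err = theta[kd] - 0.5 * surface[kd]                  # weighted sensory error (illustrative)
    torque = -Kp * err - Kd * omega[kd] + np.random.randn() * noise_std
    alpha = (m * g * h * np.sin(theta[k - 1]) + torque) / J
    omega[k] = omega[k - 1] + alpha * dt
    theta[k] = theta[k - 1] + omega[k] * dt
```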

  6. Evidence in Support of the Independent Channel Model Describing the Sensorimotor Control of Human Stance Using a Humanoid Robot.

    PubMed

    Pasma, Jantsje H; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C

    2018-01-01

    The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control.

  7. Genetic Robots: An Integrated Art and Biology Curriculum.

    ERIC Educational Resources Information Center

    Schramm, Susan L.

    2000-01-01

    Describes the design and implementation of an integrated art and science curriculum "Genetic Robotics: A Three-Dimensional Scientific Inquiry" for high school art and biology students at Madeira Junior/Senior High School in Cincinnati, Ohio. States that the project aimed at recognizing individual differences while enabling students to become…

  8. Real time AI expert system for robotic applications

    NASA Technical Reports Server (NTRS)

    Follin, John F.

    1987-01-01

    A computer controlled multi-robot process cell to demonstrate advanced technologies for the demilitarization of obsolete chemical munitions was developed. The methods through which the vision system and other sensory inputs were used by the artificial intelligence to provide the information required to direct the robots to complete the desired task are discussed. The mechanisms that the expert system uses to solve problems (goals), the different rule data base, and the methods for adapting this control system to any device that can be controlled or programmed through a high level computer interface are discussed.

  9. Coordinating teams of autonomous vehicles: an architectural perspective

    NASA Astrophysics Data System (ADS)

    Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo

    2005-05-01

    In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).

  10. Smart Hand For Manipulators

    NASA Astrophysics Data System (ADS)

    Fiorini, Paolo

    1987-10-01

    Sensor based, computer controlled end effectors for mechanical arms are receiving more and more attention in the robotics industry, because commonly available grippers are only adequate for simple pick and place tasks. This paper describes the current status of the research at JPL on a smart hand for a Puma 560 robot arm. The hand is a self contained, autonomous system, capable of executing high level commands from a supervisory computer. The mechanism consists of parallel fingers, powered by a DC motor, and controlled by a microprocessor embedded in the hand housing. Special sensors are integrated in the hand for measuring the grasp force of the fingers, and for measuring forces and torques applied between the arm and the surrounding environment. Fingers can be exercised under position, velocity and force control modes. The single-chip microcomputer in the hand executes the tasks of communication, data acquisition and sensor based motor control, with a sample cycle of 2 ms and a transmission rate of 9600 baud. The smart hand described in this paper represents a new development in the area of end effector design because of its multi-functionality and autonomy. It will also be a versatile test bed for experimenting with advanced control schemes for dexterous manipulation.

  11. A Fabry-Perot Interferometry Based MRI-Compatible Miniature Uniaxial Force Sensor for Percutaneous Needle Placement

    PubMed Central

    Shang, Weijian; Su, Hao; Li, Gang; Furlong, Cosme; Fischer, Gregory S.

    2014-01-01

    Robot-assisted surgical procedures, taking advantage of the high soft-tissue contrast and real-time imaging of magnetic resonance imaging (MRI), are developing rapidly. However, it is crucial to maintain tactile force feedback in MRI-guided needle-based procedures. This paper presents a Fabry-Perot interference (FPI) based system of an MRI-compatible fiber optic sensor which has been integrated into a piezoelectrically actuated robot for prostate cancer biopsy and brachytherapy in a 3T MRI scanner. The opto-electronic sensing system was miniaturized to fit inside an MRI-compatible robot controller enclosure. A flexure mechanism was designed that integrates the FPI sensor fiber for measuring needle insertion force, and finite element analysis was performed to optimize the force-deformation relationship. The compact, low-cost FPI sensing system was integrated into the robot and calibrated. The root mean square (RMS) error of the calibration over the range of 0–10 Newton was 0.318 Newton compared to the theoretical model, which has been proven sufficient for robot control and teleoperation. PMID:25126153

  12. Real-time computing platform for spiking neurons (RT-spike).

    PubMed

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
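    As a rough software illustration of the synaptic integration the platform implements in hardware, the sketch below updates a single conductance-based synapse with an exponential time constant driving a leaky membrane. It is a deliberately simplified caricature of the approach, not the RT-Spike design, and all parameter values are arbitrary assumptions.

```python
# Illustrative software model of conductance-based synaptic integration with a
# synaptic time constant (gradual charge injection), in the spirit of the neuron
# model described above. A caricature for exposition, not the RT-Spike hardware
# design; all parameter values are arbitrary.

dt = 0.1e-3                  # 0.1 ms time step
tau_s = 5e-3                 # synaptic time constant [s]
E_rest, E_syn = -70e-3, 0.0  # resting and synaptic reversal potentials [V]
v_thresh, v_reset = -54e-3, -70e-3
g_peak = 5e-9                # conductance increment per input spike [S]
C_m, g_leak = 200e-12, 10e-9 # membrane capacitance [F] and leak conductance [S]

v, g_syn = E_rest, 0.0
spike_times = {0.010, 0.012, 0.015}    # example presynaptic spike times [s]

for step in range(int(0.05 / dt)):
    t = step * dt
    if round(t, 4) in spike_times:
        g_syn += g_peak                        # input-driven conductance jump
    g_syn -= (g_syn / tau_s) * dt              # exponential synaptic decay
    i_syn = g_syn * (E_syn - v)                # gradual injection of charge
    dv = (-g_leak * (v - E_rest) + i_syn) / C_m
    v += dv * dt
    if v >= v_thresh:                          # threshold crossing -> output spike
        print(f"spike at t = {t * 1e3:.1f} ms")
        v = v_reset
```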

  13. A Facility and Architecture for Autonomy Research

    NASA Technical Reports Server (NTRS)

    Pisanich, Greg; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and a suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation Toolkit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows simulation components from various computing platforms to be mixed, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.

  14. A mobile robot system for ground servicing operations on the space shuttle

    NASA Astrophysics Data System (ADS)

    Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.

    1992-11-01

    A mobile system for space shuttle servicing, the Tessellator, has been configured and designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines the tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, along with a look at the mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, and high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.

  15. A mobile robot system for ground servicing operations on the space shuttle

    NASA Technical Reports Server (NTRS)

    Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.

    1992-01-01

    A mobile system for space shuttle servicing, the Tessellator, has been configured and designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines the tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, along with a look at the mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, and high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.

  16. A Survey of Robotic Technology.

    DTIC Science & Technology

    1983-07-01

    developed the following definition of a robot: A robot is a reprogrammable multifunctional manipulator designed to move material, parts, tools, or specialized... subroutines, commands to specific actuators, computations based on sensor data, etc. For instance, the job might be to assemble an automobile ...the set-up developed at Draper Labs to enable a robot to assemble an automobile alternator. The assembly operation is impressive to watch. The number

  17. Human voluntary activity integration in the control of a standing-up rehabilitation robot: a simulation study.

    PubMed

    Kamnik, Roman; Bajd, Tadej

    2007-11-01

    The paper presents a novel control approach for the robot-assisted motion augmentation of disabled subjects during the standing-up manoeuvre. The main goal of the proposal is to integrate the voluntary activity of a person in the control scheme of the rehabilitation robot. The algorithm determines the supportive force to be tracked by a robot force controller. The basic idea behind the calculation of supportive force is to quantify the deficit in the dynamic equilibrium of the trunk. The proposed algorithm was implemented as a Kalman filter procedure and evaluated in a simulation environment. The simulation results proved the adequate and robust performance of "patient-driven" robot-assisted standing-up training. In addition, the possibility of varying the training conditions with different degrees of the subject's initiative is demonstrated.
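    The abstract notes that the supportive-force computation is implemented as a Kalman filter procedure; below is a generic discrete-time Kalman filter step, included only to illustrate the estimation machinery. The state vector, models, and noise covariances used by the authors are not specified here, so everything in the example is a placeholder.

```python
import numpy as np

# Generic discrete-time Kalman filter predict/update step, shown only to
# illustrate the estimation machinery referred to above. The authors' state and
# measurement models are not reproduced; the matrices below are placeholders.

def kalman_step(x, P, z, F, H, Q, R):
    """One predict + update cycle given measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    S = H @ P_pred @ H.T + R                     # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example with a trivial 2-state constant-velocity model (illustrative only)
dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, z=np.array([0.1]), F=F, H=H, Q=Q, R=R)
```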

  18. A Robotics Systems Design Need: A Design Standard to Provide the Systems Focus that is Required for Longterm Exploration Efforts

    NASA Technical Reports Server (NTRS)

    Dischinger, H. Charles., Jr.; Mullins, Jeffrey B.

    2005-01-01

    The United States is entering a new period of human exploration of the inner Solar System, and robotic human helpers will be partners in that effort. In order to support integration of these new worker robots into existing and new human systems, a new design standard should be developed, to be called the Robot-Systems Integration Standard (RSIS). It will address the requirements for and constraints upon robotic collaborators with humans. These workers are subject to the same functional constraints as humans of work, reach, and visibility/situational awareness envelopes, and they will deal with the same maintenance and communication interfaces. Thus, the RSIS will be created by discipline experts with the same sort of perspective on these and other interface concerns as human engineers.

  19. Manipulator control and mechanization: A telerobot subsystem

    NASA Technical Reports Server (NTRS)

    Hayati, S.; Wilcox, B.

    1987-01-01

    The short- and long-term autonomous robot control activities in the Robotics and Teleoperators Research Group at the Jet Propulsion Laboratory (JPL) are described. This group is one of several involved in robotics and is an integral part of a new NASA robotics initiative called Telerobot program. A description of the architecture, hardware and software, and the research direction in manipulator control is given.

  20. Robust performance of multiple tasks by a mobile robot

    NASA Technical Reports Server (NTRS)

    Beckerman, Martin; Barnett, Deanna L.; Dickens, Mike; Weisbin, Charles R.

    1989-01-01

    While there have been many successful mobile robot experiments, only a few papers have addressed issues pertaining to the range of applicability, or robustness, of robotic systems. The purpose of this paper is to report results of a series of benchmark experiments done to determine and quantify the robustness of an integrated hardware and software system of a mobile robot.

  1. Measurement of the Robot Motor Capability of a Robot Motor System: A Fitts's-Law-Inspired Approach

    PubMed Central

    Lin, Hsien-I; George Lee, C. S.

    2013-01-01

    Robot motor capability is a crucial factor for a robot, because it affects how accurately and rapidly a robot can perform a motion to accomplish a task constrained by spatial and temporal conditions. In this paper, we propose and derive a pseudo-index of motor performance (pIp) to characterize robot motor capability, with robot kinematics, dynamics, and control taken into consideration. The proposed pIp provides a quantitative measure for a robot with revolute joints and is inspired by the index of performance in Fitts's law of human skills. Computer simulations and experiments on a PUMA 560 industrial robot were conducted to validate the proposed pIp for performing a motion accurately and rapidly. PMID:23820745
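    For context, Fitts's law relates movement time to task difficulty for human aimed movements; the paper's pseudo-index is inspired by the human index of performance shown below (the robot-side derivation is given in the paper and is not reproduced here).

```latex
% Fitts's law for human aimed movements (Shannon formulation), which inspires
% the paper's pseudo-index of performance:
\begin{align}
  ID &= \log_2\!\left(\frac{D}{W} + 1\right), \\
  MT &= a + b \cdot ID, \\
  IP &= \frac{ID}{MT},
\end{align}
% where $D$ is the distance to the target, $W$ the target width, $MT$ the
% movement time, $a,b$ empirical constants, $ID$ the index of difficulty
% (in bits), and $IP$ the index of performance (bits/s).
```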

  2. Measurement of the robot motor capability of a robot motor system: a Fitts's-law-inspired approach.

    PubMed

    Lin, Hsien-I; Lee, C S George

    2013-07-02

    Robot motor capability is a crucial factor for a robot, because it affects how accurately and rapidly a robot can perform a motion to accomplish a task constrained by spatial and temporal conditions. In this paper, we propose and derive a pseudo-index of motor performance (pIp) to characterize robot motor capability, with robot kinematics, dynamics, and control taken into consideration. The proposed pIp provides a quantitative measure for a robot with revolute joints and is inspired by the index of performance in Fitts's law of human skills. Computer simulations and experiments on a PUMA 560 industrial robot were conducted to validate the proposed pIp for performing a motion accurately and rapidly.

  3. Evolution of Signaling in a Multi-Robot System: Categorization and Communication

    NASA Astrophysics Data System (ADS)

    Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Dorigo, Marco

    We use Evolutionary Robotics to design robot controllers in which decision-making mechanisms to switch from solitary to social behavior are integrated with the mechanisms that underpin the sensory-motor repertoire of the robots. In particular, we study the evolution of behavioral and communicative skills in a categorization task. The individual decision-making structures are based on the integration of sensory information over time. The mechanisms for switching from solitary to social behavior and the ways in which the robots can affect each other's behavior are not predetermined by the experimenter, but are aspects of our model designed by artificial evolution. Our results show that evolved robots manage to cooperate and collectively discriminate between different environments by developing a simple communication protocol based on sound signaling. Communication emerges in the absence of explicit selective pressure coded in the fitness function. The evolution of communication is neither trivial nor obvious; for a meaningful signaling system to evolve, evolution must produce both appropriate signals and appropriate reactions to signals. The use of communication proves to be adaptive for the group, even though, in principle, non-cooperating robots could be as successful as cooperating ones.

  4. Interaction dynamics of multiple mobile robots with simple navigation strategies

    NASA Technical Reports Server (NTRS)

    Wang, P. K. C.

    1989-01-01

    The global dynamic behavior of multiple interacting autonomous mobile robots with simple navigation strategies is studied. Here, the effective spatial domain of each robot is taken to be a closed ball about its mass center. It is assumed that each robot has a specified cone of visibility such that interaction with other robots takes place only when they enter its visibility cone. Based on a particle model for the robots, various simple homing and collision-avoidance navigation strategies are derived. Then, an analysis of the dynamical behavior of the interacting robots in unbounded spatial domains is made. The article concludes with the results of computer simulation studies of two or more interacting robots.
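    As a rough illustration of this kind of particle-model navigation, the toy sketch below moves each robot toward its home position and adds a repulsive term for any other robot that falls inside its visibility cone. The article's actual strategy definitions differ; every constant here is an illustrative assumption.

```python
import numpy as np

# Toy particle model of homing with cone-of-visibility collision avoidance,
# loosely in the spirit of the strategies analyzed above. Illustrative only.

def step(positions, headings, homes, dt=0.05, speed=1.0,
         cone_half_angle=np.pi / 4, avoid_radius=1.5, k_avoid=2.0):
    new_pos = positions.copy()
    new_head = headings.copy()
    for i, (p, h, goal) in enumerate(zip(positions, headings, homes)):
        v = speed * (goal - p) / (np.linalg.norm(goal - p) + 1e-9)   # homing term
        for j, q in enumerate(positions):
            if i == j:
                continue
            rel = q - p
            dist = np.linalg.norm(rel)
            visible = np.dot(rel / (dist + 1e-9), h) > np.cos(cone_half_angle)
            if visible and dist < avoid_radius:                      # robot j seen nearby
                v -= k_avoid * rel / (dist**2 + 1e-9)                # repulsive term
        new_pos[i] = p + v * dt
        new_head[i] = v / (np.linalg.norm(v) + 1e-9)                 # face direction of motion
    return new_pos, new_head

# Two robots homing toward each other's starting areas (illustrative)
pos = np.array([[0.0, 0.0], [4.0, 0.2]])
head = np.array([[1.0, 0.0], [-1.0, 0.0]])
homes = np.array([[4.0, 0.0], [0.0, 0.0]])
for _ in range(200):
    pos, head = step(pos, head, homes)
print(pos)
```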

  5. Design of a Compact Actuation and Control System for Flexible Medical Robots.

    PubMed

    Morimoto, Tania K; Hawkes, Elliot Wright; Okamura, Allison M

    2017-07-01

    Flexible medical robots can improve surgical procedures by decreasing invasiveness and increasing accessibility within the body. Using preoperative images, these robots can be designed to optimize a procedure for a particular patient. To minimize invasiveness and maximize biocompatibility, the actuation units of flexible medical robots should be placed fully outside the patient's body. In this letter, we present a novel, compact, lightweight, and modular actuation and control system for driving a class of these flexible robots, known as concentric tube robots. A key feature of the design is the use of three-dimensional printed waffle gears to enable compact control of two degrees of freedom within each module. We measure the precision and accuracy of a single actuation module and demonstrate the ability of an integrated set of three actuation modules to control six degrees of freedom. The integrated system drives a three-tube concentric tube robot to reach a final tip position that is on average less than 2 mm from a given target. In addition, we show a handheld manifestation of the device and present its potential applications.

  6. Integration of advanced teleoperation technologies for control of space robots

    NASA Technical Reports Server (NTRS)

    Stagnaro, Michael J.

    1993-01-01

    Teleoperated robots require one or more humans to control actuators, mechanisms, and other robot equipment given feedback from onboard sensors. To accomplish this task, the human or humans require some form of control station. Desirable features of such a control station include operation by a single human, comfort, and natural human interfaces (visual, audio, motion, tactile, etc.). These interfaces should work to maximize performance of the human/robot system by streamlining the link between the human brain and the robot equipment. This paper describes the development of a control station testbed with the characteristics described above. Initially, this testbed will be used to control two teleoperated robots. Features of the robots include anthropomorphic mechanisms, slaving to the testbed, and delivery of sensory feedback to the testbed. The testbed will make use of technologies such as helmet-mounted displays, voice recognition, and exoskeleton masters. It will allow for integration and testing of emerging telepresence technologies along with techniques for coping with control link time delays. Systems developed from this testbed could be applied to ground control of space-based robots. During man-tended operations, the Space Station Freedom may benefit from ground control of IVA or EVA robots for science or maintenance tasks. Planetary exploration may also find advanced teleoperation systems to be very useful.

  7. A Robot-Driven Computational Model for Estimating Passive Ankle Torque With Subject-Specific Adaptation.

    PubMed

    Zhang, Mingming; Meng, Wei; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Q

    2016-04-01

    Robot-assisted ankle assessment could potentially be conducted using sensor-based and model-based methods. Existing ankle rehabilitation robots usually use torquemeters and multiaxis load cells for measuring joint dynamics. These measurements are accurate, but the contribution as a result of muscles and ligaments is not taken into account. Some computational ankle models have been developed to evaluate ligament strain and joint torque. These models do not include muscles and, thus, are not suitable for an overall ankle assessment in robot-assisted therapy. This study proposed a computational ankle model for use in robot-assisted therapy with three rotational degrees of freedom, 12 muscles, and seven ligaments. This model is driven by robotics, uses three independent position variables as inputs, and outputs an overall ankle assessment. Subject-specific adaptations by geometric and strength scaling were also made to allow for a universal model. This model was evaluated using published results and experimental data from 11 participants. Results show a high accuracy in the evaluation of ligament neutral length and passive joint torque. The subject-specific adaptation performance is high, with each normalized root-mean-square deviation value less than 10%. This model could be used for ankle assessment, especially in evaluating passive ankle torque, for a specific individual. The characteristic that is unique to this model is the use of three independent position variables that can be measured in real time as inputs, which makes it advantageous over other models when combined with robot-assisted therapy.
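    For reference, the normalized root-mean-square deviation used as the adaptation criterion above is commonly defined by normalizing the RMSD by the range of the reference values; this is an assumed convention, as the paper's exact normalization is not reproduced here.

```latex
% Normalized root-mean-square deviation (one common convention, assumed here):
\begin{equation}
  \mathrm{NRMSD} \;=\; \frac{100\%}{y_{\max} - y_{\min}}
  \sqrt{\frac{1}{N}\sum_{k=1}^{N}\bigl(\hat{y}_k - y_k\bigr)^{2}},
\end{equation}
% where $y_k$ are the reference values, $\hat{y}_k$ the model predictions, and
% the reported subject-specific adaptation keeps this value below 10\%.
```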

  8. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. Working towards a solution that increases the robustness of this system for generic object recognition, this paper demonstrates an extension of the application by detecting soda cans in a cluttered indoor environment. The system for human presence detection from a moving platform uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
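    For illustration, a boosted cascade of the kind mentioned can be run with OpenCV's CascadeClassifier. The snippet below loads one of OpenCV's bundled frontal-face cascades as a stand-in, since the custom license-plate and soda-can cascades trained by the authors are not available here; the camera index is also an assumption.

```python
import cv2

# Illustrative use of a boosted cascade of classifiers (Viola-Jones style) with
# OpenCV. A stock frontal-face cascade stands in for the authors' custom
# cascades; file path and camera index are assumptions.

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # camera index 0 is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Multi-scale sliding-window detection with the boosted cascade
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```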

  9. Motion planning: A journey of robots, molecules, digital actors, and other artifacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latombe, J.C.

    1999-11-01

    During the past three decades, motion planning has emerged as a crucial and productive research area in robotics. In the mid-1980s, the most advanced planners were barely able to compute collision-free paths for objects crawling in planar workspaces. Today, planners efficiently deal with robots with many degrees of freedom in complex environments. Techniques also exist to generate quasi-optimal trajectories, coordinate multiple robots, deal with dynamic and kinematic constraints, and handle dynamic environments. This paper describes some of these achievements, presents new problems that have recently emerged, discusses applications likely to motivate future research, and finally gives expectations for the coming years. It stresses the fact that nonrobotics applications (e.g., graphic animation, surgical planning, computational biology) are growing in importance and are likely to shape future motion-planning research more than robotics itself.

  10. Characteristics of Behavior of Robots with Emotion Model

    NASA Astrophysics Data System (ADS)

    Sato, Shigehiko; Nozawa, Akio; Ide, Hideto

    A cooperative multi-robot system has many advantages over a single-robot system. It can adapt to various circumstances and offers flexibility across a variety of tasks. However, controlling each robot remains a challenge, even though methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world settings, and emotion and sensitivity in robots have been widely studied. In this study, a human emotion model based on psychological interaction was applied to a multi-robot system in order to obtain methods for organizing multiple robots. The behavioral characteristics of the multi-robot system were analyzed through computer simulation. As a result, very complex and interesting behavior emerged even though the system has a rather simple configuration, and it showed flexibility in various circumstances. Additional experiments with actual robots will be conducted based on the emotion model.

  11. A review of robotics in surgery.

    PubMed

    Davies, B

    2000-01-01

    A brief introduction is given to the definitions and history of surgical robotics. The capabilities and merits of surgical robots are then contrasted with the related field of computer assisted surgery. A classification is then given of the various types of robot system currently being investigated internationally, together with a number of examples of different applications in both soft-tissue and orthopaedic surgery. The paper finishes with a discussion of the main difficulties facing robotic surgery and a prediction of future progress.

  12. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

    This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  13. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    The blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. The velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
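    A minimal numeric sketch of the velocity-blending idea follows, assuming straight-line segments that meet at a via frame and a third-order-polynomial weight that ramps from the incoming to the outgoing segment velocity over a fixed blend window; the article's exact blend functions and angular-velocity correction are not reproduced.

```python
import numpy as np

# Minimal sketch of task-space linear-velocity blending between two straight
# path segments meeting at a via point. A third-order-polynomial weight ramps
# smoothly from the incoming velocity to the outgoing one over a blend window
# of duration tb. Values and the particular blend choice are illustrative.

def cubic_blend_weight(s):
    """Smoothstep weight: 0 -> 1 with zero slope at both ends (0 <= s <= 1)."""
    return 3 * s**2 - 2 * s**3

def blended_velocity(v_in, v_out, t, tb):
    """Velocity during the blend window t in [0, tb] around the via point."""
    w = cubic_blend_weight(np.clip(t / tb, 0.0, 1.0))
    return (1 - w) * v_in + w * v_out

v_in = np.array([0.2, 0.0, 0.0])     # incoming segment velocity [m/s]
v_out = np.array([0.0, 0.2, 0.0])    # outgoing segment velocity [m/s]
tb = 0.5                             # blend duration [s]
for t in np.linspace(0.0, tb, 6):
    print(round(t, 2), blended_velocity(v_in, v_out, t, tb))
```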

  14. Surgical robot setup simulation with consistent kinematics and haptics for abdominal surgery.

    PubMed

    Hayashibe, Mitsuhiro; Suzuki, Naoki; Hattori, Asaki; Suzuki, Shigeyuki; Konishi, Kozo; Kakeji, Yoshihiro; Hashizume, Makoto

    2005-01-01

    Preoperative simulation and planning of the surgical robot setup should accompany advanced robotic surgery if its advantages are to be further pursued. Feedback from the planning system will play an essential role in computer-aided robotic surgery, in addition to preoperative detailed geometric information from patient CT/MRI images. Surgical robot setup simulation systems for appropriate trocar site placement have been developed, especially for abdominal surgery. The motion of the surgical robot can be simulated and rehearsed with kinematic constraints at the trocar site and the inverse kinematics of the robot. Results from simulation using clinical patient data verify the effectiveness of the proposed system.

  15. Kinematic control of robot with degenerate wrist

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Moore, M. C.

    1984-01-01

    Kinematic resolved rate equations allow an operator with visual feedback to dynamically control a robot hand. When the robot wrist is degenerate, the computed joint angle rates exceed operational limits, and unwanted hand movements can result. The generalized matrix inverse solution can also produce unwanted responses. A method is introduced to control the robot hand in the region of the degenerate robot wrist. The method uses a coordinated movement of the first and third joints of the robot wrist to locate the second wrist joint axis for movement of the robot hand in the commanded direction. The method does not entail infinite joint angle rates.
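    As background for the degeneracy issue, resolved-rate control maps a commanded hand velocity to joint rates through the manipulator Jacobian; near a degenerate (singular) wrist the plain inverse produces enormous joint rates, which a damped least-squares pseudoinverse keeps bounded, as the sketch below shows. The coordinated first/third wrist-joint method introduced in the paper is a different approach and is not reproduced here.

```python
import numpy as np

# Resolved-rate kinematics: joint rates from a commanded hand velocity via the
# Jacobian. Near a degenerate wrist the plain inverse blows up; a damped
# least-squares pseudoinverse keeps the rates bounded. Generic background only,
# not the coordinated-wrist method proposed in the paper.

def resolved_rate(J, x_dot, damping=0.0):
    """qdot = J^T (J J^T + lambda^2 I)^-1 xdot (damping=0 gives the pseudoinverse)."""
    m = J.shape[0]
    JJt = J @ J.T + (damping**2) * np.eye(m)
    return J.T @ np.linalg.solve(JJt, x_dot)

# A nearly singular 6x6 Jacobian (illustrative) and a unit hand-velocity command
J = np.eye(6)
J[5, 5] = 1e-6                       # wrist axes nearly aligned -> degeneracy
x_dot = np.ones(6)

print(np.max(np.abs(resolved_rate(J, x_dot))))                # enormous joint rate
print(np.max(np.abs(resolved_rate(J, x_dot, damping=0.05))))  # bounded joint rates
```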

  16. Present status and trends of image fusion

    NASA Astrophysics Data System (ADS)

    Xiang, Dachao; Fu, Sheng; Cai, Yiheng

    2009-10-01

    Image fusion extracts information from multiple images that is more accurate and reliable than the information obtained from any single image. Because different images capture different aspects of the measured scene, comprehensive information can be obtained by integrating them. Image fusion is a main branch of the application of data fusion technology. At present, it is widely used in computer vision, remote sensing, robot vision, medical image processing, and military applications. This paper presents the content and research methods of image fusion, reviews the current status of the field domestically and abroad, and analyzes its development trends.

  17. Integrated Artificial Intelligence Approaches for Disease Diagnostics.

    PubMed

    Vashistha, Rajat; Chhabra, Deepak; Shukla, Pratyoosh

    2018-06-01

    Mechanocomputational techniques in conjunction with artificial intelligence (AI) are revolutionizing the interpretation of crucial information from medical data and converting it into optimized and organized information for diagnostics. This is possible thanks to substantial advances in artificial intelligence, computer-aided diagnostics, virtual assistants, robotic surgery, augmented reality, and AI-based genome editing technologies. Such techniques are serving as products for diagnosing emerging microbial and non-microbial diseases. This article presents a combined approach to using such techniques and providing therapeutic solutions for their use in disease diagnostics.

  18. PIMS sequencing extension: a laboratory information management system for DNA sequencing facilities.

    PubMed

    Troshin, Peter V; Postis, Vincent Lg; Ashworth, Denise; Baldwin, Stephen A; McPherson, Michael J; Barton, Geoffrey J

    2011-03-07

    Facilities that provide a service for DNA sequencing typically support large numbers of users and experiment types. The cost of services is often reduced by the use of liquid handling robots but the efficiency of such facilities is hampered because the software for such robots does not usually integrate well with the systems that run the sequencing machines. Accordingly, there is a need for software systems capable of integrating different robotic systems and managing sample information for DNA sequencing services. In this paper, we describe an extension to the Protein Information Management System (PIMS) that is designed for DNA sequencing facilities. The new version of PIMS has a user-friendly web interface and integrates all aspects of the sequencing process, including sample submission, handling and tracking, together with capture and management of the data. The PIMS sequencing extension has been in production since July 2009 at the University of Leeds DNA Sequencing Facility. It has completely replaced manual data handling and simplified the tasks of data management and user communication. Samples from 45 groups have been processed with an average throughput of 10000 samples per month. The current version of the PIMS sequencing extension works with Applied Biosystems 3130XL 96-well plate sequencer and MWG 4204 or Aviso Theonyx liquid handling robots, but is readily adaptable for use with other combinations of robots. PIMS has been extended to provide a user-friendly and integrated data management solution for DNA sequencing facilities that is accessed through a normal web browser and allows simultaneous access by multiple users as well as facility managers. The system integrates sequencing and liquid handling robots, manages the data flow, and provides remote access to the sequencing results. The software is freely available, for academic users, from http://www.pims-lims.org/.

  19. A limit-cycle self-organizing map architecture for stable arm control.

    PubMed

    Huang, Di-Wei; Gentili, Rodolphe J; Katz, Garrett E; Reggia, James A

    2017-01-01

    Inspired by the oscillatory nature of cerebral cortex activity, we recently proposed and studied self-organizing maps (SOMs) based on limit cycle neural activity in an attempt to improve the information efficiency and robustness of conventional single-node, single-pattern representations. Here we explore for the first time the use of limit cycle SOMs to build a neural architecture that controls a robotic arm by solving inverse kinematics in reach-and-hold tasks. This multi-map architecture integrates open-loop and closed-loop controls that learn to self-organize oscillatory neural representations and to harness non-fixed-point neural activity even for fixed-point arm reaching tasks. We show through computer simulations that our architecture generalizes well, achieves accurate, fast, and smooth arm movements, and is robust in the face of arm perturbations, map damage, and variations of internal timing parameters controlling the flow of activity. A robotic implementation is evaluated successfully without further training, demonstrating for the first time that limit cycle maps can control a physical robot arm. We conclude that architectures based on limit cycle maps can be organized to function effectively as neural controllers.

  20. Soft Robotic Grippers.

    PubMed

    Shintake, Jun; Cacucciolo, Vito; Floreano, Dario; Shea, Herbert

    2018-05-07

    Advances in soft robotics, materials science, and stretchable electronics have enabled rapid progress in soft grippers. Here, a critical overview of soft robotic grippers is presented, covering different material sets, physical principles, and device architectures. Soft gripping can be categorized into three technologies, enabling grasping by: a) actuation, b) controlled stiffness, and c) controlled adhesion. A comprehensive review of each type is presented. Compared to rigid grippers, end-effectors fabricated from flexible and soft components can often grasp or manipulate a larger variety of objects. Such grippers are an example of morphological computation, where control complexity is greatly reduced by material softness and mechanical compliance. Advanced materials and soft components, in particular silicone elastomers, shape memory materials, and active polymers and gels, are increasingly investigated for the design of lighter, simpler, and more universal grippers, using the inherent functionality of the materials. Embedding stretchable distributed sensors in or on soft grippers greatly enhances the ways in which the grippers interact with objects. Challenges for soft grippers include miniaturization, robustness, speed, integration of sensing, and control. Improved materials, processing methods, and sensing play an important role in future research.
