Improving semantic scene understanding using prior information
NASA Astrophysics Data System (ADS)
Laddha, Ankit; Hebert, Martial
2016-05-01
Perception for ground robot mobility requires automatic generation of descriptions of the robot's surroundings from sensor input (cameras, LADARs, etc.). Effective techniques for scene understanding have been developed, but they are generally purely bottom-up in that they rely entirely on classifying features from the input data based on learned models. In fact, perception systems for ground robots have a lot of information at their disposal from knowledge about the domain and the task. For example, a robot in urban environments might have access to approximate maps that can guide the scene interpretation process. In this paper, we explore practical ways to combine such prior information with state-of-the-art scene understanding approaches.
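As an illustration of the kind of prior/bottom-up combination described here, the following is a minimal numpy sketch (not the authors' method) in which per-pixel class likelihoods from a learned classifier are fused with a class prior derived from an approximate map via Bayes' rule; all array shapes and class names are invented for the example:

```python
import numpy as np

def fuse_with_prior(likelihood, prior, eps=1e-9):
    """Combine per-pixel class likelihoods P(features | class) from a
    bottom-up classifier with a per-pixel prior P(class) derived from,
    e.g., an approximate city map, via Bayes' rule."""
    posterior = likelihood * prior                             # unnormalized P(class | features)
    posterior /= posterior.sum(axis=-1, keepdims=True) + eps   # normalize over classes
    return posterior

# Toy example: 2x2 image, 3 classes (road, building, vegetation).
likelihood = np.random.dirichlet(np.ones(3), size=(2, 2))  # classifier output
prior = np.full((2, 2, 3), 1.0 / 3.0)                      # uninformative prior
prior[:, 1] = [0.7, 0.2, 0.1]                              # map suggests right column is road
labels = fuse_with_prior(likelihood, prior).argmax(axis=-1)
print(labels)
```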
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and, finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information in the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
Doroodgar, Barzin; Liu, Yugang; Nejat, Goldie
2014-12-01
Semi-autonomous control schemes can address the limitations of both teleoperation and fully autonomous robotic control of rescue robots in disaster environments by allowing a human operator to cooperate with a rescue robot and share tasks such as navigation, exploration, and victim identification. In this paper, we present a unique hierarchical reinforcement learning (HRL)-based semi-autonomous control architecture for rescue robots operating in cluttered and unknown urban search and rescue (USAR) environments. The aim of the controller is to enable a rescue robot to continuously learn from its own experiences in an environment in order to improve its overall performance in exploration of unknown disaster scenes. A direction-based exploration technique is integrated in the controller to expand the search area of the robot via the classification of regions and the rubble piles within these regions. Both simulations and physical experiments in USAR-like environments verify the robustness of the proposed HRL-based semi-autonomous controller to unknown cluttered scenes with different sizes and varying types of configurations.
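The learning loop underneath such a controller can be sketched with a minimal tabular Q-learning update; the state and action encodings below (region and rubble classes, four exploration directions) are invented for illustration and stand in for the paper's much richer hierarchical architecture:

```python
import random

ACTIONS = ["north", "south", "east", "west"]   # candidate exploration directions
alpha, gamma, epsilon = 0.1, 0.9, 0.2          # learning rate, discount, exploration rate
Q = {}                                         # (state, action) -> estimated value

def choose(state):
    """Epsilon-greedy direction selection."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# One illustrative step: state = (region class, rubble class), reward for new area.
s = ("open_region", "light_rubble")            # hypothetical classified state
a = choose(s)
update(s, a, reward=1.0, next_state=("cluttered_region", "heavy_rubble"))
```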
Situational awareness for unmanned ground vehicles in semi-structured environments
NASA Astrophysics Data System (ADS)
Goodsell, Thomas G.; Snorrason, Magnus; Stevens, Mark R.
2002-07-01
Situational Awareness (SA) is a critical component of effective autonomous vehicles, reducing operator workload and allowing an operator to command multiple vehicles or simultaneously perform other tasks. Our Scene Estimation & Situational Awareness Mapping Engine (SESAME) provides SA for mobile robots in semi-structured scenes, such as parking lots and city streets. SESAME autonomously builds volumetric models for scene analysis. For example, a SESAME-equipped robot can build a low-resolution 3-D model of a row of cars, then approach a specific car and build a high-resolution model from a few stereo snapshots. The model can be used onboard to determine the type of car and locate its license plate, or the model can be segmented out and sent back to an operator who can view it from different viewpoints. As new views of the scene are obtained, the model is updated and changes are tracked (such as cars arriving or departing). Since the robot's position must be accurately known, SESAME also has automated techniques for determining the position and orientation of the camera (and hence, robot) with respect to existing maps. This paper presents an overview of the SESAME architecture and algorithms, including our model generation algorithm.
Learning Long-Range Vision for an Offroad Robot
2008-09-01
Teaching a robot to perceive and navigate in an unstructured natural world is a difficult task. Without learning, navigation systems are short-range and extremely... Unsupervised or weakly supervised learning methods are necessary for training general feature representations for natural scenes.
Robotic vision techniques for space operations
NASA Technical Reports Server (NTRS)
Krishen, Kumar
1994-01-01
Automation and robotics for space applications are being pursued for increased productivity, enhanced reliability, increased flexibility, higher safety, and for the automation of time-consuming tasks and those activities which are beyond the capacity of the crew. One of the key functional elements of an automated robotic system is sensing and perception. As the robotics era dawns in space, vision systems will be required to provide the key sensory data needed for multifaceted intelligent operations. In general, the three-dimensional scene/object description, along with location, orientation, and motion parameters, will be needed. In space, the absence of diffused lighting due to a lack of atmosphere gives rise to: (a) a high dynamic range (10^8) of scattered sunlight intensities, resulting in very high contrast between shadowed and specular portions of the scene; (b) intense specular reflections causing target/scene bloom; and (c) loss of portions of the image due to shadowing and the presence of stars, Earth, Moon, and other space objects in the scene. In this work, developments for combating the adverse effects described earlier and for enhancing scene definition are discussed. Both active and passive sensors are used. The algorithm for selecting the appropriate wavelength, polarization, and look angle of vision sensors is based on environmental factors as well as the properties of the target/scene which are to be perceived. The environment is characterized on the basis of sunlight and other illumination incident on the target/scene and the temperature profiles estimated on the basis of the incident illumination. The unknown geometrical and physical parameters are then derived from the fusion of the active and passive microwave, infrared, laser, and optical data.
Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.
Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao
2017-06-12
Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify orientation estimation and path prediction and to improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.
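A rough PyTorch sketch of the "navigation via classification" idea follows: a small CNN maps a single spherical image to a distribution over discretized heading directions. The architecture, input size, and class count are placeholders, not the paper's Spherical-Navi network:

```python
import torch
import torch.nn as nn

NUM_HEADINGS = 8  # number of discretized heading directions (placeholder)

class HeadingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, NUM_HEADINGS)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.classifier(f)          # logits over heading classes

net = HeadingNet()
panorama = torch.rand(1, 3, 128, 256)      # stand-in for one spherical image
probs = net(panorama).softmax(dim=1)       # confidence per heading direction
heading = probs.argmax(dim=1).item()       # predicted path direction
```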
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Characteristics of Behavior of Robots with Emotion Model
NASA Astrophysics Data System (ADS)
Sato, Shigehiko; Nozawa, Akio; Ide, Hideto
A cooperative multi-robot system has many advantages over a single-robot system: it can adapt to various circumstances and offers flexibility across varying tasks. However, controlling each robot remains a problem, although methods for controlling multi-robot systems have been studied. Recently, robots have been entering real-world scenes, and the emotion and sensitivity of robots have been widely studied. In this study, a human emotion model based on psychological interaction was adapted to a multi-robot system to obtain methods for organizing multiple robots. The behavioral characteristics of the multi-robot system, obtained through computer simulation, were analyzed. As a result, very complex and interesting behavior emerged even though the system has a rather simple configuration, and it showed flexibility in various circumstances. An additional experiment with actual robots will be conducted based on the emotion model.
Autonomous intelligent assembly systems LDRD 105746 final report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2013-04-01
This report documents a three-year effort to develop technology that enables mobile robots to perform autonomous assembly tasks in unstructured outdoor environments. This is a multi-tier problem that requires an integration of a large number of different software technologies including: command and control, estimation and localization, distributed communications, object recognition, pose estimation, real-time scanning, and scene interpretation. Although ultimately unsuccessful in achieving a target brick stacking task autonomously, numerous important component technologies were nevertheless developed. Such technologies include: a patent-pending polygon snake algorithm for robust feature tracking, a color grid algorithm for unique identification and calibration, a command and control framework for abstracting robot commands, a scanning capability that utilizes a compact robot-portable scanner, and more. This report describes this project and these developed technologies.
Machine vision and appearance based learning
NASA Astrophysics Data System (ADS)
Bernstein, Alexander
2017-03-01
Smart algorithms are used in machine vision to organize or extract high-level information from the available data. The resulting high-level understanding of the content of images, received from a given visual sensing system and belonging to an appearance space, can only be a key first step in solving various specific tasks such as mobile robot navigation in uncertain environments, road detection in autonomous driving systems, etc. Appearance-based learning has become very popular in the field of machine vision. In general, the appearance of a scene is a function of the scene content, the lighting conditions, and the camera position. The mobile robot localization problem is considered in a machine learning framework via appearance space analysis. This problem is reduced to a certain regression-on-an-appearance-manifold problem, and new regression-on-manifolds methods are used for its solution.
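The closing idea, regression from an appearance descriptor to a robot pose, can be sketched with an off-the-shelf k-nearest-neighbors regressor; the descriptors below are random placeholders, and the paper's actual manifold-learning machinery is not reproduced:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training set: appearance descriptors (e.g., downsampled images or Gist
# vectors) paired with the 2-D poses (x, y) where they were captured.
rng = np.random.default_rng(0)
descriptors = rng.random((200, 64))        # placeholder appearance features
poses = rng.random((200, 2)) * 10.0        # placeholder capture positions

reg = KNeighborsRegressor(n_neighbors=5, weights="distance")
reg.fit(descriptors, poses)                # "learn" the appearance-to-pose map

query = rng.random((1, 64))                # descriptor of the current view
print(reg.predict(query))                  # estimated robot position
```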
Robot Teleoperation and Perception Assistance with a Virtual Holographic Display
NASA Technical Reports Server (NTRS)
Goddard, Charles O.
2012-01-01
Teleoperation of robots in space from Earth has historically been difficult. Speed-of-light delays make direct joystick-type control infeasible, so it is desirable to command a robot in a very high-level fashion. However, in order to provide such an interface, knowledge of what objects are in the robot's environment and how they can be interacted with is required. In addition, many tasks that would be desirable to perform are highly spatial, requiring some form of six-degree-of-freedom input. These two issues can be combined, allowing the user to assist the robot's perception by identifying the locations of objects in the scene. The zSpace system, a virtual holographic environment, provides a virtual three-dimensional space superimposed over real space and a stylus whose position and rotation are tracked inside it. Using this system, a possible interface for this sort of robot control is proposed.
NASA Astrophysics Data System (ADS)
Chen, C.; Zou, X.; Tian, M.; Li, J.; Wu, W.; Song, Y.; Dai, W.; Yang, B.
2017-11-01
In order to automate the 3D indoor mapping task, a low-cost multi-sensor robot laser scanning system is proposed in this paper. The multi-sensor robot laser scanning system includes a panorama camera, a laser scanner, and an inertial measurement unit, among other sensors, which are calibrated and synchronized together to achieve simultaneous collection of 3D indoor data. Experiments are undertaken in a typical indoor scene and the data generated by the proposed system are compared with ground truth data collected by a TLS scanner, with 99.2% of points deviating by less than 0.25 m, which demonstrates the applicability and precision of the system in indoor mapping applications.
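The reported accuracy figure is of the form "fraction of mapped points within 0.25 m of the ground-truth cloud"; a small scipy sketch of that metric (an assumption about how it was computed, shown here on simulated data) is:

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_within(map_points, gt_points, tol=0.25):
    """Fraction of mapped points lying within `tol` meters of the
    nearest point in the TLS ground-truth cloud."""
    tree = cKDTree(gt_points)
    dists, _ = tree.query(map_points, k=1)   # nearest-neighbor distances
    return float((dists < tol).mean())

rng = np.random.default_rng(1)
gt = rng.random((10000, 3)) * 10.0               # simulated TLS ground truth
mapped = gt + rng.normal(0, 0.05, gt.shape)      # simulated 5 cm mapping noise
print(fraction_within(mapped, gt))               # close to 1.0 for this noise level
```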
Scene analysis for a breadboard Mars robot functioning in an indoor environment
NASA Technical Reports Server (NTRS)
Levine, M. D.
1973-01-01
The problem of computer perception in an indoor laboratory environment containing rocks of various sizes is dealt with. The sensory data processing is required for the NASA/JPL breadboard mobile robot, a test system for an adaptive variably-autonomous vehicle that will conduct scientific explorations on the surface of Mars. Scene analysis is discussed in terms of object segmentation followed by feature extraction, which results in a representation of the scene in the robot's world model.
Mishra, Ajay; Aloimonos, Yiannis
2009-01-01
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour (a connected set of boundary edge fragments in the edge map of the scene) around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
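A crude stand-in for fixation-based segmentation can be written as region growing from the fixation point; note this grows on smoothed intensity rather than finding the paper's enclosing depth-boundary contour in an edge map, so it is only a proxy for the idea:

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import flood

def segment_at_fixation(gray, fixation, tol=0.08):
    """Grow a region from the fixation point; a simplified stand-in for
    finding the enclosing contour around the fixation (not the paper's
    contour-based algorithm)."""
    smooth = gaussian(gray, sigma=2.0)           # suppress fine texture
    return flood(smooth, fixation, tolerance=tol)

# Synthetic scene: dark background with one bright square "object".
img = np.zeros((100, 100))
img[30:70, 30:70] = 1.0
mask = segment_at_fixation(img, fixation=(50, 50))
print(mask.sum())                                # pixel count of the fixated region
```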
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee
2015-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affects the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera, causing a keyhole effect. The keyhole effect reduces situation awareness, which may manifest in navigation issues such as a higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of including or excluding the robot chassis in the camera view, along with superimposing a simple arrow overlay onto the video feed, on operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and a combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.
Robotic vision. [process control applications
NASA Technical Reports Server (NTRS)
Williams, D. S.; Wilf, J. M.; Cunningham, R. T.; Eskenazi, R.
1979-01-01
Robotic vision, involving the use of a vision system to control a process, is discussed. Design and selection of active sensors employing radiation of radio waves, sound waves, and laser light, respectively, to light up unobservable features in the scene are considered, as are design and selection of passive sensors, which rely on external sources of illumination. The segmentation technique, by which an image is separated into different collections of contiguous picture elements having such common characteristics as color, brightness, or texture, is examined, with emphasis on the edge detection technique. The IMFEX (image feature extractor) system, performing edge detection and thresholding at 30 frames/sec television frame rates, is described. The template matching and discrimination approaches to recognizing objects are noted. Applications of robotic vision in industry, for tasks too monotonous or too dangerous for workers, are mentioned.
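The edge-detection-plus-thresholding stage attributed to IMFEX can be illustrated generically with a Sobel gradient magnitude followed by a fixed threshold; this is the textbook version, not the IMFEX hardware pipeline:

```python
import numpy as np
from scipy import ndimage

def edge_map(gray, thresh=0.25):
    """Sobel gradient magnitude followed by a fixed threshold --
    the generic form of frame-rate edge detection and thresholding."""
    gx = ndimage.sobel(gray, axis=1)         # horizontal gradient
    gy = ndimage.sobel(gray, axis=0)         # vertical gradient
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-9                  # normalize to [0, 1]
    return mag > thresh                      # binary edge image

frame = np.random.rand(120, 160)             # stand-in for a camera frame
edges = edge_map(frame)
print(edges.mean())                           # fraction of edge pixels
```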
Small, Lightweight Inspection Robot With 12 Degrees Of Freedom
NASA Technical Reports Server (NTRS)
Lee, Thomas S.; Ohm, Timothy R.; Hayati, Samad
1996-01-01
Small serpentine robot weighs only 6 lb and has a link diameter of 1.5 in. Designed to perform inspections. Multiple degrees of freedom enable it to reach around obstacles and through small openings into simple or complexly shaped confined spaces, to positions where it is difficult or impossible to perform inspections by other means. Fiber-optic borescope incorporated into robot arm, with inspection tip of borescope located at tip of arm. Borescope both conveys light along robot arm to illuminate scene inspected at tip and conveys image of scene back along robot arm to external imaging equipment.
NASA Astrophysics Data System (ADS)
Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.
2013-06-01
Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. It can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology, and a crucial threshold setting. This is a robust technique that can be applied in many areas, from leak detection to movement tracking, and further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional MSCD with a temperature sensor so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood of fire being present. Such a system would allow integration into autonomous mobile robots, so that not only security patrols but also fire detection could be undertaken.
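A minimal sketch of the adapted detector, under the assumption that "binary difference" means XOR of thresholded frames and that the fusion with temperature is a simple AND, follows; all threshold values are illustrative:

```python
import numpy as np
from scipy import ndimage

def mscd_alarm(prev_frame, cur_frame, temp_c,
               bin_thresh=0.5, pixel_thresh=50, temp_thresh=60.0):
    """Morphological scene change detection fused with a temperature
    reading: XOR of binarized frames, morphological opening to remove
    noise, a pixel-count threshold, then AND with a heat flag."""
    a = prev_frame > bin_thresh                     # binarize both frames
    b = cur_frame > bin_thresh
    diff = np.logical_xor(a, b)                     # binary difference (XOR)
    cleaned = ndimage.binary_opening(diff, structure=np.ones((3, 3)))
    scene_change = cleaned.sum() > pixel_thresh     # the crucial threshold setting
    return scene_change and (temp_c > temp_thresh)  # both triggered -> likely fire

f0 = np.zeros((120, 160))
f1 = f0.copy()
f1[40:80, 40:80] = 1.0                              # simulated bright flame region
print(mscd_alarm(f0, f1, temp_c=72.0))              # True: change + heat
```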
Three-Dimensional Images For Robot Vision
NASA Astrophysics Data System (ADS)
McFarland, William D.
1983-12-01
Robots are attracting increased attention in the industrial productivity crisis. As one significant approach for this nation to maintain technological leadership, the need for robot vision has become critical. The 'blind' robot, while occupying an economical niche at present, is severely limited and job specific, being only one step up from numerically controlled machines. To successfully satisfy robot vision requirements, a three-dimensional representation of a real scene must be provided. Several image acquisition techniques are discussed, with more emphasis on laser radar type instruments. The autonomous vehicle is also discussed as a robot form, and the requirements for these applications are considered. The total computer vision system requirement is reviewed, with some discussion of the major techniques in the literature for three-dimensional scene analysis.
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
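The two dense fields the system combines can be produced with standard OpenCV calls (block-matching disparity and Farneback optical flow); the egomotion fit and the residual-based moving-object segmentation that the paper builds on top are only indicated in comments:

```python
import numpy as np
import cv2

# Stand-ins for rectified stereo pairs at times t-1 and t (8-bit grayscale).
left_t0 = np.random.randint(0, 255, (120, 160), np.uint8)
right_t0 = np.roll(left_t0, -4, axis=1)            # fake 4-pixel disparity
left_t1 = np.roll(left_t0, 2, axis=0)              # fake camera motion

# Dense stereo: disparity map via block matching.
stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(left_t0, right_t0)      # fixed-point, scaled by 16

# Dense optical flow between consecutive left images (Farneback).
flow = cv2.calcOpticalFlowFarneback(left_t0, left_t1, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Egomotion estimation would fit a 6-DOF rigid motion to (disparity, flow);
# pixels whose measured flow deviates from the flow predicted by that rigid
# motion are then labeled as independently moving (not shown here).
print(disparity.shape, flow.shape)
```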
Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.
Chen, Jian; Jia, Bingxi; Zhang, Kaixiang
2017-11-01
In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can easily be violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations based on the Virtual Robot Experimentation Platform (V-REP) are carried out to evaluate the effectiveness of the proposed approach.
Beliefs about the Minds of Others Influence How We Process Sensory Information
Prosser, Aaron; Müller, Hermann J.
2014-01-01
Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing. PMID:24714419
P1 Truss and JEM Pressurized Module (JPM)
2009-03-23
S119-E-007519 (23 March 2009) --- Astronaut Richard Arnold (lower left on port truss), STS-119 mission specialist, participates in the mission's third scheduled session of extravehicular activity (EVA) as construction and maintenance continue on the International Space Station. During the six-hour, 27-minute spacewalk, Arnold and Joseph Acaba (out of frame), mission specialist, helped robotic arm operators relocate the Crew Equipment Translation Aid (CETA) cart from the Port 1 to Starboard 1 truss segment, installed a new coupler on the CETA cart, lubricated snares on the "B" end of the space station's robotic arm and performed a few "get ahead" tasks. The Japanese Kibo laboratory is visible at right, and the station’s Canadarm2 is at left. The blackness of space and Earth’s horizon provide the backdrop for the scene.
Navigable points estimation for mobile robots using binary image skeletonization
NASA Astrophysics Data System (ADS)
Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman
2017-02-01
This paper describes the use of image skeletonization for the estimation of all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and the extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points, in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. It is also shown how some of the algorithm's parameters can be changed in order to vary the final number of resultant key points. The results shown here were obtained by applying different kinds of digital image processing algorithms on static scenes.
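One common way to realize this idea, skeletonizing the free space after inflating obstacles by the robot radius so that every skeleton pixel is navigable, is sketched below with scikit-image; the original paper skeletonizes the obstacle/background image, so treat this as a variant, not the authors' exact pipeline:

```python
import numpy as np
from skimage.morphology import skeletonize

# Binary scene: True = free space, False = obstacle (assumed already
# inflated by the robot radius, so skeleton points account for robot size).
free = np.ones((100, 100), dtype=bool)
free[20:80, 40:55] = False                      # one rectangular obstacle

skeleton = skeletonize(free)                    # medial axis of free space
nav_points = np.argwhere(skeleton)              # candidate navigable points
print(len(nav_points))                          # reduced input for the planner
```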
A Comparative Study of Registration Methods for RGB-D Video of Static Scenes
Morell-Gimenez, Vicente; Saval-Calvo, Marcelo; Azorin-Lopez, Jorge; Garcia-Rodriguez, Jose; Cazorla, Miguel; Orts-Escolano, Sergio; Fuster-Guillo, Andres
2014-01-01
The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these kinds of sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, detailed experimentation is carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction. PMID:24834909
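Most registration methods compared in such studies reduce, once correspondences are available, to a least-squares rigid alignment; a self-contained numpy implementation of that core Kabsch/SVD step is:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid alignment (Kabsch/SVD) between corresponding
    3-D points -- the core step shared by many frame-to-frame RGB-D
    registration methods once correspondences are known."""
    cs, cd = src.mean(0), dst.mean(0)               # centroids
    H = (src - cs).T @ (dst - cd)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # rotation with det = +1
    t = cd - R @ cs                                 # translation
    return R, t

rng = np.random.default_rng(2)
src = rng.random((50, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
dst = src @ R_true.T + np.array([0.5, 0.1, -0.2])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, 0.1, -0.2]))
```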
Expedient range enhanced 3-D robot colour vision
NASA Astrophysics Data System (ADS)
Jarvis, R. A.
1983-01-01
Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating in an unknown terrain environment must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and background is the three dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three dimensional texture of various regions of the scene. A method is presented where scanned laser projected lines of structured light, viewed by a stereoscopically located single video camera, resulted in an image in which the three dimensional characteristics of the scene were represented by the discontinuity of the projected lines. This image was conducive to processing with simple regional operators to classify regions as pathway or background. Design of some operators and application methods, and demonstration on sample images are presented. This method provides rapid and robust scene segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher speed and more reliable robotic or autonomous navigation in unstructured environments.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow for operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
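The geometry behind convergence adjustment is simple: for an estimated object distance, each camera toes in until the optical axes intersect at the object. A sketch of that relation follows (the FPGA algorithm's scene-content-based distance estimation is not shown, and the baseline value is illustrative):

```python
import math

def convergence_angle(baseline_m, distance_m):
    """Toe-in angle so both optical axes intersect at the viewed object:
    each camera rotates inward by atan((baseline/2) / distance)."""
    return math.atan((baseline_m / 2.0) / distance_m)

# Example: 6 cm stereo baseline, object 2 m away.
angle = convergence_angle(0.06, 2.0)
print(math.degrees(angle))  # ~0.86 degrees of toe-in per camera
```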
Does scene context always facilitate retrieval of visual object representations?
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2011-04-01
An object-to-scene binding hypothesis maintains that visual object representations are stored as part of a larger scene representation or scene context, and that scene context facilitates retrieval of object representations (see, e.g., Hollingworth, Journal of Experimental Psychology: Learning, Memory and Cognition, 32, 58-69, 2006). Support for this hypothesis comes from data using an intentional memory task. In the present study, we examined whether scene context always facilitates retrieval of visual object representations. In two experiments, we investigated whether scene context facilitates retrieval of object representations, using a new paradigm in which a memory task is appended to a repeated-flicker change detection task. Results indicated that in normal scene viewing, in which many simultaneous objects appear, scene context facilitation of the retrieval of object representations (henceforth termed object-to-scene binding) occurred only when the observer was required to retain much information for a task (i.e., an intentional memory task).
A fusion network for semantic segmentation using RGB-D data
NASA Astrophysics Data System (ADS)
Yuan, Jiahui; Zhang, Kun; Xia, Yifan; Qi, Lin; Dong, Junyu
2018-04-01
Semantic scene parsing is important in many intelligent fields, including perceptual robotics. For the past few years, pixel-wise prediction tasks like semantic segmentation with RGB images have been extensively studied and have reached very remarkable parsing levels, thanks to convolutional neural networks (CNNs) and large scene datasets. With the development of stereo cameras and RGB-D sensors, it is expected that additional depth information will help improve accuracy. In this paper, we propose a semantic segmentation framework incorporating RGB and complementary depth information. Motivated by the success of fully convolutional networks (FCNs) in the semantic segmentation field, we design a fully convolutional network consisting of two branches which extract features from both RGB and depth data simultaneously and fuse them as the network goes deeper. Instead of aggregating multiple models, our goal is to utilize RGB data and depth data more effectively in a single model. We evaluate our approach on the NYU-Depth V2 dataset, which consists of 1449 cluttered indoor scenes, and achieve competitive results with the state-of-the-art methods.
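A toy PyTorch version of such a two-branch fusion network is sketched below; the layer sizes, fusion-by-summation rule, and class count are placeholders rather than the architecture evaluated in the paper:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

class FusionSegNet(nn.Module):
    """Two encoder branches (RGB and depth) whose features are merged
    as the network goes deeper, then upsampled to per-pixel classes."""
    def __init__(self, num_classes=13):
        super().__init__()
        self.rgb1, self.rgb2 = block(3, 32), block(32, 64)
        self.dep1, self.dep2 = block(1, 32), block(32, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, rgb, depth):
        r, d = self.rgb1(rgb), self.dep1(depth)
        f = self.rgb2(r + d) + self.dep2(d)        # fuse at increasing depth
        logits = self.head(f)
        return nn.functional.interpolate(logits, size=rgb.shape[2:],
                                         mode="bilinear", align_corners=False)

net = FusionSegNet()
out = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(out.shape)  # (1, 13, 64, 64) per-pixel class scores
```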
Reinforcement learning in computer vision
NASA Astrophysics Data System (ADS)
Bernstein, A. V.; Burnaev, E. V.
2018-04-01
Nowadays, machine learning has become one of the basic technologies used in solving various computer vision tasks such as feature detection, image segmentation, object recognition, and tracking. In many applications, various complex systems such as robots are equipped with visual sensors from which they learn the state of the surrounding environment by solving corresponding computer vision tasks. Solutions of these tasks are used for making decisions about possible future actions. It is not surprising that when solving computer vision tasks we should take into account special aspects of their subsequent application in model-based predictive control. Reinforcement learning is one of the modern machine learning technologies in which learning is carried out through interaction with the environment. In recent years, reinforcement learning has been used both for solving such applied tasks as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others. The paper briefly describes the reinforcement learning technology and its use for solving computer vision problems.
Development of Moire machine vision
NASA Technical Reports Server (NTRS)
Harding, Kevin G.
1987-01-01
Three dimensional perception is essential to the development of versatile robotics systems in order to handle complex manufacturing tasks in future factories and in providing high accuracy measurements needed in flexible manufacturing and quality control. A program is described which will develop the potential of Moire techniques to provide this capability in vision systems and automated measurements, and demonstrate artificial intelligence (AI) techniques to take advantage of the strengths of Moire sensing. Moire techniques provide a means of optically manipulating the complex visual data in a three dimensional scene into a form which can be easily and quickly analyzed by computers. This type of optical data manipulation provides high productivity through integrated automation, producing a high quality product while reducing computer and mechanical manipulation requirements and thereby the cost and time of production. This nondestructive evaluation is developed to be able to make full field range measurement and three dimensional scene analysis.
Real-time depth processing for embedded platforms
NASA Astrophysics Data System (ADS)
Rahnama, Oscar; Makarov, Aleksej; Torr, Philip
2017-05-01
Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (e.g., LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions, and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 Watts of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster scan readout of modern digital image sensors.
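The algorithm such FPGA cores implement is, at heart, sum-of-absolute-differences (SAD) block matching; a deliberately naive Python version follows (the FPGA gains come from streaming and parallelizing exactly these loops):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=7):
    """Naive sum-of-absolute-differences block matching: for each left
    pixel, pick the disparity whose right-image patch matches best."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

left = np.random.randint(0, 255, (40, 60)).astype(np.int32)
right = np.roll(left, -4, axis=1)                 # true disparity = 4
print(np.median(sad_disparity(left, right)[10:30, 25:50]))  # ~4
```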
Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Tang, Hongru; Xi, Ning
2016-01-01
Controlling robots by natural language (NL) is increasingly attracting attention for its versatility and convenience, and because it requires no extensive training for users. Grounding is a crucial challenge of this problem: enabling robots to understand NL instructions from humans. This paper mainly explores the object grounding problem and concretely studies how to detect target objects by NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, the object attributes and relations between objects are further extracted. The NL instructions that incorporate multiple cues for object specifications are parsed into domain-specific annotations. The annotations from NL and extracted information from the RGB-D camera are matched in a computational state estimation framework to search all possible object grounding states. The final grounding is accomplished by selecting the states which have the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. The experiments of NL-controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications. PMID:27983604
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec-G4 PowerPC (PPC), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II Semi-autonomous Surrogate Vehicle processing architectures, along with desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
NASA Astrophysics Data System (ADS)
Madokoro, H.; Yamanashi, A.; Sato, K.
2013-08-01
This paper presents an unsupervised scene classification method for realizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting the visual words created from both feature descriptors into a two-dimensional histogram. Moreover, our method generates labels as candidate categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created with adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7%, 58.0%, 56.0%, 63.6%, and 79.4%. The accuracy of our method is 15.8% higher than that of PIRF. Moreover, we applied our method to fine classification using our original mobile robot, obtaining a mean classification accuracy of 83.2% for six zones.
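The bag-of-features step can be illustrated with a generic visual-word pipeline: cluster training descriptors into a codebook, then histogram each image's descriptors over the codebook. The descriptors below are random placeholders standing in for the paper's Gist/HSV-SIFT features:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Stand-ins for local descriptors (e.g., 128-D HSV-SIFT) from training images.
train_desc = rng.random((5000, 128))
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(train_desc)

def bag_of_features(descriptors):
    """Quantize an image's descriptors against the codebook and build a
    normalized visual-word histogram (one axis of the 2-D histogram the
    paper forms from background and foreground features)."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=64).astype(float)
    return hist / hist.sum()

image_desc = rng.random((300, 128))   # descriptors of one scene image
print(bag_of_features(image_desc).shape)
```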
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
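The entropy measure driving the landmark-versus-obstacle distinction can be computed directly from the intensity histogram; a minimal sketch follows (the threshold and the decision logic of the visual bug algorithm are not reproduced):

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy of the intensity histogram: low values suggest a
    single dominant object (landmark candidate), high values a view
    containing several different objects."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.full((100, 100), 0.5)                 # one uniform "object"
clutter = np.random.rand(100, 100)              # many mixed intensities
print(image_entropy(flat), image_entropy(clutter))  # low vs. high entropy
```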
Sharing skills: using augmented reality for human-robot collaboration
NASA Astrophysics Data System (ADS)
Giesler, Bjorn; Steinhaus, Peter; Walther, Marcus; Dillmann, Ruediger
2004-05-01
Both stationary 'industrial' and autonomous mobile robots nowadays pervade many workplaces, but human-friendly interaction with them is still very much an experimental subject. One of the reasons for this is that computer and robotic systems are very bad at performing certain tasks well and robustly. A prime example is classification of sensor readings: which part of a 3D depth image is the cup, which the saucer, which the table? These are tasks that humans excel at. To alleviate this problem, we propose a team approach, wherein the robot records sensor data and uses an Augmented-Reality (AR) system to present the data to the user directly in the 3D environment. The user can then perform classification decisions directly on the data by pointing, gestures, and speech commands. After the classification has been performed by the user, the robot takes the classified data and matches it to its environment model. As a demonstration of this approach, we present an initial system for creating objects on the fly in the environment model. A rotating laser scanner is used to capture a 3D snapshot of the environment. This snapshot is presented to the user as an overlay over his view of the scene. The user classifies unknown objects by pointing at them. The system segments the snapshot according to the user's indications and presents the results of segmentation back to the user, who can then inspect, correct, and enhance them interactively. After a satisfying result has been reached, the laser scanner can take more snapshots from other angles and use the previous segmentation hints to construct a 3D model of the object.
Smith, Tim J; Mital, Parag K
2013-07-17
Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.
NASA Astrophysics Data System (ADS)
Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad
2009-02-01
In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.
System and method for seamless task-directed autonomy for robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis; Bruemmer, David; Few, Douglas
Systems, methods, and user interfaces are used for controlling a robot. An environment map and a robot designator are presented to a user. The user may place, move, and modify task designators on the environment map. The task designators indicate a position in the environment map and indicate a task for the robot to achieve. A control intermediary links task designators with robot instructions issued to the robot. The control intermediary analyzes a relative position between the task designators and the robot. The control intermediary uses the analysis to determine a task-oriented autonomy level for the robot and communicates target achievement information to the robot. The target achievement information may include instructions for directly guiding the robot if the task-oriented autonomy level indicates low robot initiative and may include instructions for directing the robot to determine a robot plan for achieving the task if the task-oriented autonomy level indicates high robot initiative.
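The mapping from task designators to an autonomy level can be illustrated compactly. The sketch below is a toy Python rendition of the distance-to-initiative idea the abstract describes; the thresholds, class, and field names are invented and this is not the patented system.

```python
from dataclasses import dataclass
import math

@dataclass
class TaskDesignator:
    x: float      # task position in the environment map (invented fields)
    y: float
    task: str     # e.g., "inspect", "goto"

def autonomy_level(robot_xy, designator, near=1.0, far=10.0):
    """Toy control intermediary: the farther the robot is from the task
    designator, the more initiative it is granted."""
    d = math.hypot(designator.x - robot_xy[0], designator.y - robot_xy[1])
    if d < near:   # close to target: direct guidance (low robot initiative)
        return "low"
    if d > far:    # far away: robot plans its own route (high robot initiative)
        return "high"
    return "medium"

# Low initiative -> issue direct motion commands; high -> send only the goal.
print(autonomy_level((0.0, 0.0), TaskDesignator(12.0, 5.0, "inspect")))
```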
Klinghammer, Mathias; Blohm, Gunnar; Fiehler, Katja
2017-01-01
Previous research has shown that egocentric and allocentric information is used for coding target locations for memory-guided reaching movements. Especially, task-relevance determines the use of objects as allocentric cues. Here, we investigated the influence of scene configuration and object reliability as a function of task-relevance on allocentric coding for memory-guided reaching. For that purpose, we presented participants images of a naturalistic breakfast scene with five objects on a table and six objects in the background. Six of these objects served as potential reach-targets (= task-relevant objects). Participants explored the scene and after a short delay, a test scene appeared with one of the task-relevant objects missing, indicating the location of the reach target. After the test scene vanished, participants performed a memory-guided reaching movement toward the target location. Besides removing one object from the test scene, we also shifted the remaining task-relevant and/or task-irrelevant objects left- or rightwards either coherently in the same direction or incoherently in opposite directions. By varying object coherence, we manipulated the reliability of task-relevant and task-irrelevant objects in the scene. In order to examine the influence of scene configuration (distributed vs. grouped arrangement of task-relevant objects) on allocentric coding, we compared the present data with our previously published data set (Klinghammer et al., 2015). We found that reaching errors systematically deviated in the direction of object shifts, but only when the objects were task-relevant and their reliability was high. However, this effect was substantially reduced when task-relevant objects were distributed across the scene leading to a larger target-cue distance compared to a grouped configuration. No deviations of reach endpoints were observed in conditions with shifts of only task-irrelevant objects or with low object reliability irrespective of task-relevancy. Moreover, when solely task-relevant objects were shifted incoherently, the variability of reaching endpoints increased compared to coherent shifts of task-relevant objects. Our results suggest that the use of allocentric information for coding targets for memory-guided reaching depends on the scene configuration, in particular the average distance of the reach target to task-relevant objects, and the reliability of task-relevant allocentric information. PMID:28450826
Supervised autonomous robotic soft tissue surgery.
Shademan, Azad; Decker, Ryan S; Opfermann, Justin D; Leonard, Simon; Krieger, Axel; Kim, Peter C W
2016-05-04
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery-removing the surgeon's hands-promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis-including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses-between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques. Copyright © 2016, American Association for the Advancement of Science.
Concurrent Path Planning with One or More Humanoid Robots
NASA Technical Reports Server (NTRS)
Reiland, Matthew J. (Inventor); Sanders, Adam M. (Inventor)
2014-01-01
A robotic system includes a controller and one or more robots each having a plurality of robotic joints. Each of the robotic joints is independently controllable to thereby execute a cooperative work task having at least one task execution fork, leading to multiple independent subtasks. The controller coordinates motion of the robot(s) during execution of the cooperative work task. The controller groups the robotic joints into task-specific robotic subsystems, and synchronizes motion of different subsystems during execution of the various subtasks of the cooperative work task. A method for executing the cooperative work task using the robotic system includes automatically grouping the robotic joints into task-specific subsystems, and assigning subtasks of the cooperative work task to the subsystems upon reaching a task execution fork. The method further includes coordinating execution of the subtasks after reaching the task execution fork.
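A rough illustration of the fork-join idea: at a task execution fork, subtasks assigned to task-specific subsystems run concurrently and re-synchronize afterwards. The subsystem names and subtasks below are invented, and Python threads merely stand in for coordinated joint-space motion.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subtask(subsystem, subtask):
    # Placeholder for commanding one task-specific group of robotic joints.
    return f"{subsystem} completed {subtask}"

# At the fork, the joints are grouped into subsystems and subtasks dispatched.
subsystems = {"left_arm": "hold panel", "right_arm": "drive fastener"}
with ThreadPoolExecutor() as pool:
    results = list(pool.map(run_subtask, subsystems.keys(), subsystems.values()))
print(results)  # both subtasks have joined here, i.e., motion is re-synchronized
```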
Feature diagnosticity and task context shape activity in human scene-selective cortex.
Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S
2016-01-15
Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Panfil, Wawrzyniec; Moczulski, Wojciech
2017-10-01
This paper presents a control system for a group of mobile robots intended to carry out inspection missions. The main research problem was to define a control system that facilitates cooperation among the robots in carrying out the committed inspection tasks. Many well-known control systems use auctions for task allocation, where the subject of an auction is a task to be allocated. It appears that for missions characterized by a much larger number of tasks than robots, it is better if the robots (instead of the tasks) are the subjects of the auctions. The second identified problem concerns one-sided robot-to-task fitness evaluation: simultaneous assessment of both the robot's fitness for a task and the task's attractiveness for the robot should positively affect the overall effectiveness of the multi-robot system. The elaborated system allows tasks to be assigned to robots using various methods for evaluating the fitness between robots and tasks, and using several task-allocation methods. A multi-criteria analysis method is proposed, composed of two assessments: the robot's competitive position for a task among the other robots, and the task's attractiveness for a robot among the other tasks. Furthermore, task-allocation methods applying this multi-criteria analysis are proposed. Both the elaborated system and the proposed task-allocation methods were verified through simulated experiments. The object under test was a group of inspection mobile robots, a virtual counterpart of a real mobile-robot group.
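The robots-as-auction-subjects idea combined with the two-sided assessment can be shown in a toy allocation loop. The scores below are random placeholders and the 50/50 weighting is invented; the paper's multi-criteria analysis is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_robots, n_tasks = 3, 8                   # far more tasks than robots
fitness = rng.random((n_robots, n_tasks))  # robot-to-task fitness
attract = rng.random((n_robots, n_tasks))  # task attractiveness for the robot
score = 0.5 * fitness + 0.5 * attract      # simple two-sided combined score

unassigned = set(range(n_tasks))
for r in range(n_robots):                  # each robot is the subject of an auction
    bids = {t: score[r, t] for t in unassigned}
    winner = max(bids, key=bids.get)       # the best-scoring task wins the robot
    unassigned.discard(winner)
    print(f"robot {r} -> task {winner} (score {score[r, winner]:.2f})")
```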
A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors
Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.
2017-01-01
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly fewer calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterization of the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%. PMID:28316563
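The spike-group idea, partitioning asynchronous events into spatio-temporal groups, can be sketched in a few lines. The greedy rule, radius, and time window below are invented for illustration and are not the authors' algorithm.

```python
import numpy as np

def spike_groups(events, r_px=5.0, dt=0.01):
    """Greedy partition of asynchronous events (x, y, t) into spatio-temporal
    groups: an event joins a group if it falls within r_px pixels and dt
    seconds of that group's most recent event; otherwise it seeds a new group."""
    groups = []   # each group: list of event indices
    last = []     # most recent (x, y, t) per group
    for i, (x, y, t) in enumerate(events):
        for g, (gx, gy, gt) in enumerate(last):
            if t - gt <= dt and (x - gx) ** 2 + (y - gy) ** 2 <= r_px ** 2:
                groups[g].append(i)
                last[g] = (x, y, t)
                break
        else:
            groups.append([i])
            last.append((x, y, t))
    return groups

events = np.array([[10, 10, 0.000], [11, 10, 0.002], [50, 60, 0.003], [12, 11, 0.004]])
print([len(g) for g in spike_groups(events)])  # -> [3, 1]
```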
NASA Astrophysics Data System (ADS)
Gwiazda, A.; Banas, W.; Sekala, A.; Foit, K.; Hryniewicz, P.; Kost, G.
2015-11-01
The process of workcell design is limited by various constructional requirements. These relate to the technological parameters of the manufactured element, to the specifications of purchased workcell components, and to the technical characteristics of the workcell scene. This shows the complexity of the design process itself. The result of such an approach is an individually designed workcell suited to a specific location and a specific production cycle; changing these parameters means rebuilding the whole workcell configuration. Taking this into consideration, it is important to elaborate a base of typical elements of a robot kinematic chain that could be used as a tool for building such models. Virtual modelling of kinematic chains of industrial robots requires several preparatory phases. Firstly, it is important to create a database of elements that model industrial robot arms. These models can be described as functional primitives representing the components of kinematic pairs and the structural members of industrial robots. A database with the following elements is created: a base of kinematic pairs, a base of robot structural elements, and a base of robot work scenes. The first of these includes kinematic pairs, the key components of the manipulator actuator modules. Accordingly, as mentioned previously, it includes rotary kinematic pairs of the fifth class, chosen because this type occurs most frequently in the structures of industrial robots. The second base consists of robot structural elements; it allows the conversion of schematic structures of kinematic chains into the structural elements of industrial robot arms. It contains, inter alia, structural elements such as bases and stiff members (simple or angular units), which allow recorded schematics to be converted into three-dimensional elements. The last database is a database of scenes. It includes both simple and complex elements: models of technological equipment, conveyor models, models of obstacles, and the like. Using these elements, various production spaces (robotized workcells) can be formed, in which the operation of an industrial robot arm modelled in the system can be virtually tracked.
NASA VERVE: Interactive 3D Visualization Within Eclipse
NASA Technical Reports Server (NTRS)
Cohen, Tamar; Allan, Mark B.
2014-01-01
At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance, robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts control a free flying robot on board the ISS. We will show in detail how to code with VERVE, how to connect SWT controls to the Ardor3D scenario, and share example code.
Comparison of precision and speed in laparoscopic and robot-assisted surgical task performance.
Zihni, Ahmed; Gerull, William D; Cavallo, Jaime A; Ge, Tianjia; Ray, Shuddhadeb; Chiu, Jason; Brunt, L Michael; Awad, Michael M
2018-03-01
Robotic platforms have the potential advantage of providing additional dexterity and precision to surgeons while performing complex laparoscopic tasks, especially for those in training. Few quantitative evaluations of surgical task performance comparing laparoscopic and robotic platforms among surgeons of varying experience levels have been done. We compared measures of quality and efficiency of Fundamentals of Laparoscopic Surgery task performance on these platforms in novices and experienced laparoscopic and robotic surgeons. Fourteen novices, 12 expert laparoscopic surgeons (>100 laparoscopic procedures performed, no robotics experience), and five expert robotic surgeons (>25 robotic procedures performed) performed three Fundamentals of Laparoscopic Surgery tasks on both laparoscopic and robotic platforms: peg transfer (PT), pattern cutting (PC), and intracorporeal suturing. All tasks were repeated three times by each subject on each platform in a randomized order. Mean completion times and mean errors per trial (EPT) were calculated for each task on both platforms. Results were compared using Student's t-test (P < 0.05 considered statistically significant). Among novices, greater errors were noted during laparoscopic PC (Lap 2.21 versus Robot 0.88 EPT, P < 0.001). Among expert laparoscopists, greater errors were noted during laparoscopic PT compared with robotic (PT: Lap 0.14 versus Robot 0.00 EPT, P = 0.04). Among expert robotic surgeons, greater errors were noted during laparoscopic PC compared with robotic (Lap 0.80 versus Robot 0.13 EPT, P = 0.02). Among expert laparoscopists, task performance was slower on the robotic platform compared with laparoscopy. In comparisons of expert laparoscopists performing tasks on the laparoscopic platform and expert robotic surgeons performing tasks on the robotic platform, expert robotic surgeons demonstrated fewer errors during the PC task (P = 0.009). Robotic assistance provided a reduction in errors at all experience levels for some laparoscopic tasks, but no benefit in the speed of task performance. Robotic assistance may provide some benefit in precision of surgical task performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes
Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel
2015-01-01
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553
Object Classification in Semi-Structured Environment Using Forward-Looking Sonar
dos Santos, Matheus; Ribeiro, Pedro Otávio; Núñez, Pedro; Botelho, Silvia
2017-01-01
Submarine exploration using robots has been increasing in recent years. The automation of tasks such as monitoring, inspection, and underwater maintenance requires an understanding of the robot's environment. Object recognition in the scene is becoming a critical issue for these systems. In this work, an underwater object classification pipeline applied to acoustic images acquired by Forward-Looking Sonar (FLS) is studied. The object segmentation combines thresholding, connected-pixel searching, and intensity-peak analysis techniques. The object descriptor extracts intensity and geometric features of the detected objects. A comparison between the Support Vector Machine, K-Nearest Neighbors, and Random Trees classifiers is presented. An open-source tool was developed to annotate and classify the objects and evaluate their classification performance. The proposed method efficiently segments and classifies the structures in the scene using a real dataset acquired by an underwater vehicle in a harbor area. Experimental results demonstrate the robustness and accuracy of the method described in this paper. PMID:28961163
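As a sketch of the classifier-comparison step, the snippet below cross-validates the three classifier families named in the abstract on made-up descriptor vectors; the feature values and labels are random placeholders, not sonar data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical intensity + geometric descriptors (mean intensity, area, elongation, ...)
X = rng.random((120, 5))
y = rng.integers(0, 3, 120)  # e.g., hull / pillar / debris (invented classes)

for clf in (SVC(), KNeighborsClassifier(), RandomForestClassifier()):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{type(clf).__name__}: {score:.2f}")
```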
The influence of behavioral relevance on the processing of global scene properties: An ERP study.
Hansen, Natalie E; Noesen, Birken T; Nador, Jeffrey D; Harel, Assaf
2018-05-02
Recent work studying the temporal dynamics of visual scene processing (Harel et al., 2016) has found that global scene properties (GSPs) modulate the amplitude of early Event-Related Potentials (ERPs). It is still not clear, however, to what extent the processing of these GSPs is influenced by their behavioral relevance, determined by the goals of the observer. To address this question, we investigated how behavioral relevance, operationalized by the task context, impacts the electrophysiological responses to GSPs. In a set of two experiments we recorded ERPs while participants viewed images of real-world scenes, varying along two GSPs, naturalness (manmade/natural) and spatial expanse (open/closed). In Experiment 1, very little attention to scene content was required as participants viewed the scenes while performing an orthogonal fixation-cross task. In Experiment 2 participants saw the same scenes but now had to actively categorize them, based either on their naturalness or spatial expanse. We found that task context had very little impact on the early ERP responses to the naturalness and spatial expanse of the scenes: P1, N1, and P2 could distinguish between open and closed scenes and between manmade and natural scenes across both experiments. Further, the specific effects of naturalness and spatial expanse on the ERP components were largely unaffected by their relevance for the task. A task effect was found at the N1 and P2 level, but this effect was manifest across all scene dimensions, indicating a general effect rather than an interaction between task context and GSPs. Together, these findings suggest that the extraction of global scene information reflected in the early ERP components is rapid and influenced very little by top-down observer-based goals. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Identification and Modeling of Visual Cue Usage in Manual Control Task Experiments
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend; Trejo, Leonard J. (Technical Monitor)
1999-01-01
Many fields of endeavor require humans to conduct manual control tasks while viewing a perspective scene. Manual control refers to tasks in which continuous, or nearly continuous, control adjustments are required. Examples include flying an aircraft, driving a car, and riding a bicycle. Perspective scenes can arise through natural viewing of the world, simulation of a scene (as in flight simulators), or through imaging devices (such as the cameras on an unmanned aerospace vehicle). Designers frequently have some degree of control over the content and characteristics of a perspective scene; airport designers can choose runway markings, vehicle designers can influence the size and shape of windows, as well as the location of the pilot, and simulator database designers can choose scene complexity and content. Little theoretical framework exists to help designers determine the answers to questions related to perspective scene content. An empirical approach is most commonly used to determine optimum perspective scene configurations. The goal of the research effort described in this dissertation has been to provide a tool for modeling the characteristics of human operators conducting manual control tasks with perspective-scene viewing. This is done for the purpose of providing an algorithmic, as opposed to empirical, method for analyzing the effects of changing perspective scene content for closed-loop manual control tasks.
Speech to Text Translation for Malay Language
NASA Astrophysics Data System (ADS)
Al-khulaidi, Rami Ali; Akmeliawati, Rini
2017-11-01
The speech recognition system is a front-end and back-end process that receives an audio signal uttered by a speaker and converts it into a text transcription. Speech systems can be used in several fields, including therapeutic technology, education, social robotics and computer entertainment. In control tasks, which are the target of our proposed system, the speed of performance and response matters, as the system should integrate with other control platforms, such as voice-controlled robots. This creates a need for flexible platforms that can be easily edited to match the functionality of the surroundings, unlike software such as MATLAB and Phoenix that requires recording audio and multiple training passes for every entry. In this paper, a speech recognition system for the Malay language is implemented using Microsoft Visual Studio C#. Ninety Malay phrases were tested by ten speakers of both genders in different contexts. The results show that the overall accuracy (calculated from the confusion matrix) is a satisfactory 92.69%.
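The reported overall accuracy is computed from a confusion matrix, which reduces to the trace divided by the total count. A minimal sketch with an invented 3-phrase matrix (not the paper's data):

```python
import numpy as np

# Hypothetical confusion matrix: rows = spoken phrase, cols = recognized phrase.
cm = np.array([[28, 1, 1],
               [2, 27, 1],
               [0, 2, 28]])

accuracy = np.trace(cm) / cm.sum()  # overall accuracy = correct / total utterances
print(f"{accuracy:.2%}")            # -> 92.22% for this made-up matrix
```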
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
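A heavily simplified sketch of model-based backscatter removal under the imaging model the abstract outlines (observed = radiance x transmission + backscatter). The Gaussian backscatter field, transmission heuristic, and all constants below are invented for illustration; the paper derives its model from the physical setup.

```python
import numpy as np

def descatter(image, backscatter, t_min=0.1):
    """Recover object radiance given an estimate of the non-uniform
    backscatter field and an assumed transmission map (toy heuristic)."""
    transmission = np.clip(1.0 - backscatter / backscatter.max(), t_min, 1.0)
    radiance = (image - backscatter) / transmission
    return np.clip(radiance, 0.0, 1.0)

h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
# Hypothetical backscatter: brightest near an active light source at the image centre.
backscatter = 0.5 * np.exp(-(((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (2 * 40.0 ** 2)))
foggy = np.clip(0.4 + 0.1 * np.random.rand(h, w) + backscatter, 0, 1)
clean = descatter(foggy, backscatter)  # descattered image, fed to stereo matching
```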
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS4.5 Linux operating system. The JEMRMS simulation software includes real-time HIL dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and space shuttle systems elements, and provides realtime views of various station and shuttle configurations.
Performance evaluation and clinical applications of 3D plenoptic cameras
NASA Astrophysics Data System (ADS)
Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel
2015-06-01
The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, assesses plenoptic imaging in a clinically relevant context, and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.
Ego-location and situational awareness in semistructured environments
NASA Astrophysics Data System (ADS)
Goodsell, Thomas G.; Snorrason, Magnus S.; Stevens, Mark R.; Stube, Brian; McBride, Jonah
2003-09-01
The success of any potential application for mobile robots depends largely on the specific environment where the application takes place. Practical applications are rarely found in highly structured environments, but unstructured environments (such as natural terrain) pose major challenges to any mobile robot. We believe that semi-structured environments, such as parking lots, provide a good opportunity for successful mobile robot applications. Parking lots tend to be flat and smooth, and cars can be uniquely identified by their license plates. Our scenario is a parking lot where only known vehicles are supposed to park. The robot looks for vehicles that do not belong in the parking lot. It checks both license plates and vehicle types, in case the plate is stolen from an approved vehicle. It operates autonomously, but reports back to a guard who verifies its performance. Our interest is in developing the robot's vision system, which we call Scene Estimation & Situational Awareness Mapping Engine (SESAME). In this paper, we present initial results from the development of two SESAME subsystems, the ego-location and license plate detection systems. While their ultimate goals are obviously quite different, our design demonstrates that by sharing intermediate results, both tasks can be significantly simplified. The inspiration for this design approach comes from the basic tenets of Situational Awareness (SA), where the benefits of holistic perception are clearly demonstrated over the more typical designs that attempt to solve each sensing/perception problem in isolation.
Campagne, Aurélie; Fradcourt, Benoit; Pichat, Cédric; Baciu, Monica; Kauffmann, Louise; Peyrin, Carole
2016-01-01
Visual processing of emotional stimuli critically depends on the type of cognitive appraisal involved. The present fMRI pilot study aimed to investigate the cerebral correlates involved in the visual processing of emotional scenes in two tasks, one emotional, based on the appraisal of personal emotional experience, and the other motivational, based on the appraisal of the tendency to action. Given that the use of spatial frequency information is relatively flexible during the visual processing of emotional stimuli depending on the task’s demands, we also explored the effect of the type of spatial frequency in visual stimuli in each task by using emotional scenes filtered in low spatial frequency (LSF) and high spatial frequencies (HSF). Activation was observed in the visual areas of the fusiform gyrus for all emotional scenes in both tasks, and in the amygdala for unpleasant scenes only. The motivational task induced additional activation in frontal motor-related areas (e.g. premotor cortex, SMA) and parietal regions (e.g. superior and inferior parietal lobules). Parietal regions were recruited particularly during the motivational appraisal of approach in response to pleasant scenes. These frontal and parietal activations, respectively, suggest that motor and navigation processes play a specific role in the identification of the tendency to action in the motivational task. Furthermore, activity observed in the motivational task, in response to both pleasant and unpleasant scenes, was significantly greater for HSF than for LSF scenes, suggesting that the tendency to action is driven mainly by the detailed information contained in scenes. Results for the emotional task suggest that spatial frequencies play only a small role in the evaluation of unpleasant and pleasant emotions. Our preliminary study revealed a partial distinction between visual processing of emotional scenes during identification of the tendency to action, and during identification of personal emotional experiences. It also illustrates flexible use of the spatial frequencies contained in scenes depending on their emotional valence and on task demands. PMID:26757433
Foulsham, Tom; Alan, Rana; Kingstone, Alan
2011-10-01
Previous research has demonstrated that search and memory for items within natural scenes can be disrupted by "scrambling" the images. In the present study, we asked how disrupting the structure of a scene through scrambling might affect the control of eye fixations in either a search task (Experiment 1) or a memory task (Experiment 2). We found that the search decrement in scrambled scenes was associated with poorer guidance of the eyes to the target. Across both tasks, scrambling led to shorter fixations and longer saccades, and more distributed, less selective overt attention, perhaps corresponding to an ambient mode of processing. These results confirm that scene structure has widespread effects on the guidance of eye movements in scenes. Furthermore, the results demonstrate the trade-off between scene structure and visual saliency, with saliency having more of an effect on eye guidance in scrambled scenes.
Autonomous detection of indoor and outdoor signs
NASA Astrophysics Data System (ADS)
Holden, Steven; Snorrason, Magnus; Goodsell, Thomas; Stevens, Mark R.
2005-05-01
Most goal-oriented mobile robot tasks involve navigation to one or more known locations. This is generally done using GPS coordinates and landmarks outdoors, or wall-following and fiducial marks indoors. Such approaches ignore the rich source of navigation information that is already in place for human navigation in all man-made environments: signs. A mobile robot capable of detecting and reading arbitrary signs could be tasked using directions that are intuitive to humans, and it could report its location relative to intuitive landmarks (a street corner, a person's office, etc.). Such ability would not require active marking of the environment and would be functional in the absence of GPS. In this paper we present an updated version of a system we call Sign Understanding in Support of Autonomous Navigation (SUSAN). This system relies on cues common to most signs: the presence of text, vivid color, and compact shape. By not relying on templates, SUSAN can detect a wide variety of signs: traffic signs, street signs, store-name signs, building directories, room signs, etc. In this paper we focus on the text detection capability. We present results summarizing probability of detection and false alarm rate across many scenes containing signs of very different designs and in a variety of lighting conditions.
NASA Astrophysics Data System (ADS)
Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried
2017-09-01
Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of the two worlds, focusing on a combination of monocular images and low cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low cost camera and LiDAR setup.
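A minimal stand-in for the described supervised depth-reconstruction network: RGB stacked with a sparse depth channel as input, dense depth as output. The layer sizes, sparsity rate, and loss below are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Toy CNN: input is RGB concatenated with a sparse LiDAR depth channel
    (zeros where there is no return); output is a dense depth estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))

model = DepthNet()
rgb = torch.rand(1, 3, 64, 64)
sparse = torch.rand(1, 1, 64, 64) * (torch.rand(1, 1, 64, 64) < 0.05)  # ~5% valid returns
target = torch.rand(1, 1, 64, 64)                 # dense ground-truth depth (placeholder)
loss = nn.functional.l1_loss(model(rgb, sparse), target)
loss.backward()                                   # a supervised training step would follow
```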
To search or to like: Mapping fixations to differentiate two forms of incidental scene memory.
Choe, Kyoung Whan; Kardan, Omid; Kotabe, Hiroki P; Henderson, John M; Berman, Marc G
2017-10-01
We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impairs incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks.
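Fixation-map comparison of the kind used here can be sketched as smoothing fixation points into density maps and correlating them. The sigma, image size, and fixation coordinates below are invented; the paper's exact similarity measure may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_map(fixations, shape, sigma=25):
    """Smooth a list of (x, y) fixations into a normalized density map."""
    m = np.zeros(shape)
    for x, y in fixations:
        m[int(y), int(x)] += 1
    m = gaussian_filter(m, sigma)
    return m / (m.sum() + 1e-12)

def map_similarity(m1, m2):
    """Pearson correlation between two fixation maps (one common choice)."""
    return np.corrcoef(m1.ravel(), m2.ravel())[0, 1]

shape = (600, 800)
search   = fixation_map([(400, 300), (420, 310), (150, 500)], shape)
memorize = fixation_map([(390, 290), (600, 100), (200, 450)], shape)
print(map_similarity(search, memorize))
```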
Robotics for Human Exploration
NASA Technical Reports Server (NTRS)
Fong, Terrence; Deans, Mathew; Bualat, Maria
2013-01-01
Robots can do a variety of work to increase the productivity of human explorers. Robots can perform tasks that are tedious, highly repetitive or long-duration. Robots can perform precursor tasks, such as reconnaissance, which help prepare for future human activity. Robots can work in support of astronauts, assisting or performing tasks in parallel. Robots can also perform "follow-up" work, completing tasks designated or started by humans. In this paper, we summarize the development and testing of robots designed to improve future human exploration of space.
A multiple-feature and multiple-kernel scene segmentation algorithm for humanoid robot.
Liu, Zhi; Xu, Shuqiong; Zhang, Yun; Chen, Chun Lung Philip
2014-11-01
This technical correspondence presents a multiple-feature and multiple-kernel support vector machine (MFMK-SVM) methodology to achieve a more reliable and robust segmentation performance for humanoid robots. The pixel-wise intensity, gradient, and C1 SMF features are extracted via the local homogeneity model and Gabor filter, and are used as inputs to the MFMK-SVM model. This provides multiple features of the samples for easier implementation and efficient computation of the MFMK-SVM model. A new clustering method, called the feature validity-interval type-2 fuzzy C-means (FV-IT2FCM) clustering algorithm, is proposed by integrating a type-2 fuzzy criterion in the clustering optimization process to improve the robustness and reliability of clustering results through iterative optimization. Furthermore, clustering validity is employed to select the training samples for the learning of the MFMK-SVM model. The MFMK-SVM scene segmentation method is able to take full advantage of the multiple features of the scene image and the ability of multiple kernels. Experiments on the BSDS dataset and real natural scene images demonstrate the superior performance of our proposed method.
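A toy rendition of the multiple-kernel ingredient: several RBF kernels, standing in for kernels over different feature groups (intensity, gradient, C1 SMF), are combined as a weighted sum and fed to an SVM with a precomputed kernel. The weights, gammas, and data are invented, and the FV-IT2FCM sample-selection step is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def rbf(X, Y, gamma):
    """Pairwise RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.random((60, 5))                          # placeholder pixel-wise features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)        # placeholder segmentation labels

# Weighted sum of kernels, conceptually one kernel per feature group.
weights, gammas = [0.5, 0.3, 0.2], [0.5, 2.0, 8.0]
K = sum(w * rbf(X, X, g) for w, g in zip(weights, gammas))

clf = SVC(kernel="precomputed").fit(K, y)
K_test = sum(w * rbf(X[:5], X, g) for w, g in zip(weights, gammas))
print(clf.predict(K_test))
```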
Task-level robot programming: Integral part of evolution from teleoperation to autonomy
NASA Technical Reports Server (NTRS)
Reynolds, James C.
1987-01-01
An explanation is presented of task-level robot programming and of how it differs from the usual interpretation of task planning for robotics. Most importantly, it is argued that the physical and mathematical basis of task-level robot programming provides inherently greater reliability than efforts to apply better known concepts from artificial intelligence (AI) to autonomous robotics. Finally, an architecture is presented that allows the integration of task-level robot programming within an evolutionary, redundant, and multi-modal framework that spans teleoperation to autonomy.
Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang
2016-01-01
In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625
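The geometric core of camera-LIDAR fusion is projecting LIDAR points into the image so that depth and appearance can be associated per pixel. Below is a standard pinhole-projection sketch with invented intrinsics and extrinsics; it illustrates the general technique, not the paper's specific pipeline.

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project LIDAR points (N, 3) into the camera: rigid transform with
    extrinsics (R, t), then pinhole projection with intrinsics K.
    Returns pixel coordinates and the per-point depth."""
    cam = points @ R.T + t          # LIDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]      # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]     # perspective divide
    return uv, cam[:, 2]

K = np.array([[700.0, 0.0, 320.0],  # invented focal lengths and principal point
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)       # invented extrinsics (sensors co-located)
pts = np.array([[1.0, 0.5, 5.0], [-1.0, 0.2, 8.0]])
uv, depth = project_lidar_to_image(pts, R, t, K)
print(uv, depth)
```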
Research in interactive scene analysis
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.
1976-01-01
Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.
RAFCON: A Graphical Tool for Engineering Complex, Robotic Tasks
2016-10-09
Robotic tasks are becoming increasingly complex, and with them the robotic systems. This requires new tools to manage this complexity and to...execution of robotic tasks, called RAFCON. These tasks are described in hierarchical state machines supporting concurrency. A formal notation of this concept
Task path planning, scheduling and learning for free-ranging robot systems
NASA Technical Reports Server (NTRS)
Wakefield, G. Steve
1987-01-01
The development of robotics applications for space operations is often restricted by the limited movement available to guided robots. Free ranging robots can offer greater flexibility than physically guided robots in these applications. Presented here is an object oriented approach to path planning and task scheduling for free-ranging robots that allows the dynamic determination of paths based on the current environment. The system also provides task learning for repetitive jobs. This approach provides a basis for the design of free-ranging robot systems which are adaptable to various environments and tasks.
A Novel Robot Visual Homing Method Based on SIFT Features
Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao
2015-01-01
Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
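The landmark-extraction step can be sketched with OpenCV: match SIFT features between the current view and the home snapshot, then derive a crude homing direction from the mean landmark displacement. This illustrates the general idea only, not the paper's warping formulation or its mismatch-elimination algorithm; it assumes opencv-python 4.4+ where SIFT is in the main module.

```python
import cv2
import numpy as np

def homing_direction(current_img, home_img, ratio=0.75):
    """Match SIFT features between current and home (snapshot) images and
    return the mean landmark displacement as a crude homing direction."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(current_img, None)
    k2, d2 = sift.detectAndCompute(home_img, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    # Lowe's ratio test to discard ambiguous matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    if not good:
        return None
    disp = [np.subtract(k2[m.trainIdx].pt, k1[m.queryIdx].pt) for m in good]
    return np.mean(disp, axis=0)   # average shift of matched landmarks

# usage: direction = homing_direction(cv2.imread("cur.png", 0), cv2.imread("home.png", 0))
```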
FLS tasks can be used as an ergonomic discriminator between laparoscopic and robotic surgery.
Zihni, Ahmed M; Ohu, Ikechukwu; Cavallo, Jaime A; Ousley, Jenny; Cho, Sohyung; Awad, Michael M
2014-08-01
Robotic surgery may result in ergonomic benefits to surgeons. In this pilot study, we utilize surface electromyography (sEMG) to describe a method for identifying ergonomic differences between laparoscopic and robotic platforms using validated Fundamentals of Laparoscopic Surgery (FLS) tasks. We hypothesize that FLS task performance on laparoscopic and robotic surgical platforms will produce significant differences in mean muscle activation, as quantified by sEMG. Six right-hand-dominant subjects with varying experience performed FLS peg transfer (PT), pattern cutting (PC), and intracorporeal suturing (IS) tasks on laparoscopic and robotic platforms. sEMG measurements were obtained from each subject's bilateral bicep, tricep, deltoid, and trapezius muscles. EMG measurements were normalized to the maximum voluntary contraction (MVC) of each muscle of each subject. Subjects repeated each task three times per platform, and mean values used for pooled analysis. Average normalized muscle activation (%MVC) was calculated for each muscle group in all subjects for each FLS task. We compared mean %MVC values with paired t tests and considered differences with a p value less than 0.05 to be statistically significant. Mean activation of right bicep (2.7 %MVC lap, 1.3 %MVC robotic, p = 0.019) and right deltoid muscles (2.4 %MVC lap, 1.0 %MVC robotic, p = 0.019) were significantly elevated during the laparoscopic compared to the robotic IS task. The mean activation of the right trapezius muscle was significantly elevated during robotic compared to the laparoscopic PT (1.6 %MVC lap, 3.5 %MVC robotic, p = 0.040) and PC (1.3 %MVC lap, 3.6 %MVC robotic, p = 0.0018) tasks. FLS tasks are validated, readily available instruments that are feasible for use in demonstrating ergonomic differences between surgical platforms. In this study, we used FLS tasks to compare mean muscle activation of four muscle groups during laparoscopic and robotic task performance. FLS tasks can serve as the basis for larger studies to further describe ergonomic differences between laparoscopic and robotic surgery.
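The normalization and comparison described here reduce to two small computations: scaling mean rectified sEMG by the subject's maximum voluntary contraction, and a paired t-test across subjects. The activation values below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

def percent_mvc(raw_emg, mvc_value):
    """Normalize rectified sEMG to the muscle's maximum voluntary contraction."""
    return 100.0 * np.mean(np.abs(raw_emg)) / mvc_value

raw = np.random.default_rng(1).normal(0, 0.05, 2000)  # invented raw sEMG trace
print(f"{percent_mvc(raw, mvc_value=1.0):.1f} %MVC")

# Hypothetical per-subject mean activations (%MVC), one muscle, one task,
# on each platform (values invented for illustration).
lap   = np.array([2.9, 2.5, 3.0, 2.4, 2.8, 2.6])
robot = np.array([1.4, 1.2, 1.5, 1.1, 1.4, 1.2])
t, p = stats.ttest_rel(lap, robot)  # paired, since each subject used both platforms
print(f"t = {t:.2f}, p = {p:.4f}")
```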
Coalition Formation under Uncertainty
2010-03-01
world robotics and demonstrate the algorithm's scalability. This provides a framework well suited to decentralized task allocation in general collectives... impatience and acquiescence to define a robot allocation to a task in a decentralized manner. The tasks are assigned to the entire collective, and one... [20] allocates tasks to robots with a first-price auction method [31]. It announces a task with defined metrics, then the robots issue bids. The task
Threatening scenes but not threatening faces shorten time-to-contact estimates.
DeLucia, Patricia R; Brendel, Esther; Hecht, Heiko; Stacy, Ryan L; Larsen, Jeff T
2014-08-01
We previously reported that time-to-contact (TTC) judgments of threatening scene pictures (e.g., frontal attacks) resulted in shortened estimations and were mediated by cognitive processes, and that judgments of threatening (e.g., angry) face pictures resulted in a smaller effect and did not seem cognitively mediated. In the present study, the effects of threatening scenes and faces were compared in two different tasks. An effect of threatening scene pictures occurred in a prediction-motion task, which putatively requires cognitive motion extrapolation, but not in a relative TTC judgment task, which was designed to be less reliant on cognitive processes. An effect of threatening face pictures did not occur in either task. We propose that an object's explicit potential of threat per se, and not only emotional valence, underlies the effect of threatening scenes on TTC judgments and that such an effect occurs only when the task allows sufficient cognitive processing. Results are consistent with distinctions between predator and social fear systems and different underlying physiological mechanisms. Not all threatening information elicits the same responses, and whether an effect occurs at all may depend on the task and the degree to which the task involves cognitive processes.
Chen, J Y C; Terrence, P I
2008-08-01
This study examined the concurrent performance of military gunnery, robotics control and communication tasks in a simulated environment. More specifically, the study investigated how aided target recognition (AiTR) capabilities (delivered either through tactile or tactile + visual cueing) for the gunnery task might benefit overall performance. Results showed that AiTR benefited not only the gunnery task, but also the concurrent robotics and communication tasks. The participants' spatial ability was found to be a good indicator of their gunnery and robotics task performance. However, when AiTR was available to assist their gunnery task, those participants of lower spatial ability were able to perform their robotics tasks as well as those of higher spatial ability. Finally, participants' workload assessment was significantly higher when they teleoperated (i.e. remotely operated) a robot and when their gunnery task was unassisted. These results will further understanding of multitasking performance in military tasking environments. These results will also facilitate the implementation of robots in military settings and will provide useful data to military system designs.
Towards the automatic scanning of indoors with robots.
Adán, Antonio; Quintana, Blanca; Vázquez, Andres S; Olivares, Alberto; Parra, Eduardo; Prieto, Samuel
2015-05-19
This paper is framed in both the 3D digitization and 3D data intelligent processing research fields. Our objective is to develop a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing; this paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D points are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the whole point cloud covers the essential parts of the scene. This system has been tested under real conditions indoors with promising results. Future work will extend the method to much more complex and larger scenarios.
Beyl, Tim; Nicolai, Philip; Comparetti, Mirko D; Raczkowsky, Jörg; De Momi, Elena; Wörn, Heinz
2016-07-01
Scene supervision is a major tool for making medical robots safer and more intuitive. The paper shows an approach to efficiently use 3D cameras within the surgical operating room to enable safe human-robot interaction and action perception. Additionally, the presented approach aims to make 3D camera-based scene supervision more reliable and accurate. A camera system composed of multiple Kinect and time-of-flight cameras has been designed, implemented and calibrated. Calibration and object detection as well as people-tracking methods have been designed and evaluated. The camera system shows a good registration accuracy of 0.05 m. The tracking of humans is reliable and accurate and has been evaluated in an experimental setup using operating clothing. The robot detection shows an error of around 0.04 m. The robustness and accuracy of the approach allow for integration into the modern operating room. The data output can be used directly for situation and workflow detection as well as collision avoidance.
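The paper does not spell out its calibration algorithm; a common way to register two depth cameras from corresponding 3D points is the SVD-based (Kabsch) rigid alignment sketched here on synthetic data:

```python
# Hedged sketch: rigid extrinsic registration between two depth cameras
# from corresponding 3D points; not the paper's actual method.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

# Corresponding marker positions seen by a Kinect and a time-of-flight camera.
rng = np.random.default_rng(0)
kinect_pts = rng.random((10, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
tof_pts = kinect_pts @ R_true.T + np.array([0.5, 0.0, 0.2])

R, t = rigid_transform(kinect_pts, tof_pts)
residual = np.linalg.norm(kinect_pts @ R.T + t - tof_pts, axis=1).mean()
print(f"mean registration residual: {residual:.4f} m")
```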
Superordinate Level Processing Has Priority Over Basic-Level Processing in Scene Gist Recognition
Sun, Qi; Zheng, Yang; Sun, Mingxia; Zheng, Yuanjie
2016-01-01
By combining a perceptual discrimination task and a visuospatial working memory task, the present study examined the effects of visuospatial working memory load on the hierarchical processing of scene gist. In the perceptual discrimination task, two scene images from the same (manmade–manmade pairing or natural–natural pairing) or different superordinate-level categories (manmade–natural pairing) were presented simultaneously, and participants were asked to judge whether the two images belonged to the same basic-level category (e.g., street–street pairing) or not (e.g., street–highway pairing). In the concurrent working memory task, spatial load (position-based load in Experiment 1) and object load (figure-based load in Experiment 2) were manipulated. The results were as follows: (a) spatial load and object load have stronger effects on discrimination of same basic-level scene pairings than same superordinate-level scene pairings; (b) spatial load has a larger impact on the discrimination of scene pairings at early stages than at later stages; on the contrary, object information has a larger influence at later stages than at early stages. It follows that superordinate-level processing has priority over basic-level processing in scene gist recognition, and that spatial information contributes to the earlier, and object information to the later, stages of scene gist recognition. PMID:28382195
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D world), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space-to-Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of control, and kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots; is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables; can adapt to real-time changes in number and type of constraints and in task objectives; and can adapt to changes in kinematics configuration (change of module, change of tool, joint failure adaptation, etc.).
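As one concrete instance of solving such an under-specified system at loop rate, here is a damped least-squares inverse-kinematics step for a redundant 3-link planar arm (3 joints, 2 controlled Task-Space variables); this is an illustrative stand-in, not the project's generic code:

```python
# Damped least-squares IK: the minimum-norm solution resolves redundancy,
# and damping keeps the step well-behaved near singularities.
import numpy as np

L = np.array([0.4, 0.3, 0.2])          # link lengths

def fk(q):
    """End-effector (x, y) of a 3-link planar arm."""
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def ik_step(q, x_goal, damping=0.05):
    """One loop-rate update toward the commanded task-space position."""
    J, err = jacobian(q), x_goal - fk(q)
    JJt = J @ J.T + damping**2 * np.eye(2)
    return q + J.T @ np.linalg.solve(JJt, err)

q = np.array([0.3, 0.3, 0.3])
for _ in range(100):
    q = ik_step(q, np.array([0.5, 0.4]))
print(fk(q))   # converges to the commanded task-space position
```

Extra task objectives or constraints would enter the same update as additional rows of the linear system, which is why a single generic solver can cover changing constraint sets.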
An Integrated Framework for Human-Robot Collaborative Manipulation.
Sheng, Weihua; Thobbi, Anand; Gu, Ye
2015-10-01
This paper presents an integrated learning framework that enables humanoid robots to perform human-robot collaborative manipulation tasks. Specifically, a table-lifting task performed jointly by a human and a humanoid robot is chosen for validation purpose. The proposed framework is split into two phases: 1) phase I-learning to grasp the table and 2) phase II-learning to perform the manipulation task. An imitation learning approach is proposed for phase I. In phase II, the behavior of the robot is controlled by a combination of two types of controllers: 1) reactive and 2) proactive. The reactive controller lets the robot take a reactive control action to make the table horizontal. The proactive controller lets the robot take proactive actions based on human motion prediction. A measure of confidence of the prediction is also generated by the motion predictor. This confidence measure determines the leader/follower behavior of the robot. Hence, the robot can autonomously switch between the behaviors during the task. Finally, the performance of the human-robot team carrying out the collaborative manipulation task is experimentally evaluated on a platform consisting of a Nao humanoid robot and a Vicon motion capture system. Results show that the proposed framework can enable the robot to carry out the collaborative manipulation task successfully.
Yoo, Seung-Woo; Lee, Inah
2017-01-01
How visual scene memory is processed differentially by the upstream structures of the hippocampus is largely unknown. We sought to dissociate functionally the lateral and medial subdivisions of the entorhinal cortex (LEC and MEC, respectively) in visual scene-dependent tasks by temporarily inactivating the LEC and MEC in the same rat. When the rat made spatial choices in a T-maze using visual scenes displayed on LCD screens, the inactivation of the MEC but not the LEC produced severe deficits in performance. However, when the task required the animal to push a jar or to dig in the sand in the jar using the same scene stimuli, the LEC but not the MEC became important. Our findings suggest that the entorhinal cortex is critical for scene-dependent mnemonic behavior, and the response modality may interact with a sensory modality to determine the involvement of the LEC and MEC in scene-based memory tasks. DOI: http://dx.doi.org/10.7554/eLife.21543.001 PMID:28169828
Reduced modulation of scanpaths in response to task demands in posterior cortical atrophy.
Shakespeare, Timothy J; Pertzov, Yoni; Yong, Keir X X; Nicholas, Jennifer; Crutch, Sebastian J
2015-02-01
A difficulty in perceiving visual scenes is one of the most striking impairments experienced by patients with the clinico-radiological syndrome posterior cortical atrophy (PCA). However, whilst a number of studies have investigated perception of relatively simple experimental stimuli in these individuals, little is known about multiple-object and complex scene perception and the role of eye movements in posterior cortical atrophy. We embrace the distinction between high-level (top-down) and low-level (bottom-up) influences upon scanning eye movements when looking at scenes. This distinction was inspired by Yarbus (1967), who demonstrated that the location of our fixations is affected by task instructions and not only by the stimulus's low-level properties. We therefore examined how scanning patterns are influenced by task instructions and low-level visual properties in 7 patients with posterior cortical atrophy, 8 patients with typical Alzheimer's disease, and 19 healthy age-matched controls. Each participant viewed 10 scenes under four task conditions (encoding, recognition, search and description) whilst eye movements were recorded. The results reveal significant differences between groups in the impact of task instructions upon scanpaths. Across tasks without a search component, posterior cortical atrophy patients were significantly less consistent than typical Alzheimer's disease patients and controls in where they looked. By contrast, when comparing search and non-search tasks, it was controls who exhibited the lowest between-task similarity ratings, suggesting they were better able than posterior cortical atrophy or typical Alzheimer's disease patients to respond appropriately to high-level needs by looking at task-relevant regions of a scene. Posterior cortical atrophy patients had a significant tendency to fixate upon more low-level salient parts of the scenes than controls irrespective of the viewing task. The study provides a detailed characterisation of scene perception abilities in posterior cortical atrophy and offers insights into the mechanisms by which high-level cognitive schemes interact with low-level perception. Copyright © 2015 Elsevier Ltd. All rights reserved.
Recognition of 3-D Scene with Partially Occluded Objects
NASA Astrophysics Data System (ADS)
Lu, Siwei; Wong, Andrew K. C.
1987-03-01
This paper presents a robot vision system which is capable of recognizing objects in a 3-D scene and interpreting their spatial relation even though some objects in the scene may be partially occluded by other objects. An algorithm is developed to transform the geometric information from the range data into an attributed hypergraph representation (AHR). A hypergraph monomorphism algorithm is then used to compare the AHR of objects in the scene with a set of complete AHR's of prototypes. The capability of identifying connected components and interpreting various types of edges in the 3-D scene enables us to distinguish objects which are partially blocking each other in the scene. Using structural information stored in the primitive area graph, a heuristic hypergraph monomorphism algorithm provides an effective way for recognizing, locating, and interpreting partially occluded objects in the range image.
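The AHR and the heuristic hypergraph monomorphism algorithm are specific to the paper; the flavor of attributed (sub)graph matching can nevertheless be illustrated with NetworkX, treating surface patches as attributed nodes and adjacency as edges:

```python
# Rough analogue only: plain attributed-graph monomorphism via NetworkX,
# standing in for the paper's attributed-hypergraph matching.
import networkx as nx
from networkx.algorithms import isomorphism as iso

# Scene graph: nodes are surface patches with attributes, edges adjacency.
scene = nx.Graph()
scene.add_nodes_from([(1, {"type": "plane"}), (2, {"type": "plane"}),
                      (3, {"type": "cylinder"}), (4, {"type": "plane"})])
scene.add_edges_from([(1, 2), (2, 3), (3, 4)])

# Prototype of a partially visible object: a plane adjacent to a cylinder.
proto = nx.Graph()
proto.add_nodes_from([("a", {"type": "plane"}), ("b", {"type": "cylinder"})])
proto.add_edge("a", "b")

gm = iso.GraphMatcher(scene, proto,
                      node_match=iso.categorical_node_match("type", None))
print(gm.subgraph_is_monomorphic())            # True: prototype embeds in scene
print(next(gm.subgraph_monomorphisms_iter()))  # one scene-to-prototype mapping
```

Monomorphism (rather than isomorphism) is the right notion here because occlusion means the scene may show only part of each prototype.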
Task-oriented rehabilitation robotics.
Schweighofer, Nicolas; Choi, Younggeun; Winstein, Carolee; Gordon, James
2012-11-01
Task-oriented training is emerging as the dominant and most effective approach to motor rehabilitation of upper extremity function after stroke. Here, the authors propose that the task-oriented training framework provides an evidence-based blueprint for the design of task-oriented robots for the rehabilitation of upper extremity function, in the form of three design principles: skill acquisition of functional tasks, active participation training, and individualized adaptive training. Previous robotic systems that incorporate elements of task-oriented training are then reviewed. Finally, the authors critically analyze their own attempt to design and test the feasibility of a task-oriented rehabilitation (TOR) robot, ADAPT (Adaptive and Automatic Presentation of Tasks), which incorporates the three design principles. Because of its task-oriented-training-based design, ADAPT departs from most other current rehabilitation robotic systems: it presents realistic functional tasks in which the task goal is constantly adapted, so that the individual actively performs doable but challenging tasks without physical assistance. To maximize efficacy for a large clinical population, the authors propose that future task-oriented robots need to incorporate yet-to-be-developed adaptive task presentation algorithms that emphasize acquisition of fine motor coordination skills while minimizing compensatory movements.
Learning compliant manipulation through kinesthetic and tactile human-robot interaction.
Kronander, Klas; Billard, Aude
2014-01-01
Robot Learning from Demonstration (RLfD) has been identified as a key element for making robots useful in daily lives. A wide range of techniques has been proposed for deriving a task model from a set of demonstrations of the task. Most previous works use learning to model the kinematics of the task, and for autonomous execution the robot then relies on a stiff position controller. While many tasks can and have been learned this way, there are tasks in which controlling the position alone is insufficient to achieve the goals of the task. These are typically tasks that involve contact or require a specific response to physical perturbations. The question of how to adjust the compliance to suit the needs of the task has not yet been fully treated in Robot Learning from Demonstration. In this paper, we address this issue and present interfaces that allow a human teacher to indicate compliance variations by physically interacting with the robot during task execution. We validate our approach in two different experiments on the 7-DoF Barrett WAM and KUKA LWR robot manipulators. Furthermore, we conduct a user study to evaluate the usability of our approach from a non-roboticist's perspective.
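The paper's interfaces are physical; the underlying idea of compliance variation can be sketched as an impedance control law whose stiffness profile is lowered where the teacher indicated the robot should be compliant. All values and signatures below are illustrative:

```python
# Minimal sketch of variable impedance control along a learned trajectory.
import numpy as np

def impedance_torque(q, dq, q_des, dq_des, K, damping_ratio=1.0):
    """Joint-space impedance law: tau = K (q_des - q) + D (dq_des - dq),
    with damping D derived from the current stiffness K."""
    D = 2.0 * damping_ratio * np.sqrt(K)
    return K * (q_des - q) + D * (dq_des - dq)

# Stiffness schedule: compliant in the contact phase (steps 40-60),
# stiff elsewhere, as a teacher might indicate by physical interaction.
K_profile = np.full(100, 200.0)
K_profile[40:60] = 20.0

q, dq = 0.0, 0.0
for k in range(100):
    tau = impedance_torque(q, dq, q_des=1.0, dq_des=0.0, K=K_profile[k])
    dq += 0.01 * tau      # unit-inertia toy dynamics, dt = 0.01 s
    q += 0.01 * dq
print(f"final position: {q:.3f}")
```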
ERBE Geographic Scene and Monthly Snow Data
NASA Technical Reports Server (NTRS)
Coleman, Lisa H.; Flug, Beth T.; Gupta, Shalini; Kizer, Edward A.; Robbins, John L.
1997-01-01
The Earth Radiation Budget Experiment (ERBE) is a multisatellite system designed to measure the Earth's radiation budget. The ERBE data processing system consists of several software packages or sub-systems, each designed to perform a particular task. The primary task of the Inversion Subsystem is to reduce satellite altitude radiances to fluxes at the top of the Earth's atmosphere. To accomplish this, angular distribution models (ADM's) are required. These ADM's are a function of viewing and solar geometry and of the scene type as determined by the ERBE scene identification algorithm which is a part of the Inversion Subsystem. The Inversion Subsystem utilizes 12 scene types which are determined by the ERBE scene identification algorithm. The scene type is found by combining the most probable cloud cover, which is determined statistically by the scene identification algorithm, with the underlying geographic scene type. This Contractor Report describes how the geographic scene type is determined on a monthly basis.
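Purely as an illustration of the combination step (most probable cloud cover plus underlying geographic type), here is a toy lookup; the report's actual 12-type table and its monthly geographic-type determination are more detailed than this:

```python
# Illustrative only: not the ERBE scene identification algorithm's real table.
GEO_TYPES = ["ocean", "land", "snow", "desert", "coast"]
CLOUD_CLASSES = ["clear", "partly_cloudy", "mostly_cloudy", "overcast"]

def erbe_scene_type(geo, cloud):
    """Combine the most probable cloud cover with the underlying geographic
    scene type, collapsing heavy cloud over any surface into one class."""
    if geo not in GEO_TYPES or cloud not in CLOUD_CLASSES:
        raise ValueError("unknown geographic type or cloud class")
    if cloud == "overcast":
        return "overcast"            # surface type no longer distinguishable
    return f"{cloud}_{geo}"

print(erbe_scene_type("ocean", "partly_cloudy"))  # -> partly_cloudy_ocean
```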
NASA Technical Reports Server (NTRS)
Jones, Corey; Kapatos, Dennis; Skradski, Cory
2012-01-01
Do you have workflows with many manual tasks that slow down your business? Or do you scale back workflows because there are simply too many manual tasks? Basic workflow robots can automate some common tasks, but not everything. This presentation will show how advanced robots called "expression robots" can be set up to perform everything from simple tasks such as moving, creating folders, renaming, changing or creating an attribute, and revising, to more complex tasks like creating a pdf, or even launching a session of Creo Parametric and performing a specific modeling task. Expression robots are able to utilize the Java API and Info*Engine to do almost anything you can imagine! Best of all, these tools are supported by PTC and will work with later releases of Windchill. Limited knowledge of Java, Info*Engine, and XML is required. The attendee will learn what tasks expression robots are capable of performing, what is involved in setting up an expression robot, and the basics of simple Info*Engine tasks.
Domestic Robots for Older Adults: Attitudes, Preferences, and Potential.
Smarr, Cory-Ann; Mitzner, Tracy L; Beer, Jenay M; Prakash, Akanksha; Chen, Tiffany L; Kemp, Charles C; Rogers, Wendy A
2014-04-01
The population of older adults in America is expected to reach an unprecedented level in the near future. Some of them have difficulties with performing daily tasks and caregivers may not be able to match pace with the increasing need for assistance. Robots, especially mobile manipulators, have the potential for assisting older adults with daily tasks enabling them to live independently in their homes. However, little is known about their views of robot assistance in the home. Twenty-one independently living older Americans (65-93 years old) were asked about their preferences for and attitudes toward robot assistance via a structured group interview and questionnaires. In the group interview, they generated a diverse set of 121 tasks they would want a robot to assist them with in their homes. These data, along with their questionnaire responses, suggest that the older adults were generally open to robot assistance but were discriminating in their acceptance of assistance for different tasks. They preferred robot assistance over human assistance for tasks related to chores, manipulating objects, and information management. In contrast, they preferred human assistance to robot assistance for tasks related to personal care and leisure activities. Our study provides insights into older adults' attitudes and preferences for robot assistance with everyday living tasks in the home which may inform the design of robots that will be more likely accepted by older adults.
Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes
2016-06-01
making (SDM) tasks in dynamic environments with simulated and physical robots... Sequential decision making, lifelong learning, transfer... sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work... and ground robots in the presence of disturbances: we applied our methods to the problem of learning controllers for robots with novel disturbances in
User Needs, Benefits, and Integration of Robotic Systems in a Space Station Laboratory
NASA Technical Reports Server (NTRS)
Dodd, W. R.; Badgley, M. B.; Konkel, C. R.
1989-01-01
The methodology, results and conclusions of all tasks of the User Needs, Benefits, and Integration Study (UNBIS) of Robotic Systems in a Space Station Laboratory are summarized. Study goals included the determination of user requirements for robotics within the Space Station United States Laboratory. In Task 1, three experiments were selected to determine user needs and to allow detailed investigation of microgravity requirements. In Task 2, a NASTRAN analysis of Space Station response to robotic disturbances, and acceleration measurement of a standard industrial robot (Intelledex Model 660), resulted in selection of two ranges of microgravity manipulation: Level 1 (10⁻³ to 10⁻⁵ G at greater than 1 Hz) and Level 2 (less than or equal to 10⁻⁶ G at 0.1 Hz). This task included an evaluation of microstepping methods for controlling stepper motors and concluded that an industrial robot actuator can perform milli-G motion without modification. Relative merits of end-effectors and manipulators were studied in Task 3 in order to determine their ability to perform a range of tasks related to the three microgravity experiments. An Effectivity Rating was established for evaluating these robotic system capabilities. Preliminary interface requirements for an orbital flight demonstration were determined in Task 4. Task 5 assessed the impact of robotics.
Robot Acquisition of Active Maps Through Teleoperation and Vector Space Analysis
NASA Technical Reports Server (NTRS)
Peters, Richard Alan, II
2003-01-01
The work performed under this contract was in the area of intelligent robotics. The problem being studied was the acquisition of intelligent behaviors by a robot. The method was to acquire action maps that describe tasks as sequences of reflexive behaviors. Action maps (a.k.a. topological maps) are graphs whose nodes represent sensorimotor states and whose edges represent the motor actions that cause the robot to proceed from one state to the next. The maps were acquired by the robot after being teleoperated or otherwise guided by a person through a task several times. During a guided task, the robot records all its sensorimotor signals. The signals from several task trials are partitioned into episodes of static behavior. The corresponding episodes from each trial are averaged to produce a task description as a sequence of characteristic episodes. The sensorimotor states that indicate episode boundaries become the nodes, and the static behaviors, the edges. It was demonstrated that if compound maps are constructed from a set of tasks then the robot can perform new tasks in which it was never explicitly trained.
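A toy version of this construction, with a simple change-point test standing in for the report's episode partitioning, might look like this (NetworkX holds the map; all signals are synthetic):

```python
# Sketch of an action map: nodes are sensorimotor states at episode
# boundaries, edges the static motor behaviors connecting them.
import numpy as np
import networkx as nx

def episode_boundaries(motor, threshold=0.5):
    """Indices where the motor command changes sharply."""
    jumps = np.abs(np.diff(motor)) > threshold
    return [0] + [i + 1 for i in np.flatnonzero(jumps)] + [len(motor)]

# Toy recording: three episodes of constant motor command (0.0, 1.0, -1.0).
motor = np.concatenate([np.zeros(50), np.ones(50), -np.ones(50)])
bounds = episode_boundaries(motor)

action_map = nx.DiGraph()
for k in range(len(bounds) - 1):
    seg = motor[bounds[k]:bounds[k + 1]]
    action_map.add_edge(f"state_{k}", f"state_{k+1}", behavior=float(seg.mean()))

print(action_map.edges(data=True))
```

Averaging corresponding episodes over several guided trials, as the report describes, would simply average the per-edge behavior attributes before the map is used for execution.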
Badalato, Gina M; Shapiro, Edan; Rothberg, Michael B; Bergman, Ari; RoyChoudhury, Arindam; Korets, Ruslan; Patel, Trushar; Badani, Ketan K
2014-01-01
Handedness, or the inherent dominance of one hand's dexterity over the other's, is a factor in open surgery but has an unknown importance in robot-assisted surgery. We sought to examine whether the robotic surgery platform could eliminate the effect of inherent hand preference. Residents from the Urology and Obstetrics/Gynecology departments were enrolled. Ambidextrous and left-handed subjects were excluded. After completing a questionnaire, subjects performed three tasks modified from the Fundamentals of Laparoscopic Surgery curriculum. Tasks were performed by hand and then with the da Vinci robotic surgical system (Intuitive Surgical, Sunnyvale, California). Participants were randomized to begin with either the left or the right hand, and then switch. Left:right ratios were calculated from scores based on time to task completion. Linear regression analysis was used to determine the significance of the impact of surgical technique on hand dominance. Ten subjects were enrolled. The mean difference in raw score performance between the right and left hands was 12.5 seconds for open tasks and 8 seconds for robotic tasks (P<.05). Overall left:right ratios were 1.45 versus 1.12 for the open and robotic tasks, respectively (P<.05). Handedness significantly differed between robotic and open approaches for raw time scores (P<.0001) and left:right ratio (P=.03) when controlling for the prior tasks completed, starting hand, prior robotic experience, and comfort level. These findings remain to be validated in larger cohorts. The robotic technique reduces hand dominance in surgical trainees across all task domains. This finding contributes to the known advantages of robotic surgery.
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing the camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the differences in their responses to these scenes were not as pronounced as those observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
Park, Gyeong-Moon; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan
2018-06-01
Robots are expected to perform smart services and to undertake various troublesome or difficult tasks in the place of humans. Since these human-scale tasks consist of a temporal sequence of events, robots need episodic memory to store and retrieve the sequences to perform the tasks autonomously in similar situations. As episodic memory, in this paper we propose a novel deep Adaptive Resonance Theory (ART) neural model and apply it to task performance by the humanoid robot Mybot, developed in the Robot Intelligence Technology Laboratory at KAIST. Deep ART has a hierarchical structure for learning events, episodes, and even higher-level sequences such as daily episodes. Moreover, it can robustly retrieve the correct episode from partial input cues. To demonstrate the effectiveness and applicability of the proposed Deep ART, experiments are conducted with the humanoid robot Mybot performing the three tasks of arranging toys, making cereal, and disposing of garbage.
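The Deep ART model itself is the paper's contribution; for orientation, here is the classic fuzzy ART building block (complement coding, choice function, vigilance test, learning rule) from which such architectures are assembled:

```python
# Classic fuzzy ART sketch, not the paper's Deep ART; parameters are typical
# textbook defaults (rho = vigilance, alpha = choice, beta = learning rate).
import numpy as np

class FuzzyART:
    def __init__(self, dim, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.dim, self.w = dim, []            # one weight vector per category

    def _complement(self, a):
        return np.concatenate([a, 1.0 - a])   # complement coding

    def learn(self, a):
        """Return the index of the resonating category, creating one if needed."""
        x = self._complement(np.asarray(a, dtype=float))
        # Try categories in order of the choice function T_j = |x ^ w| / (alpha + |w|).
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(x, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            match = np.minimum(x, self.w[j]).sum() / self.dim
            if match >= self.rho:             # vigilance test passed: resonance
                self.w[j] = self.beta * np.minimum(x, self.w[j]) \
                            + (1 - self.beta) * self.w[j]
                return j
        self.w.append(x.copy())               # no resonance: new category
        return len(self.w) - 1

art = FuzzyART(dim=2)
events = [[0.1, 0.2], [0.12, 0.22], [0.9, 0.8]]
print([art.learn(e) for e in events])        # -> [0, 0, 1]
```

Stacking such layers, so that the categories of one layer become the inputs of the next, is the general route from events to episodes that the abstract describes.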
Robot Task Commander with Extensible Programming Environment
NASA Technical Reports Server (NTRS)
Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)
2014-01-01
A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.
Proficiency training on a virtual reality robotic surgical skills curriculum.
Bric, Justin; Connolly, Michael; Kastenmeier, Andrew; Goldblatt, Matthew; Gould, Jon C
2014-12-01
The clinical application of robotic surgery is increasing. The skills necessary to perform robotic surgery are unique from those required in open and laparoscopic surgery. A validated laparoscopic surgical skills curriculum (Fundamentals of Laparoscopic Surgery, or FLS™) has transformed the way surgeons acquire laparoscopic skills. There is a need for a similar skills training and assessment tool for robotic surgery. Our research group previously developed and validated a robotic training curriculum in a virtual reality (VR) simulator. We hypothesized that novice robotic surgeons could achieve proficiency levels defined by more experienced robotic surgeons on the VR robotic curriculum, and that this would result in improved performance on the actual daVinci Surgical System™. 25 medical students with no prior robotic surgery experience were recruited. Prior to VR training, subjects performed 2 FLS tasks 3 times each (Peg Transfer, Intracorporeal Knot Tying) using the daVinci Surgical System™ docked to a video trainer box. Task performance for the FLS tasks was scored objectively. Subjects then practiced on the VR simulator (daVinci Skills Simulator) until proficiency levels on all 5 tasks were achieved, before completing a post-training assessment of the 2 FLS tasks on the daVinci Surgical System™ in the video trainer box. All subjects who completed the study (1 dropped out) reached proficiency levels on all VR tasks in an average of 71 (±21.7) attempts, accumulating 164.3 (±55.7) minutes of console training time. There was a significant improvement in performance on the robotic FLS tasks following completion of the VR training curriculum. Novice robotic surgeons are able to attain proficiency levels on a VR simulator. This leads to improved performance on simulated tasks on the daVinci surgical platform. Training to proficiency on a VR robotic surgery simulator is an efficient and viable method for acquiring robotic surgical skills.
Stages as models of scene geometry.
Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark
2010-09-01
Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.
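The paper's feature sets and broadcast data are its own; the overall pipeline shape (low-level global features feeding a supervised stage classifier) can be sketched with scikit-learn on synthetic images:

```python
# Pipeline-shape sketch only: a tiny global descriptor and an SVM stand in
# for the paper's richer low-level features and data sets.
import numpy as np
from sklearn.svm import SVC

def global_features(image):
    """Mean intensity per horizontal band, crudely capturing the vertical
    layout that distinguishes depth stages."""
    bands = np.array_split(image, 4, axis=0)
    return np.array([b.mean() for b in bands])

rng = np.random.default_rng(0)
# Synthetic stand-ins for frames of two stage classes: "sky+ground"
# (bright top, dark bottom) versus a roughly uniform "corner" stage.
sky_ground = [np.vstack([rng.uniform(0.7, 1.0, (32, 64)),
                         rng.uniform(0.0, 0.3, (32, 64))]) for _ in range(20)]
corner = [rng.uniform(0.4, 0.6, (64, 64)) for _ in range(20)]

X = np.array([global_features(im) for im in sky_ground + corner])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf").fit(X, y)
test = np.vstack([rng.uniform(0.7, 1.0, (32, 64)), rng.uniform(0.0, 0.3, (32, 64))])
print(clf.predict([global_features(test)]))   # -> [0], the sky+ground stage
```

The predicted stage would then seed a per-stage depth profile, narrowing the search space for depth estimation exactly as the abstract proposes.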
Efficient Symbolic Task Planning for Multiple Mobile Robots
2016-12-13
Efficient Symbolic Task Planning for Multiple Mobile Robots. Yuqian Jiang, December 13, 2016. Abstract: Symbolic task planning enables a robot to make high-level decisions toward a complex goal by computing a sequence of actions with minimum expected costs. This thesis builds on a single-robot... time complexity of optimal planning for multiple mobile robots. In this thesis we first investigate the performance of the state-of-the-art solvers of
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
This task analysis report for the Robotics/Automated Systems Technician (RAST) curriculum project first provides a RAST job description. It then discusses the task analysis, including the identification of tasks, the grouping of tasks according to major areas of specialty, and the comparison of the competencies to existing or new courses to…
Moore, Lee J; Wilson, Mark R; Waine, Elizabeth; Masters, Rich S W; McGrath, John S; Vine, Samuel J
2015-03-01
Technical surgical skills are said to be acquired quicker on a robotic rather than laparoscopic platform. However, research examining this proposition is scarce. Thus, this study aimed to compare the performance and learning curves of novices acquiring skills using a robotic or laparoscopic system, and to examine if any learning advantages were maintained over time and transferred to more difficult and stressful tasks. Forty novice participants were randomly assigned to either a robotic- or laparoscopic-trained group. Following one baseline trial on a ball pick-and-drop task, participants performed 50 learning trials. Participants then completed an immediate retention trial and a transfer trial on a two-instrument rope-threading task. One month later, participants performed a delayed retention trial and a stressful multi-tasking trial. The results revealed that the robotic-trained group completed the ball pick-and-drop task more quickly and accurately than the laparoscopic-trained group across baseline, immediate retention, and delayed retention trials. Furthermore, the robotic-trained group displayed a shorter learning curve for accuracy. The robotic-trained group also performed the more complex rope-threading and stressful multi-tasking transfer trials better. Finally, in the multi-tasking trial, the robotic-trained group made fewer tone counting errors. The results highlight the benefits of using robotic technology for the acquisition of technical surgical skills.
A Preliminary Study of Peer-to-Peer Human-Robot Interaction
NASA Technical Reports Server (NTRS)
Fong, Terrence; Flueckiger, Lorenzo; Kunz, Clayton; Lees, David; Schreiner, John; Siegel, Michael; Hiatt, Laura M.; Nourbakhsh, Illah; Simmons, Reid; Ambrose, Robert
2006-01-01
The Peer-to-Peer Human-Robot Interaction (P2P-HRI) project is developing techniques to improve task coordination and collaboration between human and robot partners. Our work is motivated by the need to develop effective human-robot teams for space mission operations. A central element of our approach is creating dialogue and interaction tools that enable humans and robots to flexibly support one another. In order to understand how this approach can influence task performance, we recently conducted a series of tests simulating a lunar construction task with a human-robot team. In this paper, we describe the tests performed, discuss our initial results, and analyze the effect of intervention on task performance.
NASA Technical Reports Server (NTRS)
Stevens, H. D.; Miles, E. S.; Rock, S. J.; Cannon, R. H.
1994-01-01
Expanding man's presence in space requires capable, dexterous robots that can be controlled from the Earth. Traditional 'hand-in-glove' control paradigms require the human operator to directly control virtually every aspect of the robot's operation. While the human provides excellent judgment and perception, human interaction is limited by low-bandwidth, delayed communications. These delays make 'hand-in-glove' operation from Earth impractical. In order to alleviate many of the problems inherent to remote operation, Stanford University's Aerospace Robotics Laboratory (ARL) has developed the Object-Based Task-Level Control architecture. Object-Based Task-Level Control (OBTLC) removes the burden of teleoperation from the human operator and enables execution of tasks not possible with current techniques. OBTLC is a hierarchical approach to control in which the human operator is able to specify high-level, object-related tasks through an intuitive graphical user interface. Infrequent task-level commands replace constant joystick operations, eliminating communications bandwidth and time delay problems. The details of robot control and task execution are handled entirely by the robot and computer control system. The ARL has implemented the OBTLC architecture on a set of Free-Flying Space Robots. The capability of the OBTLC architecture has been demonstrated by controlling the ARL Free-Flying Space Robots from NASA Ames Research Center.
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
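The paper's point is to solve its LQR problem model-free with integral reinforcement learning; for contrast, the model-based baseline with a known model is a few lines with SciPy's Riccati solver (the double-integrator system below is invented for illustration):

```python
# Model-based LQR baseline; the paper avoids needing A explicitly by using
# integral reinforcement learning instead.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in for the prescribed impedance-model dynamics.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize tracking error and velocity
R = np.array([[0.1]])      # penalize human/robot effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
print("LQR gain K =", K)

# Closed-loop poles confirm the regulated HRI dynamics are stable.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```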
Real-Time Mapping Using Stereoscopic Vision Optimization
2005-03-01
[Front-matter residue from the thesis lists figures on pinhole geometry, artificially textured scenes, and "Bilbo the robot".] The fundamental matrix (F) describes the relationship between a pair of 2D pictures of a 3D scene. This is... eight CCD cameras to compute a mesh model of the environment from a large number of overlapped 3D images. In [1,17], a range scanner is combined with a
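As a sketch of the two-view relationship the excerpt defines: given matched points, OpenCV's findFundamentalMat recovers F, after which every correspondence should satisfy x2ᵀ F x1 ≈ 0 (the scene and cameras below are synthetic):

```python
# Eight-point estimation of the fundamental matrix on synthetic two-view data.
import numpy as np
import cv2

rng = np.random.default_rng(1)
# Non-planar 3D scene points in front of the cameras.
X = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])

def project(X, R, t):
    x = X @ R.T + t                    # camera frame
    return (x[:, :2] / x[:, 2:3]).astype(np.float32)

c, s = np.cos(0.1), np.sin(0.1)
R2 = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])     # small rotation
pts1 = project(X, np.eye(3), np.zeros(3))
pts2 = project(X, R2, np.array([0.2, 0.0, 0.0]))      # plus a baseline

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# Epipolar constraint check: x2^T F x1 should be near zero for all pairs.
h1 = np.hstack([pts1, np.ones((20, 1), dtype=np.float32)])
h2 = np.hstack([pts2, np.ones((20, 1), dtype=np.float32)])
print(np.abs(np.einsum("ij,jk,ik->i", h2, F, h1)).max())
```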
Moore, Lee J; Wilson, Mark R; McGrath, John S; Waine, Elizabeth; Masters, Rich S W; Vine, Samuel J
2015-09-01
Research has demonstrated the benefits of robotic surgery for the patient; however, research examining the benefits of robotic technology for the surgeon is limited. This study adopted validated measures of workload, mental effort, and gaze control to assess the benefits of robotic surgery for the surgeon. We predicted that the performance of surgical training tasks on a surgical robot would require lower investments of workload and mental effort, and would be accompanied by superior gaze control and better performance, when compared to conventional laparoscopy. Thirty-two surgeons performed two trials on a ball pick-and-drop task and a rope-threading task on both robotic and laparoscopic systems. Measures of workload (the surgery task load index), mental effort (subjective: rating scale for mental effort; objective: standard deviation of beat-to-beat intervals), gaze control (using a mobile eye movement recorder), and task performance (completion time and number of errors) were recorded. As expected, surgeons performed both tasks more quickly and accurately (with fewer errors) on the robotic system. Self-reported measures of workload and mental effort were significantly lower on the robotic system compared to the laparoscopic system. Similarly, an objective cardiovascular measure of mental effort revealed lower investment of mental effort when using the robotic platform relative to the laparoscopic platform. Gaze control distinguished the robotic from the laparoscopic systems, but not in the predicted fashion: the robotic system was associated with poorer (more novice-like) gaze control. The findings highlight the benefits of robotic technology for surgical operators. Specifically, they suggest that tasks can be performed more proficiently, at a lower workload, and with the investment of less mental effort; this may allow surgeons greater cognitive resources for dealing with other demands such as communication, decision-making, or periods of increased complexity in the operating room.
Inter-rater reliability of kinesthetic measurements with the KINARM robotic exoskeleton.
Semrau, Jennifer A; Herter, Troy M; Scott, Stephen H; Dukelow, Sean P
2017-05-22
Kinesthesia (sense of limb movement) has been extremely difficult to measure objectively, especially in individuals who have survived a stroke. The development of valid and reliable measurements of proprioception is important for developing a better understanding of proprioceptive impairments after stroke and their impact on the ability to perform daily activities. We recently developed a robotic task to evaluate kinesthetic deficits after stroke and found that the majority (~60%) of stroke survivors exhibit significant deficits in kinesthesia within the first 10 days post-stroke. Here we aim to determine the inter-rater reliability of this robotic kinesthetic matching task. Twenty-five neurologically intact control subjects and 15 individuals with first-time stroke were evaluated on a robotic kinesthetic matching task (KIN). Subjects sat in a robotic exoskeleton with their arms supported against gravity. In the KIN task, the robot moved the subject's stroke-affected arm at a preset speed, direction and distance. As soon as subjects felt the robot begin to move their affected arm, they matched the robot movement with the unaffected arm. Subjects were tested on the KIN task in two sessions, an initial session and a second session (within an average of 18.2 ± 13.8 h of the initial session for stroke subjects), supervised by different technicians. The task was performed both with and without the use of vision in both sessions. We evaluated intra-class correlations of spatial and temporal parameters derived from the KIN task to determine the reliability of the robotic task. We evaluated 8 spatial and temporal parameters that quantify kinesthetic behavior. We found that the parameters exhibited moderate to high intra-class correlations between the initial and retest conditions (range of r-values: 0.53-0.97). The robotic KIN task exhibited good inter-rater reliability. This validates the KIN task as a reliable, objective method for quantifying kinesthesia after stroke.
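For test-retest agreement of this kind, a typical choice is ICC(2,1) (two-way random effects, absolute agreement, single measurement); the paper does not state which form it used, so the following from-scratch computation on invented session scores is only illustrative:

```python
# Shrout-Fleiss ICC(2,1) computed directly from a two-way decomposition.
import numpy as np

def icc_2_1(Y):
    """Y is (n subjects x k sessions/raters)."""
    n, k = Y.shape
    grand = Y.mean()
    row_m, col_m = Y.mean(axis=1), Y.mean(axis=0)
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)            # subjects
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)            # sessions
    sse = ((Y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                             # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical KIN-task parameter (e.g., a response latency) from two sessions.
session1 = np.array([0.31, 0.45, 0.28, 0.52, 0.39, 0.47, 0.33, 0.41])
session2 = np.array([0.33, 0.43, 0.30, 0.55, 0.37, 0.49, 0.31, 0.44])
print(f"ICC(2,1) = {icc_2_1(np.column_stack([session1, session2])):.2f}")
```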
Shin, Joon-Ho; Park, Gyulee; Cho, Duk Youn
2017-04-01
To explore motor performance under 2 different cognitive tasks during robotic rehabilitation, with motor performance assessed longitudinally. Prospective study. Rehabilitation hospital. Patients (N=22) with chronic stroke and upper extremity impairment. A total of 640 repetitions of robot-assisted planar reaching, 5 times a week for 4 weeks. Longitudinal robotic evaluations of motor performance included smoothness, mean velocity, path error, and reach error by type of cognitive task. Dual-task effects (DTEs) of motor performance were computed to analyze the effect of the cognitive task on dual-task interference. Cognitive task type influenced smoothness (P=.006), the DTEs of smoothness (P=.002), and the DTEs of reach error (P=.052). Robotic rehabilitation improved smoothness (P=.007) and reach error (P=.078), while stroke severity affected smoothness (P=.01), reach error (P<.001), and path error (P=.01). Neither robotic rehabilitation nor severity affected the DTEs of motor performance. The results provide evidence for the effect of cognitive-motor interference on upper extremity performance among participants with stroke using a robot-guided rehabilitation system. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
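Dual-task effects are commonly computed as the percent change from single-task to dual-task performance; assuming that convention (the paper's exact formula is not given here), a sketch:

```python
# DTE as percent change; sign flipped so a positive DTE is always a cost.
import numpy as np

def dual_task_effect(single, dual, higher_is_better=True):
    """Positive DTE = performance cost of adding the cognitive task."""
    change = 100.0 * (dual - single) / single
    return -change if higher_is_better else change

smooth_single = np.array([0.62, 0.55, 0.70])   # smoothness, motor task alone
smooth_dual = np.array([0.51, 0.49, 0.66])     # smoothness, with cognitive load
print(dual_task_effect(smooth_single, smooth_dual))  # % cost per patient
```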
Development of a task-level robot programming and simulation system
NASA Technical Reports Server (NTRS)
Liu, H.; Kawamura, K.; Narayanan, S.; Zhang, G.; Franke, H.; Ozkan, M.; Arima, H.
1987-01-01
An ongoing project in developing a Task-Level Robot Programming and Simulation System (TARPS) is discussed. The objective of this approach is to design a generic TARPS that can be used in a variety of applications. Many robotic applications require off-line programming, and a TARPS is very useful in such applications. Task level programming is object centered in that the user specifies tasks to be performed instead of robot paths. Graphics simulation provides greater flexibility and also avoids costly machine setup and possible damage. A TARPS has three major modules: world model, task planner and task simulator. The system architecture, design issues and some preliminary results are given.
History-Based Response Threshold Model for Division of Labor in Multi-Agent Systems
Lee, Wonki; Kim, DaeEun
2017-01-01
Dynamic task allocation is a necessity in a group of robots. Each member should decide its own task such that it is most commensurate with its current state in the overall system. In this work, the response threshold model is applied to a dynamic foraging task. Each robot employs a task switching function based on the local task demand obtained from the surrounding environment, and no communication occurs between the robots. Each individual member has a constant-sized task demand history that reflects the global demand. In addition, it has response threshold values for all of the tasks and manages the task switching process depending on the stimuli of the task demands. The robot then determines the task to be executed to regulate the overall division of labor. This task selection induces a specialized tendency for performing a specific task and regulates the division of labor. In particular, maintaining a history of the task demands is very effective for the dynamic foraging task. Various experiments are performed using a simulation with multiple robots, and the results show that the proposed algorithm is more effective as compared to the conventional model. PMID:28555031
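The classic response-threshold rule underlying this family of models engages task i with probability s_i² / (s_i² + θ_i²), where s_i is the sensed demand and θ_i the robot's threshold; the sketch below adds a fixed-length demand history as a rough analogue of the paper's history mechanism (all parameters invented):

```python
# Response-threshold task selection with a constant-sized demand history.
import random
from collections import deque

class ThresholdRobot:
    def __init__(self, thresholds, history_len=10):
        self.thresholds = thresholds
        self.history = [deque(maxlen=history_len) for _ in thresholds]

    def sense(self, demands):
        for h, d in zip(self.history, demands):
            h.append(d)

    def choose_task(self):
        probs = []
        for theta, h in zip(self.thresholds, self.history):
            s = sum(h) / len(h)                      # history-smoothed demand
            probs.append(s**2 / (s**2 + theta**2))   # response threshold rule
        # Engage the first task whose stochastic response fires.
        draws = [random.random() < p for p in probs]
        return draws.index(True) if any(draws) else None

robot = ThresholdRobot(thresholds=[0.3, 0.7])
for demands in ([0.9, 0.1], [0.8, 0.2], [0.7, 0.2]):
    robot.sense(demands)
print(robot.choose_task())   # usually 0: high demand, low threshold
```

Because each robot senses demand only locally and draws its own decision, no communication is needed, which matches the decentralized setting described above.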
Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks
NASA Technical Reports Server (NTRS)
Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia
2017-01-01
Teleoperation is the dominant form of dexterous robotic task execution in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft a large distance from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. This framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.
Knowledge representation system for assembly using robots
NASA Technical Reports Server (NTRS)
Jain, A.; Donath, M.
1987-01-01
Assembly robots combine the benefits of speed and accuracy with the capability of adaptation to changes in the work environment. However, an impediment to the use of robots is the complexity of the man-machine interface. This interface can be improved by providing a means of using a priori knowledge and reasoning capabilities for controlling and monitoring the tasks performed by robots. Robots ought to be able to perform complex assembly tasks with the help of only supervisory guidance from human operators. For such supervisory guidance, it is important to express the commands in terms of the effects desired, rather than in terms of the motions the robot must undertake in order to achieve these effects. A suitable knowledge representation can facilitate the conversion of task-level descriptions into explicit instructions to the robot. Such a system would use symbolic relationships describing the a priori information about the robot, its environment, and the tasks specified by the operator to generate the commands for the robot.
A graphical, rule based robotic interface system
NASA Technical Reports Server (NTRS)
Mckee, James W.; Wolfsberger, John
1988-01-01
The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the objects on projections of a 3-D model of the work environment. The user may move in the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform user selections of items on a graphics display of the robot's work environment into commands for the robot. The program first determines if the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task. The movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.
Ivaldi, Serena; Anzalone, Salvatore M; Rousseau, Woody; Sigaud, Olivier; Chetouani, Mohamed
2014-01-01
We hypothesize that the initiative of a robot during a collaborative task with a human can influence the pace of interaction, the human response to attention cues, and the perceived engagement. We propose an object learning experiment where the human interacts in a natural way with the humanoid iCub. Through a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of the initiator of the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase. We measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot is the initiator of the learning task, the pace of interaction is higher and the reaction to attention cues faster. Subjective evaluations suggest that the initiating role of the robot, however, does not affect the perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.
Generation, recognition, and consistent fusion of partial boundary representations from range images
NASA Astrophysics Data System (ADS)
Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang
1994-10-01
This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted to a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D Cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2.5D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.
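As a rough, didactic stand-in for the split-and-merge step (on a toy depth grid rather than SOMBRERO's triangulated scattered data), the sketch below splits recursively until each block is nearly constant in depth and then merges blocks with similar means; the tolerance and grid are fabricated:

```python
# Toy split-and-merge segmentation on an 8x8 depth grid.
import numpy as np

depth = np.ones((8, 8))
depth[:, 4:] = 2.0                      # two flat surfaces side by side

def split(r0, r1, c0, c1, tol=0.1):
    block = depth[r0:r1, c0:c1]
    # Stop splitting when the block is nearly planar (here: nearly constant).
    if block.max() - block.min() <= tol or min(r1 - r0, c1 - c0) <= 1:
        return [(r0, r1, c0, c1, float(block.mean()))]
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    return (split(r0, rm, c0, cm, tol) + split(r0, rm, cm, c1, tol) +
            split(rm, r1, c0, cm, tol) + split(rm, r1, cm, c1, tol))

regions = split(0, 8, 0, 8)
# Merge step: group leaf blocks whose mean depths agree.
merged = {}
for r in regions:
    merged.setdefault(round(r[4], 1), []).append(r)
print(f"{len(regions)} leaf blocks -> {len(merged)} merged surfaces")
```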
Multi-robot task allocation based on two dimensional artificial fish swarm algorithm
NASA Astrophysics Data System (ADS)
Zheng, Taixiong; Li, Xueqin; Yang, Liangyi
2007-12-01
The multi-robot task allocation problem is to assign a larger set of tasks to a smaller set of robots so as to minimize the overall processing time of these tasks. In order to obtain an optimal multi-robot task allocation scheme, a two-dimensional artificial fish swarm algorithm based approach is proposed in this paper. In this approach, the conventional artificial fish is extended to two dimensions: each vector of the primary artificial fish is extended to an m-dimensional vector, so that each vector can express a group of tasks. By redefining the distance between artificial fish and the center of the artificial fish, the behavior of the two-dimensional fish is designed and a task allocation algorithm based on the two-dimensional artificial fish swarm algorithm is put forward. Finally, the proposed algorithm is applied to the multi-robot task allocation problem and compared with GA- and SA-based algorithms. Simulation and comparison results show that the proposed algorithm is effective.
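A hedged sketch of the central idea: each "two-dimensional fish" is a whole allocation matrix (one row of task assignments per robot), and fish iteratively move toward allocations with lower makespan. The cost model is fabricated and the prey/swarm/follow behaviours are collapsed into a single local move, so this illustrates the representation, not the authors' exact algorithm:

```python
# Two-dimensional-fish flavoured local search for task allocation.
import random

N_ROBOTS, N_TASKS, N_FISH, STEPS = 3, 8, 20, 200
COST = [[random.uniform(1, 5) for _ in range(N_TASKS)] for _ in range(N_ROBOTS)]

def random_fish():
    # One row per robot; each task is assigned to exactly one robot.
    fish = [[0] * N_TASKS for _ in range(N_ROBOTS)]
    for t in range(N_TASKS):
        fish[random.randrange(N_ROBOTS)][t] = 1
    return fish

def makespan(fish):
    # Processing time of the busiest robot (the quantity to minimize).
    return max(sum(COST[r][t] for t in range(N_TASKS) if fish[r][t])
               for r in range(N_ROBOTS))

def swim(fish):
    # Local move: reassign one task to a different robot.
    new = [row[:] for row in fish]
    t = random.randrange(N_TASKS)
    for r in range(N_ROBOTS):
        new[r][t] = 0
    new[random.randrange(N_ROBOTS)][t] = 1
    return new

school = [random_fish() for _ in range(N_FISH)]
for _ in range(STEPS):
    for i, fish in enumerate(school):
        trial = swim(fish)
        if makespan(trial) < makespan(fish):
            school[i] = trial

print("best makespan:", round(makespan(min(school, key=makespan)), 2))
```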
Chen, J Y C; Terrence, P I
2009-08-01
This study investigated the performance and workload of the combined position of gunner and robotics operator in a simulated military multitasking environment. Specifically, the study investigated how aided target recognition (AiTR) capabilities for the gunnery task with imperfect reliability (false-alarm-prone vs. miss-prone) might affect the concurrent robotics and communication tasks. Additionally, the study examined whether performance was affected by individual differences in spatial ability and attentional control. Results showed that when the robotics task was simply monitoring the video, participants had the best performance in their gunnery and communication tasks and the lowest perceived workload, compared with the other robotics tasking conditions. There was a strong interaction between the type of AiTR unreliability and participants' perceived attentional control. Overall, for participants with higher perceived attentional control, false-alarm-prone alerts were more detrimental; for low attentional control participants, conversely, miss-prone automation was more harmful. Low spatial ability participants preferred visual cueing and high spatial ability participants favoured tactile cueing. Potential applications of the findings include personnel selection for robotics operation, robotics user interface designs and training development. The present results will provide further understanding of the interplays among automation reliability, multitasking performance and individual differences in military tasking environments. These results will also facilitate the implementation of robots in military settings and will provide useful data to military system designs.
Time response for sensor sensed to actuator response for mobile robotic system
NASA Astrophysics Data System (ADS)
Amir, N. S.; Shafie, A. A.
2017-11-01
Time and performance of a mobile robot are very important in completing the tasks given to achieve its ultimate goal. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot and can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the tasks given within time constraints. The problem to be solved is to minimize the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of sensors and actuators on one-microcontroller and two-microcontroller configurations of a mobile robot. The task used in this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. For this research, the average difference in response time is important for improving the internal pipeline of a mobile robot, from the occurrence of a task through sensor detection and decision making to actuator response. This research helped to develop a mobile robot with better performance that can complete tasks within the time constraint.
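The measurement itself reduces to timestamping the sensor event and the actuator command and reporting the interval. A minimal sketch, with stand-in functions in place of the authors' microcontroller hardware:

```python
# Sketch of sensed-to-actuated latency measurement over three runs.
import time

def read_line_sensor():
    time.sleep(0.002)                  # stand-in for an ADC read
    return True                        # pretend the line/obstacle was detected

def drive_motors(cmd):
    time.sleep(0.001)                  # stand-in for a PWM update

for run in range(3):                   # three runs, as in the study
    t0 = time.monotonic()
    if read_line_sensor():
        drive_motors("steer")
    dt_ms = (time.monotonic() - t0) * 1000
    print(f"run {run + 1}: sensed-to-actuated {dt_ms:.2f} ms")
```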
Do laparoscopic skills transfer to robotic surgery?
Panait, Lucian; Shetty, Shohan; Shewokis, Patricia A; Sanchez, Juan A
2014-03-01
Identifying the set of skills that can transfer from laparoscopic to robotic surgery is an important consideration in designing optimal training curricula. We tested the degree to which laparoscopic skills transfer to a robotic platform. Fourteen medical students and 14 surgery residents with no previous robotic but varying degrees of laparoscopic experience were studied. Three Fundamentals of Laparoscopic Surgery tasks were used on the laparoscopic box trainer and then the da Vinci robot: peg transfer (PT), circle cutting (CC), and intracorporeal suturing (IS). A questionnaire was administered for assessing subjects' comfort level with each task. Standard Fundamentals of Laparoscopic Surgery scoring metrics were used; higher scores indicate superior performance. For the group, PT and CC scores were similar between robotic and laparoscopic modalities (90 versus 90 and 52 versus 47; P > 0.05). However, for the advanced IS task, robotic-IS scores were significantly higher than laparoscopic-IS (80 versus 53; P < 0.001). Subgroup analysis of senior residents revealed a lower robotic-PT score when compared with laparoscopic-PT (92 versus 105; P < 0.05). Scores for CC and IS were similar in this subgroup (64 ± 9 versus 69 ± 15 and 95 ± 3 versus 92 ± 10; P > 0.05). The robot was favored over laparoscopy for all drills (PT, 66.7%; CC, 88.9%; IS, 94.4%). For simple tasks, participants with preexisting skills perform worse with the robot. However, with increasing task difficulty, robotic performance is equal or better than laparoscopy. Laparoscopic skills appear to readily transfer to a robotic platform, and difficult tasks such as IS are actually enhanced, even in subjects naive to the technology. Copyright © 2014 Elsevier Inc. All rights reserved.
Raven surgical robot training in preparation for da vinci.
Glassman, Deanna; White, Lee; Lewis, Andrew; King, Hawkeye; Clarke, Alicia; Glassman, Thomas; Comstock, Bryan; Hannaford, Blake; Lendvay, Thomas S
2014-01-01
The rapid adoption of robotic assisted surgery challenges the pace at which adequate robotic training can occur due to access limitations to the da Vinci robot. Thirty medical students completed a randomized controlled trial evaluating whether the Raven robot could be used as an alternative training tool for the Fundamentals of Laparoscopic Surgery (FLS) block transfer task on the da Vinci robot. Two groups, one trained on the da Vinci and one trained on the Raven, were tested on a criterion FLS block transfer task on the da Vinci. After robotic FLS block transfer proficiency training there was no statistically significant difference between path length (p=0.39) and economy of motion scores (p=0.06) between the two groups, but those trained on the da Vinci did have faster task times (p=0.01). These results provide evidence for the value of using the Raven robot for training prior to using the da Vinci surgical system for similar tasks.
Determining robot actions for tasks requiring sensor interaction
NASA Technical Reports Server (NTRS)
Budenske, John; Gini, Maria
1989-01-01
The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems to plan the coordination of activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system.
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Tso, Kam S. (Inventor)
1993-01-01
This invention relates to an operator interface for controlling a telerobot to perform tasks in a poorly modeled environment and/or within unplanned scenarios. The telerobot control system includes a remote robot manipulator linked to an operator interface. The operator interface includes a setup terminal, simulation terminal, and execution terminal for the control of the graphics simulator and local robot actuator as well as the remote robot actuator. These terminals may be combined in a single terminal. Complex tasks are developed from sequential combinations of parameterized task primitives and recorded teleoperations, and are tested by execution on a graphics simulator and/or local robot actuator, together with adjustable time delays. The novel features of this invention include the shared and supervisory control of the remote robot manipulator via the operator interface by pretested complex task sequences based on sequences of parameterized task primitives combined with further teleoperation and run-time binding of parameters based on task context.
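The key mechanism, sequencing parameterized task primitives with run-time binding of parameters, can be sketched as follows. The primitive set, the `sense()` lookup, and the parameter names are illustrative assumptions, not the patented system's API:

```python
# Parameterized task primitives with run-time parameter binding.
def move_to(pose):  print("move_to", pose)
def grasp(width):   print("grasp", width)
def insert(depth):  print("insert", depth)

# A pretested complex task stored as (primitive, parameter) pairs;
# string-named parameters stay unbound until execution time.
task = [(move_to, "target_pose"), (grasp, 0.04), (insert, "seat_depth")]

def sense(name):
    # Stand-in for run-time sensing / task-context lookup.
    return {"target_pose": (0.5, 0.1, 0.3), "seat_depth": 0.02}[name]

for primitive, param in task:
    value = sense(param) if isinstance(param, str) else param  # run-time binding
    primitive(value)
```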
Retention of laparoscopic and robotic skills among medical students: a randomized controlled trial.
Orlando, Megan S; Thomaier, Lauren; Abernethy, Melinda G; Chen, Chi Chiung Grace
2017-08-01
Although simulation training beneficially contributes to traditional surgical training, there are fewer objective data on simulation skill retention. To investigate the retention of laparoscopic and robotic skills after simulation training. We present the second stage of a randomized single-blinded controlled trial in which 40 simulation-naïve medical students were randomly assigned to practice peg transfer tasks on either laparoscopic (N = 20, Fundamentals of Laparoscopic Surgery, Venture Technologies Inc., Waltham, MA) or robotic (N = 20, dV-Trainer, Mimic, Seattle, WA) platforms. In the first stage, two expert surgeons evaluated participants on both tasks before (Stage 1: Baseline) and immediately after training (Stage 1: Post-training) using a modified validated global rating scale of laparoscopic and robotic operative performance. In Stage 2, participants were evaluated on both tasks 11-20 weeks after training. Of the 40 students who participated in Stage 1, 23 (11 laparoscopic and 12 robotic) underwent repeat evaluation. During Stage 2, there were no significant differences between groups in objective or subjective measures for the laparoscopic task. Laparoscopic-trained participants' performances on the laparoscopic task were improved during Stage 2 compared to baseline as measured by time to task completion, but not by the modified global rating scale. During the robotic task, the robotic-trained group demonstrated superior economy of motion (p = .017), Tissue Handling (p = .020), and fewer errors (p = .018) compared to the laparoscopic-trained group. Robotic skills acquisition from baseline with no significant deterioration as measured by modified global rating scale scores was observed among robotic-trained participants during Stage 2. Robotic skills acquired through simulation appear to be better maintained than laparoscopic simulation skills. This study is registered on ClinicalTrials.gov (NCT02370407).
Task planning with uncertainty for robotic systems. Thesis
NASA Technical Reports Server (NTRS)
Cao, Tiehua
1993-01-01
In a practical robotic system, it is important to represent and plan sequences of operations and to be able to choose an efficient sequence from them for a specific task. During the generation and execution of task plans, different kinds of uncertainty may occur, and erroneous states need to be handled to ensure the efficiency and reliability of the system. An approach to task representation, planning, and error recovery for robotic systems is demonstrated. Our approach to task planning is based on an AND/OR net representation, which is then mapped to a Petri net representation of all feasible geometric states and associated feasibility criteria for net transitions. Task decomposition of robotic assembly plans based on this representation is performed on the Petri net for robotic assembly tasks, and the inheritance of the properties of liveness, safeness, and reversibility at all levels of decomposition is explored. This approach provides a framework for robust execution of tasks through the properties of traceability and viability. Uncertainty in robotic systems is modeled by local fuzzy variables, fuzzy marking variables, and global fuzzy variables which are incorporated in fuzzy Petri nets. Analysis of properties and reasoning about uncertainty are investigated using fuzzy reasoning structures built into the net. Two applications of fuzzy Petri nets, robot task sequence planning and sensor-based error recovery, are explored. In the first application, the search space for feasible and complete task sequences with correct precedence relationships is reduced via the use of global fuzzy variables in reasoning about subgoals. In the second application, sensory verification operations are modeled by mutually exclusive transitions to reason about local and global fuzzy variables on-line and automatically select a retry or an alternative error recovery sequence when errors occur. Task sequencing and task execution with error recovery capability for one and multiple soft components in robotic systems are investigated.
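One way to picture the fuzzy Petri net machinery: places carry fuzzy marking values in [0, 1], and a transition fires only when the fuzzy AND (minimum) of its input markings clears a threshold; otherwise error recovery is triggered. This is a generic construction under those assumptions, not Cao's exact formulation:

```python
# Sketch of one fuzzy Petri net transition firing.
marking = {"part_located": 0.9, "gripper_free": 0.8, "part_grasped": 0.0}

def fire(inputs, outputs, threshold=0.5):
    strength = min(marking[p] for p in inputs)       # fuzzy AND over inputs
    if strength < threshold:
        return False                                 # caller selects retry/recovery
    for p in inputs:
        marking[p] = 0.0                             # consume fuzzy tokens
    for p in outputs:
        marking[p] = max(marking[p], strength)       # produce fuzzy tokens
    return True

if not fire(["part_located", "gripper_free"], ["part_grasped"]):
    print("transition blocked: choose a retry or alternative recovery sequence")
print(marking)
```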
Leveraging Large-Scale Semantic Networks for Adaptive Robot Task Learning and Execution.
Boteanu, Adrian; St Clair, Aaron; Mohseni-Kabir, Anahita; Saldanha, Carl; Chernova, Sonia
2016-12-01
This work seeks to leverage semantic networks containing millions of entries encoding assertions of commonsense knowledge to enable improvements in robot task execution and learning. The specific application we explore in this project is object substitution in the context of task adaptation. Humans easily adapt their plans to compensate for missing items in day-to-day tasks, substituting a wrap for bread when making a sandwich, or stirring pasta with a fork when out of spoons. Robot plan execution, however, is far less robust, with missing objects typically leading to failure if the robot is not aware of alternatives. In this article, we contribute a context-aware algorithm that leverages the linguistic information embedded in the task description to identify candidate substitution objects without reliance on explicit object affordance information. Specifically, we show that the task context provided by the task labels within the action structure of a task plan can be leveraged to disambiguate information within a noisy large-scale semantic network containing hundreds of potential object candidates to identify successful object substitutions with high accuracy. We present two extensive evaluations of our work on both abstract and real-world robot tasks, showing that the substitutions made by our system are valid, accepted by users, and lead to a statistically significant reduction in robot learning time. In addition, we report the outcomes of testing our approach with a large number of crowd workers interacting with a robot in real time.
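A toy version of the substitution step: rank candidates by how strongly a semantic network relates them both to the missing object and to the task label that provides context. The tiny relatedness table and scoring rule below are fabricated; the paper queries a noisy large-scale network (on the order of ConceptNet) with hundreds of candidates:

```python
# Context-aware object substitution over a toy semantic network.
related = {  # (concept_a, concept_b) -> relatedness score
    ("spoon", "fork"): 0.7, ("spoon", "ladle"): 0.8, ("spoon", "towel"): 0.1,
    ("stir", "fork"): 0.6,  ("stir", "ladle"): 0.7,  ("stir", "towel"): 0.05,
}

def score(candidate, missing, task_label):
    # Combine similarity to the missing object with fit to the task context.
    return (related.get((missing, candidate), 0.0)
            + related.get((task_label, candidate), 0.0))

candidates = ["fork", "ladle", "towel"]
best = max(candidates, key=lambda c: score(c, "spoon", "stir"))
print("substitute:", best)   # -> ladle
```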
OLDER ADULTS’ PREFERENCES FOR AND ACCEPTANCE OF ROBOT ASSISTANCE FOR EVERYDAY LIVING TASKS
Smarr, Cory-Ann; Prakash, Akanksha; Beer, Jenay M.; Mitzner, Tracy L.; Kemp, Charles C.; Rogers, Wendy A.
2014-01-01
Many older adults value their independence and prefer to age in place. Robots can be designed to assist older people with performing everyday living tasks and maintaining their independence at home. Yet, there is a scarcity of knowledge regarding older adults’ attitudes toward robots and their preferences for robot assistance. Twenty-one older adults (M = 80.25 years old, SD = 7.19) completed questionnaires and participated in structured group interviews investigating their openness to and preferences for assistance from a mobile manipulator robot. Although the older adults were generally open to robot assistance for performing home-based tasks, they were selective in their views. Older adults preferred robot assistance over human assistance for many instrumental (e.g., housekeeping, laundry, medication reminders) and enhanced activities of daily living (e.g., new learning, hobbies). However, older adults were less open to robot assistance for some activities of daily living (e.g., shaving, hair care). Results from this study provide insight into older adults’ attitudes toward robot assistance with home-based everyday living tasks. PMID:25284971
Lendvay, Thomas S; Brand, Timothy C; White, Lee; Kowalewski, Timothy; Jonnadula, Saikiran; Mercer, Laina D; Khorsand, Derek; Andros, Justin; Hannaford, Blake; Satava, Richard M
2013-06-01
Preoperative simulation warm-up has been shown to improve performance and reduce errors in novice and experienced surgeons, yet existing studies have only investigated conventional laparoscopy. We hypothesized that a brief virtual reality (VR) robotic warm-up would enhance robotic task performance and reduce errors. In a 2-center randomized trial, 51 residents and experienced minimally invasive surgery faculty in General Surgery, Urology, and Gynecology underwent a validated robotic surgery proficiency curriculum on a VR robotic simulator and on the da Vinci surgical robot (Intuitive Surgical Inc). Once they successfully achieved performance benchmarks, surgeons were randomized to either receive a 3- to 5-minute VR simulator warm-up or read a leisure book for 10 minutes before performing similar and dissimilar (intracorporeal suturing) robotic surgery tasks. The primary outcomes compared were task time, tool path length, economy of motion, technical, and cognitive errors. Task time (-29.29 seconds, p = 0.001; 95% CI, -47.03 to -11.56), path length (-79.87 mm; p = 0.014; 95% CI, -144.48 to -15.25), and cognitive errors were reduced in the warm-up group compared with the control group for similar tasks. Global technical errors in intracorporeal suturing (0.32; p = 0.020; 95% CI, 0.06-0.59) were reduced after the dissimilar VR task. When surgeons were stratified by earlier robotic and laparoscopic clinical experience, the more experienced surgeons (n = 17) demonstrated significant improvements from warm-up in task time (-53.5 seconds; p = 0.001; 95% CI, -83.9 to -23.0) and economy of motion (0.63 mm/s; p = 0.007; 95% CI, 0.18-1.09), and improvement in these metrics was not statistically significantly appreciated in the less-experienced cohort (n = 34). We observed significant performance improvement and error reduction rates among surgeons of varying experience after VR warm-up for basic robotic surgery tasks. In addition, the VR warm-up reduced errors on a more complex task (robotic suturing), suggesting the generalizability of the warm-up. Copyright © 2013 American College of Surgeons. All rights reserved.
Visual environment recognition for robot path planning using template matched filters
NASA Astrophysics Data System (ADS)
Orozco-Rosas, Ulises; Picos, Kenia; Díaz-Ramírez, Víctor H.; Montiel, Oscar; Sepúlveda, Roberto
2017-08-01
A visual approach in environment recognition for robot navigation is proposed. This work includes a template matching filtering technique to detect obstacles and feasible paths using a single camera to sense a cluttered environment. In this problem statement, a robot can move from the start to the goal by choosing a single path between multiple possible ways. In order to generate an efficient and safe path for mobile robot navigation, the proposal employs a pseudo-bacterial potential field algorithm to derive optimal potential field functions using evolutionary computation. Simulation results are evaluated in synthetic and real scenes in terms of accuracy of environment recognition and efficiency of path planning computation.
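The path-planning stage rests on a potential field: an attractive well at the goal plus repulsive bumps around detected obstacles, descended by gradient steps. The sketch below uses fixed, hand-picked gains where the paper evolves the field with a pseudo-bacterial algorithm; positions and constants are fabricated:

```python
# Gradient descent on an attractive/repulsive potential field.
import numpy as np

goal = np.array([9.0, 9.0])
obstacles = [np.array([4.0, 5.0]), np.array([6.0, 7.0])]
k_att, k_rep, influence = 1.0, 2.0, 2.0

def force(p):
    f = k_att * (goal - p)                       # attractive term
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if 1e-6 < d < influence:                 # repulsion only near obstacles
            f += k_rep * (1/d - 1/influence) / d**2 * (p - obs) / d
    return f

p = np.array([0.0, 0.0])
for _ in range(300):
    p = p + 0.05 * force(p)                      # one gradient step
    if np.linalg.norm(goal - p) < 0.1:
        break
print("final position:", np.round(p, 2))
```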
Tack, Lois C; Thomas, Michelle; Reich, Karl
2007-03-01
Forensic labs globally face the same problem: a growing need to process a greater number and wider variety of samples for DNA analysis. The same forensic lab can be tasked all at once with processing mixed casework samples from crime scenes, convicted offender samples for database entry, and tissue from tsunami victims for identification. Besides flexibility in the robotic system chosen for forensic automation, there is a need, for each sample type, to develop new methodology that is not only faster but also more reliable than past procedures. FTA is a chemical treatment of paper, unique to Whatman Bioscience, and is used for the stabilization and storage of biological samples. Here, the authors describe optimization of the Whatman FTA Purification Kit protocol for use with the AmpFlSTR Identifiler PCR Amplification Kit.
Interaction between Task Oriented and Affective Information Processing in Cognitive Robotics
NASA Astrophysics Data System (ADS)
Haazebroek, Pascal; van Dantzig, Saskia; Hommel, Bernhard
There is an increasing interest in endowing robots with emotions. Robot control, however, is still often very task-oriented. We present a cognitive architecture that allows the combination of and interaction between task representations and affective information processing. Our model is validated by comparing simulation results with empirical data from experimental psychology.
The phantom robot - Predictive displays for teleoperation with time delay
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Kim, Won S.; Venema, Steven C.
1990-01-01
An enhanced teleoperation technique for time-delayed bilateral teleoperator control is discussed. The control technique selected for time delay is based on the use of a high-fidelity graphics phantom robot that is being controlled in real time (without time delay) against the static task image. Thus, the motion of the phantom robot image on the monitor predicts the motion of the real robot. The real robot's motion will follow the phantom robot's motion on the monitor with the communication time delay implied in the task. Real-time high-fidelity graphics simulation of a PUMA arm is generated and overlaid on the actual camera view of the arm. A simple camera calibration technique is used for calibrated graphics overlay. A preliminary experiment is performed with the predictive display by using a very simple tapping task. The results with this simple task indicate that predictive display enhances the human operator's telemanipulation task performance significantly during free motion when there is a long time delay. It appears, however, that either two-view or stereoscopic predictive displays are necessary for general three-dimensional tasks.
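The core of the predictive display is easy to state: the phantom integrates operator commands immediately, while the real robot replays the same command stream after the round-trip delay. A minimal sketch with a one-dimensional "arm" and fabricated numbers:

```python
# Phantom (undelayed) vs. real (delayed) robot motion.
from collections import deque

DELAY_STEPS = 5                       # communication delay in control ticks
link = deque([0.0] * DELAY_STEPS)     # commands currently in transit

phantom = real = 0.0
for t in range(10):
    cmd = 0.1                         # operator jogs the arm each tick
    phantom += cmd                    # phantom display moves with no delay
    link.append(cmd)
    real += link.popleft()            # real robot gets the delayed command
    print(f"t={t}: phantom={phantom:.1f}  real={real:.1f}")
```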
Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems
ERIC Educational Resources Information Center
Ham, MyungJoo
2009-01-01
We propose market-based coordinated task allocation mechanisms, which allocate complex tasks, requiring the synchronized and collaborative services of multiple robot agents, to robot agents, and an auditing mechanism, which ensures proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…
Richard, Christian M; Wright, Richard D; Ee, Cheryl; Prime, Steven L; Shimizu, Yujiro; Vavrik, John
2002-01-01
The effect of a concurrent auditory task on visual search was investigated using an image-flicker technique. Participants were undergraduate university students with normal or corrected-to-normal vision who searched for changes in images of driving scenes that involved either driving-related (e.g., traffic light) or driving-unrelated (e.g., mailbox) scene elements. The results indicated that response times were significantly slower if the search was accompanied by a concurrent auditory task. In addition, slower overall responses to scenes involving driving-unrelated changes suggest that the underlying process affected by the concurrent auditory task is strategic in nature. These results were interpreted in terms of their implications for using a cellular telephone while driving. Actual or potential applications of this research include the development of safer in-vehicle communication devices.
A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.
Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash
2017-07-01
This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve an enhanced physical human-robot interaction performance and deliver appropriate level of assistance to the human operator.
Gist in time: Scene semantics and structure enhance recall of searched objects.
Josephs, Emilie L; Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L-H
2016-09-01
Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization. Copyright © 2016 Elsevier B.V. All rights reserved.
Functional anatomy of temporal organisation and domain-specificity of episodic memory retrieval.
Kwok, Sze Chai; Shallice, Tim; Macaluso, Emiliano
2012-10-01
Episodic memory provides information about the "when" of events as well as "what" and "where" they happened. Using functional imaging, we investigated the domain specificity of retrieval-related processes following encoding of complex, naturalistic events. Subjects watched a 42-min TV episode, and 24h later, made discriminative choices of scenes from the clip during fMRI. Subjects were presented with two scenes and required to either choose the scene that happened earlier in the film (Temporal), or the scene with a correct spatial arrangement (Spatial), or the scene that had been shown (Object). We identified a retrieval network comprising the precuneus, lateral and dorsal parietal cortex, middle frontal and medial temporal areas. The precuneus and angular gyrus are associated with temporal retrieval, with precuneal activity correlating negatively with temporal distance between two happenings at encoding. A dorsal fronto-parietal network engages during spatial retrieval, while antero-medial temporal regions activate during object-related retrieval. We propose that access to episodic memory traces involves different processes depending on task requirements. These include memory-searching within an organised knowledge structure in the precuneus (Temporal task), online maintenance of spatial information in dorsal fronto-parietal cortices (Spatial task) and combining scene-related spatial and non-spatial information in the hippocampus (Object task). Our findings support the proposal of process-specific dissociations of retrieval. Copyright © 2012 Elsevier Ltd. All rights reserved.
Mitchell, Anna S.; Baxter, Mark G.; Gaffan, David
2008-01-01
Monkeys with aspiration lesions of the magnocellular division of the mediodorsal thalamus (MDmc) are impaired in object-in-place scene learning, object recognition and stimulus-reward association. These data have been interpreted to mean that projections from MDmc to prefrontal cortex are required to sustain normal prefrontal function in a variety of task settings. In the present study, we investigated the extent to which bilateral neurotoxic lesions of the MDmc impair a pre-operatively learnt strategy implementation task that is impaired by a crossed lesion technique that disconnects the frontal cortex in one hemisphere from the contralateral inferotemporal cortex. Postoperative memory impairments were also examined using the object-in-place scene memory task. Monkeys learnt both strategy implementation and scene memory tasks separately to a stable level pre-operatively. Bilateral neurotoxic lesions of the MDmc, produced by 10 × 1 μl injections of a mixture of ibotenate and N-methyl-D-aspartate did not affect performance in the strategy implementation task. However, new learning of object-in-place scene memory was substantially impaired. These results provide new evidence about the role of the magnocellular mediodorsal thalamic nucleus in memory processing, indicating that interconnections with the prefrontal cortex are essential during new learning but are not required when implementing a preoperatively acquired strategy task. Thus not all functions of the prefrontal cortex require MDmc input. Instead the involvement of MDmc in prefrontal function may be limited to situations in which new learning must occur. PMID:17978029
Thomaier, Lauren; Orlando, Megan; Abernethy, Melinda; Paka, Chandhana; Chen, Chi Chiung Grace
2017-08-01
Although surgical simulation provides an effective supplement to traditional training, it is not known whether skills are transferable between minimally invasive surgical modalities. The purpose of this study was to assess the transferability of skills between minimally invasive surgical simulation platforms among simulation-naïve participants. Forty simulation-naïve medical students were enrolled in this randomized single-blinded controlled trial. Participants completed a baseline evaluation on laparoscopic (Fundamentals of Laparoscopic Surgery Program, Los Angeles, CA) and robotic (dV-Trainer, Mimic, Seattle, WA) simulation peg transfer tasks. Participants were then randomized to perform a practice session on either the robotic (N = 20) or laparoscopic (N = 20) simulator. Two blinded, expert minimally invasive surgeons evaluated participants before and after training using a modified previously validated subjective global rating scale. Objective measures including time to task completion and Mimic dV-Trainer motion metrics were also recorded. At baseline, there were no significant differences between the training groups as measured by objective and subjective measures for either simulation task. After training, participants randomized to the laparoscopic practice group completed the laparoscopic task faster (p < 0.003) and with higher global rating scale scores (p < 0.001) than the robotic group. Robotic-trained participants performed the robotic task faster (p < 0.001), with improved economy of motion (p < 0.001), and with higher global rating scale scores (p = 0.006) than the laparoscopic group. The robotic practice group also demonstrated significantly improved performance on the laparoscopic task (p = 0.02). Laparoscopic-trained participants also improved their robotic performance (p = 0.02), though the robotic group had a higher percent improvement on the robotic task (p = 0.037). Skills acquired through practice on either laparoscopic or robotic simulation platforms appear to be transferable between modalities. However, participants demonstrate superior skill in the modality in which they specifically train.
Monocular Depth Perception and Robotic Grasping of Novel Objects
2009-06-01
The resulting algorithm is able to learn monocular vision cues that accurately estimate the relative depths of obstacles in a scene. Since many of the cues that are useful for estimating depth can be re-created in synthetic images, synthetic images can be used in training. We take a supervised learning approach to this problem, and use a Markov Random Field (MRF) to model the scene depth as a function of the image features.
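As a hedged stand-in for the learning stage, the sketch below fits a linear predictor of per-patch depth from image features; the actual system uses richer multi-scale features and an MRF whose pairwise terms also enforce smoothness between neighbouring patches, so this covers only the unary (per-patch) part. All data here are fabricated:

```python
# Least-squares fit of a per-patch depth-from-features model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                 # fabricated patch feature vectors
w_true = rng.normal(size=8)
y = X @ w_true + 0.1 * rng.normal(size=500)   # fabricated depths

w, *_ = np.linalg.lstsq(X, y, rcond=None)     # fit the unary term
print("mean abs error:", float(np.abs(X @ w - y).mean()))
```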
Soldier-Based Assessment of a Dual-Row Tactor Display during Simultaneous Navigational and Robot-Monitoring Tasks
Pomranky-Hartnett, Gina; Elliott, Linda R; Mortimer, Bruce JP; Mort, Greg R; Pettitt, Rodger A; and Gary A…
2015-08-01
Redundant arm control in a supervisory and shared control system
NASA Technical Reports Server (NTRS)
Backes, Paul G.; Long, Mark K.
1992-01-01
The Extended Task Space Control approach to robotic operations based on manipulator behaviors derived from task requirements is described. No differentiation between redundant and non-redundant robots is made at the task level. The manipulation task behaviors are combined into a single set of motion commands. The manipulator kinematics are used subsequently in mapping motion commands into actuator commands. Extended Task Space Control is applied to a Robotics Research K-1207 seven degree-of-freedom manipulator in a supervisory telerobot system as an example.
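The final mapping stage (motion commands to actuator commands via the kinematics) is conventionally done with the manipulator Jacobian; for a redundant arm, the pseudoinverse yields minimum-norm joint velocities without the task level ever distinguishing redundant from non-redundant robots. The sketch below uses a random stand-in Jacobian, not the K-1207 kinematics:

```python
# Task-space velocity -> joint velocities for a 7-DOF redundant arm.
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(6, 7))                    # stand-in 6x7 task Jacobian
x_dot = np.array([0.05, 0, 0, 0, 0, 0.1])      # commanded task-space twist

q_dot = np.linalg.pinv(J) @ x_dot              # minimum-norm joint velocities
print("q_dot:", np.round(q_dot, 3))
print("task residual:", float(np.linalg.norm(J @ q_dot - x_dot)))
```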
Analysis and Processing the 3D-Range-Image-Data for Robot Monitoring
NASA Astrophysics Data System (ADS)
Kohoutek, Tobias
2008-09-01
Industrial robots are commonly used for physically stressful jobs in complex environments. In any case, collisions with these heavy and highly dynamic machines need to be prevented. For this reason the operational range has to be monitored precisely, reliably and meticulously. The advantage of the SwissRanger® SR-3000 is that it delivers intensity images and 3D information of the same scene simultaneously, which conveniently allows 3D monitoring. This makes automatic real-time collision prevention within the robot's working space possible by working with 3D coordinates.
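Monitoring then amounts to testing each measured 3D point against a protected region around the robot. A minimal sketch with a fabricated point cloud and a cylindrical safety zone (thresholds are illustrative):

```python
# Flag 3D points that intrude into a safety cylinder around the robot.
import numpy as np

rng = np.random.default_rng(2)
points = rng.uniform(-3, 3, size=(1000, 3))    # stand-in SR-3000 point cloud
robot_base = np.array([0.0, 0.0])              # cylinder axis in the xy-plane
safety_radius = 1.2

dist_xy = np.linalg.norm(points[:, :2] - robot_base, axis=1)
intruders = points[dist_xy < safety_radius]
if len(intruders):
    print(f"stop robot: {len(intruders)} points inside the safety zone")
```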
Stefanidis, Dimitrios; Hope, William W; Scott, Daniel J
2011-07-01
The value of robotic assistance for intracorporeal suturing is not well defined. We compared robotic suturing with laparoscopic suturing on the FLS model with a large cohort of surgeons. Attendees (n=117) at the SAGES 2006 Learning Center robotic station placed intracorporeal sutures on the FLS box-trainer model using conventional laparoscopic instruments and the da Vinci® robot. Participant performance was recorded using a validated objective scoring system, and a questionnaire regarding demographics, task workload, and suturing modality preference was completed. Construct validity for both tasks was assessed by comparing the performance scores of subjects with various levels of experience. A validated questionnaire was used for workload measurement. Of the participants, 84% had prior laparoscopic and 10% prior robotic suturing experience. Within the allotted time, 83% of participants completed the suturing task laparoscopically and 72% with the robot. Construct validity was demonstrated for both simulated tasks according to the participants' advanced laparoscopic experience, laparoscopic suturing experience, and self-reported laparoscopic suturing ability (p<0.001 for all) and according to prior robotic experience, robotic suturing experience, and self-reported robotic suturing ability (p<0.001 for all), respectively. While participants achieved higher suturing scores with standard laparoscopy compared with the robot (84±75 vs. 56±63, respectively; p<0.001), they found the laparoscopic task more physically demanding (NASA score 13±5 vs. 10±5, respectively; p<0.001) and favored the robot as their method of choice for intracorporeal suturing (62 vs. 38%, respectively; p<0.01). Construct validity was demonstrated for robotic suturing on the FLS model. Suturing scores were higher using standard laparoscopy likely as a result of the participants' greater experience with laparoscopic suturing versus robotic suturing. Robotic assistance decreases the physical demand of intracorporeal suturing compared with conventional laparoscopy and, in this study, was the preferred suturing method by most surgeons. Curricula for robotic suturing training need to be developed.
Impedance learning for robotic contact tasks using natural actor-critic algorithm.
Kim, Byungchan; Park, Jooyoung; Park, Shinsuk; Kang, Sungchul
2010-04-01
Compared with their robotic counterparts, humans excel at various tasks by using their ability to adaptively modulate arm impedance parameters. This ability allows us to successfully perform contact tasks even in uncertain environments. This paper considers a learning strategy of motor skill for robotic contact tasks based on a human motor control theory and machine learning schemes. Our robot learning method employs impedance control based on the equilibrium point control theory and reinforcement learning to determine the impedance parameters for contact tasks. A recursive least-square filter-based episodic natural actor-critic algorithm is used to find the optimal impedance parameters. The effectiveness of the proposed method was tested through dynamic simulations of various contact tasks. The simulation results demonstrated that the proposed method optimizes the performance of the contact tasks in uncertain conditions of the environment.
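The ingredients combine roughly as follows: an impedance law around an equilibrium trajectory, with stiffness and damping treated as learnable parameters scored by an episodic reward. In this sketch the natural actor-critic update is replaced by naive random search and the environment is a toy 1-D spring, so it illustrates the structure of the method rather than the algorithm itself:

```python
# Episodic search over impedance parameters (K, D) on a toy contact task.
import random

def episode(K, D, dt=0.01, steps=300):
    x, v, x_eq = 0.0, 0.0, 0.1                 # hold near the equilibrium point
    cost = 0.0
    for _ in range(steps):
        f = K * (x_eq - x) + D * (0.0 - v)     # impedance control force
        v += dt * (f - 20.0 * x)               # unit mass, toy environment spring
        x += dt * v
        cost += (x_eq - x) ** 2 + 1e-4 * f ** 2
    return -cost                               # reward: track with little force

best = (50.0, 5.0)                             # initial stiffness, damping
for _ in range(200):
    trial = (max(0.0, best[0] + random.gauss(0, 5)),
             max(0.0, best[1] + random.gauss(0, 1)))
    if episode(*trial) > episode(*best):
        best = trial
print("learned K, D:", [round(p, 1) for p in best])
```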
Global ensemble texture representations are critical to rapid scene perception.
Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A
2017-06-01
Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: that scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Performance Evaluation of a Pose Estimation Method based on the SwissRanger SR4000
2012-08-01
Commercially available Flash LIDAR now has sufficient accuracy for robotic applications. A Flash LIDAR simultaneously produces intensity and range images of the scene at a video frame rate. It has advantages over stereovision, providing fully dense depth data across its field-of-view. Commercially available Flash LIDARs include the SwissRanger [17] and TigerEye 3D [18].
Hakim, Renée M; Tunis, Brandon G; Ross, Michael D
2017-11-01
The focus of research using technological innovations such as robotic devices has been on interventions to improve upper extremity function in neurologic populations, particularly patients with stroke. There is a growing body of evidence describing rehabilitation programs using various types of supportive/assistive and/or resistive robotic and virtual reality-enhanced devices to improve outcomes for patients with neurologic disorders. The most promising approaches are task-oriented, based on current concepts of motor control/learning and practice-induced neuroplasticity. Based on this evidence, we describe application and feasibility of virtual reality-enhanced robotics integrated with current concepts in orthopaedic rehabilitation shifting from an impairment-based focus to inclusion of more intense, task-specific training for patients with upper extremity disorders, specifically emphasizing the wrist and hand. The purpose of this paper is to describe virtual reality-enhanced rehabilitation robotic devices, review evidence of application in patients with upper extremity deficits related to neurologic disorders, and suggest how this technology and task-oriented rehabilitation approach can also benefit patients with orthopaedic disorders of the wrist and hand. We will also discuss areas for further research and development using a task-oriented approach and a commercially available haptic robotic device to focus on training of grasp and manipulation tasks. Implications for Rehabilitation There is a growing body of evidence describing rehabilitation programs using various types of supportive/assistive and/or resistive robotic and virtual reality-enhanced devices to improve outcomes for patients with neurologic disorders. The most promising approaches using rehabilitation robotics are task-oriented, based on current concepts of motor control/learning and practice-induced neuroplasticity. Based on the evidence in neurologic populations, virtual reality-enhanced robotics may be integrated with current concepts in orthopaedic rehabilitation shifting from an impairment-based focus to inclusion of more intense, task-specific training for patients with UE disorders, specifically emphasizing the wrist and hand. Clinical application of a task-oriented approach may be accomplished using commercially available haptic robotic device to focus on training of grasp and manipulation tasks.
Reversal Learning Task in Children with Autism Spectrum Disorder: A Robot-Based Approach.
Costescu, Cristina A; Vanderborght, Bram; David, Daniel O
2015-11-01
Children with autism spectrum disorder (ASD) engage in highly perseverative and inflexible behaviours. Technological tools, such as robots, have received increased attention as social reinforcers and/or assisting tools for improving the performance of children with ASD. The aim of our study is to investigate the role of the robotic toy Keepon in a cognitive flexibility task performed by children with ASD and typically developing (TD) children. Eighty-one children participated in this study: 40 TD children and 41 children with ASD. Each participant had to go through two conditions, robot interaction and human interaction, in which they performed the reversal learning task. Our primary outcomes are the number of errors from the acquisition phase and from the reversal phase of the task; as secondary outcomes we measured attentional engagement and positive affect. The results of this study showed that children with ASD are more engaged in the task and seem to enjoy it more when interacting with the robot compared with interacting with the adult. On the other hand, their cognitive flexibility performance is, in general, similar in the robot and human conditions, with the exception of the learning phase, where the robot can interfere with performance. Implications for future research and practice are discussed.
School-based use of a robotic arm system by children with disabilities.
Cook, Albert M; Bentz, Brenda; Harbottle, Norma; Lynch, Cheryl; Miller, Brad
2005-12-01
A robotic arm system was developed for use by children who had very severe motor disabilities and varying levels of cognitive and language skills. The children used the robot in a three-task sequence routine to dig objects from a tub of dry macaroni. The robotic system was used in the child's school for 12-15 sessions over a period of four weeks. Goal attainment scaling indicated improvement in all children in operational competence of the robot, and varying levels of gain in functional skill development with the robot and in carryover to the classroom from the robot experiments. Teacher interviews revealed gains in classroom participation, expressive language (vocalizations, symbolic communication), and a high degree of interest by the children in the robot tasks. The teachers also recommended that the robot should have more color, contrast and character, as well as generating sounds and/or music for student cues. They also felt that the robotic system accuracy should be increased so that teacher assistance is not necessary to complete the task.
Robot-assisted laparoscopic skills development: formal versus informal training.
Benson, Aaron D; Kramer, Brandan A; Boehler, Margaret; Schwind, Cathy J; Schwartz, Bradley F
2010-08-01
The learning curve for robotic surgery is not completely defined, and ideal training components have not yet been identified. We attempted to determine whether skill development would be accelerated with formal, organized instruction in robotic surgical techniques versus informal practice alone. Forty-three medical students naive to robotic surgery were randomized into two groups and tested on three tasks using the robotic platform. Between the testing sessions, the students were given equally timed practice sessions. The formal training group participated in an organized, formal training session with instruction from an attending robotic surgeon, whereas the informal training group participated in an equally timed unstructured practice session with the robot. The results were compared based on technical score and time to completion of each task. There was no difference between groups in prepractice testing for any task. In postpractice testing, there was no difference between groups for the ring transfer tasks. However, for the suture placement and knot-tying task, the technical score of the formal training group was significantly better than that of the informal training group (p < 0.001), yet time to completion was not different. Although formal training may not be necessary for basic skills, formal instruction for more advanced skills, such as suture placement and knot tying, is important in developing skills needed for effective robotic surgery. These findings may be important in formulating potential skills labs or training courses for robotic surgery.
Behavior-based multi-robot collaboration for autonomous construction tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
The Robot Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. The two-robot team demonstrates component placement into an existing structure in a realistic environment. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. A behavior-based architecture provides adaptability. The RCC approach minimizes computation, power, communication, and sensing for applicability to space-related construction efforts, but the techniques are applicable to terrestrial construction tasks.
Robotic assessment of sensorimotor deficits after traumatic brain injury.
Debert, Chantel T; Herter, Troy M; Scott, Stephen H; Dukelow, Sean
2012-06-01
Robotic technology is commonly used to quantify aspects of typical sensorimotor function. We evaluated the feasibility of using robotic technology to assess visuomotor and position sense impairments following traumatic brain injury (TBI). We present results of robotic sensorimotor function testing in 12 subjects with TBI, who had a range of initial severities (9 severe, 2 moderate, 1 mild), and contrast these results with those of clinical tests. We also compared these with robotic test outcomes in persons without disability. For each subject with TBI, a review of the initial injury and neuroradiologic findings was conducted. Following this, each subject completed a number of standardized clinical measures (Fugl-Meyer Assessment, Purdue Peg Board, Montreal Cognitive Assessment, Rancho Los Amigos Scale), followed by two robotic tasks. A visually guided reaching task was performed to assess visuomotor control of the upper limb. An arm position-matching task was used to assess position sense. Robotic task performance in the subjects with TBI was compared with findings in a cohort of 170 persons without disabilities. Subjects with TBI demonstrated a broad range of sensory and motor deficits on robotic testing. Notably, several subjects with TBI displayed significant deficits in one or both of the robotic tasks, despite normal scores on traditional clinical motor and cognitive assessment measures. The findings demonstrate the potential of robotic assessments for identifying deficits in visuomotor control and position sense following TBI. Improved identification of neurologic impairments following TBI may ultimately enhance rehabilitation.
Brown, Jeremy D; O Brien, Conor E; Leung, Sarah C; Dumon, Kristoffel R; Lee, David I; Kuchenbecker, Katherine J
2017-09-01
Most trainees begin learning robotic minimally invasive surgery by performing inanimate practice tasks with clinical robots such as the Intuitive Surgical da Vinci. Expert surgeons are commonly asked to evaluate these performances using standardized five-point rating scales, but doing such ratings is time consuming, tedious, and somewhat subjective. This paper presents an automatic skill evaluation system that analyzes only the contact force with the task materials, the broad-bandwidth accelerations of the robotic instruments and camera, and the task completion time. We recruited N = 38 participants of varying skill in robotic surgery to perform three trials of peg transfer with a da Vinci Standard robot instrumented with our Smart Task Board. After calibration, three individuals rated these trials on five domains of the Global Evaluative Assessment of Robotic Skill (GEARS) structured assessment tool, providing ground-truth labels for regression and classification machine learning algorithms that predict GEARS scores based on the recorded force, acceleration, and time signals. Both machine learning approaches produced scores on the reserved testing sets that were in good to excellent agreement with the human raters, even when the force information was not considered. Furthermore, regression predicted GEARS scores more accurately and efficiently than classification. A surgeon's skill at robotic peg transfer can be reliably rated via regression using features gathered from force, acceleration, and time sensors external to the robot. We expect improved trainee learning as a result of providing these automatic skill ratings during inanimate task practice on a surgical robot.
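As a rough illustration of the regression approach described above, the sketch below fits a regressor to synthetic force, acceleration, and time features to predict a summed five-domain GEARS score. The feature choices, score model, and all numbers are invented stand-ins for the paper's Smart Task Board data, not the authors' pipeline; the held-out R^2 plays the role of the paper's agreement on reserved testing sets.

```python
# Hypothetical sketch: predict a summed GEARS score (5 domains, 5-25)
# from force, acceleration, and time features. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials = 114  # e.g., 38 participants x 3 peg-transfer trials
X = np.column_stack([
    rng.gamma(2.0, 1.5, n_trials),   # mean contact force (N), invented
    rng.gamma(2.0, 0.5, n_trials),   # RMS instrument acceleration, invented
    rng.normal(180, 40, n_trials),   # task completion time (s), invented
])
# Invented ground truth: skilled trials (low force/vibration/time) score higher.
y = np.clip(25 - 0.5 * X[:, 0] - 2.0 * X[:, 1] - 0.05 * X[:, 2]
            + rng.normal(0, 1, n_trials), 5, 25)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```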
Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control
NASA Astrophysics Data System (ADS)
Parker, Lynne E.; Pin, Francois G.
1988-10-01
The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.
System for exchanging tools and end effectors on a robot
Burry, David B.; Williams, Paul M.
1991-02-19
A system and method for exchanging tools and end effectors on a robot permits exchange during a programmed task. The exchange mechanism is located off the robot, thus reducing the mass of the robot arm and permitting smaller robots to perform designated tasks. A simple spring/collet mechanism mounted on the robot is used which permits the engagement and disengagement of the tool or end effector without the need for a rotational orientation of the tool to the end effector/collet interface. As the tool-changing system is not located on the robot arm, no umbilical cords are located on the robot.
NASA Astrophysics Data System (ADS)
Rahman, Md. Mozasser; Ikeura, Ryojun; Mizutani, Kazuki
In the near future, many aspects of our lives will involve tasks performed in cooperation with robots. The application of robots in home automation, agricultural production, medical operations, and similar fields will be indispensable. As a result, robots need to be human-friendly and to execute tasks in cooperation with humans, and control systems for such robots should be designed to imitate human characteristics. In this study, we pursue these goals by controlling a simple one-degree-of-freedom cooperative robot. First, the impedance characteristic of the human arm in a cooperative task is investigated. This characteristic is then implemented to control a robot performing cooperative tasks with humans. A human followed the motion of an object that was moved through desired trajectories, actuated by the linear motor of the one-degree-of-freedom robot system. The trajectories used in the experiments were minimum-jerk trajectories (jerk being the rate of change of acceleration), which have been observed in human-human cooperative tasks and are optimal for muscle movement. As the muscle is mechanically analogous to a spring-damper system, a simple second-order equation is used as the model of the arm dynamics; the model comprises mass, stiffness, and damping terms. The impedance parameters are calculated from the position and force data obtained in the experiments, based on parametric model estimation. The investigated impedance characteristic of the human arm is then implemented to control a robot performing a cooperative task with a human. We observe that the proposed control methodology gives the robot human-like movements when cooperating with a human.
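To make the estimation step concrete, here is a minimal sketch, on synthetic data, of fitting the second-order arm model F = M*x'' + B*x' + K*x by ordinary least squares. The trajectory, noise level, and true parameter values are assumptions, not the paper's measurements.

```python
# Minimal sketch: recover mass, damping, and stiffness of a second-order
# arm model from position and force records via least squares.
import numpy as np

dt, t = 0.001, np.arange(0, 5, 0.001)
M_true, B_true, K_true = 1.2, 8.0, 60.0          # kg, Ns/m, N/m (invented)
x = 0.05 * np.sin(2 * np.pi * 0.8 * t)           # measured position (m)
v = np.gradient(x, dt)                           # velocity estimate
a = np.gradient(v, dt)                           # acceleration estimate
F = M_true * a + B_true * v + K_true * x         # "measured" force (N)
F += np.random.default_rng(0).normal(0, 0.01, F.shape)  # sensor noise

Phi = np.column_stack([a, v, x])                 # regressor matrix [x'' x' x]
M_hat, B_hat, K_hat = np.linalg.lstsq(Phi, F, rcond=None)[0]
print(f"M={M_hat:.2f}  B={B_hat:.2f}  K={K_hat:.2f}")
```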
Human-robot skills transfer interfaces for a flexible surgical robot.
Calinon, Sylvain; Bruno, Danilo; Malekzadeh, Milad S; Nanayakkara, Thrishantha; Caldwell, Darwin G
2014-09-01
In minimally invasive surgery, tools go through narrow openings and manipulate soft organs to perform surgical tasks. There are limitations in current robot-assisted surgical systems due to the rigidity of robot tools. The aim of the STIFF-FLOP European project is to develop a soft robotic arm to perform surgical tasks. The flexibility of the robot allows the surgeon to move within organs to reach remote areas inside the body and perform challenging procedures in laparoscopy. This article addresses the problem of designing learning interfaces enabling the transfer of skills from human demonstration. Robot programming by demonstration encompasses a wide range of learning strategies, from simple mimicking of the demonstrator's actions to the higher level imitation of the underlying intent extracted from the demonstrations. By focusing on this last form, we study the problem of extracting an objective function explaining the demonstrations from an over-specified set of candidate reward functions, and using this information for self-refinement of the skill. In contrast to inverse reinforcement learning strategies that attempt to explain the observations with reward functions defined for the entire task (or a set of pre-defined reward profiles active for different parts of the task), the proposed approach is based on context-dependent reward-weighted learning, where the robot can learn the relevance of candidate objective functions with respect to the current phase of the task or encountered situation. The robot then exploits this information for skills refinement in the policy parameters space. The proposed approach is tested in simulation with a cutting task performed by the STIFF-FLOP flexible robot, using kinesthetic demonstrations from a Barrett WAM manipulator. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
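The self-refinement step can be pictured with a toy version of reward-weighted learning: sampled policy parameters are averaged with weights exponential in reward, separately for each task phase or context. Everything below (the one-parameter policy, the per-phase optima, and the constants) is an invented illustration, not the STIFF-FLOP implementation.

```python
# Toy reward-weighted refinement, applied independently per task phase.
import numpy as np

rng = np.random.default_rng(1)
n_phases, n_samples, beta = 3, 50, 5.0
theta = np.zeros(n_phases)               # one policy parameter per phase

def reward(phase, th):
    targets = [0.3, -0.2, 0.8]           # hypothetical per-phase optima
    return -abs(th - targets[phase])

for it in range(30):
    for p in range(n_phases):
        samples = theta[p] + rng.normal(0, 0.1, n_samples)   # exploration
        w = np.exp(beta * np.array([reward(p, s) for s in samples]))
        theta[p] = np.sum(w * samples) / np.sum(w)           # RWR update
print("refined per-phase parameters:", np.round(theta, 3))
```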
Intelligent robot control using an adaptive critic with a task control center and dynamic database
NASA Astrophysics Data System (ADS)
Hall, E. L.; Ghaffari, M.; Liao, X.; Alhaj Ali, S. M.
2006-10-01
The purpose of this paper is to describe the design, development, and simulation of a real-time controller for an intelligent, vision-guided robot. The use of a creative controller that can select its own tasks is demonstrated. This creative controller uses a task control center and a dynamic database. The dynamic database stores both global environmental information and local information, including the kinematic and dynamic models of the intelligent robot. The kinematic model is very useful for position control and simulations. However, models of the dynamics of the manipulators are needed for tracking control of the robot's motions. Such models are also necessary for sizing the actuators, tuning the controller, and achieving superior performance. Simulations of various control designs are shown. Much of the model has also been used for the actual prototype Bearcat Cub mobile robot. This vision-guided robot was designed for the Intelligent Ground Vehicle Contest. A novel feature of the proposed approach is that the method is applicable to both robot arm manipulators and robot bases such as wheeled mobile robots. This generality should encourage the development of more mobile robots with manipulator capability, since both models can be easily stored in the dynamic database. The multi-task controller also permits wide application. The use of manipulators and mobile bases with high-level control is potentially useful for space exploration, certain rescue robots, defense robots, and medical robotics aids.
EPSRC Principles of Robotics: commentary on safety, robots as products, and responsibility
NASA Astrophysics Data System (ADS)
Boddington, Paula
2017-04-01
The EPSRC Principles of Robotics refer to safety. How safety is understood is relative to how tasks are characterised and identified. But the exact task(s) a robot plays within a complex system of agency may be hard to identify. If robots are seen as products, it is nonetheless vital that the safety and other implications of their use in situ are considered carefully, and they must be fit for purpose. The Principles identify humans, rather than robots, as responsible. We must thus understand how the replacement of human agency by robotic agency may impact upon attributions of responsibility. The Principles seek to fit into existing systems of law and ethics, but these may need development, and in certain contexts attention to more local regulations is also needed. A distinction between ethical issues related to the design of robots and those related to their use may be needed in the Principles.
Evolutionary online behaviour learning and adaptation in real robots.
Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne
2017-07-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
Bilateral Impedance Control For Telemanipulators
NASA Technical Reports Server (NTRS)
Moore, Christopher L.
1993-01-01
Telemanipulator system includes master robot manipulated by human operator, and slave robot performing tasks at remote location. Two robots electronically coupled so slave robot moves in response to commands from master robot. Teleoperation greatly enhanced if forces acting on slave robot fed back to operator, giving operator feeling he or she manipulates remote environment directly. Main advantage of bilateral impedance control: enables arbitrary specification of desired performance characteristics for telemanipulator system. Relationship between force and position modulated at both ends of system to suit requirements of task.
Study of robot landmark recognition with complex background
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Yang, Jia
2007-12-01
Perceiving and recognising characteristics of the environment is of great importance in assisting a robot with path planning, position navigation, and task performance. To solve the problem of monocular-vision landmark recognition for a mobile intelligent robot moving against complex backgrounds, a nested region-growing algorithm is proposed that fuses a priori color information and starts from the current maximum convergence center, providing invariance to changes in position, scale, rotation, jitter, and weather conditions. First, a novel experimentally determined threshold based on the RGB vision model is used for an initial image segmentation, in which some objects and partial scenes with colors similar to the landmarks are detected along with the landmarks themselves. Second, with the current maximum convergence center of the segmented image as each growing seed point, the region-growing algorithm establishes several regions of interest (ROIs) in order. According to shape characteristics, a quick and effective primitive-based contour analysis decides whether the current ROI is retained or deleted after each region growing; each retained ROI is then initially judged and positioned. When this position information is fed back to the gray image, the complete landmarks are extracted accurately by a second segmentation restricted to the local landmark area. Finally, landmarks are recognised by a Hopfield neural network. Experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
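The seeded-growth step at the core of such an algorithm can be sketched as follows. This toy version grows a region over a color-likelihood map starting from its strongest pixel, a stand-in for the paper's maximum convergence center; the map, threshold, and 4-connectivity are assumptions.

```python
# Toy seeded region growing over a color-likelihood map.
import numpy as np
from collections import deque

def region_grow(likelihood, seed, thresh=0.5):
    h, w = likelihood.shape
    grown = np.zeros((h, w), dtype=bool)
    q = deque([seed]); grown[seed] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbours
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not grown[rr, cc] \
               and likelihood[rr, cc] >= thresh:
                grown[rr, cc] = True
                q.append((rr, cc))
    return grown

like = np.random.default_rng(2).random((60, 80))
like[20:35, 30:55] += 1.0                       # a bright landmark-like blob
seed = np.unravel_index(np.argmax(like), like.shape)
print("region pixels:", region_grow(like, seed, thresh=1.0).sum())
```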
Human-Robot Cooperation with Commands Embedded in Actions
NASA Astrophysics Data System (ADS)
Kobayashi, Kazuki; Yamada, Seiji
In this paper we first propose a novel interaction model, CEA (Commands Embedded in Actions), which explains how some existing systems reduce their users' workload. We then extend CEA into the ECEA (Extended CEA) model, which enables robots to achieve more complicated tasks. For this extension, we employ the ACS (Action Coding System), which describes segmented human acts and clarifies the relationship between the user's actions and the robot's actions in a task. The ACS exploits CEA's strong point: it enables a user to send a command to a robot through his or her natural actions for the task. The instance of the ECEA derived using the ACS is a temporal extension in which the user maintains the final state of a previous action. We apply this temporal extension of the ECEA to a sweeping task, realizing a high-level cooperative task between the user and the robot: a robot with simple reactive behavior can sweep the region under an object when the user picks the object up. In addition, we measure users' cognitive loads under the ECEA and under a traditional method, DCM (Direct Commanding Method), in the sweeping task, and compare them. The results show that the ECEA imposes a significantly lower cognitive load than the DCM.
Learning Semantics of Gestural Instructions for Human-Robot Collaboration.
Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
2018-01-01
Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides having key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to seamlessly adapt their behavior for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. With the proactive aspect, the robot is competent to predict the human's intent and perform an action without waiting for an instruction. The incremental aspect enables the robot to learn associations on the fly while performing a task. It is a probabilistic, statistically-driven approach. As a proof of concept, we focus on a table assembly task where the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task. We also conducted a human-robot interaction study with non-roboticist users comparing a proactive with a reactive robot that waits for instructions.
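A hedged sketch of PIL's proactive, incremental flavor (not the authors' implementation): gesture-action associations are kept as running counts, and the robot acts without waiting for an instruction only once its estimate is confident. The gesture and action names and the confidence threshold are invented.

```python
# Toy incremental gesture-to-action association with proactive prediction.
from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))   # gesture -> action -> count

def update(gesture, action):
    counts[gesture][action] += 1                 # learn on the fly

def predict(gesture, confidence=0.8):
    acts = counts[gesture]
    total = sum(acts.values())
    if total == 0:
        return None                              # unknown gesture: wait
    best, n = max(acts.items(), key=lambda kv: kv[1])
    return best if n / total >= confidence else None  # act only if sure

for g, a in [("point_left", "pick"), ("point_left", "pick"),
             ("thumbs_up", "hand_over")]:
    update(g, a)
print(predict("point_left"), predict("thumbs_up"), predict("wave"))
```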
A Mobile, Map-Based Tasking Interface for Human-Robot Interaction
2010-12-01
Thesis by Eli R. Hooten, submitted to the Faculty of the Graduate School. No abstract was recovered for this record; the source contained only report-documentation boilerplate and table-of-contents fragments.
Underwater Robot Task Planning Using Multi-Objective Meta-Heuristics.
Landa-Torres, Itziar; Manjarres, Diana; Bilbao, Sonia; Del Ser, Javier
2017-04-04
Robots deployed in the underwater medium are subject to stringent operational conditions that impose a high degree of criticality on the allocation of resources and the scheduling of operations in mission planning. In this context, the so-called cost of a mission must be considered as an additional criterion when designing optimal task schedules within the mission at hand. Such a cost can be conceived as the impact of the mission on the robotic resources themselves, ranging from battery consumption to other negative effects such as mechanical erosion. This manuscript focuses on this issue by devising three heuristic solvers aimed at efficiently scheduling tasks in robotic swarms that collaborate to accomplish a mission, and by presenting experimental results obtained over realistic scenarios in the underwater environment. The heuristic techniques resort to a Random-Keys encoding strategy to represent the allocation of robots to tasks and the relative execution order of those tasks within the schedules of individual robots. The obtained results reveal interesting differences in terms of Pareto optimality and spread between the algorithms considered in the benchmark, which are insightful for the selection of a proper task scheduler in real underwater campaigns.
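For illustration, one plausible decoding of a Random-Keys chromosome into per-robot schedules is sketched below: the integer part of each task's key selects a robot, and the fractional parts order the tasks within each schedule. The paper defines its own decoding, so treat this as an assumption-laden example.

```python
# Toy Random-Keys decoding: one real-valued key per task.
import numpy as np

def decode(keys, n_robots):
    robot = keys.astype(int) % n_robots               # integer part: robot id
    schedules = {r: [] for r in range(n_robots)}
    for task in np.argsort(keys - keys.astype(int)):  # fractional part: order
        schedules[robot[task]].append(int(task))
    return schedules

keys = np.random.default_rng(3).uniform(0, 3, size=8)  # 8 tasks, 3 robots
print(decode(keys, 3))
```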
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both the color and depth data captured by the Kinect sensor. The lines in the current data frame are then matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method were conducted in a corridor scene. The experimental results demonstrate that, with our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open-access datasets further validated our improvements.
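The fused error term can be pictured as point-to-line distances for matched line pairs added on top of the usual ICP residuals. The sketch below is schematic: the weighting factor and the endpoint-sampling convention are assumptions, not the paper's exact formulation.

```python
# Schematic combined cost: standard ICP residuals plus line-pair distances.
import numpy as np

def point_to_line_dist(p, a, d):
    """Distance from point p to the line through a with unit direction d."""
    v = p - a
    return np.linalg.norm(v - np.dot(v, d) * d)

def combined_cost(point_residuals, line_pairs, lam=0.5):
    cost = np.sum(np.square(point_residuals))        # standard ICP term
    for (p0, p1), (a, d) in line_pairs:              # frame-line endpoints vs
        cost += lam * (point_to_line_dist(p0, a, d) ** 2 +  # matched model
                       point_to_line_dist(p1, a, d) ** 2)   # line
    return cost

a, d = np.zeros(3), np.array([1.0, 0.0, 0.0])        # a model line
pairs = [((np.array([0.5, 0.1, 0.0]), np.array([1.5, 0.0, 0.1])), (a, d))]
print(combined_cost(np.array([0.02, -0.01]), pairs))
```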
Liu, Chun; Kroll, Andreas
2016-01-01
Multi-robot task allocation determines the task sequence and distribution for a group of robots in a multi-robot system. It is a constrained combinatorial optimization problem, and it becomes more complex in the case of cooperative tasks because these introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed on several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm obtains better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than the other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than the other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
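The four mutation operators analysed in the paper are standard permutation mutations; on a permutation-encoded task sequence they might be written as follows (the index conventions are one reasonable choice among several).

```python
# The four permutation mutations: swap, insertion, inversion, displacement.
def swap(seq, i, j):
    s = seq[:]; s[i], s[j] = s[j], s[i]; return s

def insertion(seq, i, j):
    s = seq[:]; s.insert(j, s.pop(i)); return s        # move one task

def inversion(seq, i, j):
    i, j = sorted((i, j)); s = seq[:]
    s[i:j + 1] = reversed(s[i:j + 1]); return s        # reverse a segment

def displacement(seq, i, j, k):
    i, j = sorted((i, j)); s = seq[:]
    block, rest = s[i:j + 1], s[:i] + s[j + 1:]
    return rest[:k] + block + rest[k:]                 # relocate a segment

tour = list(range(8))
print(swap(tour, 1, 5), inversion(tour, 1, 5), sep="\n")
```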
Squire, P N; Parasuraman, R
2010-08-01
The present study assessed the impact of task load and level of automation (LOA) on task switching in participants supervising a team of four or eight semi-autonomous robots in a simulated 'capture the flag' game. Participants performed the same task faster when repeating it than when they chose to switch between different task actions. They also took longer to switch between different tasks when supervising the robots at a high LOA compared to a low LOA. Task load, as manipulated by the number of robots to be supervised, did not influence switch costs. The results suggest that the design of future unmanned vehicle (UV) systems should take into account not simply how many UVs an operator can supervise, but also the impact of LOA and task operations on task switching during supervision of multiple UVs. The findings of this study are relevant for the ergonomics practice of UV systems. This research extends the cognitive theory of task switching to inform the design of UV systems, and the results show that switching between UVs is an important factor to consider.
EVALUATING ROBOT TECHNOLOGIES AS TOOLS TO EXPLORE RADIOLOGICAL AND OTHER HAZARDOUS ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David I. Gertman; David J. Bruemmer
2008-03-01
There is a general consensus that robots could be beneficial in performing tasks within hazardous radiological environments. Most control of robots in hazardous environments involves master-slave or teleoperation relationships between the human and the robot. While teleoperation-based solutions keep humans out of harm's way, they also change the training requirements to accomplish a task. In this paper we present a research methodology that allowed scientists at Idaho National Laboratory to identify, develop, and prove a semi-autonomous robot solution for search and characterization tasks within a hazardous environment. Two experiments are summarized that validated the use of semi-autonomy and show that robot autonomy can help mitigate some of the performance differences between operators who have different levels of robot experience, and can improve performance over teleoperated systems.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
How long did it last? You would better ask a human.
Lacquaniti, Francesco; Carrozzo, Mauro; d'Avella, Andrea; La Scaleia, Barbara; Moscatelli, Alessandro; Zago, Myrka
2014-01-01
In the future, human-like robots will live among people to provide company and help carrying out tasks in cooperation with humans. These interactions require that robots understand not only human actions, but also the way in which we perceive the world. Human perception heavily relies on the time dimension, especially when it comes to processing visual motion. Critically, human time perception for dynamic events is often inaccurate. Robots interacting with humans may want to see the world and tell time the way humans do: if so, they must incorporate human-like fallacy. Observers asked to judge the duration of brief scenes are prone to errors: perceived duration often does not match the physical duration of the event. Several kinds of temporal distortions have been described in the specialized literature. Here we review the topic with a special emphasis on our work dealing with time perception of animate actors versus inanimate actors. This work shows the existence of specialized time bases for different categories of targets. The time base used by the human brain to process visual motion appears to be calibrated against the specific predictions regarding the motion of human figures in case of animate motion, while it can be calibrated against the predictions of motion of passive objects in case of inanimate motion. Human perception of time appears to be strictly linked with the mechanisms used to control movements. Thus, neural time can be entrained by external cues in a similar manner for both perceptual judgments of elapsed time and in motor control tasks. One possible strategy could be to implement in humanoids a unique architecture for dealing with time, which would apply the same specialized mechanisms to both perception and action, similarly to humans. This shared implementation might render the humanoids more acceptable to humans, thus facilitating reciprocal interactions.
Visual search for changes in scenes creates long-term, incidental memory traces.
Utochkin, Igor S; Wolfe, Jeremy M
2018-05-01
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
Combining environment-driven adaptation and task-driven optimisation in evolutionary robotics.
Haasdijk, Evert; Bredeche, Nicolas; Eiben, A E
2014-01-01
Embodied evolutionary robotics is a sub-field of evolutionary robotics that employs evolutionary algorithms on the robotic hardware itself, during the operational period, i.e., in an on-line fashion. This enables robotic systems that continuously adapt, and are therefore capable of (re-)adjusting themselves to previously unknown or dynamically changing conditions autonomously, without human oversight. This paper addresses one of the major challenges that such systems face, viz. that the robots must satisfy two sets of requirements. Firstly, they must continue to operate reliably in their environment (viability), and secondly they must competently perform user-specified tasks (usefulness). The solution we propose exploits the fact that evolutionary methods have two basic selection mechanisms-survivor selection and parent selection. This allows evolution to tackle the two sets of requirements separately: survivor selection is driven by the environment and parent selection is based on task-performance. This idea is elaborated in the Multi-Objective aNd open-Ended Evolution (monee) framework, which we experimentally validate. Experiments with robotic swarms of 100 simulated e-pucks show that monee does indeed promote task-driven behaviour without compromising environmental adaptation. We also investigate an extension of the parent selection process with a 'market mechanism' that can ensure equitable distribution of effort over multiple tasks, a particularly pressing issue if the environment promotes specialisation in single tasks.
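The separation monee exploits can be sketched as a toy loop in which an energy budget (environment-driven) decides survival while a task score decides parentage. Every quantity below, from the energy dynamics to the fitness model, is a placeholder, not the framework's actual mechanics.

```python
# Toy split of survivor selection (environment) and parent selection (task).
import random

class Robot:
    def __init__(self):
        self.genome = [random.gauss(0, 1) for _ in range(4)]
        self.energy = 100.0        # viability resource from the environment
        self.task_score = 0.0      # user-task performance

def step(robot):
    robot.energy -= random.uniform(0, 5)                 # environmental cost
    robot.task_score += max(0.0, random.gauss(sum(robot.genome), 1))

swarm = [Robot() for _ in range(20)]
for _ in range(50):
    for r in swarm:
        step(r)
    survivors = [r for r in swarm if r.energy > 0] or [Robot()]  # env-driven
    while len(survivors) < 20:                           # task-driven parents
        parent = max(random.sample(survivors, min(3, len(survivors))),
                     key=lambda r: r.task_score)         # tournament
        child = Robot()                                  # fresh energy budget
        child.genome = [g + random.gauss(0, 0.1) for g in parent.genome]
        survivors.append(child)
    swarm = survivors
print("best task score:", round(max(r.task_score for r in swarm), 1))
```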
What can Robots Do? Towards Theoretical Analysis
NASA Technical Reports Server (NTRS)
Nogueira, Monica
1997-01-01
Robots have become more and more sophisticated. Every robot has its limits. If we face a task that existing robots cannot solve, then, before we start improving these robots, it is important to check whether it is, in principle, possible to design a robot for this task or not. For that, it is necessary to describe what exactly the robots can, in principle, do. A similar problem - to describe what exactly computers can do - has been solved as early as 1936, by Turing. In this paper, we describe a framework within which we can, hopefully, formalize and answer the question of what exactly robots can do.
Robotic Precursor Missions for Mars Habitats
NASA Technical Reports Server (NTRS)
Huntsberger, Terry; Pirjanian, Paolo; Schenker, Paul S.; Trebi-Ollennu, Ashitey; Das, Hari; Joshi, Sajay
2000-01-01
Infrastructure support for robotic colonies, manned Mars habitat, and/or robotic exploration of planetary surfaces will need to rely on the field deployment of multiple robust robots. This support includes such tasks as the deployment and servicing of power systems and ISRU generators, construction of beaconed roadways, and the site preparation and deployment of manned habitat modules. The current level of autonomy of planetary rovers such as Sojourner will need to be greatly enhanced for these types of operations. In addition, single robotic platforms will not be capable of complicated construction scenarios. Precursor robotic missions to Mars that involve teams of multiple cooperating robots to accomplish some of these tasks is a cost effective solution to the possible long timeline necessary for the deployment of a manned habitat. Ongoing work at JPL under the Mars Outpost Program in the area of robot colonies is investigating many of the technology developments necessary for such an ambitious undertaking. Some of the issues that are being addressed include behavior-based control systems for multiple cooperating robots (CAMPOUT), development of autonomous robotic systems for the rescue/repair of trapped or disabled robots, and the design and development of robotic platforms for construction tasks such as material transport and surface clearing.
Experiments in Nonlinear Adaptive Control of Multi-Manipulator, Free-Flying Space Robots
NASA Technical Reports Server (NTRS)
Chen, Vincent Wei-Kang
1992-01-01
Sophisticated robots can greatly enhance the role of humans in space by relieving astronauts of low-level, tedious assembly and maintenance chores and allowing them to concentrate on higher-level tasks. Robots and astronauts can work together efficiently, as a team, but the robot must be capable of accomplishing complex operations and yet be easy to use. Multiple cooperating manipulators are essential to dexterity and can greatly broaden the types of activities the robot can achieve; adding adaptive control can greatly ease robot use by allowing the robot to change its own controller actions, without human intervention, in response to changes in its environment. Previous work in the Aerospace Robotics Laboratory (ARL) has shown the usefulness of a space robot with cooperating manipulators. The research presented in this dissertation extends that work by adding adaptive control. To help achieve this high level of robot sophistication, this research made several advances in the field of nonlinear adaptive control of robotic systems. A nonlinear adaptive control algorithm developed originally for the control of robots, but requiring joint positions as inputs, was extended here to handle the much more general case of manipulator endpoint-position commands. A new system modelling technique, called system concatenation, was developed to simplify the generation of a system model for complicated systems, such as a free-flying multiple-manipulator robot system. Finally, the task-space concept was introduced, wherein the operator's inputs specify only the robot's task. The robot's subsequent autonomous performance of each task still involves, of course, endpoint positions and joint configurations as subsets. The combination of these developments resulted in a new adaptive control framework that is capable of continuously providing full adaptation capability to the complex space-robot system in all modes of operation. The new adaptive control algorithm easily handles free-flying systems with multiple, interacting manipulators, and extends naturally to even larger systems. The new adaptive controller was experimentally demonstrated on an ideal testbed in the ARL: a first-ever experimental model of a multi-manipulator, free-flying space robot capable of capturing and manipulating free-floating objects without human assistance. A graphical user interface enhanced the robot's usability: it enabled an operator at a remote location to issue high-level task-description commands to the robot and to monitor the robot's activities as it carried out each assignment autonomously.
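For context, nonlinear adaptive manipulator controllers of the general class referenced here are often written in the standard Slotine-Li form below; this is a textbook formulation shown for orientation, not the dissertation's endpoint-space extension, whose details differ.

```latex
% Tracking error, composite error, and reference velocity:
\tilde{q} = q - q_d, \qquad
s = \dot{\tilde{q}} + \Lambda\,\tilde{q}, \qquad
\dot{q}_r = \dot{q}_d - \Lambda\,\tilde{q}
% Control law, with regressor Y linear in the unknown parameters a:
\tau = Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)\,\hat{a} - K_D\, s
% Parameter adaptation:
\dot{\hat{a}} = -\Gamma\, Y^{\top}(q,\dot{q},\dot{q}_r,\ddot{q}_r)\, s
```

Here \hat{a} is the estimate of the robot's dynamic parameters, and \Lambda, K_D, and \Gamma are positive-definite gain matrices.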
Mapping planetary caves with an autonomous, heterogeneous robot team
NASA Astrophysics Data System (ADS)
Husain, Ammar; Jones, Heather; Kannan, Balajee; Wong, Uland; Pimentel, Tiago; Tang, Sarah; Daftry, Shreyansh; Huber, Steven; Whittaker, William L.
Caves on other planetary bodies offer sheltered habitat for future human explorers and numerous clues to a planet's past for scientists. While recent orbital imagery provides exciting new details about cave entrances on the Moon and Mars, the interiors of these caves are still unknown and not observable from orbit. Multi-robot teams offer unique solutions for exploring and modeling subsurface voids during precursor missions. Robot teams that are diverse in terms of size, mobility, sensing, and capability can provide great advantages, but this diversity, coupled with inherently distinct low-level behavior architectures, makes coordination a challenge. This paper presents a framework that consists of an autonomous frontier- and capability-based task generator, a distributed market-based strategy for coordinating and allocating tasks to the different team members, and a communication paradigm for seamless interaction between the different robots in the system. Robots have different sensors (in the representative robot team used for testing: 2D mapping sensors, 3D modeling sensors, or no exteroceptive sensors) and varying levels of mobility. Tasks are generated to explore, model, and take science samples. Based on an individual robot's capability and the associated cost of executing a generated task, a robot is autonomously selected for task execution. The robots create coarse online maps and store collected data for high-resolution offline modeling. The coordination approach has been field-tested at a mock cave site with highly unstructured natural terrain, as well as in an outdoor patio area. Initial results are promising for the applicability of the proposed multi-robot framework to the exploration and modeling of planetary caves.
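A single auction round of a market-based strategy like the one described might look like the toy sketch below, where each robot bids a cost for an announced task only if it has the required capability. The capability names and the travel-distance cost model are invented for illustration.

```python
# Toy market-based allocation: cheapest capable robot wins each task.
robots = {
    "mapper2d":  {"caps": {"explore", "map2d"}, "pos": 0.0},
    "modeler3d": {"caps": {"explore", "model3d"}, "pos": 5.0},
    "sampler":   {"caps": {"sample"}, "pos": 2.0},
}
tasks = [("explore", 1.0), ("model3d", 4.0), ("sample", 2.5)]  # (type, loc)

def bid(robot, task_type, loc):
    if task_type not in robot["caps"]:
        return None                      # cannot perform the task: no bid
    return abs(robot["pos"] - loc)       # cost = 1-D travel distance (toy)

for task_type, loc in tasks:             # one auction per announced task
    bids = {name: bid(r, task_type, loc) for name, r in robots.items()}
    bids = {n: b for n, b in bids.items() if b is not None}
    winner = min(bids, key=bids.get)     # lowest-cost capable robot
    robots[winner]["pos"] = loc          # winner executes and relocates
    print(task_type, "->", winner)
```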
Robotically assisted laparoscopy benefits surgical performance under stress.
Moore, Lee J; Wilson, Mark R; Waine, Elizabeth; McGrath, John S; Masters, Rich S W; Vine, Samuel J
2015-12-01
While the benefits of robotic surgery for the patient have been relatively well established, little is known about the benefits for the surgeon. This study examined whether the advantages of robotically assisted laparoscopy (improved dexterity, a 3-dimensional view, reduction in tremors, etc.) enable the surgeon to better deal with stressful tasks. Subjective and objective (i.e. cardiovascular) responses to stress were assessed while surgeons performed on either a robotic or conventional laparoscopic system. Thirty-two surgeons were assigned to perform a surgical task on either a robotic system or a laparoscopic system, under three stress conditions. The surgeons completed self-report measures of stress before each condition. Furthermore, the surgeons' cardiovascular responses to stress were recorded prior to each condition. Finally, task performance was recorded throughout each condition. While both groups reported experiencing similar levels of stress, compared to the laparoscopic group, the robotic group displayed a more adaptive cardiovascular response to the stress conditions, reflecting a challenge state (i.e. higher blood flow and lower vascular resistance). Furthermore, despite no differences in completion time, the robotic group performed the tasks more accurately than the laparoscopic group across the stress conditions. These results highlight the benefits of using robotic technology during stressful situations. Specifically, the results show that stressful tasks can be performed more accurately with a robotic platform, and that surgeons' cardiovascular responses to stress are more favourable. Importantly, the 'challenge' cardiovascular response to stress displayed when using the robotic system has been associated with more positive long-term health outcomes in domains where stress is commonly experienced (e.g. lower cardiovascular disease risk).
NASA Technical Reports Server (NTRS)
Stroupe, Ashley W.; Okon, Avi; Robinson, Matthew; Huntsberger, Terry; Aghazarian, Hrand; Baumgartner, Eric
2004-01-01
Robotic Construction Crew (RCC) is a heterogeneous multi-robot system for autonomous acquisition, transport, and precision mating of components in construction tasks. RCC minimizes resources that are constrained in a space environment, such as computation, power, communication, and sensing. A behavior-based architecture provides adaptability and robustness despite low computational requirements. RCC successfully performs several construction-related tasks in an emulated outdoor environment despite high levels of uncertainty in motion and sensing. Quantitative results are provided for formation keeping in component transport, precision instrument placement, and construction tasks.
Communication and knowledge sharing in human-robot interaction and learning from demonstration.
Koenig, Nathan; Takayama, Leila; Matarić, Maja
2010-01-01
Inexpensive personal robots will soon become available to a large portion of the population. Currently, most consumer robots are relatively simple single-purpose machines or toys. In order to be cost effective and thus widely accepted, robots will need to be able to accomplish a wide range of tasks in diverse conditions. Learning these tasks from demonstrations offers a convenient mechanism to customize and train a robot by transferring task related knowledge from a user to a robot. This avoids the time-consuming and complex process of manual programming. The way in which the user interacts with a robot during a demonstration plays a vital role in terms of how effectively and accurately the user is able to provide a demonstration. Teaching through demonstrations is a social activity, one that requires bidirectional communication between a teacher and a student. The work described in this paper studies how the user's visual observation of the robot and the robot's auditory cues affect the user's ability to teach the robot in a social setting. Results show that auditory cues provide important knowledge about the robot's internal state, while visual observation of a robot can hinder an instructor due to incorrect mental models of the robot and distractions from the robot's movements. Copyright © 2010. Published by Elsevier Ltd.
Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah
2014-11-19
The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than when using novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes versus novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
Najafi, Mohammad; Adams, Kim; Tavakoli, Mahdi
2017-07-01
The number of people with physical disabilities and impaired motion control is increasing. Consequently, there is a growing demand for intelligent assistive robotic systems that cooperate with people with disabilities and help them carry out different tasks. To this end, our group has pioneered the use of robot learning from demonstration (RLfD) techniques, which eliminate the need for task-specific robot programming, in robotic rehabilitation and assistive technology settings. First, in the demonstration phase, the therapist (or, in general, a helper) provides an intervention (typically assistance) and cooperatively performs a task with a patient several times. The demonstrated motion is modelled by a statistical RLfD algorithm, which is later used in the robot controllers to reproduce a similar intervention robotically. In this paper, by proposing a Tangential-Normal Varying-Impedance Controller (TNVIC), the robotic manipulator not only follows the therapist's demonstrated motion but also mimics his or her interaction impedance during the therapeutic/assistive intervention. The feasibility and efficacy of the proposed framework are evaluated in an experiment involving a healthy adult in whom cerebral palsy-like symptoms were induced using transcutaneous electrical nerve stimulation.
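The tangential/normal decomposition behind TNVIC can be illustrated in 2-D: the controller is kept compliant along the demonstrated path's tangent and stiff across it. The gain values and planar setting below are invented for illustration; in the proposed framework these quantities are derived from the demonstrated interaction impedance.

```python
# Toy 2-D stiffness matrix: compliant along the path, stiff across it.
import numpy as np

def tnv_stiffness(tangent, k_t=50.0, k_n=800.0):
    """Build K = k_t * t t^T + k_n * n n^T from the path tangent."""
    t = tangent / np.linalg.norm(tangent)
    n = np.array([-t[1], t[0]])                  # unit normal to the path
    return k_t * np.outer(t, t) + k_n * np.outer(n, n)

K = tnv_stiffness(np.array([1.0, 1.0]))          # 45-degree path direction
error = np.array([0.01, -0.01])                  # position error (m)
print("restoring force:", K @ error)             # F = K * error
```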
Prakash, Akanksha; Rogers, Wendy A
2015-04-01
Ample research in social psychology has highlighted the importance of the human face in human-human interactions. However, there is a less clear understanding of how a humanoid robot's face is perceived by humans. One of the primary goals of this study was to investigate how initial perceptions of robots are influenced by the extent of human-likeness of the robot's face, particularly when the robot is intended to provide assistance with tasks in the home that are traditionally carried out by humans. Moreover, although robots have the potential to help both younger and older adults, there is limited knowledge of whether the two age groups' perceptions differ. In this study, younger ( N = 32) and older adults ( N = 32) imagined interacting with a robot in four different task contexts and rated robot faces of varying levels of human-likeness. Participants were also interviewed to assess their reasons for particular preferences. This multi-method approach identified patterns of perceptions across different appearances as well as reasons that influence the formation of such perceptions. Overall, the results indicated that people's perceptions of robot faces vary as a function of robot human-likeness. People tended to over-generalize their understanding of humans to build expectations about a human-looking robot's behavior and capabilities. Additionally, preferences for humanoid robots depended on the task although younger and older adults differed in their preferences for certain humanoid appearances. The results of this study have implications both for advancing theoretical understanding of robot perceptions and for creating and applying guidelines for the design of robots.
Swallow, Khena M; Jiang, Yuhong V
2010-04-01
Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect).
Flexible automation of cell culture and tissue engineering tasks.
Knoll, Alois; Scherer, Torsten; Poggendorf, Iris; Lütkemeyer, Dirk; Lehmann, Jürgen
2004-01-01
Until now, the predominant use cases of industrial robots have been routine handling tasks in the automotive industry. In biotechnology and tissue engineering, in contrast, only very few tasks have been automated with robots. New developments in robot platform and robot sensor technology, however, make it possible to automate plants that largely depend on human interaction with the production process, e.g., for material and cell culture fluid handling, transportation, operation of equipment, and maintenance. In this paper we present a robot system that lends itself to automating routine tasks in biotechnology but also has the potential to automate other production facilities that are similar in process structure. After motivating the design goals, we describe the system and its operation, illustrate sample runs, and give an assessment of the advantages. We conclude this paper by giving an outlook on possible further developments.
Obstacle Avoidance On Roadways Using Range Data
NASA Astrophysics Data System (ADS)
Dunlay, R. Terry; Morgenthaler, David G.
1987-02-01
This report describes range data based obstacle avoidance techniques developed for use on an autonomous road-following robot vehicle. The purpose of these techniques is to detect and locate obstacles present in a road environment for navigation of a robot vehicle equipped with an active laser-based range sensor. Techniques are presented for obstacle detection, obstacle location, and coordinate transformations needed in the construction of Scene Models (symbolic structures representing the 3-D obstacle boundaries used by the vehicle's Navigator for path planning). These techniques have been successfully tested on an outdoor robotic vehicle, the Autonomous Land Vehicle (ALV), at speeds up to 3.5 km/hour.
Research status of multi-robot systems task allocation and uncertainty treatment
NASA Astrophysics Data System (ADS)
Li, Dahui; Fan, Qi; Dai, Xuefeng
2017-08-01
Multi-robot coordination algorithms have become a hot research topic in robotics in recent years, with a wide range of applications and good prospects. This paper analyzes and summarizes the current research status of multi-robot coordination algorithms worldwide. Focusing on task allocation and the treatment of uncertainty, it discusses the available multi-robot coordination algorithms and presents the advantages and disadvantages of each commonly used method.
Evolutionary online behaviour learning and adaptation in real robots
Correia, Luís; Christensen, Anders Lyhne
2017-01-01
Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
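A minimal sketch of the general idea follows, assuming a simple (1+1) evolution strategy over controller weights evaluated directly on the robot; the article's algorithms and fitness functions are more elaborate, and evaluate_on_robot is a hypothetical placeholder.

```python
# Sketch of online evolution on real hardware: mutate the current champion
# controller, evaluate the challenger on the robot, keep the better one.
import numpy as np

rng = np.random.default_rng(0)

def evaluate_on_robot(weights):
    # Placeholder: run the controller on the robot for a fixed period and
    # return task performance. A dummy quadratic stands in here.
    return -np.sum((weights - 0.5) ** 2)

n_weights = 20                                  # e.g., a small feed-forward policy
champion = rng.normal(0, 0.1, n_weights)        # or weights pre-evolved in simulation
champion_fit = evaluate_on_robot(champion)

for generation in range(200):
    challenger = champion + rng.normal(0, 0.05, n_weights)   # Gaussian mutation
    fit = evaluate_on_robot(challenger)
    if fit >= champion_fit:                     # keep the better controller online
        champion, champion_fit = challenger, fit
```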
Case studies in configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Lee, T.; Colbaugh, R.; Glass, K.
1989-01-01
A simple approach to configuration control of redundant robots is presented. The redundancy is utilized to control the robot configuration directly in task space, where the task will be performed. A number of task-related kinematic functions are defined and combined with the end-effector coordinates to form a set of configuration variables. An adaptive control scheme is then utilized to ensure that the configuration variables track the desired reference trajectories as closely as possible. Simulation results are presented to illustrate the control scheme. The scheme has also been implemented for direct online control of a PUMA industrial robot, and experimental results are presented. The simulation and experimental results validate the configuration control scheme for performing various realistic tasks.
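To make the construction concrete, the sketch below stacks the gradient of one task-related kinematic function under the end-effector Jacobian and takes a fixed-gain resolved-rate step; the paper's actual scheme is adaptive, so this is only a schematic reading with illustrative shapes.

```python
# Schematic sketch (not Seraji et al.'s adaptive law): redundancy resolution
# via an augmented Jacobian, so end-effector coordinates and an extra
# kinematic function (hypothetically, "elbow height") track references together.
import numpy as np

def configuration_control_step(q, J_ee, grad_phi, x_err, phi_err,
                               gain=1.0, dt=0.01):
    """One resolved-rate step on the augmented system.

    J_ee:     (m x n) end-effector Jacobian at q
    grad_phi: (1 x n) gradient of the extra kinematic function phi(q)
    x_err:    (m,) end-effector tracking error
    phi_err:  scalar tracking error of the configuration variable phi
    """
    J_aug = np.vstack([J_ee, grad_phi])           # (m+1) x n augmented Jacobian
    e_aug = np.concatenate([x_err, [phi_err]])
    dq = np.linalg.pinv(J_aug) @ (gain * e_aug)   # pseudoinverse rate resolution
    return q + dq * dt
```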
Task automation in a successful industrial telerobot
NASA Technical Reports Server (NTRS)
Spelt, Philip F.; Jones, Sammy L.
1994-01-01
In this paper, we discuss cooperative work by Oak Ridge National Laboratory and Remotec, Inc., to automate components of the operator's workload using Remotec's Andros telerobot, thereby providing an enhanced user interface that can be retrofit to existing fielded units as well as incorporated into new production units. Remotec's Andros robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as by the armed forces and numerous law enforcement agencies. The automation of task components, together with a video graphics display of the robot's position in the environment, will enhance all tasks performed by these users and will enable operation in terrain where the robots presently cannot perform for lack of knowledge about, for instance, the robot's degree of tilt. Enhanced performance of a successful industrial mobile robot leads to increased safety and efficiency in hazardous environments. The addition of these capabilities will greatly enhance the utility of the robot, as well as its marketability.
Investigation of human-robot interface performance in household environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Mirza, Fahad; Tuladhar, Yathartha; Alonzo, Rommel; Hingeley, Anthony; Popa, Dan O.
2016-05-01
Today, assistive robots are being introduced into human environments at an increasing rate. Human environments are highly cluttered and dynamic, making it difficult to foresee all necessary capabilities and pre-program all desirable future skills of the robot. One approach to increase robot performance is semi-autonomous operation, allowing users to intervene and guide the robot through difficult tasks. To this end, robots need intuitive Human-Machine Interfaces (HMIs) that support fine motion control without overwhelming the operator. In this study we evaluate the performance of several interfaces that balance autonomy and teleoperation of a mobile manipulator for accomplishing several household tasks. Our proposed HMI framework includes teleoperation devices such as a tablet, as well as physical interfaces in the form of piezoresistive pressure sensor arrays. Mobile manipulation experiments were performed with a sensorized KUKA youBot, an omnidirectional platform with a 5 degrees of freedom (DOF) arm. The pick and place tasks involved navigation and manipulation of objects in household environments. Performance metrics included time for task completion and position accuracy.
Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.
Trianni, Vito; López-Ibáñez, Manuel
2015-01-01
The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
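Since the argument rests on Pareto-based selection, the following small sketch shows the dominance test and non-dominated filtering that underlie most multi-objective evolutionary algorithms; the two objectives shown (a task score plus an ancillary objective) are hypothetical.

```python
# Minimal Pareto machinery: dominance check and non-dominated filtering
# over candidate controllers scored on several objectives (maximisation).
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    a, b = np.asarray(a), np.asarray(b)
    return np.all(a >= b) and np.any(a > b)

def pareto_front(scores):
    """Indices of the non-dominated candidates."""
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s) for j, t in enumerate(scores) if j != i)]

scores = [(0.9, 0.2), (0.5, 0.5), (0.4, 0.9), (0.3, 0.3)]  # (task, ancillary)
print(pareto_front(scores))   # -> [0, 1, 2]; (0.3, 0.3) is dominated
```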
NASA Technical Reports Server (NTRS)
Woodbury, R. F.; Oppenheim, I. J.
1987-01-01
Cognitive robot systems are ones in which sensing and representation occur, from which task plans and tactics are determined. Such a robot system accomplishes a task after being told what to do, but determines for itself how to do it. Cognition is required when the work environment is uncontrolled, when contingencies are prevalent, or when task complexity is large; it is useful in any robotic mission. A number of distinguishing features can be associated with cognitive robotics, and one emphasized here is the role of artificial intelligence in knowledge representation and in planning. While space telerobotics may avoid some of the problems driving cognitive robotics, it shares many of the same demands, and it can be assumed that capabilities developed for cognitive robotics can be employed advantageously for telerobotics in general. The top-level problem is task planning, and it is appropriate to introduce a hierarchical view of control. Presented with certain mission objectives, the system must generate plans (typically) at the strategic, tactical, and reflexive levels. The structure by which knowledge is used to construct and update these plans endows the system with its cognitive attributes and with the ability to deal with contingencies, changes, unknowns, and so on. Discussed here are issues of representation and reasoning that are fundamental to robot manipulation, namely decisions based upon geometry, rather than AI task planning per se.
A Developmental Learning Approach of Mobile Manipulator via Playing
Wu, Ruiqi; Zhou, Changle; Chao, Fei; Zhu, Zuyuan; Lin, Chih-Min; Yang, Longzhi
2017-01-01
Inspired by infant development theories, a robotic developmental model combined with game elements is proposed in this paper. This model does not require the definition of specific developmental goals for the robot; instead, the developmental goals are implied in the goals of a series of game tasks. The games are organized into a sequence of game modes based on the complexity of the game tasks, from simple to complex, and the task complexity is determined by the application of developmental constraints. Given a current mode, the robot switches to a more complicated game mode when it cannot find any new salient stimuli in the current mode. By doing so, the robot gradually achieves its developmental goals by playing different modes of games. In the experiment, the game was instantiated on a mobile robot with the task of picking up toys, and the game was designed with a simple game mode and a complex game mode. A developmental algorithm, “Lift-Constraint, Act and Saturate,” is employed to drive the mobile robot from the simple mode to the complex one. The experimental results show that the mobile manipulator is able to successfully learn the mobile grasping ability after playing the simple and complex games, which is promising for developing robotic abilities to solve complex tasks through play.
The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew
2005-01-01
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…
You think you know where you looked? You better look again.
Võ, Melissa L-H; Aizenman, Avigael M; Wolfe, Jeremy M
2016-10-01
People are surprisingly bad at knowing where they have looked in a scene. We tested participants' ability to recall their own eye movements in 2 experiments using natural or artificial scenes. In each experiment, participants performed a change-detection (Exp. 1) or search (Exp. 2) task. On 25% of trials, after 3 seconds of viewing the scene, participants were asked to indicate where they thought they had just fixated. They responded by making mouse clicks on 12 locations in the unchanged scene. After 135 trials, observers saw 10 new scenes and were asked to put 12 clicks where they thought someone else would have looked. Although observers located their own fixations more successfully than a random model, their performance was no better than when they were guessing someone else's fixations. Performance with artificial scenes was worse, though judging one's own fixations was slightly superior. Even after repeating the fixation-location task on 30 scenes immediately after scene viewing, performance was far from the prediction of an ideal observer. Memory for our own fixation locations appears to add next to nothing beyond what common sense tells us about the likely fixations of others. These results have important implications for socially important visual search tasks. For example, a radiologist might think he has looked at "everything" in an image, but eye-tracking data suggest that this is not so. Such shortcomings might be avoided by providing observers with better insight into where they have looked.
Study of robotics systems applications to the space station program
NASA Technical Reports Server (NTRS)
Fox, J. C.
1983-01-01
Applications of robotics systems to potential uses of the Space Station as an assembly facility, and secondarily as a servicing facility, are considered. A typical robotics system mission is described along with the pertinent application guidelines and Space Station environmental assumptions utilized in developing the robotic task scenarios. A functional description of a supervised dual-robot space structure construction system is given, and four key areas of robotic technology are defined, described, and assessed. Alternate technologies for implementing the more routine space technology support subsystems that will be required to support the Space Station robotic systems in assembly and servicing tasks are briefly discussed. The environmental conditions impacting on the robotic configuration design and operation are reviewed.
Affordance Templates for Shared Robot Control
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kim
2014-01-01
This paper introduces the Affordance Template framework used to supervise task behaviors on the NASA-JSC Valkyrie robot at the 2013 DARPA Robotics Challenge (DRC) Trials. This framework provides graphical interfaces to human supervisors that are adjustable based on the run-time environmental context (e.g., size, location, and shape of objects that the robot must interact with, etc.). Additional improvements, described below, inject degrees of autonomy into instantiations of affordance templates at run-time in order to enable efficient human supervision of the robot for accomplishing tasks.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is all about robots: robots can perform tasks where humans cannot. Robots have wide applications in military and industrial areas for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. Generally, a robot is a mix of electronic, electrical, and mechanical engineering and can carry out tasks automatically on its own or under human supervision. The camera is the eye of the robot; this "robovision" helps in monitoring security systems and can reach places the human eye cannot. This paper presents the development of a live video-streaming robot controlled from a website. We designed the web controls for moving the robot left, right, forward, and backward while streaming video. As we move toward smart environments and the IoT (Internet of Things), the system developed here connects over the internet and can be operated with a smartphone using a web browser. A Raspberry Pi Model B chip acts as the heart of this robot; the necessary motors and the surveillance camera (R Pi 2) are connected to the Raspberry Pi.
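A minimal sketch of this kind of web-controlled robot, assuming a Flask server and the RPi.GPIO library; the pin numbers, endpoint names, and two-motor wiring are illustrative guesses rather than the paper's design, and the video-streaming side is omitted.

```python
# Sketch: map URL endpoints served by Flask on the Raspberry Pi to motor
# driver GPIO pins so a browser can steer the robot.
from flask import Flask
import RPi.GPIO as GPIO

LEFT_FWD, LEFT_BACK, RIGHT_FWD, RIGHT_BACK = 17, 18, 22, 23  # hypothetical pins

GPIO.setmode(GPIO.BCM)
for pin in (LEFT_FWD, LEFT_BACK, RIGHT_FWD, RIGHT_BACK):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

app = Flask(__name__)

def drive(lf, lb, rf, rb):
    # Set both motor channels; 1 energizes a direction, 0 releases it.
    GPIO.output(LEFT_FWD, lf)
    GPIO.output(LEFT_BACK, lb)
    GPIO.output(RIGHT_FWD, rf)
    GPIO.output(RIGHT_BACK, rb)
    return "ok"

@app.route("/front")
def front(): return drive(1, 0, 1, 0)

@app.route("/back")
def back(): return drive(0, 1, 0, 1)

@app.route("/left")
def left(): return drive(0, 0, 1, 0)   # right wheel only: turn left

@app.route("/right")
def right(): return drive(1, 0, 0, 0)  # left wheel only: turn right

@app.route("/stop")
def stop(): return drive(0, 0, 0, 0)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # reachable from any browser on the LAN
```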
Whitehurst, Sabrina V; Lockrow, Ernest G; Lendvay, Thomas S; Propst, Anthony M; Dunlow, Susan G; Rosemeyer, Christopher J; Gobern, Joseph M; White, Lee W; Skinner, Anna; Buller, Jerome L
2015-01-01
To compare the efficacy of simulation-based training between the Mimic dV-Trainer and traditional dry-lab da Vinci robot training. A prospective randomized study analyzing the performance of 20 robotics-naive participants. Participants were enrolled in an online da Vinci Intuitive Surgical didactic training module, followed by training in use of the da Vinci standard surgical robot. Spatial ability tests were performed as well. Participants were randomly assigned to 1 of 2 training conditions: performance of 3 Fundamentals of Laparoscopic Surgery dry-lab tasks using the da Vinci or performance of 4 dV-Trainer tasks. Participants in both groups performed all tasks to empirically establish a proficiency criterion. Participants then performed the transfer task, a cystotomy closure using the da Vinci robot on a live animal (swine) model. The performance of robotic tasks was blindly assessed by a panel of experienced surgeons using objective tracking data and the validated Global Evaluative Assessment of Robotic Surgery (GEARS), a structured assessment tool. No statistically significant difference in surgeon performance was found between the 2 training conditions, dV-Trainer and da Vinci robot. Analysis of a 95% confidence interval for the difference in means (-0.803 to 0.543) indicated that the 2 methods are unlikely to differ to an extent that would be clinically meaningful. Based on the results of this study, a curriculum on the dV-Trainer was shown to be comparable to traditional da Vinci robot training. Therefore, we have identified that training on a virtual reality system may be an alternative to live animal training for future robotic surgeons.
Duarte, Jaime E; Gebrekristos, Berkenesh; Perez, Sergi; Rowe, Justin B; Sharp, Kelli; Reinkensmeyer, David J
2013-06-01
Robotic devices can modulate success rates and required effort levels during motor training, but it is unclear how this affects performance gains and motivation. Here we present results from training unimpaired humans in a virtual golf-putting task, and training spinal cord injured (SCI) rats in a grip strength task, using robotically modulated success rates and effort levels. Robotic assistance in golf practice increased trainees' feelings of competence and, paradoxically, increased their sense of effort, even though it had mixed effects on learning. Reducing effort during a grip strength training task led rats with SCI to practice the task more frequently. However, the more frequent practice of these rats did not cause them to exceed the strength gains achieved by rats that exercised less often at higher required effort levels. These results show that increasing success and decreasing effort with robots increases motivation but has mixed effects on performance gains.
Choi, Bongjae; Jo, Sungho
2013-01-01
This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related desynchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize their favorite objects. Using several evaluation metrics, the performance of five subjects navigating the robot was quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work carries an important implication for future work: a hybridization of simple BCI protocols can provide extended controllability to carry out complicated tasks even with a low-cost system.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video camera into equivalent auditory images, or “soundscapes”. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution, and then a judgment of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representations. Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSDs).
Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas
2015-01-01
Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability.
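The core quantitative move, measuring an image's discriminability as its distance from a learned categorization boundary, can be sketched as follows; the features and labels here are synthetic stand-ins for the natural-scene database used in the study.

```python
# Sketch: a trained linear classifier's signed distance to its decision
# boundary used as a per-image discriminability measure.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (200, 10)),    # e.g., "beach" scene features
               rng.normal(+1, 1, (200, 10))])   # e.g., "non-beach"
y = np.array([0] * 200 + [1] * 200)

clf = LinearSVC(C=1.0).fit(X, y)
# Signed distance to the boundary: larger magnitude = more discriminable
# image, predicted to yield faster and more accurate behavioral responses.
discriminability = clf.decision_function(X) / np.linalg.norm(clf.coef_)
print(discriminability[:5])
```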
ROBOSIGHT: Robotic Vision System For Inspection And Manipulation
NASA Astrophysics Data System (ADS)
Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh
1989-02-01
Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.
The Chronic Detrimental Impact of Interruptions in a Simulated Submarine Track Management Task.
Loft, Shayne; Sadler, Andreas; Braithwaite, Janelle; Huf, Samuel
2015-12-01
The objective of this article is to examine the extent to which interruptions negatively impact situation awareness and long-term performance in a submarine track management task where pre- and postinterruption display scenes remained essentially identical. Interruptions in command and control task environments can degrade performance well beyond the first postinterruption action typically measured for sequential static tasks, because individuals need to recover their situation awareness for multiple unfolding display events. Participants in the current study returned to an unchanged display scene following interruption and therefore could be more immune to such long-term performance deficits. The task required participants to monitor a display to detect contact heading changes and to make enemy engagement decisions. Situation awareness (Situation Present Assessment Method) and subjective workload (NASA-Task Load Index) were measured. The interruption replaced the display for 20 s with a blank screen, during which participants completed a classification task. Situation awareness after returning from interruption was degraded. Participants were slower to make correct engagement decisions and slower and less accurate in detecting heading changes, despite these task decisions being made at least 40 s following the interruption. Interruptions negatively impacted situation awareness and long-term performance because participants needed to redetermine the location and spatial relationship between the displayed contacts when returning from interruption, either because their situation awareness for the preinterruption scene decayed or because they did not encode the preinterruption scene. Interruption in work contexts such as submarines is unavoidable, and further understanding of how operators are affected is required to improve work design and training.
Human-Robot Interaction: Status and Challenges.
Sheridan, Thomas B
2016-06-01
The current status of human-robot interaction (HRI) is reviewed, and key current research challenges for the human factors community are described. Robots have evolved from continuous human-controlled master-slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence for many applications and under human supervisory control. This mini-review describes HRI developments in four application areas and the associated challenges for human factors research. In addition to a plethora of research papers, evidence of success is manifest in live demonstrations of robot capability under various forms of human control. HRI is a rapidly evolving field. Specialized robots under human teleoperation have proven successful in hazardous environments and medical applications, as have specialized telerobots under human supervisory control for space and repetitive industrial tasks. Research in the areas of self-driving cars, intimate collaboration with humans in manipulation tasks, human control of humanoid robots for hazardous environments, and social interaction with robots is at an initial stage. The efficacy of humanoid general-purpose robots has yet to be proven. HRI is now applied in almost all robot tasks, including manufacturing, space, aviation, undersea, surgery, rehabilitation, agriculture, education, package fetch and delivery, policing, and military operations.
Does object view influence the scene consistency effect?
Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko
2015-04-01
Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.
Pini, Giovanni; Brutschy, Arne; Scheidler, Alexander; Dorigo, Marco; Birattari, Mauro
2014-01-01
We study task partitioning in the context of swarm robotics. Task partitioning is the decomposition of a task into subtasks that can be tackled by different workers. We focus on the case in which a task is partitioned into a sequence of subtasks that must be executed in a certain order. This implies that the subtasks must interface with each other, and that the output of a subtask is used as input for the subtask that follows. A distinction can be made between task partitioning with direct transfer and with indirect transfer. We focus our study on the first case: The output of a subtask is directly transferred from an individual working on that subtask to an individual working on the subtask that follows. As a test bed for our study, we use a swarm of robots performing foraging. The robots have to harvest objects from a source, situated in an unknown location, and transport them to a home location. When a robot finds the source, it memorizes its position and uses dead reckoning to return there. Dead reckoning is appealing in robotics, since it is a cheap localization method and it does not require any additional external infrastructure. However, dead reckoning leads to errors that grow in time if not corrected periodically. We compare a foraging strategy that does not make use of task partitioning with one that does. We show that cooperation through task partitioning can be used to limit the effect of dead reckoning errors. This results in improved capability of locating the object source and in increased performance of the swarm. We use the implemented system as a test bed to study benefits and costs of task partitioning with direct transfer. We implement the system with real robots, demonstrating the feasibility of our approach in a foraging scenario.
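The cost that task partitioning mitigates can be seen in a few lines of code: integrating noisy odometry makes position error grow with the length of the uncorrected leg, so shorter legs (as with partitioned subtasks and direct transfer) accumulate less drift. The noise magnitudes below are illustrative assumptions.

```python
# Sketch: dead-reckoning drift from integrating noisy odometry in 2-D.
import numpy as np

rng = np.random.default_rng(1)

def dead_reckon(n_steps, step=0.05, odom_noise=0.01, heading_noise=0.02):
    """Integrate noisy odometry; return final position estimation error."""
    true_pos = est_pos = np.zeros(2)
    true_heading = est_heading = 0.0
    for _ in range(n_steps):
        turn = rng.uniform(-0.1, 0.1)
        true_heading += turn
        est_heading += turn + rng.normal(0, heading_noise)  # gyro/encoder error
        d = step + rng.normal(0, odom_noise)                # wheel-slip error
        true_pos = true_pos + step * np.array([np.cos(true_heading),
                                               np.sin(true_heading)])
        est_pos = est_pos + d * np.array([np.cos(est_heading),
                                          np.sin(est_heading)])
    return np.linalg.norm(est_pos - true_pos)

for n in (50, 200, 800):          # longer uncorrected legs drift more
    print(n, dead_reckon(n))
```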
Al-Halimi, Reem K; Moussa, Medhat
2017-06-01
In this paper, we report on the results of a study that was conducted to examine how users with severe upper-extremity disabilities can control a 6 degrees-of-freedom (DOF) robotic arm to complete complex activities of daily living. The focus of the study is not on assessing the robot arm but on examining the human-robot interaction patterns. Three participants were recruited. Each participant was asked to perform three tasks: eating three pieces of pre-cut bread from a plate, drinking three sips of soup from a bowl, and opening a right-handed door with a lever handle. Each of these tasks was repeated three times. The arm was mounted on the participant's wheelchair, and the participants were free to move the arm as they wished to complete these tasks. Each task consisted of a sequence of modes, where a mode is defined as arm movement in one DOF. Results show that participants used a total of 938 mode movements, with an average of 75.5 (std 10.2) modes for the eating task, 70 (std 8.8) modes for the soup task, and 18.7 (std 4.5) modes for the door-opening task. Tasks were then segmented into smaller subtasks. It was found that there are patterns of usage per participant and per subtask. These patterns can potentially allow a robot to learn from the user's demonstrations which task is being executed and by whom, and to respond accordingly to reduce user effort.
Chowriappa, Ashirwad J; Shi, Yi; Raza, Syed Johar; Ahmed, Kamran; Stegemann, Andrew; Wilding, Gregory; Kaouk, Jihad; Peabody, James O; Menon, Mani; Hassett, James M; Kesavadas, Thenkurussi; Guru, Khurshid A
2013-12-01
A standardized scoring system does not exist in virtual reality-based assessment metrics to describe safe and crucial surgical skills in robot-assisted surgery. This study aims to develop an assessment score along with its construct validation. All subjects performed key tasks on the previously validated Fundamental Skills of Robotic Surgery curriculum, which were recorded, and metrics were stored. After an expert consensus for the purpose of content validation (Delphi), critical safety-determining procedural steps were identified from the Fundamental Skills of Robotic Surgery curriculum, and a hierarchical task decomposition of multiple parameters using a variety of metrics was used to develop the Robotic Skills Assessment Score (RSA-Score). Robotic Skills Assessment mainly focuses on safety in the operative field, critical error, economy, bimanual dexterity, and time. Subsequently, the RSA-Score was evaluated for construct validation and feasibility. Spearman correlation tests performed between tasks using the RSA-Scores indicate no cross-correlation. Wilcoxon rank sum tests were performed between the two groups. The proposed RSA-Score was evaluated on non-robotic surgeons (n = 15) and on expert robotic surgeons (n = 12). The expert group demonstrated significantly better performance on all four tasks in comparison to the novice group. Validation of the RSA-Score in this study was carried out on the Robotic Surgical Simulator. The RSA-Score is a valid scoring system that could be incorporated in any virtual reality-based surgical simulator to achieve standardized assessment of fundamental surgical tenets during robot-assisted surgery.
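Purely as a hedged illustration of what a composite score over the reported dimensions might look like, the sketch below aggregates normalized sub-scores with weights; the weights and normalization are invented for illustration, and the validated RSA-Score formula is the one defined in the paper.

```python
# Hypothetical weighted aggregation over the five reported dimensions.
def rsa_like_score(metrics, weights=None):
    """metrics: dict of sub-scores already normalized to [0, 1]."""
    weights = weights or {"safety": 0.3, "critical_error": 0.25,
                          "economy": 0.15, "bimanual_dexterity": 0.15,
                          "time": 0.15}                     # invented weights
    return sum(weights[k] * metrics[k] for k in weights)

trial = {"safety": 0.9, "critical_error": 0.8, "economy": 0.7,
         "bimanual_dexterity": 0.6, "time": 0.75}
print(round(rsa_like_score(trial), 3))
```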
Reverse control for humanoid robot task recognition.
Hak, Sovannara; Mansard, Nicolas; Stasse, Olivier; Laumond, Jean Paul
2012-12-01
Efficient methods to perform motion recognition have been developed using statistical tools. Those methods rely on primitive learning in a suitable space, for example, the latent space of the joint angle and/or adequate task spaces. Learned primitives are often sequential: A motion is segmented according to the time axis. When working with a humanoid robot, a motion can be decomposed into parallel subtasks. For example, in a waiter scenario, the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition can thus not be limited to one task per consecutive segment of time. The method presented in this paper takes advantage of the knowledge of what tasks the robot is able to do and how the motion is generated from this set of known controllers, to perform a reverse engineering of an observed motion. This analysis is intended to recognize parallel tasks that have been used to generate a motion. The method relies on the task-function formalism and the projection operation into the null space of a task to decouple the controllers. The approach is successfully applied on a real robot to disambiguate motion in different scenarios where two motions look similar but have different purposes.
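The projection operation the method relies on is standard and easy to state in code: the null-space projector of a higher-priority task Jacobian confines a secondary task's motion so the first task is undisturbed. The Jacobians below are random placeholders, not a real humanoid model.

```python
# Sketch of task decoupling via null-space projection (task-function formalism).
import numpy as np

def null_space_projector(J):
    """Projector onto the null space of task Jacobian J."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

rng = np.random.default_rng(0)
J1 = rng.standard_normal((3, 7))   # e.g., "keep the plate horizontal"
J2 = rng.standard_normal((3, 7))   # e.g., "place a plate with the free hand"

P1 = null_space_projector(J1)
# Secondary-task velocity resolved inside task 1's null space.
dq = P1 @ np.linalg.pinv(J2 @ P1) @ np.array([0.01, 0.0, 0.0])

print(np.allclose(J1 @ dq, 0))     # True: the priority task is undisturbed
```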
Robotic Toys as a Catalyst for Mathematical Problem Solving
ERIC Educational Resources Information Center
Highfield, Kate
2010-01-01
Robotic toys present unique opportunities for teachers of young children to integrate mathematics learning with engaging problem-solving tasks. This article describes a series of tasks using Bee-bots and Pro-bots, developed as part a larger project examining young children's use of robotic toys as tools in developing mathematical and metacognitive…
Research on Multirobot Pursuit Task Allocation Algorithm Based on Emotional Cooperation Factor
Fang, Baofu; Chen, Lu; Wang, Hao; Dai, Shuanglu; Zhong, Qiubo
2014-01-01
Multirobot task allocation is a hot issue in the field of robot research. A new emotional model is used with the self-interested robot, which gives a new way to measure self-interested robots' individual cooperative willingness in the problem of multirobot task allocation. An emotional cooperation factor is introduced into the self-interested robot; it is updated based on emotional attenuation and external stimuli. A multirobot pursuit task allocation algorithm based on this emotional cooperation factor is then proposed. Combined with a two-step auction algorithm, it recruits team leaders and team collaborators, sets up pursuit teams, and finally uses certain strategies to complete the pursuit task. In order to verify the effectiveness of this algorithm, comparative experiments have been done against the instantaneous greedy optimal auction algorithm; the results show that the total pursuit time and total team revenue can be optimized by using this algorithm.
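A schematic reading of the update rule described, decay by emotional attenuation plus jumps from external stimuli, might look like the following; the decay rate and stimulus gain are illustrative assumptions, not the paper's exact model.

```python
# Hypothetical update of a self-interested robot's cooperation willingness.
def update_cooperation_factor(e, stimulus, decay=0.95, gain=0.3):
    """One step of the emotional cooperation factor e, kept in [0, 1]."""
    e = decay * e                   # emotional attenuation toward apathy
    e = e + gain * stimulus         # external stimulus (e.g., a task reward)
    return min(max(e, 0.0), 1.0)

e = 0.5
for stim in [0.0, 0.0, 1.0, 0.0]:   # a reward arrives at the third step
    e = update_cooperation_factor(e, stim)
    print(round(e, 3))
```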
An anatomy of industrial robots and their controls
NASA Astrophysics Data System (ADS)
Luh, J. Y. S.
1983-02-01
The modernization of manufacturing facilities by means of automation represents an approach for increasing productivity in industry. The three existing types of automation are related to continuous process controls, the use of transfer conveyor methods, and the employment of programmable automation for the low-volume batch production of discrete parts. Industrial robots, which are defined as computer-controlled mechanical manipulators, belong to the area of programmable automation. Typically, the robots perform tasks of arc welding, paint spraying, or foundry operation. One may assign a robot a variety of job assignments simply by changing the appropriate computer program. The present investigation is concerned with an evaluation of the potential of the robot on the basis of its basic structure and controls. It is found that robots function well in limited areas of industry. If the range of tasks which robots can perform is to be expanded, it is necessary to provide multiple-task sensors, special tooling, or even automatic tooling.
NASA Astrophysics Data System (ADS)
Uehara, Hideyuki; Higa, Hiroki; Soken, Takashi; Namihira, Yoshinori
A mobile feeding assistive robotic arm for people with physical disabilities of the extremities is developed in this paper. The system is composed of a robotic arm, a microcontroller, and its interface. The main unit of the robotic arm can be contained in a laptop computer's briefcase. Its weight is 5 kg, including two 12-V lead-acid rechargeable batteries. The robotic arm can also be mounted on a wheelchair. To verify the performance of the mobile robotic arm system, a tea-drinking task was experimentally performed by two able-bodied subjects as well as three persons suffering from muscular dystrophy. From the experimental results, it was clear that they could smoothly carry out the drinking task and that the robotic arm could firmly grasp a commercially available 500-ml plastic bottle. The eating task was also performed by the two able-bodied subjects. The experimental results showed that they could eat porridge using a spoon without any difficulty.
Rapid natural scene categorization in the near absence of attention
Li, Fei Fei; VanRullen, Rufin; Koch, Christof; Perona, Pietro
2002-01-01
What can we see when we do not pay attention? It is well known that we can be “blind” even to major aspects of natural scenes when we attend elsewhere. The only tasks that do not need attention appear to be carried out in the early stages of the visual system. Contrary to this common belief, we report that subjects can rapidly detect animals or vehicles in briefly presented novel natural scenes while simultaneously performing another attentionally demanding task. By comparison, they are unable to discriminate large T's from L's, or bisected two-color disks from their mirror images under the same conditions. We conclude that some visual tasks associated with “high-level” cortical areas may proceed in the near absence of attention.
Integration of task level planning and diagnosis for an intelligent robot
NASA Technical Reports Server (NTRS)
Chan, Amy W.
1992-01-01
A satellite floating in space is diagnosed while an attached telerobot performs maintenance or replacement tasks. This research included three objectives. The first objective was to generate intelligent path planning for a robot to move around a satellite. The second was to diagnose possible faulty scenarios in the satellite. The third comprised two tasks: combining intelligent path planning with diagnosis, and building an interface between the combined intelligent system and Robosim. The ability of a robot to deal with unexpected scenarios is particularly important in space, since the situation can differ from time to time; the telerobot must be capable of detecting that the situation has changed and, if necessary, altering its behavior based on the new situation. The feature of allowing a human in the loop is also very important in space. In some extreme cases, the situation is beyond the capability of a robot, so our research project allows the human to override the robot's decision.
Task-level control for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid
1994-01-01
Task-level control refers to the integration and coordination of planning, perception, and real-time control to achieve given high-level goals. Autonomous mobile robots need task-level control to effectively achieve complex tasks in uncertain, dynamic environments. This paper describes the Task Control Architecture (TCA), an implemented system that provides commonly needed constructs for task-level control. Facilities provided by TCA include distributed communication, task decomposition and sequencing, resource management, monitoring and exception handling. TCA supports a design methodology in which robot systems are developed incrementally, starting first with deliberative plans that work in nominal situations, and then layering them with reactive behaviors that monitor plan execution and handle exceptions. To further support this approach, design and analysis tools are under development to provide ways of graphically viewing the system and validating its behavior.
Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps
Bowen, Chris; Ye, Gu; Alterovitz, Ron
2015-01-01
In unstructured environments in people’s homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task’s motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method’s effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints. Note to Practitioners Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future.
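The metric's central computation, the Mahalanobis distance of a candidate point's features from the demonstration distribution at the matching time step, can be sketched as follows; feature extraction and the sampling-based planner itself are omitted, and the shapes are illustrative.

```python
# Sketch: time-dependent Mahalanobis cost from a set of demonstrations.
import numpy as np

def mahalanobis_cost(feature, demo_mean, demo_cov, eps=1e-6):
    """Distance of one feature vector from the demonstrations' distribution."""
    d = feature - demo_mean
    cov = demo_cov + eps * np.eye(len(demo_mean))   # regularize for stability
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

# One (mean, covariance) per normalized time step, estimated from
# demonstrations expressed in a task-relevant feature space.
demos = np.random.default_rng(0).normal(size=(12, 50, 4))  # 12 demos, 50 steps
means = demos.mean(axis=0)                                 # (50, 4)
covs = np.array([np.cov(demos[:, t, :].T) for t in range(50)])

candidate = np.zeros(4)                     # a candidate trajectory point
print(mahalanobis_cost(candidate, means[10], covs[10]))
```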
3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands
Mateo, Carlos M.; Gil, Pablo; Torres, Fernando
2016-01-01
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and that the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102
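As a rough illustration of the event-driven supervision loop described in this abstract, the following sketch compares successive depth frames of the grasped surface and emits a "pulse" event when mean displacement crosses a threshold. The frame data and threshold value are assumptions; the authors' actual deformation measure is not specified in the abstract.

```python
# Emit an event to the robot controller when the change between successive
# depth maps of the object's surface exceeds a threshold.
import numpy as np

def deformation_pulse(prev_depth, curr_depth, threshold=0.004):
    """Return True (a 'pulse') when mean surface displacement is too large."""
    displacement = np.abs(curr_depth - prev_depth)
    return float(displacement.mean()) > threshold

rng = np.random.default_rng(0)
frame_a = rng.normal(0.5, 0.001, size=(120, 160))   # object surface at rest
frame_b = frame_a + 0.006                            # surface pressed inward
if deformation_pulse(frame_a, frame_b):
    print("event: surface deformation detected, notify controller")
```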
Smooth leader or sharp follower? Playing the mirror game with a robot.
Kashi, Shir; Levy-Tzedek, Shelly
2018-01-01
The increasing number of opportunities for human-robot interactions in various settings, from industry through home use to rehabilitation, creates a need to understand how to best personalize human-robot interactions to fit both the user and the task at hand. In the current experiment, we explored a human-robot collaborative task of joint movement, in the context of an interactive game. We set out to test people's preferences when interacting with a robotic arm, playing a leader-follower imitation game (the mirror game). Twenty-two young participants played the mirror game with the robotic arm, where one player (person or robot) followed the movements of the other. Each partner (person and robot) was leading part of the time, and following part of the time. When the robotic arm was leading the joint movement, it performed movements that were either sharp or smooth, which participants were later asked to rate. The greatest preference was given to smooth movements. Half of the participants preferred to lead, and half preferred to follow. Importantly, we found that the movements of the robotic arm primed the subsequent movements performed by the participants. The priming effect by the robot on the movements of the human should be considered when designing interactions with robots. Our results demonstrate individual differences in preferences regarding the role of the human and the joint motion path of the robot and the human when performing the mirror game collaborative task, and highlight the importance of personalized human-robot interactions.
Rapid detection of person information in a naturalistic scene.
Fletcher-Watson, Sue; Findlay, John M; Leekam, Susan R; Benson, Valerie
2008-01-01
A preferential-looking paradigm was used to investigate how gaze is distributed in naturalistic scenes. Two scenes were presented side by side: one contained a single person (person-present) and one did not (person-absent). Eye movements were recorded, the principal measures being the time spent looking at each region of the scenes, and the latency and location of the first fixation within each trial. We studied gaze patterns during free viewing, and also in a task requiring gender discrimination of the human figure depicted. Results indicated a strong bias towards looking to the person-present scene. This bias was present on the first fixation after image presentation, confirming previous findings of ultra-rapid processing of complex information. Faces attracted disproportionately many fixations, the preference emerging in the first fixation and becoming stronger in the following ones. These biases were exaggerated in the gender-discrimination task. A tendency to look at the object being fixated by the person in the scene was shown to be strongest at a slightly later point in the gaze sequence. We conclude that human bodies and faces are subject to special perceptual processing when presented as part of a naturalistic scene.
Compliant Task Execution and Learning for Safe Mixed-Initiative Human-Robot Operations
NASA Technical Reports Server (NTRS)
Dong, Shuonan; Conrad, Patrick R.; Shah, Julie A.; Williams, Brian C.; Mittman, David S.; Ingham, Michel D.; Verma, Vandana
2011-01-01
We introduce a novel task execution capability that enhances the ability of in-situ crew members to function independently from Earth by enabling safe and efficient interaction with automated systems. This task execution capability provides the ability to (1) map goal-directed commands from humans into safe, compliant, automated actions, (2) quickly and safely respond to human commands and actions during task execution, and (3) specify complex motions through teaching by demonstration. Our results are applicable to future surface robotic systems, and we have demonstrated these capabilities on JPL's All-Terrain Hex-Limbed Extra-Terrestrial Explorer (ATHLETE) robot.
A Model of Manual Control with Perspective Scene Viewing
NASA Technical Reports Server (NTRS)
Sweet, Barbara Townsend
2013-01-01
A model of manual control during perspective scene viewing is presented, which combines the Crossover Model with a simplified model of perspective-scene viewing and visual-cue selection. The model is developed for a particular example task: an idealized constant-altitude task in which the operator controls longitudinal position in the presence of both longitudinal and pitch disturbances. An experiment is performed to develop and validate the model. The model corresponds closely with the experimental measurements, and identified model parameters are highly consistent with the visual cues available in the perspective scene. The modeling results indicate that operators used one visual cue for position control, and another visual cue for velocity control (lead generation). Additionally, operators responded more quickly to rotation (pitch) than translation (longitudinal).
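For readers unfamiliar with the Crossover Model referenced above, its classical textbook statement (due to McRuer and colleagues) is reproduced below; this is the standard form, not a parameterization identified in this paper.

```latex
% Classical form of the Crossover Model: near the crossover frequency, the
% combined human-operator/controlled-element open-loop dynamics behave as
% an integrator with an effective time delay.
\[
  Y_p(j\omega)\,Y_c(j\omega) \;\approx\; \frac{\omega_c\, e^{-j\omega \tau_e}}{j\omega},
\]
% where $\omega_c$ is the crossover frequency and $\tau_e$ the effective delay.
```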
Promoting Interactions Between Humans and Robots Using Robotic Emotional Behavior.
Ficocelli, Maurizio; Terao, Junichi; Nejat, Goldie
2016-12-01
The objective of a socially assistive robot is to create a close and effective interaction with a human user for the purpose of giving assistance. In particular, the social interaction, guidance, and support that a socially assistive robot can provide a person can be very beneficial to patient-centered care. However, there are a number of research issues that need to be addressed in order to design such robots. This paper focuses on developing effective emotion-based assistive behavior for a socially assistive robot intended for natural human-robot interaction (HRI) scenarios with explicit social and assistive task functionalities. In particular, in this paper, a unique emotional behavior module is presented and implemented in a learning-based control architecture for assistive HRI. The module is utilized to determine the appropriate emotions of the robot to display, as motivated by the well-being of the person, during assistive task-driven interactions in order to elicit suitable actions from users to accomplish a given person-centered assistive task. A novel online updating technique is used in order to allow the emotional model to adapt to new people and scenarios. Experiments presented show the effectiveness of utilizing robotic emotional assistive behavior during HRI scenarios.
A Hexapod Robot to Demonstrate Mesh Walking in a Microgravity Environment
NASA Technical Reports Server (NTRS)
Foor, David C.
2005-01-01
The JPL Micro-Robot Explorer (MRE) Spiderbot is a robot that takes advantage of its small size to perform precision tasks suitable for space applications. The Spiderbot is a legged robot that can traverse harsh terrain otherwise inaccessible to wheeled robots. A team of Spiderbots can network and can exhibit collaborative efforts to successfully complete a set of tasks. The Spiderbot is designed and developed to demonstrate hexapods that can walk on flat surfaces, crawl on meshes, and assemble simple structures. The robot has six legs consisting of two spring-compliant joints and a gripping actuator. A hard-coded set of gaits allows the robot to move smoothly in a zero-gravity environment along the mesh. The primary objective of this project is to create a Spiderbot that traverses a flexible, deployable mesh, for use in space repair. Verification of this task will take place aboard a zero-gravity test flight. The secondary objective of this project is to adapt feedback from the joints to allow the robot to test each arm for a successful grip of the mesh. The end result of this research lends itself to a fault-tolerant robot suitable for a wide variety of space applications.
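To make the "hard-coded set of gaits" concrete, here is an illustrative alternating-tripod gait with the grip check suggested by the project's secondary objective. The leg indexing, step primitives, and grip_ok feedback call are hypothetical, not the Spiderbot's actual software.

```python
# An illustrative hard-coded alternating-tripod gait for mesh crawling,
# with a joint-feedback grip check before the body is shifted.

TRIPOD_A = (0, 3, 4)   # legs moved together in phase A
TRIPOD_B = (1, 2, 5)   # legs moved together in phase B

def grip_ok(leg):
    """Stand-in for the joint-feedback test of a successful mesh grip."""
    return True

def step_tripod(legs):
    for leg in legs:
        print(f"leg {leg}: release, swing forward, grip mesh")
        if not grip_ok(leg):
            print(f"leg {leg}: grip failed, retry before shifting body")
            return False
    print("shift body forward over anchored legs")
    return True

# One gait cycle: move tripod A while B anchors, then the reverse.
for tripod in (TRIPOD_A, TRIPOD_B):
    step_tripod(tripod)
```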
Competencies Identification for Robotics Training.
ERIC Educational Resources Information Center
Tang, Le D.
A study focused on the task of identifying competencies for robotics training. The level of robotics training was limited to that of robot technicians. Study objectives were to obtain a list of occupational competencies; to rank their order of importance; and to compare opinions from robot manufacturers, robot users, and robotics educators…
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A
2013-07-01
The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvement, and performance evaluation. This paper reviews studies conducted as clinical trials that used activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to outline the scope of current research and provide suggestions for future studies. We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System - PITTCat. Twenty relevant studies were identified. Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools when prescribing and assessing assistive robotic manipulators.
TROTER's (Tiny Robotic Operation Team Experiment): A new concept of space robots
NASA Technical Reports Server (NTRS)
Su, Renjeng
1990-01-01
In view of the future need for automation and robotics in space and the existing approaches to the problem, we proposed a new concept of robots for space construction. The new concept is based on the basic idea of decentralization. Decentralization occurs, on the one hand, in using teams of many cooperative robots for construction tasks. Redundancy and modular design are explored to achieve high reliability for team robotic operations, greatly reducing the reliability requirement on individual robots. Another area of decentralization is manifested by the proposed control hierarchy, which eventually includes humans in the loop. The control strategy is constrained by various time delays and calls for different levels of abstraction of the task dynamics. Such technology is needed for remote control of robots in an uncertain environment, and it relaxes concerns about human safety around robots. This presentation also introduces the required technologies behind the new robotic concept.
Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)
2014-01-01
A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring a characteristic of the joints, and for transmitting the characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.
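A minimal sketch of the claimed control flow follows: sensor signals are classified into discrete states, and the state selects among alternative work tasks. The nearest-centroid classifier, the two states, and the centroid values are stand-ins; the record does not specify a particular classifier.

```python
# Classify joint sensor signals into discrete system states and choose
# between alternative work tasks accordingly.
import numpy as np

CENTROIDS = {                       # learned class centroids in sensor space
    "object_grasped": np.array([0.80, 0.70, 0.75]),
    "hand_empty":     np.array([0.10, 0.05, 0.10]),
}

def classify(sensor_signals):
    """State classification module: nearest centroid wins."""
    return min(CENTROIDS, key=lambda s: np.linalg.norm(sensor_signals - CENTROIDS[s]))

def control_step(sensor_signals):
    state = classify(sensor_signals)          # monitor present system state
    if state == "object_grasped":             # select among alternative tasks
        return "execute: place object"
    return "execute: reach and grasp"

print(control_step(np.array([0.78, 0.72, 0.70])))  # -> execute: place object
print(control_step(np.array([0.12, 0.02, 0.08])))  # -> execute: reach and grasp
```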
NASA Astrophysics Data System (ADS)
Ayres, R.; Miller, S.
1982-06-01
The characteristics, applications, and operational capabilities of currently available robots are examined. Designed to function at tasks of a repetitive, hazardous, or uncreative nature, robot appendages are controlled by microprocessors which permit some simple on-the-job decision-making, and have served for sample gathering on the Mars Viking lander. Critical developmental areas concern active sensors at the robot grappler-object interface, where sufficient data must be gathered for the central processor to which the robot is attached to determine the state of completion and suitability of the workpiece. Although present robots must be programmed through every step of a particular industrial process, thus limiting each robot to specialized tasks, the potential for closed cells of batch-processing robot-run units is noted to be close to realization. Finally, consideration is given to methods for retraining the human workforce that robots replace.
Behavior-Based Multi-Robot Collaboration for Autonomous Construction Tasks
NASA Technical Reports Server (NTRS)
Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghazarian, Hrand; Robinson, Matthew
2005-01-01
We present a heterogeneous multi-robot system for autonomous construction of a structure through assembly of long components. Placement of a component within an existing structure in a realistic environment is demonstrated on a two-robot team. The task requires component acquisition, cooperative transport, and cooperative precision manipulation. For adaptability, the system is designed as a behavior-based architecture. For applicability to space-related construction efforts, computation, power, communication, and sensing are minimized, though the techniques developed are also applicable to terrestrial construction tasks.
Enhanced control & sensing for the REMOTEC ANDROS Mk VI robot. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1997-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
3D vision upgrade kit for TALON robot
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad
2010-04-01
In this paper, we report on the development of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV Robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path planning algorithms to ensure obstacles are avoided and to keep the operators free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed, which helps the users pinpoint source information or supports the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also investigated.
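The abstract does not name the path planning algorithm; as one concrete possibility, the sketch below runs A* over a 4-connected occupancy grid to produce a collision-free path to a user-selected target. The grid and unit step costs are toy data.

```python
# Compact A* over a 4-connected occupancy grid: returns a collision-free
# path of cells from start to a user-selected goal, or None if unreachable.
import heapq

def astar(grid, start, goal):
    """grid[r][c] == 1 marks an obstacle; cells are (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                h = abs(goal[0] - nr) + abs(goal[1] - nc)   # Manhattan heuristic
                heapq.heappush(frontier, (len(path) + h, (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the obstacle row
```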
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.
Development of Live-working Robot for Power Transmission Lines
NASA Astrophysics Data System (ADS)
Yan, Yu; Liu, Xiaqing; Ren, Chengxian; Li, Jinliang; Li, Hui
2017-07-01
Dream-I, the first reconfigurable live-working robot for power transmission lines successfully developed in China, has the functions of autonomous walking on lines and accurately positioning. This paper firstly described operation task and object of the robot; then designed a general platform, an insulator replacement end and a drainage plate bolt fastening end of the robot, presented a control system of the robot, and performed simulation analysis on operation plan of the robot; and finally completed electrical field withstand voltage tests in a high voltage hall as well as online test and trial on actual lines. Experimental results show that by replacing ends of manipulators, the robot can fulfill operation tasks of live replacement of suspension insulators and live drainage plate bolt fastening.
Wiener, Scott; Haddock, Peter; Shichman, Steven; Dorin, Ryan
2015-11-01
To define the time needed by urology residents to attain proficiency in computer-aided robotic surgery to aid in the refinement of a robotic surgery simulation curriculum. We undertook a retrospective review of robotic skills training data acquired during January 2012 to December 2014 from junior (postgraduate year [PGY] 2-3) and senior (PGY4-5) urology residents using the da Vinci Skills Simulator. We determined the number of training sessions attended and the level of proficiency achieved by junior and senior residents in attempting 11 basic or 6 advanced tasks, respectively. Junior residents successfully completed 9.9 ± 1.8 tasks, with 62.5% completing all 11 basic tasks. The maximal cumulative success rate of junior residents completing basic tasks was 89.8%, which was achieved within 7.0 ± 1.5 hours of training. Of senior residents, 75% successfully completed all six advanced tasks. Senior residents attended 6.3 ± 3.5 hours of training during which 5.1 ± 1.6 tasks were completed. The maximal cumulative success rate of senior residents completing advanced tasks was 85.4%. When designing and implementing an effective robotic surgical training curriculum, an allocation of 10 hours of training may be optimal to allow junior and senior residents to achieve an acceptable level of surgical proficiency in basic and advanced robotic surgical skills, respectively. These data help guide the design and scheduling of a residents training curriculum within the time constraints of a resident's workload.
Hall, Amanda K; Backonja, Uba; Painter, Ian; Cakmak, Maya; Sung, Minjung; Lau, Timothy; Thompson, Hilaire J; Demiris, George
2017-11-29
As the number of older adults living with chronic conditions continues to rise, they will require assistance with activities of daily living (ADL) and healthcare tasks to continue living independently in their homes. One proposed solution to assist with the care needs of an aging population and a shrinking healthcare workforce is robotic technology. Using a cross-sectional survey design, we purposively sampled adults (≥18 years old) to assess generational acceptance and perceived usefulness of robots to assist with ADLs, healthcare tasks, and evaluate acceptance of robotic healthcare assistance across different settings. A total of 499 adults (age range [years] 18-98, Mean = 38.7, SD = 22.7) responded to the survey. Significant differences were found among young, middle-aged, and older adults on perceived usefulness of robots for cleaning, escorting them around town, acting as companionship, delivering meals, assessing sadness and calling for help, providing medical advice, taking vital sign assessments, and assisting with personal care (p < 0.05). The majority of younger adults reported that they would like a robot to provide healthcare assistance in the hospital, compared to middle-aged and older adults (p < 0.001). Results of this study can guide the design of robots to assist adults of all ages with useful tasks.
Application requirements for Robotic Nursing Assistants in hospital environments
NASA Astrophysics Data System (ADS)
Cremer, Sven; Doelling, Kris; Lundberg, Cody L.; McNair, Mike; Shin, Jeongsik; Popa, Dan
2016-05-01
In this paper we report on analysis toward identifying design requirements for an Adaptive Robotic Nursing Assistant (ARNA). Specifically, the paper focuses on application requirements for ARNA, envisioned as a mobile assistive robot that can navigate hospital environments to perform chores in roles such as patient sitter and patient walker. The role of a sitter is primarily related to patient observation from a distance, and fetching objects at the patient's request, while a walker provides physical assistance for ambulation and rehabilitation. The robot will be expected to not only understand nurse and patient intent but also close the decision loop by automating several routine tasks. As a result, the robot will be equipped with sensors such as distributed pressure sensitive skins, 3D range sensors, and so on. Modular sensor and actuator hardware configured in the form of several multi-degree-of-freedom manipulators, and a mobile base are expected to be deployed in reconfigurable platforms for physical assistance tasks. Furthermore, adaptive human-machine interfaces are expected to play a key role, as they directly impact the ability of robots to assist nurses in a dynamic and unstructured environment. This paper discusses required tasks for the ARNA robot, as well as sensors and software infrastructure to carry out those tasks in the aspects of technical resource availability, gaps, and needed experimental studies.
Bilateral assessment of functional tasks for robot-assisted therapy applications
Wang, Sarah; Bai, Ping; Strachota, Elaine; Tchekanov, Guennady; Melbye, Jeff; McGuire, John
2011-01-01
This article presents a novel evaluation system along with methods to evaluate bilateral coordination of arm function on activities of daily living tasks before and after robot-assisted therapy. An affordable bilateral assessment system (BiAS) consisting of two mini-passive measuring units modeled as three degree of freedom robots is described. The process for evaluating functional tasks using the BiAS is presented and we demonstrate its ability to measure wrist kinematic trajectories. Three metrics, phase difference, movement overlap, and task completion time, are used to evaluate the BiAS system on a bilateral symmetric (bi-drink) and a bilateral asymmetric (bi-pour) functional task. Wrist position and velocity trajectories are evaluated using these metrics to provide insight into temporal and spatial bilateral deficits after stroke. The BiAS system quantified movements of the wrists during functional tasks and detected differences in impaired and unimpaired arm movements. Case studies showed that stroke patients compared to healthy subjects move slower and are less likely to use their arm simultaneously even when the functional task requires simultaneous movement. After robot-assisted therapy, interlimb coordination spatial deficits moved toward normal coordination on functional tasks. PMID:21881901
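The three reported metrics can be computed from synchronized wrist trajectories along the following lines. These are plausible stand-in formulations (cross-correlation lag for phase difference, joint moving-time fraction for movement overlap); the abstract does not give the exact definitions used with the BiAS.

```python
# Compute phase difference, movement overlap, and task completion time
# from two wrist velocity traces sampled at a fixed rate.
import numpy as np

def bilateral_metrics(left_vel, right_vel, dt=0.01, moving=0.05):
    t = np.arange(len(left_vel)) * dt
    completion_time = t[-1]                                  # task completion time
    lag = np.argmax(np.correlate(left_vel, right_vel, "full")) - (len(left_vel) - 1)
    phase_difference = lag * dt                              # signed cross-correlation lag
    both_moving = (np.abs(left_vel) > moving) & (np.abs(right_vel) > moving)
    movement_overlap = both_moving.mean()                    # fraction moving together
    return phase_difference, movement_overlap, completion_time

t = np.arange(200) * 0.01
left = np.sin(2 * np.pi * t)                 # one wrist
right = np.sin(2 * np.pi * (t - 0.1))        # the other, delayed by 0.1 s
print(bilateral_metrics(left, right))
```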
The effect of non-visual working memory load on top-down modulation of visual processing
Rissman, Jesse; Gazzaley, Adam; D'Esposito, Mark
2009-01-01
While a core function of the working memory (WM) system is the active maintenance of behaviorally relevant sensory representations, it is also critical that distracting stimuli are appropriately ignored. We used functional magnetic resonance imaging to examine the role of domain-general WM resources in the top-down attentional modulation of task-relevant and irrelevant visual representations. In our dual-task paradigm, each trial began with the auditory presentation of six random (high load) or sequentially-ordered (low load) digits. Next, two relevant visual stimuli (e.g., faces), presented amongst two temporally interspersed visual distractors (e.g., scenes), were to be encoded and maintained across a 7-sec delay interval, after which memory for the relevant images and digits was probed. When taxed by high load digit maintenance, participants exhibited impaired performance on the visual WM task and a selective failure to attenuate the neural processing of task-irrelevant scene stimuli. The over-processing of distractor scenes under high load was indexed by elevated encoding activity in a scene-selective region-of-interest relative to low load and passive viewing control conditions, as well as by improved long-term recognition memory for these items. In contrast, the load manipulation did not affect participants' ability to upregulate activity in this region when scenes were task-relevant. These results highlight the critical role of domain-general WM resources in the goal-directed regulation of distractor processing. Moreover, the consequences of increased WM load in young adults closely resemble the effects of cognitive aging on distractor filtering [Gazzaley et al., (2005) Nature Neuroscience 8, 1298-1300], suggesting the possibility of a common underlying mechanism. PMID:19397858
Age Differences in Selective Memory of Goal-Relevant Stimuli Under Threat.
Durbin, Kelly A; Clewett, David; Huang, Ringo; Mather, Mara
2018-02-01
When faced with threat, people often selectively focus on and remember the most pertinent information while simultaneously ignoring any irrelevant information. Filtering distractors under arousal requires inhibitory mechanisms, which take time to recruit and often decline in older age. Despite the adaptive nature of this ability, relatively little research has examined how both threat and time spent preparing these inhibitory mechanisms affect selective memory for goal-relevant information across the life span. In this study, 32 younger and 31 older adults were asked to encode task-relevant scenes, while ignoring transparent task-irrelevant objects superimposed onto them. Threat levels were increased on some trials by threatening participants with monetary deductions if they later forgot scenes that followed threat cues. We also varied the time between threat induction and a to-be-encoded scene (i.e., 2 s, 4 s, 6 s) to determine whether both threat and timing effects on memory selectivity differ by age. We found that age differences in memory selectivity only emerged after participants spent a long time (i.e., 6 s) preparing for selective encoding. Critically, this time-dependent age difference occurred under threatening, but not neutral, conditions. Under threat, longer preparation time led to enhanced memory for task-relevant scenes and greater memory suppression of task-irrelevant objects in younger adults. In contrast, increased preparation time after threat induction had no effect on older adults' scene memory and actually worsened memory suppression of task-irrelevant objects. These findings suggest that increased time to prepare top-down encoding processes benefits younger, but not older, adults' selective memory for goal-relevant information under threat. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Mavroidis, Constantinos; Pfeiffer, Charles; Paljic, Alex; Celestino, James; Lennon, Jamie; Bar-Cohen, Yoseph
2000-01-01
For many years, the robotic community sought to develop robots that can eventually operate autonomously and eliminate the need for human operators. However, there is an increasing realization that there are some tasks that humans can perform significantly better but that, due to associated hazards, distance, physical limitations, and other causes, only robots can be employed to perform. Remotely performing these types of tasks requires operating robots as human surrogates. While current "hand master" haptic systems are able to reproduce the feeling of rigid objects, they present great difficulties in emulating the feeling of remote/virtual stiffness. In addition, they tend to be heavy and cumbersome, and usually they allow only a limited operator workspace. In this paper a novel haptic interface is presented to enable human operators to "feel" and intuitively mirror the stiffness/forces at remote/virtual sites, enabling control of robots as human surrogates. This haptic interface is intended to provide human operators an intuitive feeling of the stiffness and forces at remote or virtual sites in support of space robots performing dexterous manipulation tasks (such as operating a wrench or a drill). Remote applications refer to the control of actual robots, whereas virtual applications refer to simulated operations. The developed haptic interface will be applicable to IVA-operated robotic EVA tasks to enhance human performance, extend crew capability and assure crew safety. The electrically controlled stiffness is obtained using constrained ElectroRheological Fluids (ERF), which change their viscosity under electrical stimulation. Forces applied at the robot end-effector due to a compliant environment will be reflected to the user using this ERF device, in which a change in the system viscosity occurs in proportion to the force to be transmitted. In this paper, we present the results of our modeling, simulation, and initial testing of such an electrorheological fluid (ERF) based haptic device.
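The stated principle, viscosity (and hence the reflected resistance felt by the operator) varied in proportion to the force to be transmitted, reduces to a simple mapping. The sketch below is a deliberately simplified illustration; the gain, saturation limit, and linearity are assumptions, not measured ERF characteristics.

```python
# Map the force measured at the remote end-effector to an ERF excitation
# command; higher excitation means higher viscosity and stiffer feel.

def erf_field_command(remote_force_n, gain_v_per_n=250.0, max_field_v=2000.0):
    """Reflected force (N) to ERF excitation voltage, with saturation."""
    return min(abs(remote_force_n) * gain_v_per_n, max_field_v)

for force in (0.5, 4.0, 12.0):
    print(f"remote force {force:5.1f} N -> ERF command {erf_field_command(force):7.1f} V")
```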
NASA Technical Reports Server (NTRS)
1999-01-01
This video gives a brief history of the Jet Propulsion Laboratory, current missions and what the future may hold. Scenes include various planets in the solar system, robotic exploration of space, discussions on the Hubble Space Telescope, the source of life, and solar winds. This video was narrated by Jodie Foster. Animations include: close-up image of the Moon; close-up images of the surface of Mars; robotic exploration of Mars; the first mapping assignment of Mars; animated views of Jupiter; animated views of Saturn; and views of a Giant Storm on Neptune called the Great Dark Spot.
NASA Technical Reports Server (NTRS)
Simmons, Reid; Apfelbaum, David
2005-01-01
Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
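TDL itself extends C++ syntax, and rather than guess its exact keywords, the sketch below renders the listed constructs (task decomposition, spawning, synchronization, monitoring, and exception handling) in Python. The names and thread-pool mechanics are illustrative only, not TDL or the TCM library.

```python
# A Python rendering of TDL-style constructs: a task decomposed into
# concurrently spawned subtasks, a synchronization point, and a
# task-level exception handler.
from concurrent.futures import ThreadPoolExecutor, wait

def monitor_battery():
    print("monitor: battery nominal")

def drive_segment(segment):
    print(f"subtask: driving segment {segment}")
    return segment

def survey_site():
    with ThreadPoolExecutor() as pool:
        # task decomposition: spawn subtasks that may run concurrently
        futures = [pool.submit(drive_segment, s) for s in ("A", "B")]
        pool.submit(monitor_battery)        # execution monitoring in parallel
        done, _ = wait(futures)             # synchronization: join subtasks
        try:
            results = [f.result() for f in done]   # exception handling:
        except RuntimeError as fault:              # a failed subtask surfaces here
            print("handling task-level exception:", fault)
            results = []
        return results

print("completed:", survey_site())
```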
A mobile robot system for ground servicing operations on the space shuttle
NASA Astrophysics Data System (ADS)
Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.
1992-11-01
A mobile system for space shuttle servicing, the Tessellator, has been configured, designed, and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, with a look at mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, and high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
Task allocation among multiple intelligent robots
NASA Technical Reports Server (NTRS)
Gasser, L.; Bekey, G.
1987-01-01
Researchers describe the design of a decentralized mechanism for allocating assembly tasks in a multiple robot assembly workstation. Currently, the approach focuses on distributed allocation to explore its feasibility and its potential for adaptability to changing circumstances, rather than for optimizing throughput. Individual greedy robots make their own local allocation decisions using both dynamic allocation policies which propagate through a network of allocation goals, and local static and dynamic constraints describing which robots are eligible for which assembly tasks. Global coherence is achieved by proper weighting of allocation pressures propagating through the assembly plan. Deadlock avoidance and synchronization are achieved using periodic reassessments of local allocation decisions, ageing of allocation goals, and short-term allocation locks on goals.
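A toy rendering of this scheme: each greedy robot picks the eligible task with the highest locally perceived allocation pressure and claims a short-term lock; decisions would then be periodically reassessed. The pressures, eligibility sets, and ageing rule are invented for illustration.

```python
# Decentralized greedy allocation: robots choose locally among eligible,
# unclaimed tasks by allocation pressure; locks prevent double-claiming.

tasks = {"insert_bolt": 0.9, "fetch_panel": 0.6, "align_frame": 0.8}
eligibility = {                      # static constraints: who can do what
    "robot1": {"insert_bolt", "align_frame"},
    "robot2": {"fetch_panel", "align_frame"},
}

def allocate(tasks, eligibility):
    claimed, assignment = set(), {}
    for robot, able in eligibility.items():      # each robot decides locally
        options = {t: p for t, p in tasks.items() if t in able and t not in claimed}
        if options:
            choice = max(options, key=options.get)   # greedy local decision
            assignment[robot] = choice
            claimed.add(choice)                      # short-term allocation lock
    return assignment

print(allocate(tasks, eligibility))
# Periodic reassessment would rerun allocate() with aged pressures to
# avoid deadlock when circumstances change.
```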
Surface Support Systems for Co-Operative and Integrated Human/Robotic Lunar Exploration
NASA Technical Reports Server (NTRS)
Mueller, Robert P.
2006-01-01
Human and robotic partnerships to realize space goals can enhance space missions and provide increases in human productivity while decreasing the hazards that the humans are exposed to. For lunar exploration, the harsh environment of the moon and the repetitive nature of the tasks involved with lunar outpost construction, maintenance and operation as well as production tasks associated with in-situ resource utilization, make it highly desirable to use robotic systems in co-operation with human activity. A human lunar outpost is functionally examined and concepts for selected human/robotic tasks are discussed in the context of a lunar outpost which will enable the presence of humans on the moon for extended periods of time.
[Visual representation of natural scenes in flicker changes].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2010-08-01
Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories is derived from the difference of the experimental tasks that they are based on. In order to verify this hypothesis, we examined the properties of visual representation by using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with the intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
Validated robotic laparoscopic surgical training in a virtual-reality environment.
Katsavelis, Dimitrios; Siu, Ka-Chun; Brown-Clerk, Bernadette; Lee, Irene H; Lee, Yong Kwon; Oleynikov, Dmitry; Stergiou, Nick
2009-01-01
A robotic virtual-reality (VR) simulator has been developed to improve robot-assisted training for laparoscopic surgery and to enhance surgical performance in laparoscopic skills. The simulated VR training environment provides an effective approach to evaluate and improve surgical performance. This study presents our findings of the VR training environment for robotic laparoscopy. Eight volunteers performed two inanimate tasks in both the VR and the actual training environment. The tasks were bimanual carrying (BC) and needle passing (NP). For the BC task, the volunteers simultaneously transferred two plastic pieces in opposite directions five times consecutively. The same volunteers passed a surgical needle through six pairs of holes in the NP task. Both tasks require significant bimanual coordination that mimics actual laparoscopic skills. Data analysis included time to task completion, speed and distance traveled of the instrument tip, as well as range of motion of the subject's wrist and elbow of the right arm. Electromyography of the right wrist flexor and extensor were also analyzed. Paired t-tests and Pearson's r were used to explore the differences and correlations between the two environments. There were no significant differences between the actual and the simulated VR environment with respect to the BC task, while there were significant differences in almost all dependent parameters for the NP task. Moderate to high correlations for most dependent parameters were revealed for both tasks. Our data shows that the VR environment adequately simulated the BC task. The significant differences found for the NP task may be attributed to an oversimplification in the VR environment. However, they do point to the need for improvements in the complexity of our VR simulation. Further research work is needed to develop effective and reliable VR environments for robotic laparoscopic training.
2010-08-01
[Fragmentary record: only questionnaire residue survives. The recoverable content references the NASA Task Load Index (NASA-TLX), a subjective workload assessment tool, and free-text operator responses from an evaluation of robotic systems, including a da Vinci system used under surgical conditions and a robot used as a reconnaissance tool.]
Robot-assisted laparoscopic ultrasonography for hepatic surgery.
Schneider, Caitlin M; Peng, Peter D; Taylor, Russell H; Dachs, Gregory W; Hasser, Christopher J; DiMaio, Simon P; Choti, Michael A
2012-05-01
This study describes and evaluates a novel, robot-assisted laparoscopic ultrasonographic device for hepatic surgery. Laparoscopic liver surgery is being performed with increasing frequency. One major drawback of this approach is the limited capability of intraoperative ultrasonography (IOUS) using standard laparoscopic devices. Robotic surgery systems offer the opportunity to develop new tools to improve techniques in minimally invasive surgery. This study evaluates a new integrated ultrasonography (US) device with the da Vinci Surgical System for laparoscopic visualization, comparing it with conventional handheld laparoscopic IOUS for performing key tasks in hepatic surgery. A prototype laparoscopic IOUS instrument was developed for the da Vinci Surgical System and compared with a conventional laparoscopic US device in simulation tasks: (1) In vivo porcine hepatic visualization and probe manipulation, (2) lesion detection accuracy, and (3) biopsy precision. Usability was queried by poststudy questionnaire. The robotic US proved better than conventional laparoscopic US in liver surface exploration (85% success vs 73%; P = .030) and tool manipulation (79% vs 57%; P = .028), whereas no difference was detected in lesion identification (63 vs 58; P = .41) and needle biopsy tasks (57 vs 48; P = .11). Subjects found the robotic US to facilitate better probe positioning (80%), decrease fatigue (90%), and be more useful overall (90%) on the post-task questionnaire. We found this robot-assisted IOUS system to be practical and useful in the performance of important tasks required for hepatic surgery, outperforming free-hand laparoscopic IOUS for certain tasks, and was more subjectively usable to the surgeon. Systems such as this may expand the use of robotic surgery for complex operative procedures requiring IOUS. Copyright © 2012 Mosby, Inc. All rights reserved.
Robot design for a vacuum environment
NASA Technical Reports Server (NTRS)
Belinski, S.; Trento, W.; Imani-Shikhabadi, R.; Hackwood, S.
1987-01-01
The cleanliness requirements for many processing and manufacturing tasks are becoming ever stricter, resulting in a greater interest in the vacuum environment. Researchers discuss the importance of this special environment and the development of robots which are physically and functionally suited to vacuum processing tasks. Work is in progress at the Center for Robotic Systems in Microelectronics (CRSM) to provide a robot for the manufacture of a revolutionary new gyroscope in high vacuum. The need for vacuum in this and other processes is discussed, as well as the requirements for a vacuum-compatible robot. Finally, researchers present details of work done at the CRSM to modify an existing clean-room compatible robot for use at high vacuum.
Quantum robots plus environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-07-23
A quantum robot is a mobile quantum system, including an on-board quantum computer and needed ancillary systems, that interacts with an environment of quantum systems. Quantum robots carry out tasks whose goals include making specified changes in the state of the environment or carrying out measurements on the environment. The environments considered so far, oracles, data bases, and quantum registers, are seen to be special cases of environments considered here. It is also seen that a quantum robot should include a quantum computer and cannot be simply a multistate head. A model of quantum robots and their interactions is discussed in which each task, as a sequence of alternating computation and action phases, is described by a unitary single-time-step operator T ≈ T_a + T_c (discrete space and time are assumed). The overall system dynamics is described as a sum over paths of completed computation (T_c) and action (T_a) phases. A simple example of a task, measuring the distance between the quantum robot and a particle on a 1D lattice with quantum phase path dispersion present, is analyzed. A decision diagram for the task is presented and analyzed.
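The "sum over paths" description has a simple algebraic reading: expanding n steps of the single-time-step operator distributes over all alternating sequences of action and computation phases.

```latex
% Expanding n applications of the single-time-step operator:
\[
  T^{\,n} \;=\; (T_a + T_c)^{\,n}
        \;=\; \sum_{s \in \{a,c\}^n} T_{s_n} \cdots T_{s_2} T_{s_1},
\]
% i.e., each term is one "path" of completed action/computation phases.
```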
Multi-Robot Coalitions Formation with Deadlines: Complexity Analysis and Solutions
Guerrero, Jose; Oliver, Gabriel; Valero, Oscar
2017-01-01
Multi-robot task allocation is one of the main problems to address in order to design a multi-robot system, especially when robots form coalitions that must carry out tasks before a deadline. Many factors affect the performance of these systems; among them, this paper focuses on the physical interference effect, produced when two or more robots want to access the same point simultaneously. To our best knowledge, this paper presents the first formal description of multi-robot task allocation that includes a model of interference. Thanks to this description, the complexity of the allocation problem is analyzed. Moreover, the main contribution of this paper is to provide the conditions under which the optimal solution of the aforementioned allocation problem can be obtained by solving an integer linear program. The optimal results are compared to previous allocation algorithms already proposed by the first two authors of this paper and to a new method proposed in this paper. The results obtained show that the new task allocation algorithms reach more than 80% of the median of the optimal solution, outperforming previous auction algorithms with a huge reduction of the execution time. PMID:28118384
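The paper's headline result concerns an integer linear programming formulation; the brute-force sketch below illustrates the same decision problem at toy scale: choose a coalition per task so tasks finish before their deadlines despite interference, maximizing completed utility. The interference model (per-member rate degradation) and all numbers are stand-ins for the paper's formal model.

```python
# Enumerate all robot-to-task assignments; a task is completed only if its
# coalition beats the deadline under an interference-degraded work rate.
from itertools import product

robots = ["r1", "r2", "r3"]
tasks = {"t1": {"work": 9.0, "deadline": 6.0, "utility": 5.0},
         "t2": {"work": 4.0, "deadline": 5.0, "utility": 3.0}}

def finish_time(work, coalition_size, rate=1.0, interference=0.8):
    """Toy interference model: each extra member degrades per-robot rate."""
    if coalition_size == 0:
        return float("inf")
    effective = rate * coalition_size * interference ** (coalition_size - 1)
    return work / effective

best_utility, best_choice = -1.0, None
for choice in product(list(tasks) + [None], repeat=len(robots)):
    done = [t for t in tasks
            if finish_time(tasks[t]["work"], choice.count(t)) <= tasks[t]["deadline"]]
    utility = sum(tasks[t]["utility"] for t in done)
    if utility > best_utility:
        best_utility, best_choice = utility, choice
print(best_utility, best_choice)   # -> 8.0 ('t1', 't1', 't2') with these toy numbers
```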
Navigation of a care and welfare robot
NASA Astrophysics Data System (ADS)
Yukawa, Toshihiro; Hosoya, Osamu; Saito, Naoki; Okano, Hideharu
2005-12-01
In this paper, we propose the development of a robot that can perform nursing tasks in a hospital. In a narrow environment such as a sickroom or a hallway, the robot must be able to move freely in arbitrary directions; it therefore needs high controllability and the capability to make precise movements. Our robot can recognize a line using cameras and can be steered toward reference directions by comparison with stored cell-map information; furthermore, it moves safely by following a center-line permanently installed in the building. Communication between the robot and a centralized control center enables the robot's autonomous movement in the hospital. Through a navigation system using the cell-map information, the robot is able to perform nursing tasks smoothly by changing the camera angle.
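A minimal sketch of camera-based center-line following of the kind described: steer proportionally to the line's lateral offset in the image, and consult the cell map before committing to a travel direction. The gain, saturation bound, and map encoding are illustrative assumptions.

```python
# Proportional steering from the detected line offset, plus a cell-map
# lookup that gates which travel directions are open from a given cell.

def steering_command(line_offset_px, k_p=0.004, max_turn=0.5):
    """Proportional steering correction (rad/s) toward the center-line."""
    turn = -k_p * line_offset_px      # sign convention is arbitrary here
    return max(-max_turn, min(max_turn, turn))

def heading_allowed(cell_map, cell, heading):
    """Consult stored cell-map information before committing to a direction."""
    return heading in cell_map.get(cell, set())

cell_map = {(3, 7): {"north", "east"}}    # hallway cell with two open exits
print(steering_command(60))               # -> -0.24 (bounded proportional turn)
print(heading_allowed(cell_map, (3, 7), "west"))   # -> False: blocked direction
```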
Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan
2014-01-01
The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point on the road image, we developed a modified change blindness paradigm and measured detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, the experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.
Laboratory testing of candidate robotic applications for space
NASA Technical Reports Server (NTRS)
Purves, R. B.
1987-01-01
Robots have potential for increasing the value of man's presence in space. Some categories with potential benefit are: (1) performing extravehicular tasks like satellite and station servicing, (2) supporting the science mission of the station by manipulating experiment tasks, and (3) performing intravehicular activities which would be boring, tedious, exacting, or otherwise unpleasant for astronauts. An important issue in space robotics is selection of an appropriate level of autonomy. In broad terms three levels of autonomy can be defined: (1) teleoperated - an operator explicitly controls robot movement; (2) telerobotic - an operator controls the robot directly, but by high-level commands, without, for example, detailed control of trajectories; and (3) autonomous - an operator supplies a single high-level command, and the robot does all necessary task sequencing and planning to satisfy it. Researchers chose three projects for their exploration of technology and implementation issues in space robots, one in each of the three application areas, each with a different level of autonomy. The projects were: (1) satellite servicing - teleoperated; (2) laboratory assistant - telerobotic; and (3) on-orbit inventory manager - autonomous. These projects are described and some results of testing are summarized.
2011-01-01
Background Practicing arm and gait movements with robotic assistance after neurologic injury can help patients improve their movement ability, but patients sometimes reduce their effort during training in response to the assistance. Reduced effort has been hypothesized to diminish clinical outcomes of robotic training. To better understand patient slacking, we studied the role of visual distraction and auditory feedback in modulating patient effort during a common robot-assisted tracking task. Methods Fourteen participants with chronic left hemiparesis from stroke, five control participants with chronic right hemiparesis, and fourteen non-impaired healthy control participants tracked a visual target with their arms while receiving adaptive assistance from a robotic arm exoskeleton. We compared four practice conditions: the baseline tracking task alone; tracking while also performing a visual distracter task; tracking with the visual distracter and sound feedback; and tracking with sound feedback. For the distracter task, symbols were randomly displayed in the corners of the computer screen, and the participants were instructed to click a mouse button when a target symbol appeared. The sound feedback consisted of a repeating beep, with the frequency of repetition made to increase with increasing tracking error. Results Participants with stroke halved their effort and doubled their tracking error when performing the visual distracter task with their left hemiparetic arm. With sound feedback, however, these participants increased their effort and decreased their tracking error close to their baseline levels, while also performing the distracter task successfully. These effects were significantly smaller for the participants who used their non-paretic arm and for the participants without stroke. Conclusions Visual distraction decreased participants' effort during a standard robot-assisted movement training task. This effect was greater for the hemiparetic arm, suggesting that the increased demands associated with controlling an affected arm make the motor system more prone to slack when distracted. Providing an alternate sensory channel for feedback, i.e., auditory feedback of tracking error, enabled the participants to simultaneously perform the tracking task and distracter task effectively. Thus, incorporating real-time auditory feedback of performance errors might improve clinical outcomes of robotic therapy systems. PMID:21513561
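The error-modulated beeping described above is, at its core, a monotone mapping from tracking error to beep timing. A minimal sketch follows; all numeric bounds are illustrative assumptions, not values from the study.

import numpy as np

def beep_interval(tracking_error, err_min=0.01, err_max=0.10,
                  slowest=1.5, fastest=0.15):
    """Map tracking error (m) to the pause between beeps (s): larger
    error -> faster beeping. All bounds here are assumed for
    illustration, not taken from the study."""
    e = np.clip((tracking_error - err_min) / (err_max - err_min), 0.0, 1.0)
    return slowest - e * (slowest - fastest)

print(beep_interval(0.02), beep_interval(0.09))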
Lessons Learned from Pit Viper System Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Catalan, Michael A.; Alzheimer, James M.; Valdez, Patrick LJ
2002-04-11
Tele-operated and robotic systems operated in unstructured field environments pose unique challenges for tool design. Since field tasks are not always well defined and the robot work area usually cannot be designed for ease of operation, the tools must be versatile. It's important to carefully consider the orientation of the grip the robot takes on the tool, as it's not easily changed in the field. The stiffness of the robot and the possibility of robot positioning errors encourage the use of non-contact or minimal-contact tooling. While normal hand tools can usually be modified for use by the robot, this is not always the most effective approach. It's desirable to have tooling that is relatively independent of the robot; in this case, the robot places the tool near the desired work location and the tool performs its task relatively independently. Here we consider the adaptation of a number of tools for cleanup of a radioactively contaminated piping junction and valve pit. The tasks to be considered are debris removal (small nuts and bolts and pipe up to 100 mm in diameter), size reduction, surface cleaning, and support of past practice crane-based methods for working in the pits.
The Canonical Robot Command Language (CRCL)
Proctor, Frederick M.; Balakirsky, Stephen B.; Kootbally, Zeid; Kramer, Thomas R.; Schlenoff, Craig I.; Shackleford, William P.
2017-01-01
Industrial robots can perform motion with sub-millimeter repeatability when programmed using the teach-and-playback method. While effective, this method requires significant up-front time, tying up the robot and a person during the teaching phase. Off-line programming can be used to generate robot programs, but the accuracy of this method is poor unless supplemented with good calibration to remove systematic errors, feed-forward models to anticipate robot response to loads, and sensing to compensate for unmodeled errors. These increase the complexity and up-front cost of the system, but the payback in the reduction of recurring teach programming time can be worth the effort. This payback especially benefits small-batch, short-turnaround applications typical of small-to-medium enterprises, who need the agility afforded by off-line application development to be competitive against low-cost manual labor. To fully benefit from this agile application tasking model, a common representation of tasks should be used that is understood by all of the resources required for the job: robots, tooling, sensors, and people. This paper describes an information model, the Canonical Robot Command Language (CRCL), which provides a high-level description of robot tasks and associated control and status information. PMID:28529393
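CRCL task descriptions are exchanged as structured (XML) messages. The sketch below builds a schematic, CRCL-flavored move command with Python's standard library; the element names are illustrative guesses in the spirit of CRCL, not the official NIST schema, which should be consulted for real use.

import xml.etree.ElementTree as ET

# Hypothetical element names; not the published CRCL schema.
prog = ET.Element("CRCLProgram")
cmd = ET.SubElement(prog, "MoveTo")
ET.SubElement(cmd, "CommandID").text = "1"
point = ET.SubElement(ET.SubElement(cmd, "EndPosition"), "Point")
for axis, value in zip(("X", "Y", "Z"), (0.50, 0.25, 0.30)):
    ET.SubElement(point, axis).text = str(value)   # meters, by assumption

print(ET.tostring(prog, encoding="unicode"))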
The Affordance Template ROS Package for Robot Task Programming
NASA Technical Reports Server (NTRS)
Hart, Stephen; Dinh, Paul; Hambuchen, Kimberly
2015-01-01
This paper introduces the Affordance Template ROS package for quickly programming, adjusting, and executing robot applications in the ROS RViz environment. This package extends the capabilities of RViz interactive markers by allowing an operator to specify multiple end-effector waypoint locations and grasp poses in object-centric coordinate frames and to adjust these waypoints in order to meet the run-time demands of the task (specifically, object scale and location). The Affordance Template package stores task specifications in a robot-agnostic XML description format such that it is trivial to apply a template to a new robot. As such, the Affordance Template package provides a robot-generic ROS tool appropriate for building semi-autonomous, manipulation-based applications. Affordance Templates were developed by the NASA-JSC DARPA Robotics Challenge (DRC) team and have since successfully been deployed on multiple platforms including the NASA Valkyrie and Robonaut 2 humanoids, the University of Texas Dreamer robot and the Willow Garage PR2. In this paper, the specification and implementation of the Affordance Template package are introduced and demonstrated through examples for wheel (valve) turning, pick-and-place, and drill grasping, evincing its utility and flexibility for a wide variety of robot applications.
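The object-centric waypoint idea is the crux: a grasp or approach point is stored in the object's frame and mapped into the world at run time using the object's current pose and scale. A minimal sketch of that transform chain under assumed values (not the package's API):

import numpy as np

def world_waypoint(T_obj, p_local, scale=1.0):
    """Map a waypoint stored in the object's frame into the world frame,
    applying the run-time object scale and pose (4x4 homogeneous T_obj).
    A sketch of the idea only; values below are illustrative."""
    p = np.append(scale * np.asarray(p_local), 1.0)
    return (T_obj @ p)[:3]

T_obj = np.eye(4)
T_obj[:3, 3] = [1.0, 0.2, 0.8]        # assumed run-time object pose
grasp_local = [0.0, -0.15, 0.05]      # grasp waypoint in the object frame
print(world_waypoint(T_obj, grasp_local, scale=1.2))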
Consistency of performance of robot-assisted surgical tasks in virtual reality.
Suh, I H; Siu, K-C; Mukherjee, M; Monk, E; Oleynikov, D; Stergiou, N
2009-01-01
The purpose of this study was to investigate consistency of performance of robot-assisted surgical tasks in a virtual reality environment. Eight subjects performed two surgical tasks, bimanual carrying and needle passing, with both the da Vinci surgical robot and a virtual reality equivalent environment. Nonlinear analysis was utilized to evaluate consistency of performance by calculating the regularity and the amount of divergence in the movement trajectories of the surgical instrument tips. Our results revealed that movement patterns for both training tasks were statistically similar between the two environments. Consistency of performance as measured by nonlinear analysis could be an appropriate methodology to evaluate the complexity of the training tasks between actual and virtual environments and assist in developing better surgical training programs.
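The "regularity" measure in such nonlinear analyses is typically a sample-entropy-style statistic (divergence is typically Lyapunov-style). Below is a hedged sample-entropy sketch using common parameter conventions; it is not the study's exact code.

import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series: lower values = more regular motion.
    m and r follow common conventions (r scales with the signal's
    standard deviation); an illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    r *= x.std()
    def pair_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return (np.sum(d <= r) - len(emb)) / 2.0   # exclude self-matches
    return -np.log(pair_count(m + 1) / pair_count(m))

trajectory = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.05 * np.random.randn(200)
print(sample_entropy(trajectory))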
A methodology to assess performance of human-robotic systems in achievement of collective tasks
NASA Technical Reports Server (NTRS)
Howard, Ayanna M.
2005-01-01
In this paper, we present a methodology to assess system performance of human-robotic systems in achievement of collective tasks such as habitat construction, geological sampling, and space exploration.
Undersea applications of dexterous robotics
NASA Technical Reports Server (NTRS)
Gittleman, Mark M.
1994-01-01
The evolution and application of dexterous robotics in the undersea energy production industry, and how this mature technology has affected planned SSF dexterous robotic tasks, are examined. Undersea telerobots, or Remotely Operated Vehicles (ROVs), have evolved in design and use since the mid-1970s. Originally developed to replace commercial divers for both planned and unplanned tasks, they are now most commonly used to perform planned robotic tasks in all phases of assembly, inspection, and maintenance of undersea structures and installations. To accomplish these tasks, the worksites, the tasks themselves, and the tools are now engineered with both the telerobot's and the diver's capabilities in mind. In many cases, this planning has permitted a reduction in telerobot system complexity and cost. The philosophies and design practices that have resulted in the successful incorporation of telerobotics into the highly competitive and cost-conscious offshore production industry have been largely ignored in the space community. Cases where these philosophies have been adopted or may be successfully adopted in the near future are explored.
NASA Technical Reports Server (NTRS)
Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Abdallah, Muhammad E. (Inventor)
2013-01-01
A robotic system includes a robot having manipulators for grasping an object using one of a plurality of grasp types during a primary task, and a controller. The controller controls the manipulators during the primary task using a multiple-task control hierarchy, and automatically parameterizes the internal forces of the system for each grasp type in response to an input signal. The primary task is defined at an object-level of control, e.g., using a closed-chain transformation, such that only select degrees of freedom are commanded for the object. A control system for the robotic system has a host machine and an algorithm for controlling the manipulators using the above hierarchy. A method for controlling the system includes receiving and processing the input signal using the host machine, including defining the primary task at the object-level of control, e.g., using a closed-chain definition, and parameterizing the internal forces for each grasp type.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.; Love, Lonnie L.; Jung, David L.
2004-03-29
Contrary to the repetitive tasks performed by industrial robots, the tasks in most DOE missions such as environmental restoration or Decontamination and Decommissioning (D&D) can be characterized as "batches-of-one", in which robots must be capable of adapting to changes in constraints, tools, environment, criteria and configuration. No commercially available robot control code is suitable for use with such widely varying conditions. In this talk we present our development of a "generic code" to allow real-time (at loop rate) robot behavior adaptation to changes in task objectives, tools, number and type of constraints, modes of control or kinematic configuration. We present the analytical framework underlying our approach and detail the design of its two major modules: one for the automatic generation of the kinematic equations when the robot configuration or tools change, and one for motion planning under time-varying constraints. Sample problems illustrating the capabilities of the developed system are presented.
Emergence of leadership in a robotic fish group under diverging individual personality traits.
Wang, Chen; Chen, Xiaojie; Xie, Guangming; Cao, Ming
2017-05-01
Variations in individuals' personality traits have been identified as one of the possible mechanisms for the emergence of leadership in an interactive collective, which may lead to benefits for the group as a whole. Complementing the large body of existing literature on using simulation models to study leadership, we use biomimetic robotic fish to gain insight into how fish behaviours evolve under the influence of physical hydrodynamics. In particular, we focus in this paper on understanding how a robotic fish's personality traits affect the emergence of an effective leading fish in repeated robotic foraging tasks when the robotic fish's strategies, to push or not to push the obstacle in its foraging path, are updated over time following an evolutionary game set-up. We further show that the robotic fish's personality traits diverge when the group carries out difficult foraging tasks in our experiments, and self-organization takes place to help the group adapt to the level of difficulty of the tasks without inter-individual communication.
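The abstract does not spell out the strategy-update rule; a common choice in evolutionary game set-ups is pairwise imitation governed by a Fermi function, sketched below as an assumed illustration (payoffs and agents are hypothetical).

import math
import random

def fermi_update(my_payoff, other_payoff, beta=1.0):
    """Probability of imitating the other agent's strategy (Fermi rule):
    better-performing strategies are copied more often."""
    return 1.0 / (1.0 + math.exp(-beta * (other_payoff - my_payoff)))

strategies = {"fish1": "push", "fish2": "no_push"}
payoffs = {"fish1": 0.4, "fish2": 0.9}   # illustrative foraging payoffs

a, b = "fish1", "fish2"
if random.random() < fermi_update(payoffs[a], payoffs[b]):
    strategies[a] = strategies[b]         # fish1 imitates the better forager
print(strategies)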
Enhanced control and sensing for the REMOTEC ANDROS Mk VI robot. CRADA final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spelt, P.F.; Harvey, H.W.
1998-08-01
This Cooperative Research and Development Agreement (CRADA) between Lockheed Martin Energy Systems, Inc., and REMOTEC, Inc., explored methods of providing operator feedback for various work actions of the ANDROS Mk VI teleoperated robot. In a hazardous environment, an extremely heavy workload seriously degrades the productivity of teleoperated robot operators. This CRADA involved the addition of computer power to the robot along with a variety of sensors and encoders to provide information about the robot's performance in and relationship to its environment. Software was developed to integrate the sensor and encoder information and provide control input to the robot. ANDROS Mk VI robots are presently used by numerous electric utilities to perform tasks in reactors where substantial exposure to radiation exists, as well as in a variety of other hazardous environments. Further, this platform has potential for use in a number of environmental restoration tasks, such as site survey and detection of hazardous waste materials. The addition of sensors and encoders serves to make the robot easier to manage and permits tasks to be done more safely and inexpensively (due to time saved in the completion of complex remote tasks). Prior research on the automation of mobile platforms with manipulators at Oak Ridge National Laboratory's Center for Engineering Systems Advanced Research (CESAR, B&R code KC0401030) Laboratory, a BES-supported facility, indicated that this type of enhancement is effective. This CRADA provided such enhancements to a successful working teleoperated robot for the first time. Performance of this CRADA used the CESAR laboratory facilities and expertise developed under BES funding.
Incremental learning of tasks from user demonstrations, past experiences, and vocal comments.
Pardowitz, Michael; Knoop, Steffen; Dillmann, Ruediger; Zöllner, Raoul D
2007-04-01
For many years the robotics community has envisioned robot assistants sharing the same environment with humans. It has become obvious that they will have to interact with humans and should adapt to individual user needs. In particular, the high variety of tasks robot assistants will face requires a highly adaptive and user-friendly programming interface. One possible solution to this programming problem is the learning-by-demonstration paradigm, where the robot is supposed to observe the execution of a task, acquire task knowledge, and reproduce it. In this paper, a system to record, interpret, and reason over demonstrations of household tasks is presented. The focus is on the model-based representation of manipulation tasks, which serves as a basis for incremental reasoning over the acquired task knowledge. The aim of the reasoning is to condense and interconnect the data, resulting in more general task knowledge. A measure for the assessment of the information content of task features is introduced. This measure of the relevance of certain features relies both on general background knowledge and on task-specific knowledge gathered from the user demonstrations. Besides the autonomous information estimation of features, speech comments made during execution that point out the relevance of features are considered as well. The incremental growth of the task knowledge as more task demonstrations become available, and its fusion with relevance information gained from speech comments, is demonstrated within the task of laying a table.
Chung, Cheng-Shiu; Wang, Hongwu; Cooper, Rory A.
2013-01-01
Context The user interface development of assistive robotic manipulators can be traced back to the 1960s. Studies include kinematic designs, cost-efficiency, user experience involvement, and performance evaluation. This paper reviews clinical studies that use activities of daily living (ADL) tasks to evaluate performance, categorized using the International Classification of Functioning, Disability, and Health (ICF) framework, in order to give the scope of current research and provide suggestions for future studies. Methods We conducted a literature search of assistive robotic manipulators from 1970 to 2012 in PubMed, Google Scholar, and the University of Pittsburgh Library System – PITTCat. Results Twenty relevant studies were identified. Conclusion Studies were separated into two broad categories: user task preferences and user-interface performance measurements of commercialized and developing assistive robotic manipulators. The outcome measures and ICF codes associated with the performance evaluations are reported. Suggestions for future studies include (1) standardized ADL tasks for the quantitative and qualitative evaluation of task efficiency and performance to build comparable measures between research groups, (2) studies relevant to the tasks from user priority lists and ICF codes, and (3) appropriate clinical functional assessment tests with consideration of constraints in assistive robotic manipulator user interfaces. In addition, these outcome measures will help physicians and therapists build standardized tools for prescribing and assessing assistive robotic manipulators. PMID:23820143
Electrophysiological revelations of trial history effects in a color oddball search task.
Shin, Eunsam; Chong, Sang Chul
2016-12-01
In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.
Adapting sensory data for multiple robots performing spill cleanup
DOE Office of Scientific and Technical Information (OSTI.GOV)
Storjohann, K.; Saltzen, E.
1990-09-01
This paper describes a possible method of converting a single-performing-robot algorithm into a multiple-performing-robot algorithm without the need to modify previously written code. The algorithm to be converted involves spill detection and clean-up by the HERMIES-III mobile robot. In order to achieve the goal of multiple performing robots with this algorithm, two steps are taken. First, the task is formally divided into two sub-tasks, spill detection and spill clean-up, the former of which is allocated to the added performing robot, HERMIES-IIB. Second, an inverse perspective mapping is applied to the data acquired by the new performing robot (HERMIES-IIB), allowing the data to be processed by the previously written algorithm without re-writing the code. 6 refs., 4 figs.
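Inverse perspective mapping is, in essence, a homography that re-projects the camera image onto the ground plane so that data from different robots share one view. A minimal sketch with OpenCV; the image size and the four point correspondences are illustrative assumptions.

import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)   # stand-in for a camera frame
# Four image points of a ground-plane rectangle and their bird's-eye
# targets; these correspondences are assumed for illustration.
src = np.float32([[220, 480], [420, 480], [390, 300], [250, 300]])
dst = np.float32([[200, 600], [440, 600], [440, 0], [200, 0]])

H = cv2.getPerspectiveTransform(src, dst)          # 3x3 homography
topdown = cv2.warpPerspective(img, H, (640, 600))  # ground-plane view
print(H.round(2))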
NASA Astrophysics Data System (ADS)
Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang
2017-10-01
Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints of generalization power, we propose two promising kinds of deep CNNs for remote scenes and ask whether CNNs really need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes, based on the theory that the depth of CNNs brings generalization power by learning available hypotheses for finite data samples. Second, following the opposite viewpoint that the generalization power of deep CNNs comes from massive memorization and that shallow CNNs with enough neural nodes have perfect finite-sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, regardless of whether the setting is unsupervised, semisupervised, or supervised. CNNs really do need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote sensing tasks.
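Transferring a pretrained deep CNN of the kind described above usually amounts to freezing the convolutional features and replacing the classifier head. A minimal, hedged PyTorch sketch; the backbone choice and the class count are assumptions, not the paper's exact networks.

import torch
import torchvision

# Assumed: ResNet-50 backbone, 10 remote-scene classes.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
for p in model.parameters():
    p.requires_grad = False                           # freeze features
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # new scene head

x = torch.randn(4, 3, 224, 224)                       # dummy scene batch
print(model(x).shape)                                 # torch.Size([4, 10])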
Singularity-robustness and task-prioritization in configuration control of redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
The authors present a singularity-robust, task-prioritized reformulation of configuration control for redundant robot manipulators. This reformulation suppresses large joint velocities, inducing minimal errors in task performance by modifying the task trajectories. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion when both cannot be achieved exactly.
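The damped-least-squares machinery behind such singularity-robust schemes trades a small task error for bounded joint velocities near singularities; a lower-priority task can then be projected into the remaining null space. A generic sketch (the Jacobian, damping factor, and task sizes are illustrative, not the authors' controller):

import numpy as np

def dls_step(J, dx, lam=0.05):
    """One damped-least-squares velocity step: the damping lam bounds
    joint velocities near singularities at the cost of a small task
    error. A generic sketch; all values are illustrative."""
    JJt = J @ J.T
    return J.T @ np.linalg.solve(JJt + (lam ** 2) * np.eye(J.shape[0]), dx)

J = np.array([[1.0, 0.5, 0.1],
              [0.0, 0.9, 0.4]])      # 2-D task, 3-DOF arm (assumed)
dx = np.array([0.01, -0.02])         # desired end-effector displacement
print(dls_step(J, dx))               # joint-velocity command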
Social relevance drives viewing behavior independent of low-level salience in rhesus macaques
Solyst, James A.; Buffalo, Elizabeth A.
2014-01-01
Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before average clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around their face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline against which to compare the effects of therapeutics aimed at enhancing social cognition. PMID:25414633
3D printed rapid disaster response
NASA Astrophysics Data System (ADS)
Lacaze, Alberto; Murphy, Karl; Mottern, Edward; Corley, Katrina; Chu, Kai-Dee
2014-05-01
Under the Department of Homeland Security-sponsored Sensor-smart Affordable Autonomous Robotic Platforms (SAARP) project, Robotic Research, LLC is developing an affordable and adaptable method to provide disaster response robots developed with 3D printer technology. The SAARP Store contains a library of robots, a developer storefront, and a user storefront. The SAARP Store allows the user to select, print, assemble, and operate the robot. In addition to the SAARP Store, two platforms are currently being developed. They use a set of common non-printed components that will allow the later design of other platforms that share non-printed components. During disasters, new challenges are faced that require customized tools or platforms. Instead of prebuilt and prepositioned supplies, a library of validated robots will be catalogued to satisfy various challenges at the scene. 3D printing components will allow these customized tools to be deployed in a fraction of the time that would normally be required. While the current system is focused on supporting disaster response personnel, this system will be expandable to a range of customers, including domestic law enforcement, the armed services, universities, and research facilities.
Scenario-Based Assessment of User Needs for Point-of-Care Robots.
Lee, Hyeong Suk; Kim, Jeongeun
2018-01-01
This study aimed to derive specific user requirements and barriers in a real medical environment to define the essential elements and functions of two types of point-of-care (POC) robot: a telepresence robot as a tool for teleconsultation, and a bedside robot to provide emotional care for patients. An analysis of user requirements was conducted; user needs were gathered and identified, and detailed, realistic scenarios were created. The prototype robots were demonstrated in physical environments for envisioning and evaluation. In all, three nurses and three clinicians participated as evaluators to observe the demonstrations and evaluate the robot systems. The evaluators were given a brief explanation of each scene and the robots' functionality. Four major functions of the teleconsultation robot were defined and tested in the demonstration. In addition, four major functions of the bedside robot were evaluated. Among the desired functions for a teleconsultation robot, medical information delivery and communication had high priority. For a bedside robot, patient support, patient monitoring, and healthcare provider support were the desired functions. The evaluators reported that the teleconsultation robot can increase support from and access to specialists and resources. They mentioned that the bedside robot can improve the quality of hospital life. Problems identified in the demonstration were those of space conflict, communication errors, and safety issues. Incorporating this technology into healthcare services will enhance communication and teamwork skills across distances and thereby facilitate teamwork. However, repeated tests will be needed to evaluate and ensure improved performance.
ERIC Educational Resources Information Center
Simut, Ramona E.; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram
2016-01-01
Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated measurements design. It was investigated if the children's interaction with a human differed from the interaction with a social robot during a play task. Also, it was examined if…
Head Pose Estimation Using Multilinear Subspace Analysis for Robot Human Awareness
NASA Technical Reports Server (NTRS)
Ivanov, Tonislav; Matthies, Larry; Vasilescu, M. Alex O.
2009-01-01
Mobile robots, operating in unconstrained indoor and outdoor environments, would benefit in many ways from perception of the human awareness around them. Knowledge of people's head pose and gaze directions would enable the robot to deduce which people are aware of its presence, and to predict future motions of the people for better path planning. Making such inferences requires estimating head pose from facial images that are a combination of multiple varying factors, such as identity, appearance, head pose, and illumination. By applying multilinear algebra, the algebra of higher-order tensors, we can separate these factors and estimate head pose regardless of the subject's identity or image conditions. Furthermore, we can automatically handle uncertainty in the size of the face and its location. We demonstrate a pipeline of on-the-move detection of pedestrians with a robot stereo vision system, segmentation of the head, and head pose estimation in cluttered urban street scenes.
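The factor separation rests on a multilinear (higher-order) SVD of a data tensor organized by the varying factors. A minimal sketch of the unfolding-plus-SVD step, with assumed tensor sizes rather than the paper's data:

import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# Illustrative tensor of vectorized face images: identity x pose x pixels.
T = np.random.rand(10, 5, 256)

# Factor matrices from the SVD of each unfolding (multilinear SVD).
U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
print([u.shape for u in U])   # per-factor bases: identity, pose, pixels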
Open Issues in Evolutionary Robotics.
Silva, Fernando; Duarte, Miguel; Correia, Luís; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
One of the long-term goals in evolutionary robotics is to be able to automatically synthesize controllers for real autonomous robots based only on a task specification. While a number of studies have shown the applicability of evolutionary robotics techniques for the synthesis of behavioral control, researchers have consistently been faced with a number of issues preventing the widespread adoption of evolutionary robotics for engineering purposes. In this article, we review and discuss the open issues in evolutionary robotics. First, we analyze the benefits and challenges of simulation-based evolution and subsequent deployment of controllers versus evolution on real robotic hardware. Second, we discuss specific evolutionary computation issues that have plagued evolutionary robotics: (1) the bootstrap problem, (2) deception, and (3) the role of genomic encoding and genotype-phenotype mapping in the evolution of controllers for complex tasks. Finally, we address the absence of standard research practices in the field. We also discuss promising avenues of research. Our underlying motivation is the reduction of the current gap between evolutionary robotics and mainstream robotics, and the establishment of evolutionary robotics as a canonical approach for the engineering of autonomous robots.
Motion coordination and programmable teleoperation between two industrial robots
NASA Technical Reports Server (NTRS)
Luh, J. Y. S.; Zheng, Y. F.
1987-01-01
Tasks for two coordinated industrial robots always bring the robots into contact with the same object, so motion coordination among the robots and the object must be maintained at all times. To plan the coordinated tasks, only one robot's motion is planned according to the required motion of the object. The motion of the second robot is to follow the first one as specified by a set of holonomic equality constraints at every time instant. If any modification of the object's motion is needed in real time, only the first robot's motion has to be modified accordingly; the modification for the second robot is done implicitly through the constraint conditions, which simplifies the operation. If the object is physically removed, the second robot still continually follows the first one through the constraint conditions. If the first robot is maneuvered through either the teach pendant or the keyboard, the second one moves accordingly, forming a teleoperation linked through software. Obviously, the second robot does not need to duplicate the first robot's motion; the programming of the constraints specifies their relative motion.
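When the constraint is a fixed rigid-body relation, the follower's pose is simply the leader's pose composed with a constant relative transform. A minimal sketch of that composition under assumed poses:

import numpy as np

def follower_pose(T_first, T_rel):
    """Pose of the second robot implied by a fixed relative transform
    T_rel between the two end-effectors (4x4 homogeneous matrices).
    A generic sketch of holding a rigid holonomic constraint."""
    return T_first @ T_rel

T_first = np.eye(4); T_first[:3, 3] = [0.6, 0.0, 0.3]   # assumed leader pose
T_rel = np.eye(4);   T_rel[:3, 3] = [0.0, 0.4, 0.0]     # assumed offset
print(follower_pose(T_first, T_rel)[:3, 3])             # -> [0.6 0.4 0.3]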
Crime scene investigation, reporting, and reconstruction (CSIRR)
NASA Astrophysics Data System (ADS)
Booth, John F.; Young, Jeffrey M.; Corrigan, Paul
1997-02-01
Graphic Data Systems Corporation (GDS Corp.) and Intelligent Graphics Solutions, Inc. (IGS) combined talents in 1995 to design and develop a MicroGDS™ application to support field investigations of crime scenes, such as homicides, bombings, and arsons. IGS and GDS Corp. prepared design documents under the guidance of federal, state, and local crime scene reconstruction experts and with information from the FBI's evidence response team field book. The application was then developed to encompass the key components of crime scene investigation: staff assigned to the incident, tasks occurring at the scene, visits to the scene location, photographs taken of the crime scene, related documents, involved persons, catalogued evidence, and two- or three-dimensional crime scene reconstruction. Crime scene investigation, reporting, and reconstruction (CSIRR) provides investigators with a single application for both capturing all tabular data about the crime scene and quickly rendering a sketch of the scene. Tabular data is captured through intuitive database forms, while MicroGDS™ has been modified to readily allow non-CAD users to sketch the scene.
Three-dimensional vision enhances task performance independently of the surgical method.
Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A
2012-10-01
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.
Design of a surgical robot with dynamic vision field control for Single Port Endoscopic Surgery.
Kobayashi, Yo; Sekiguchi, Yuta; Tomono, Yu; Watanabe, Hiroki; Toyoda, Kazutaka; Konishi, Kozo; Tomikawa, Morimasa; Ieiri, Satoshi; Tanoue, Kazuo; Hashizume, Makoto; Fujie, Masaktsu G
2010-01-01
Recently, a robotic system was developed to assist Single Port Endoscopic Surgery (SPS). However, the existing system required a manual change of vision field, hindering the surgical task and increasing the degrees of freedom (DOFs) of the manipulator. We proposed a surgical robot for SPS with dynamic vision field control, the endoscope view being manipulated by a master controller. The prototype robot consisted of a positioning and sheath manipulator (6 DOF) for vision field control, and dual tool tissue manipulators (gripping: 5 DOF, cautery: 3 DOF). Feasibility of the robot was demonstrated in vitro. The "cut and vision field control" mode (using the tool manipulators) is suitable for precise cutting tasks in risky areas, while the "cut by vision field control" mode (using the vision field control manipulator) is effective for rapid macro cutting of tissues. A resection task was accomplished using a combination of both methods.
Blending Velocities In Task Space In Computing Robot Motions
NASA Technical Reports Server (NTRS)
Volpe, Richard A.
1995-01-01
Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames", which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.
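Of the blending options named above, the cycloidal profile gives a smooth start and stop. A minimal sketch of velocity-vector blending between two via-frame segments; the vectors and window length are illustrative assumptions.

import numpy as np

def cycloidal_blend(v_in, v_out, t, T):
    """Blend velocity vectors v_in -> v_out over a window of length T
    with a cycloidal profile (s rises smoothly from 0 to 1). A generic
    sketch of one of the blending options named above."""
    s = t / T - np.sin(2 * np.pi * t / T) / (2 * np.pi)
    return (1 - s) * v_in + s * v_out

v_in = np.array([0.10, 0.00, 0.00])    # m/s along the incoming segment
v_out = np.array([0.00, 0.10, 0.00])   # m/s along the outgoing segment
for t in (0.0, 0.25, 0.5):
    print(cycloidal_blend(v_in, v_out, t, 0.5))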
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2008-04-01
In this study, behavior generation and self-learning paradigms are investigated for real-time applications of multi-goal mobile robot tasks. The method is capable of generating new behaviors and combining them in order to achieve multi-goal tasks. The proposed method is composed of three layers: a Behavior Generating Module, a Coordination Level, and an Emotion-Motivation Level. The last two levels use Hidden Markov models to manage the dynamical structure of behaviors. The kinematic and dynamic models of the mobile robot with non-holonomic constraints are considered in the behavior-based control architecture. The proposed method is tested on a four-wheel-driven and four-wheel-steered mobile robot with constraints in a simulation environment, and results are obtained successfully.
The effect of distraction on change detection in crowded acoustic scenes.
Petsas, Theofilos; Harrison, Jemma; Kashino, Makio; Furukawa, Shigeto; Chait, Maria
2016-11-01
In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli were artificial acoustic 'scenes' composed of several (up to twelve) concurrent tone-pip streams ('sources'). A gap (1000 ms) was inserted partway through the 'scene'; changes in the form of the appearance of a new source or the disappearance of an existing source occurred after the gap in 50% of the trials. Listeners were instructed to monitor the unfolding 'soundscapes' for these events. Distraction was measured by presenting distractor stimuli during the gap. Experiments 1 and 2 used a dual-task design where listeners were required to perform a task with varying attentional demands ('High Demand' vs. 'Low Demand') on brief auditory (Experiment 1a) or visual (Experiment 1b) signals presented during the gap. Experiments 2 and 3 required participants to ignore distractor sounds and focus on the change detection task. Our results demonstrate that the maintenance of scene information in short-term memory is influenced by the availability of attentional and/or processing resources during the gap, and that this dependence appears to be modality specific. We also show that these processes are susceptible to bottom-up-driven distraction even in situations when the distractors are not novel, but occur on each trial. Change detection performance is systematically linked with the independently determined perceptual salience of the distractor sound. The findings also demonstrate that the present task may be a useful objective means for determining relative perceptual salience. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Control of a Serpentine Robot for Inspection Tasks
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1994-01-01
This paper presents a simple and robust kinematic control scheme for the JPL serpentine robot system. The proposed strategy is developed using the damped-least-squares/configuration control methodology, and permits the considerable dexterity of the JPL serpentine robot to be effectively utilized for maneuvering in the congested and uncertain workspaces often encountered in inspection tasks. Computer simulation results are given for the 20 degree-of-freedom (DOF) manipulator system obtained by mounting the twelve-DOF serpentine robot at the end-effector of an eight-DOF Robotics Research arm/lathe-bed system. These simulations demonstrate that the proposed approach provides an effective method of controlling this complex system.
Batula, Alyssa M; Kim, Youngmoo E; Ayaz, Hasan
2017-01-01
Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.
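A four-class pipeline like the one described reduces, at its core, to classifying per-trial hemodynamic features and mapping each label to a navigation command. A minimal, hedged sketch with synthetic stand-in data; the feature sizes and classifier choice are assumptions, not the study's pipeline.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 16))      # 80 trials x 16 fNIRS features (assumed)
y = rng.integers(0, 4, size=80)    # 0..3 = LH, RH, LF, RF imagery (assumed)

clf = LinearDiscriminantAnalysis().fit(X[:60], y[:60])
commands = {0: "turn_left", 1: "turn_right",
            2: "move_forward", 3: "move_backward"}
pred = clf.predict(X[60:61])[0]
print(commands[pred])              # high-level command sent to the robot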
Synchronization of spontaneous eyeblinks while viewing video stories
Nakano, Tamami; Yamamoto, Yoshiharu; Kitajo, Keiichi; Takahashi, Toshimitsu; Kitazawa, Shigeru
2009-01-01
Blinks are generally suppressed during a task that requires visual attention and tend to occur immediately before or after the task when the timing of its onset and offset are explicitly given. During the viewing of video stories, blinks are expected to occur at explicit breaks such as scene changes. However, given that the scene length is unpredictable, there should also be appropriate timing for blinking within a scene to prevent temporal loss of critical visual information. Here, we show that spontaneous blinks were highly synchronized between and within subjects when they viewed the same short video stories, but were not explicitly tied to the scene breaks. Synchronized blinks occurred during scenes that required less attention such as at the conclusion of an action, during the absence of the main character, during a long shot and during repeated presentations of a similar scene. In contrast, blink synchronization was not observed when subjects viewed a background video or when they listened to a story read aloud. The results suggest that humans share a mechanism for controlling the timing of blinks that searches for an implicit timing that is appropriate to minimize the chance of losing critical information while viewing a stream of visual events. PMID:19640888
Machine learning in motion control
NASA Technical Reports Server (NTRS)
Su, Renjeng; Kermiche, Noureddine
1989-01-01
The existing methodologies for robot programming originate primarily from robotic applications to manufacturing, where uncertainties of the robots and their task environment may be minimized by repeated off-line modeling and identification. In space applications of robots, however, a higher degree of automation is required for robot programming because of the desire to minimize human intervention. We discuss a new paradigm of robot programming which is based on the concept of machine learning. The goal is to let robots practice tasks by themselves, with the operational data used to automatically improve their motion performance. The underlying mathematical problem is to solve the dynamical inverse problem by iterative methods. One of the key questions is how to ensure the convergence of the iterative process. There have been a few small steps taken in this important approach to robot programming. We give a representative result on the convergence problem.
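The iterative scheme sketched here is what is now usually called iterative learning control: the same task is repeated, and the input is corrected by the previous trial's error. A toy sketch with an assumed scalar plant, where the gain condition makes the error contract each trial:

import numpy as np

def plant(u):
    return 0.8 * u            # toy static plant: y = 0.8 u (assumed)

ref = np.ones(50)             # desired output over one trial
u = np.zeros(50)              # initial feedforward input
L = 0.6                       # learning gain; |1 - 0.8*L| < 1 => converges

for k in range(20):
    e = ref - plant(u)        # trial-k tracking error
    u = u + L * e             # ILC update: u_{k+1} = u_k + L e_k
print(np.max(np.abs(ref - plant(u))))   # residual error after 20 trials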
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre-distance scores, showed an influence of general task difficulty on the timbre-distance effect. Comparison of laboratory and fMRI data showed that scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861
Vaccaro, Christine M; Crisp, Catrina C; Fellner, Angela N; Jackson, Christopher; Kleeman, Steven D; Pavelka, James
2013-01-01
The objective of this study was to compare the effect of virtual reality simulation training plus robotic orientation versus robotic orientation alone on performance of surgical tasks using an inanimate model. Surgical resident physicians were enrolled in this assessor-blinded randomized controlled trial. Residents were randomized to receive either (1) robotic virtual reality simulation training plus standard robotic orientation or (2) standard robotic orientation alone. Performance of surgical tasks was assessed at baseline and after the intervention. Nine of 33 modules from the da Vinci Skills Simulator were chosen. Experts in robotic surgery evaluated each resident's videotaped performance of the inanimate model using the Global Rating Scale (GRS) and Objective Structured Assessment of Technical Skills-modified for robotic-assisted surgery (rOSATS). Nine resident physicians were enrolled in the simulation group and 9 in the control group. As a whole, participants improved their total time, time to incision, and suture time from baseline to repeat testing on the inanimate model (P = 0.001, 0.003, <0.001, respectively). Both groups improved their GRS and rOSATS scores significantly (both P < 0.001); however, the GRS overall pass rate was higher in the simulation group compared with the control group (89% vs 44%, P = 0.066). Standard robotic orientation and/or robotic virtual reality simulation improve surgical skills on an inanimate model, although this may be a function of the initial "practice" on the inanimate model and repeat testing of a known task. However, robotic virtual reality simulation training increases GRS pass rates consistent with improved robotic technical skills learned in a virtual reality environment.
Colombo, Roberto; Sterpi, Irma; Mazzone, Alessandra; Delconte, Carmen; Pisano, Fabrizio
2012-05-01
In robot-assisted neurorehabilitation, matching the task difficulty level to the patient's needs and abilities, both initially and as the relearning process progresses, can enhance the effectiveness of training and improve patients' motivation and outcome. This study presents a Progressive Task Regulation algorithm implemented in a robot for upper limb rehabilitation. It evaluates the patient's performance during training through the computation of robot-measured parameters, and automatically changes the features of the reaching movements, adapting the difficulty level of the motor task to the patient's abilities. In particular, it can select different types of assistance (time-triggered, activity-triggered, and negative assistance) and implement varied therapy practice to promote generalization processes. The algorithm was tuned by assessing the performance data obtained in 22 chronic stroke patients who underwent robotic rehabilitation, in which the difficulty level of the task was manually adjusted by the therapist. Thus, we could verify the patients' recovery strategies and implement task transition rules to match both the patient's and the therapist's behavior. In addition, the algorithm was tested in a sample of five chronic stroke patients. The findings show good agreement with the therapist's decisions, indicating that the algorithm could be useful for implementing training protocols that allow individualized and gradual treatment of upper limb disabilities in patients after stroke. The application of this algorithm during robot-assisted therapy should allow easier management of the different motor tasks administered during training, thereby facilitating the therapist's activity in the treatment of different pathologic conditions of the neuromuscular system.
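A minimal sketch of such a progressive regulation rule (hypothetical thresholds, level bounds, and function names; the published algorithm derives its transition rules from therapist-adjusted patient data):

```python
# Minimal sketch of a progressive task-regulation rule (hypothetical
# thresholds; the published algorithm tunes its transition rules from
# therapist-adjusted patient data).

def regulate_task(difficulty, performance_score, low=0.4, high=0.8):
    """Adapt task difficulty from a robot-measured performance score in [0, 1].

    Returns the new difficulty level and the assistance mode to apply.
    """
    if performance_score > high:          # patient performing well
        difficulty = min(difficulty + 1, 10)
        assistance = "negative"           # robot resists to increase challenge
    elif performance_score < low:         # patient struggling
        difficulty = max(difficulty - 1, 1)
        assistance = "time-triggered"     # robot helps after a timeout
    else:
        assistance = "activity-triggered" # help only when movement stalls
    return difficulty, assistance

level, mode = regulate_task(difficulty=5, performance_score=0.85)
print(level, mode)  # 6 negative
```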
Decentralized Planning for Autonomous Agents Cooperating in Complex Missions
2010-09-01
Consensus-based decentralized auctions for robust task allocation," IEEE Transactions on Robotics... Robotics, vol. 24, pp. 209-222, 2006. [44] H.-L. Choi, L. Brunet, and J. P. How, "Consensus-based decentralized auctions for robust task allocation" ...2003. [31] L. Brunet, "Consensus-Based Auctions for Decentralized Task Assignment," Master's thesis, Dept.
Normand, Jean-Marie; Sanchez-Vives, Maria V.; Waechter, Christian; Giannopoulos, Elias; Grosswindhager, Bernhard; Spanlang, Bernhard; Guger, Christoph; Klinker, Gudrun; Srinivasan, Mandayam A.; Slater, Mel
2012-01-01
Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human’s movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale. PMID:23118987
An orbital emulator for pursuit-evasion game theoretic sensor management
NASA Astrophysics Data System (ADS)
Shen, Dan; Wang, Tao; Wang, Gang; Jia, Bin; Wang, Zhonghai; Chen, Genshe; Blasch, Erik; Pham, Khanh
2017-05-01
This paper develops and evaluates an orbital emulator (OE) for space situational awareness (SSA). The OE can produce 3D satellite movements using capabilities generated from omni-wheeled robot and robotic arm motion methods. The 3D motion of a satellite is partitioned into movements in the equatorial plane and up-down motions in the vertical plane. The equatorial-plane movements are emulated by omni-wheeled robots, while the up-down motions are performed by a stepped-motor-controlled ball along a rod (robotic arm) attached to each robot. For multiple satellites, a fast map-merging algorithm is integrated into the robot operating system (ROS) and simultaneous localization and mapping (SLAM) routines to locate the multiple robots in the scene. The OE is used to demonstrate a pursuit-evasion (PE) game theoretic sensor management algorithm, which models conflicts between a space-based-visible (SBV) satellite (as pursuer) and a geosynchronous (GEO) satellite (as evader). The cost function of the PE game is based on the informational entropy of the SBV-tracking-GEO scenario. The GEO satellite can maneuver using continuous low thrust. The hardware-in-the-loop space emulator visually illustrates the SSA problem solution based on the PE game.
Elhage, Oussama; Challacombe, Ben; Shortland, Adam; Dasgupta, Prokar
2015-02-01
To evaluate, in a simulated suturing task, individual surgeons' performance using three surgical approaches: open, laparoscopic and robot-assisted. Subjects and methods: Six urological surgeons made an in vitro simulated vesico-urethral anastomosis. All surgeons performed the simulated suturing task using all three surgical approaches (open, laparoscopic and robot-assisted). The time taken to perform each task was recorded. Participants were evaluated for perceived discomfort using the self-reporting Borg scale. Errors made by surgeons were quantified by studying the video recordings of the tasks. Anastomosis quality was quantified using scores for knot security, symmetry of suture, position of suture and apposition of anastomosis. The time taken to complete the task by the laparoscopic approach was on average 221 s, compared with 55 s for the open approach and 116 s for the robot-assisted approach (ANOVA, P < 0.005). The number of errors and the level of self-reported discomfort were highest for the laparoscopic approach (ANOVA, P < 0.005). Limitations of the present study include the small sample size and variation in the prior surgical experience of the participants. In an in vitro model of anastomosis surgery, robot-assisted surgery combines the accuracy of open surgery with less surgeon discomfort than laparoscopy, while maintaining minimal access.
Testing for Instrument Deployment by InSight Robotic Arm
2015-03-04
In the weeks after NASA's InSight mission reaches Mars in September 2016, the lander's arm will lift two key science instruments off the deck and place them onto the ground. This image shows testing of InSight's robotic arm inside a clean room at NASA's Jet Propulsion Laboratory, Pasadena, California, about two years before it will perform these tasks on Mars. InSight -- an acronym for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport -- will launch in March 2016. It will study the interior of Mars to improve understanding of the processes that formed and shaped rocky planets, including Earth. One key instrument that the arm will deploy is the Seismic Experiment for Interior Structure, or SEIS. It is from France's national space agency (CNES), with components from Germany, Switzerland, the United Kingdom and the United States. In this scene, the arm has just deployed a test model of a protective covering for SEIS, the instrument's wind and thermal shield. The shield's purpose is to lessen disturbances that weather would cause to readings from the sensitive seismometer. Note: After thorough examination, NASA managers have decided to suspend the planned March 2016 launch of the Interior Exploration using Seismic Investigations Geodesy and Heat Transport (InSight) mission. The decision follows unsuccessful attempts to repair a leak in a section of the prime instrument in the science payload. http://photojournal.jpl.nasa.gov/catalog/PIA19144
A knowledge-based machine vision system for space station automation
NASA Technical Reports Server (NTRS)
Chipman, Laure J.; Ranganath, H. S.
1989-01-01
A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.
Telemanipulator design and optimization software
NASA Astrophysics Data System (ADS)
Cote, Jean; Pelletier, Michel
1995-12-01
For many years, industrial robots have been used to execute specific repetitive tasks. In those cases, the optimal configuration and location of the manipulator only have to be found once. The optimal configuration or position was often found empirically according to the tasks to be performed. In telemanipulation, the nature of the tasks to be executed is much wider and can be very demanding in terms of dexterity and workspace. The position/orientation of the robot's base could be required to move during the execution of a task. At present, the choice of the initial position of the teleoperator is usually found empirically, which can be sufficient in the case of an easy or repetitive task. In the converse situation, the amount of time wasted moving the teleoperator support platform has to be taken into account during the execution of the task. Automatic optimization of the position/orientation of the platform or a better designed robot configuration could minimize these movements and save time. This paper will present two algorithms. The first algorithm is used to optimize the position and orientation of a given manipulator (or manipulators) with respect to the environment in which a task has to be executed. The second algorithm is used to optimize the position or the kinematic configuration of a robot. For this purpose, the tasks to be executed are digitized using a position/orientation measurement system and a compact representation based on special octrees. Given a digitized task, the optimal position or Denavit-Hartenberg configuration of the manipulator can be obtained numerically. Constraints on the robot design can also be taken into account. A graphical interface has been designed to facilitate the use of the two optimization algorithms.
Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne
2012-01-01
Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315
Fault detection and fault tolerance in robotics
NASA Technical Reports Server (NTRS)
Visinsky, Monica; Walker, Ian D.; Cavallaro, Joseph R.
1992-01-01
Robots are used in inaccessible or hazardous environments in order to alleviate some of the time, cost and risk involved in preparing men to endure these conditions. In order to perform their expected tasks, the robots are often quite complex, thus increasing their potential for failures. If men must be sent into these environments to repair each component failure in the robot, the advantages of using the robot are quickly lost. Fault tolerant robots are needed which can effectively cope with failures and continue their tasks until repairs can be realistically scheduled. Before fault tolerant capabilities can be created, methods of detecting and pinpointing failures must be perfected. This paper develops a basic fault tree analysis of a robot in order to obtain a better understanding of where failures can occur and how they contribute to other failures in the robot. The resulting failure flow chart can also be used to analyze the resiliency of the robot in the presence of specific faults. By simulating robot failures and fault detection schemes, the problems involved in detecting failures for robots are explored in more depth.
Exhaustive geographic search with mobile robots along space-filling curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spires, S.V.; Goldsmith, S.Y.
1998-03-01
Swarms of mobile robots can be tasked with searching a geographic region for targets of interest, such as buried land mines. The authors assume that the individual robots are equipped with sensors tuned to the targets of interest, that these sensors have limited range, and that the robots can communicate with one another to enable cooperation. How can a swarm of cooperating sensate robots efficiently search a given geographic region for targets in the absence of a priori information about the targets' locations? Many of the obvious approaches are inefficient or lack robustness. One efficient approach is to have the robots traverse a space-filling curve. For many geographic search applications, this method is energy-frugal, highly robust, and provides guaranteed coverage in a finite time that decreases as the reciprocal of the number of robots sharing the search task. Furthermore, it minimizes the amount of robot-to-robot communication needed for the robots to organize their movements. This report presents some preliminary results from applying the Hilbert space-filling curve to geographic search by mobile robots.
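For illustration, a sketch of the standard Hilbert-curve index-to-coordinate conversion on which such a coverage path can be built (the report's own implementation is not shown in the abstract; the partitioning of the index range among robots is an assumption here):

```python
# Sketch: waypoints for one robot following a Hilbert space-filling curve.
# Uses the standard distance-to-coordinate conversion; robot k of m could
# be assigned the contiguous index range [k*n*n//m, (k+1)*n*n//m).

def hilbert_d2xy(order, d):
    """Convert index d along a Hilbert curve covering a 2**order x 2**order
    grid into (x, y) cell coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                     # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

path = [hilbert_d2xy(3, d) for d in range(64)]   # full 8x8 grid coverage
print(path[:4])  # [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Consecutive indices always map to adjacent cells, which is what makes the traversal energy-frugal and communication-light.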
ERIC Educational Resources Information Center
Hull, Daniel M.; Lovett, James E.
The six new robotics and automated systems specialty courses developed by the Robotics/Automated Systems Technician (RAST) project are described in this publication. Course titles are Fundamentals of Robotics and Automated Systems, Automated Systems and Support Components, Controllers for Robots and Automated Systems, Robotics and Automated…
Tools to Perform Local Dense 3D Reconstruction of Shallow Water Seabed
Avanthey, Loïca; Beaudoin, Laurent; Gademer, Antoine; Roux, Michel
2016-01-01
Tasks such as distinguishing or identifying individual objects of interest require the production of dense local clouds at the scale of these individual objects of interest. Due to the physical and dynamic properties of an underwater environment, the usual dense matching algorithms must be rethought in order to be adaptive. These properties also imply that the scene must be observed at close range. Classic robotized acquisition systems are oversized for local studies in shallow water while the systematic acquisition of data is not guaranteed with divers. We address these two major issues through a multidisciplinary approach. To efficiently acquire on-demand stereoscopic pairs using simple logistics in small areas of shallow water, we devised an agile light-weight dedicated system which is easy to reproduce. To densely match two views in a reliable way, we devised a reconstruction algorithm that automatically accounts for the dynamics, variability and light absorption of the underwater environment. Field experiments in the Mediterranean Sea were used to assess the results. PMID:27196913
Pyramidal neurovision architecture for vision machines
NASA Astrophysics Data System (ADS)
Gupta, Madan M.; Knopf, George K.
1993-08-01
The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
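As a rough illustration of the converging multilayered idea, the following numpy sketch builds such a pyramid; plain 2x2 block averaging is an assumed stand-in for the paper's nonlinear parallel processors:

```python
# Sketch of a converging image pyramid: each level downsamples its input,
# so higher levels see coarser, more abstract views of the scene.
import numpy as np

def build_pyramid(image, levels):
    """Return a list of images, halving resolution at each level by 2x2
    block averaging."""
    pyramid = [image]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape
        blocks = pyramid[-1][: h // 2 * 2, : w // 2 * 2]
        coarser = blocks.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarser)
    return pyramid

scene = np.random.rand(64, 64)
for level in build_pyramid(scene, 4):
    print(level.shape)   # (64, 64), (32, 32), (16, 16), (8, 8)
```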
Understanding and representing natural language meaning
NASA Astrophysics Data System (ADS)
Waltz, D. L.; Maran, L. R.; Dorfman, M. H.; Dinitz, R.; Farwell, D.
1982-12-01
During this contract period the authors have: (1) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (2) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (3) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (4) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (5) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (6) constructed a general model for the representation of tense and aspect of verbs; (7) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.
A three-finger multisensory hand for dexterous space robotic tasks
NASA Technical Reports Server (NTRS)
Murase, Yuichi; Komada, Satoru; Uchiyama, Takashi; Machida, Kazuo; Akita, Kenzo
1994-01-01
The National Space Development Agency of Japan will launch ETS-7 in 1997 as a test bed for next-generation space technologies: rendezvous and docking (RV&D) and space robotics. MITI has been developing a three-finger multisensory hand for complex space robotic tasks. The hand can be operated under remote control or autonomously. This paper describes the design and development of the hand and the performance of a breadboard model.
Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies
2006-07-01
and the use of lightweight portable robotic sensor platforms. ...robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors. Active search: Search tasks that involve mobility
A graph theoretic approach to scene matching
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1991-01-01
The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
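A minimal sketch of the association-graph formulation (illustrative weights and toy region/object names; the paper additionally refines node weights by fuzzy relaxation before scoring cliques):

```python
# Sketch of inexact scene matching via an association graph.
from itertools import combinations

# Node = (observed region, stored object) with a merit weight.
nodes = {("r1", "door"): 0.9, ("r1", "window"): 0.3,
         ("r2", "window"): 0.8, ("r3", "table"): 0.7}
# Arc weight = compatibility of two mappings (e.g., similar pairwise
# angle/distance relations); an absent pair is incompatible.
arcs = {(("r1", "door"), ("r2", "window")): 0.9,
        (("r1", "door"), ("r3", "table")): 0.6,
        (("r2", "window"), ("r3", "table")): 0.7}

def compatible(a, b):
    return (a, b) in arcs or (b, a) in arcs

def best_clique(nodes, arcs):
    """Brute-force the max-weight fully compatible set of mappings."""
    best, best_score = (), 0.0
    items = list(nodes)
    for k in range(1, len(items) + 1):
        for subset in combinations(items, k):
            regions = [n[0] for n in subset]
            objects = [n[1] for n in subset]
            # each region and object may be mapped at most once
            if len(set(regions)) < k or len(set(objects)) < k:
                continue
            if all(compatible(a, b) for a, b in combinations(subset, 2)):
                score = sum(nodes[n] for n in subset) + sum(
                    arcs.get((a, b), arcs.get((b, a), 0.0))
                    for a, b in combinations(subset, 2))
                if score > best_score:
                    best, best_score = subset, score
    return best, best_score

print(best_clique(nodes, arcs))  # the three mutually compatible mappings win
```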
A Mobile Robot for Locomotion Through a 3D Periodic Lattice Environment
NASA Technical Reports Server (NTRS)
Jenett, Benjamin; Cellucci, Daniel; Cheung, Kenneth
2017-01-01
This paper describes a novel class of robots specifically adapted to climb periodic lattices, which we call 'Relative Robots'. These robots use the regularity of the structure to simplify the path planning, align with minimal feedback, and reduce the number of degrees of freedom (DOF) required to locomote. They can perform vital inspection and repair tasks within the structure that larger truss construction robots could not perform without modifying the structure. We detail a specific type of relative robot designed to traverse a cuboctahedral (CubOct) cellular solids lattice, show how the symmetries of the lattice simplify the design, and test these design methodologies with a CubOct relative robot that traverses a 76.2 mm (3 in.) pitch lattice, MOJO (Multi-Objective JOurneying robot). We perform three locomotion tasks with MOJO: vertical climbing, horizontal climbing, and turning, and find that, due to changes in the orientation of the robot relative to the gravity vector, the success rate of vertical and horizontal climbing is significantly different.
Carpinella, Ilaria; Cattaneo, Davide; Bertoni, Rita; Ferrarin, Maurizio
2012-05-01
In this pilot study, we compared two protocols for robot-based rehabilitation of the upper limb in multiple sclerosis (MS): a protocol involving reaching tasks (RT) requiring arm transport only, and a protocol requiring both reaching and manipulation of objects (RMT). Twenty-two MS subjects were assigned to the RT or RMT group. Both protocols consisted of eight sessions. During RT training, subjects moved the handle of a planar robotic manipulandum toward circular targets displayed on a screen. The RMT protocol required patients to reach and manipulate real objects by moving the robotic arm equipped with a handle which left the hand free for distal tasks. In both trainings, the robot generated resistive and perturbing forces. Subjects were evaluated with clinical and instrumental tests. The results confirmed that MS patients maintained the ability to adapt to the robot-generated forces and that the rate of motor learning increased across sessions. Robot therapy significantly reduced arm tremor and improved arm kinematics and functional ability. Compared to RT, the RMT protocol induced a significantly larger improvement in movements involving grasp (improvement in Grasp ARAT sub-score: RMT 77.4%, RT 29.5%, p=0.035) but not precision grip. Future studies are needed to evaluate whether longer trainings and the use of robotic handles would also significantly improve fine manipulation.
Neural activation and memory for natural scenes: Explicit and spontaneous retrieval.
Weymar, Mathias; Bradley, Margaret M; Sege, Christopher T; Lang, Peter J
2018-05-06
Stimulus repetition elicits either enhancement or suppression in neural activity, and a recent fMRI meta-analysis of repetition effects for visual stimuli (Kim, 2017) reported cross-stimulus repetition enhancement in medial and lateral parietal cortex, as well as regions of prefrontal, temporal, and posterior cingulate cortex. Repetition enhancement was assessed here for repeated and novel scenes presented in the context of either an explicit episodic recognition task or an implicit judgment task, in order to study the role of spontaneous retrieval of episodic memories. Regardless of whether episodic memory was explicitly probed or not, repetition enhancement was found in medial posterior parietal (precuneus/cuneus) and lateral parietal cortex (angular gyrus), as well as in medial prefrontal cortex (frontopolar), and did not differ by task. Enhancement effects in the posterior cingulate cortex were significantly larger during the explicit task than during the implicit task, primarily due to a lack of functional activity for new scenes. Taken together, the data are consistent with an interpretation that medial and (ventral) lateral parietal cortex are associated with spontaneous episodic retrieval, whereas posterior cingulate cortical regions may reflect task or decision processes. © 2018 Society for Psychophysiological Research.
Emergency response nurse scheduling with medical support robot by multi-agent and fuzzy technique.
Kono, Shinya; Kitamura, Akira
2015-08-01
In this paper, a new cooperative re-scheduling method is described for medical support tasks whose time of occurrence cannot be predicted, assuming that a robot can cooperate in medical activities with the nurse. Here, a Multi-Agent-System (MAS) is used for the cooperative re-scheduling, in which a Fuzzy-Contract-Net (FCN) is applied to the robots' task assignment for the emergency tasks. Simulation results confirm that the re-scheduling produced by the proposed method can maintain patient satisfaction and decrease the workload of the nurse.
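A minimal sketch of how a fuzzy contract-net award might work (hypothetical membership functions, weights, and agent names; the paper's fuzzy rules encode nurse workload and patient satisfaction in more detail):

```python
# Sketch of fuzzy contract-net task assignment for an emergency task.

def fuzzy_bid(distance_m, current_workload, skill_match):
    """Each agent (nurse or robot) scores its own suitability in [0, 1]."""
    near = max(0.0, 1.0 - distance_m / 50.0)      # closer is better
    free = 1.0 - current_workload                 # idle agents bid higher
    return min(near, free, skill_match)           # fuzzy AND (min t-norm)

agents = {
    "nurse_A": fuzzy_bid(distance_m=10, current_workload=0.8, skill_match=1.0),
    "nurse_B": fuzzy_bid(distance_m=40, current_workload=0.2, skill_match=1.0),
    "robot_1": fuzzy_bid(distance_m=5, current_workload=0.1, skill_match=0.6),
}
# The manager awards the emergency task to the highest bidder.
winner = max(agents, key=agents.get)
print(winner, agents)   # robot_1 wins here: close, idle, adequately skilled
```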
Falotico, Egidio; Vannucci, Lorenzo; Ambrosano, Alessandro; Albanese, Ugo; Ulbrich, Stefan; Vasquez Tieck, Juan Camilo; Hinkel, Georg; Kaiser, Jacques; Peric, Igor; Denninger, Oliver; Cauli, Nino; Kirtay, Murat; Roennau, Arne; Klinker, Gudrun; Von Arnim, Axel; Guyot, Luc; Peppicelli, Daniel; Martínez-Cañada, Pablo; Ros, Eduardo; Maier, Patrick; Weber, Sandro; Huber, Manuel; Plecher, David; Röhrbein, Florian; Deser, Stefan; Roitberg, Alina; van der Smagt, Patrick; Dillman, Rüdiger; Levi, Paul; Laschi, Cecilia; Knoll, Alois C.; Gewaltig, Marc-Oliver
2017-01-01
Combined efforts in the fields of neuroscience, computer science, and biology have made it possible to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models, which at the current stage cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows one to easily establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform, developed in subproject 10 “Neurorobotics” of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow an assessment of the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments. PMID:28179882
Stefanidis, Dimitrios; Wang, Fikre; Korndorffer, James R; Dunne, J Bruce; Scott, Daniel J
2010-02-01
Intracorporeal suturing is one of the most difficult laparoscopic tasks. The purpose of this study was to assess the impact of robotic assistance on novice suturing performance, safety, and workload in the operating room. Medical students (n = 34), without prior laparoscopic suturing experience, were enrolled in an Institutional Review Board-approved, randomized protocol. After viewing an instructional video, subjects were tested in intracorporeal suturing on two identical, live, porcine Nissen fundoplication models; they placed three gastro-gastric sutures using conventional laparoscopic instruments in one model and using robotic assistance (da Vinci) in the other, in random order. Each knot was objectively scored based on time, accuracy, and security. Injuries to surrounding structures were recorded. Workload was assessed using the validated National Aeronautics and Space Administration (NASA) task load index (TLX) questionnaire, which measures the subjects' self-reported performance, effort, frustration, and mental, physical, and temporal demands of the task. Analysis was by paired t-test; p < 0.05 was considered significant. Compared with laparoscopy, robotic assistance enabled subjects to suture faster (595 +/- 22 s versus 459 +/- 137 s, respectively; p < 0.001), achieve higher overall scores (0 +/- 1 versus 95 +/- 128, respectively; p < 0.001), and commit fewer errors per knot (1.15 +/- 1.35 versus 0.05 +/- 0.26, respectively; p < 0.001). Subjects' overall score did not improve between the first and third attempt for laparoscopic suturing (0 +/- 0 versus 0 +/- 0; p = NS) but improved significantly for robotic suturing (49 +/- 100 versus 141 +/- 152; p < 0.001). Moreover, subjects indicated on the NASA-TLX scale that the task was more difficult to perform with laparoscopic instruments compared with robotic assistance (99 +/- 15 versus 57 +/- 23; p < 0.001). Compared with standard laparoscopy, robotic assistance significantly improved intracorporeal suturing performance and safety of novices in the operating room while decreasing their workload. Moreover, the robot significantly shortened the learning curve of this difficult task. Further study is needed to assess the value of robotic assistance for experienced surgeons, and validated robotic training curricula need to be developed.
A fault-tolerant intelligent robotic control system
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Tso, Kam Sing
1993-01-01
This paper describes the concept, design, and features of a fault-tolerant intelligent robotic control system being developed for space and commercial applications that require high dependability. The comprehensive strategy integrates system level hardware/software fault tolerance with task level handling of uncertainties and unexpected events for robotic control. The underlying architecture for system level fault tolerance is the distributed recovery block which protects against application software, system software, hardware, and network failures. Task level fault tolerance provisions are implemented in a knowledge-based system which utilizes advanced automation techniques such as rule-based and model-based reasoning to monitor, diagnose, and recover from unexpected events. The two level design provides tolerance of two or more faults occurring serially at any level of command, control, sensing, or actuation. The potential benefits of such a fault tolerant robotic control system include: (1) a minimized potential for damage to humans, the work site, and the robot itself; (2) continuous operation with a minimum of uncommanded motion in the presence of failures; and (3) more reliable autonomous operation providing increased efficiency in the execution of robotic tasks and decreased demand on human operators for controlling and monitoring the robotic servicing routines.
Gácsi, Márta; Szakadát, Sára; Miklósi, Adám
2013-01-01
These studies are part of a project aiming to reveal relevant aspects of human-dog interactions, which could serve as a model for designing successful human-robot interactions. Presently there are no successfully commercialized assistance robots; however, assistance dogs work efficiently as partners for persons with disabilities. In Study 1, we analyzed the cooperation of 32 assistance dog-owner dyads performing a carrying task. We revealed typical behavior sequences and also differences depending on the dyads' experience and on whether the owner was a wheelchair user. In Study 2, we investigated dogs' responses to unforeseen difficulties during a retrieving task in two contexts. Dogs displayed specific communicative and displacement behaviors, and a strong commitment to executing the insoluble task. Questionnaire data from Study 3 confirmed that these behaviors could successfully attenuate owners' disappointment. Although owners anticipated the technical competence of future assistance robots to be moderate to high, they could not imagine robots as emotional companions, which negatively affected their acceptance ratings of future robotic assistants. We propose that assistance dogs' cooperative behaviors and problem-solving strategies should inspire the development of the relevant functions and social behaviors of assistance robots with limited manual and verbal skills.
Off-line simulation inspires insight: A neurodynamics approach to efficient robot task learning.
Sousa, Emanuel; Erlhagen, Wolfram; Ferreira, Flora; Bicho, Estela
2015-12-01
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner. Copyright © 2015 Elsevier Ltd. All rights reserved.
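The basic building block of such a model is the dynamic neural field itself. Below is a minimal numpy sketch of a 1D Amari-type field with illustrative parameters (not the paper's ARoS model), showing how a localized input drives a self-stabilized activation peak of the kind used to represent subtasks:

```python
# Sketch of a 1D dynamic neural field (Amari-type) update rule:
# tau * du/dt = -u + h + s(x) + integral of w(x - x') f(u(x')) dx'
import numpy as np

x = np.linspace(-10, 10, 201)
u = -1.0 * np.ones_like(x)                 # field activation at resting level
h, dt, tau = -1.0, 0.05, 1.0

def w(d, a_exc=2.0, s_exc=1.0, g_inh=0.5):
    """Lateral interaction kernel: local excitation, global inhibition."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - g_inh

D = np.abs(x[:, None] - x[None, :])        # pairwise distances
W = w(D) * (x[1] - x[0])                   # discretized kernel

for step in range(400):
    s = np.exp(-x**2 / 2.0)                # localized external input
    f = 1.0 / (1.0 + np.exp(-4 * u))       # sigmoidal firing rate
    u += dt / tau * (-u + h + s + W @ f)

print(float(u.max()))  # a self-stabilized activation peak near x = 0
```

In the described architecture, fast activation-based traces of this kind hold a demonstrated sequence, while slower weight-based learning during off-line re-activation consolidates the longer-term subtask associations.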
HOPIS: Hybrid Omnidirectional and Perspective Imaging System for Mobile Robots
Lin, Huei-Yung; Wang, Min-Liang
2014-01-01
In this paper, we present a framework for the hybrid omnidirectional and perspective robot vision system. Based on the hybrid imaging geometry, a generalized stereo approach is developed via the construction of virtual cameras. It is then used to rectify the hybrid image pair using the perspective projection model. The proposed method not only simplifies the computation of epipolar geometry for the hybrid imaging system, but also facilitates the stereo matching between the heterogeneous image formation. Experimental results for both the synthetic data and real scene images have demonstrated the feasibility of our approach. PMID:25192317
Significance of perceptually relevant image decolorization for scene classification
NASA Astrophysics Data System (ADS)
Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl
2017-11-01
Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information into the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed, using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results, based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets), show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The contribution of the chrominance information retained by the proposed image decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.
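As a toy illustration of chrominance-aware decolorization (a crude stand-in using an SVD-derived channel weighting; this is not the authors' C2G-SSIM-based algorithm):

```python
# Toy decolorization sketch: channel weights are taken from the leading
# right-singular vector of the pixel matrix, so the grayscale axis follows
# the direction of greatest color variation instead of a fixed luminance
# formula. Only a rough stand-in for the paper's method.
import numpy as np

def decolorize(rgb):
    """rgb: (H, W, 3) float array in [0, 1] -> (H, W) grayscale."""
    pixels = rgb.reshape(-1, 3)
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    weights = np.abs(vt[0])
    weights = weights / weights.sum()       # convex channel combination
    return (pixels @ weights).reshape(rgb.shape[:2])

img = np.random.rand(32, 32, 3)
print(decolorize(img).shape)  # (32, 32)
```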
Robust colour constancy in red-green dichromats
Álvaro, Leticia; Linhares, João M. M.; Moreira, Humberto; Lillo, Julio; Nascimento, Sérgio M. C.
2017-01-01
Colour discrimination has been widely studied in red-green (R-G) dichromats but the extent to which their colour constancy is affected remains unclear. This work estimated the extent of colour constancy for four normal trichromatic observers and seven R-G dichromats when viewing natural scenes under simulated daylight illuminants. Hyperspectral imaging data from natural scenes were used to generate the stimuli on a calibrated CRT display. In experiment 1, observers viewed a reference scene illuminated by daylight with a correlated colour temperature (CCT) of 6700K; observers then viewed sequentially two versions of the same scene, one illuminated by either a higher or lower CCT (condition 1, pure CCT change with constant luminance) or a higher or lower average luminance (condition 2, pure luminance change with a constant CCT). The observers’ task was to identify the version of the scene that looked different from the reference scene. Thresholds for detecting a pure CCT change or a pure luminance change were estimated, and it was found that those for R-G dichromats were marginally higher than for normal trichromats regarding CCT. In experiment 2, observers viewed sequentially a reference scene and a comparison scene with a CCT change or a luminance change above threshold for each observer. The observers’ task was to identify whether or not the change was an intensity change. No significant differences were found between the responses of normal trichromats and dichromats. These data suggest robust colour constancy mechanisms along daylight locus in R-G dichromacy. PMID:28662218
Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.
Andrews, T J; Coppola, D M
1999-08-01
Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.
Simut, Ramona E; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram
2016-01-01
Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included using a repeated-measurements design. It was investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task. It was also examined whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. Interaction of the children with the two partners did not differ, apart from eye contact: participants made more eye contact with the social robot than with the human. The conditions did not differ regarding the interaction elicited with the human accompanying the child.
Wong, Yu-Tung; Finley, Charles C; Giallo, Joseph F; Buckmire, Robert A
2011-08-01
To introduce a novel method of combining robotics and the CO(2) laser micromanipulator to provide excellent precision and performance repeatability designed for surgical applications. Pilot feasibility study. We developed a portable robotic controller that appends to a standard CO(2) laser micromanipulator. The robotic accuracy and laser beam path repeatability were compared to six experienced users of the industry standard micromanipulator performing the same simulated surgical tasks. Helium-neon laser beam video tracking techniques were employed. The robotic controller demonstrated superiority over experienced human manual micromanipulator control in accuracy (laser path within 1 mm of idealized centerline), 97.42% (standard deviation [SD] 2.65%), versus 85.11% (SD 14.51%), P = .018; and laser beam path repeatability (area of laser path divergence on successive trials), 21.42 mm(2) (SD 4.35 mm(2) ) versus 65.84 mm(2) (SD 11.93 mm(2) ), P = .006. Robotic micromanipulator control enhances accuracy and repeatability for specific laser tasks. Computerized control opens opportunity for alternative user interfaces and additional safety features. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Lima, José; Pereira, Ana I.; Costa, Paulo; Pinto, Andry; Costa, Pedro
2017-07-01
This paper describes an optimization procedure for a robot with 12 degrees of freedom that avoids the inverse kinematics problem, which is a hard task for this type of robot manipulator. The robot can be used for pick-and-place tasks in complex designs. By combining an accurate and fast direct kinematics model with optimization strategies, it is possible to obtain the joint angles for a desired end-effector position and orientation. The stretched simulated annealing algorithm and a genetic algorithm were used as optimization methods. The solutions found were validated using data originated by a real robot and by a simulated robot formed by 12 servomotors with a gripper.
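A minimal sketch of the idea, with a placeholder 2-link planar forward model assumed in place of the paper's 12-DOF chain and illustrative annealing parameters:

```python
# Sketch: avoid closed-form inverse kinematics by searching joint space
# with simulated annealing against a fast forward-kinematics model.
import math, random

def forward(q, l1=1.0, l2=1.0):
    """Planar 2-link forward kinematics -> end-effector (x, y)."""
    return (l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1]),
            l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1]))

def cost(q, target):
    x, y = forward(q)
    return math.hypot(x - target[0], y - target[1])

def anneal(target, steps=20000, t0=1.0):
    q = [0.0, 0.0]
    best, best_c = list(q), cost(q, target)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-6            # cooling schedule
        cand = [qi + random.gauss(0, 0.1) for qi in q]
        dc = cost(cand, target) - cost(q, target)
        if dc < 0 or random.random() < math.exp(-dc / t):
            q = cand                               # accept move
            if cost(q, target) < best_c:
                best, best_c = list(q), cost(q, target)
    return best, best_c

q, err = anneal(target=(1.2, 0.8))
print([round(a, 3) for a in q], round(err, 4))  # err should be near 0
```

The same scheme scales to 12 joints by widening the candidate vector; orientation error would be added to the cost alongside position error.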
Robotic experiment with a force reflecting handcontroller onboard MIR space station
NASA Technical Reports Server (NTRS)
Delpech, M.; Matzakis, Y.
1994-01-01
During the French CASSIOPEE mission that will fly onboard MIR space station in 1996, ergonomic evaluations of a force reflecting handcontroller will be performed on a simulated robotic task. This handcontroller is a part of the COGNILAB payload that will be used also for experiments in neurophysiology. The purpose of the robotic experiment is the validation of a new control and design concept that would enhance the task performances for telemanipulating space robots. Besides the handcontroller and its control unit, the experimental system includes a simulator of the slave robot dynamics for both free and constrained motions, a flat display screen and a seat with special fixtures for holding the astronaut.
NASA Astrophysics Data System (ADS)
Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.
1993-05-01
This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
The navigation system of the JPL robot
NASA Technical Reports Server (NTRS)
Thompson, A. M.
1977-01-01
The control structure of the JPL research robot and the operations of the navigation subsystem are discussed. The robot functions as a network of interacting concurrent processes distributed among several computers and coordinated by a central executive. The results of scene analysis are used to create a segmented terrain model in which surface regions are classified by traversability. The model is used by a path planning algorithm, PATH, which uses tree search methods to find the optimal path to a goal. In PATH, the search space is defined dynamically as a consequence of node testing. Maze-solving and the use of an associative database for context-dependent node generation are also discussed. Execution of a planned path is accomplished by a feedback guidance process with automatic error recovery.
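A compact sketch in the spirit of PATH as described above: best-first tree search over a terrain model whose regions carry traversability classes. The toy cost map, grid, and Manhattan heuristic are stand-ins, not JPL's implementation.

```python
import heapq

COST = {".": 1.0, "~": 3.0, "#": float("inf")}  # clear, rough, untraversable
GRID = ["....~",
        ".##.~",
        ".#..~",
        "...#.",
        "~~..."]

def plan(start, goal):
    """A*-style search; returns total traversal cost to goal (None if unreachable)."""
    rows, cols = len(GRID), len(GRID[0])
    frontier = [(0.0, 0.0, start)]   # (f = g + h, g, cell)
    best = {start: 0.0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = COST[GRID[nr][nc]]
                ng = g + step
                if step != float("inf") and ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # admissible heuristic
                    heapq.heappush(frontier, (ng + h, ng, (nr, nc)))
    return None

print(plan((0, 0), (4, 4)))  # -> 8.0 on this toy map
```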
Experiments in teleoperator and autonomous control of space robotic vehicles
NASA Technical Reports Server (NTRS)
Alexander, Harold L.
1991-01-01
A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.
Information-Driven Autonomous Exploration for a Vision-Based MAV
NASA Astrophysics Data System (ADS)
Palazzolo, E.; Stachniss, C.
2017-08-01
Most micro aerial vehicles (MAVs) are flown manually by a pilot. When it comes to autonomous exploration for MAVs equipped with cameras, we need a good exploration strategy for covering an unknown 3D environment in order to build an accurate map of the scene. In particular, the robot must select appropriate viewpoints to acquire informative measurements. In this paper, we present an approach that computes, in real time, a smooth flight path for the exploration of a 3D environment using a vision-based MAV. We assume a known bounding box of the object or building to explore, and our approach iteratively computes the next best viewpoints using a utility function that considers the expected information gain of new measurements, the distance between viewpoints, and the smoothness of the flight trajectories. In addition, the algorithm takes into account the elapsed time of the exploration run to safely land the MAV at its starting point after a user-specified time. We implemented our algorithm, and our experiments suggest that it allows for a precise reconstruction of the 3D environment while guiding the robot smoothly through the scene.
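The utility function described above can be sketched as a weighted trade-off between expected information gain, travel distance, and turning effort. The weights, the candidate dictionary layout, and the planar geometry below are assumptions for illustration.

```python
import numpy as np

def utility(candidate, current_pos, prev_heading,
            w_gain=1.0, w_dist=0.3, w_smooth=0.2):
    """Score one candidate viewpoint: gain minus travel and turning costs."""
    delta = np.asarray(candidate["pos"], float) - current_pos
    dist = np.linalg.norm(delta)
    heading = np.arctan2(delta[1], delta[0])
    turn = abs(np.arctan2(np.sin(heading - prev_heading),
                          np.cos(heading - prev_heading)))
    return (w_gain * candidate["expected_gain"]
            - w_dist * dist
            - w_smooth * turn)

candidates = [{"pos": (2.0, 0.0), "expected_gain": 5.0},
              {"pos": (0.5, 0.5), "expected_gain": 3.0}]
current = np.array([0.0, 0.0])
best = max(candidates, key=lambda c: utility(c, current, prev_heading=0.0))
print(best["pos"])
```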
Task decomposition for a multilimbed robot to work in reachable but unorientable space
NASA Technical Reports Server (NTRS)
Su, Chau; Zheng, Yuan F.
1991-01-01
Robot manipulators installed on legged mobile platforms are suggested for enlarging robot workspace. To plan the motion of such a system, the arm-platform motion coordination problem is raised, and a task decomposition is proposed to solve the problem. A given task described by the destination position and orientation of the end effector is decomposed into subtasks for arm manipulation and for platform configuration, respectively. The former is defined as the end-effector position and orientation with respect to the platform, and the latter as the platform position and orientation in the base coordinates. Three approaches are proposed for the task decomposition. The approaches are also evaluated in terms of the displacements, from which an optimal approach can be selected.
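The proposed decomposition can be written as a composition of homogeneous transforms: once a platform configuration is chosen, the arm subtask is simply the end-effector goal re-expressed in the platform frame. The planar SE(2) setting and the example poses below are illustrative assumptions.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

T_base_goal = se2(2.0, 1.0, np.pi / 2)        # desired end-effector pose (base frame)
T_base_platform = se2(1.5, 0.8, np.pi / 4)    # chosen platform configuration
# Arm subtask: the end-effector goal expressed relative to the platform.
T_platform_goal = np.linalg.inv(T_base_platform) @ T_base_goal
print(T_platform_goal)
```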
Urologic robots and future directions
Mozer, Pierre; Troccaz, Jocelyne; Stoianovici, Dan
2009-01-01
Purpose of review: Robot-assisted laparoscopic surgery in urology has gained immense popularity with the da Vinci system, but many research teams are working on new robots. The purpose of this paper is to review current urologic robots and present future development directions. Recent findings: Future systems are expected to advance in two directions: improvements of remote manipulation robots and developments of image-guided robots. Summary: The final goal of robots is to allow safer and more homogeneous outcomes with less variability of surgeon performance, as well as new tools to perform tasks based on medical transcutaneous imaging, in a less invasive way, at lower costs. Expected improvements for remote systems include augmented reality, haptic feedback, size reduction, and development of new tools for NOTES surgery. The paradigm of image-guided robots is close to clinical availability, and the most advanced robots are presented with end-user technical assessments. It is also notable that the potential of robots lies much further ahead than the accomplishments of the da Vinci system. The integration of imaging with robotics holds substantial promise, because it can accomplish tasks otherwise impossible. Image-guided robots have the potential to offer a paradigm shift. PMID:19057227
Knowledge-based machine vision systems for space station automation
NASA Technical Reports Server (NTRS)
Ranganath, Heggere S.; Chipman, Laure J.
1989-01-01
Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.
DspaceOgre 3D Graphics Visualization Tool
NASA Technical Reports Server (NTRS)
Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.
2011-01-01
This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.
Portable X-ray Fluorescence Unit for Analyzing Crime Scenes
NASA Astrophysics Data System (ADS)
Visco, A.
2003-12-01
Goddard Space Flight Center and the National Institute of Justice have teamed up to apply NASA technology to the field of forensic science. NASA hardware that is under development for future planetary robotic missions, such as Mars exploration, is being engineered into a rugged, portable, non-destructive X-ray fluorescence system for identifying gunshot residue, blood, and semen at crime scenes. This project establishes the shielding requirements that will ensure that the exposure of a user to ionizing radiation is below the U.S. Nuclear Regulatory Commission's allowable limits, and also develops the benchtop model for testing the system in a controlled environment.
Lemmens, Ryanne J. M.; Timmermans, Annick A. A.; Janssen-Potten, Yvonne J. M.; Pulles, Sanne A. N. T. D.; Geers, Richard P. J.; Bakx, Wilbert G. M.; Smeets, Rob J. E. M.; Seelen, Henk A. M.
2014-01-01
Purpose: This study aims to assess the extent to which accelerometers can be used to determine the effect of robot-supported task-oriented arm-hand training, relative to task-oriented arm-hand training alone, on the actual amount of arm-hand use of chronic stroke patients in their home situation. Methods: This single-blind randomized controlled trial included 16 chronic stroke patients, randomly allocated using blocked randomization (n = 2) to receive task-oriented robot-supported arm-hand training or task-oriented (unsupported) arm-hand training. Training lasted 8 weeks, 4 times/week, 2×30 min/day using the (T-)TOAT ((Technology-supported)-Task-Oriented-Arm-Training) method. The actual amount of arm-hand use was assessed at baseline, after 8 weeks of training, and 6 months after training cessation. Duration of use and intensity of use of the affected arm-hand during unimanual and bimanual activities were calculated. Results: Duration and intensity of use of the affected arm-hand did not change significantly during or after training, with or without robot support (duration of unimanual use of the affected arm-hand: median difference of −0.17% in the robot group and −0.08% in the control group between baseline and after training cessation; intensity of use of the affected arm-hand: median difference of 3.95% in the robot group and 3.32% in the control group between baseline and after training cessation). No significant between-group differences were found. Conclusions: Accelerometer data did not show significant changes in the actual amount of arm-hand use after task-oriented training, with or without robot support. Next to the amount of use, discrimination between activities performed and information about quality of use of the affected arm-hand are essential to determine actual arm-hand performance. Trial Registration: Controlled-trials.com ISRCTN82787126 PMID:24823925
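A hedged sketch of the two outcome measures named above, computed from a worn accelerometer trace: duration of use as the fraction of time the activity magnitude exceeds a threshold, and intensity as the mean magnitude while active. The sampling rate, threshold, and crude gravity removal are assumptions, not the study's processing pipeline.

```python
import numpy as np

def arm_use_metrics(accel, threshold=0.05):
    """accel: (N, 3) acceleration in g; returns (duration %, mean intensity)."""
    magnitude = np.linalg.norm(accel, axis=1)
    # Remove the ~1 g gravity component crudely via the median magnitude (assumption).
    activity = np.abs(magnitude - np.median(magnitude))
    active = activity > threshold
    duration_pct = 100.0 * active.mean()
    intensity = activity[active].mean() if active.any() else 0.0
    return duration_pct, intensity

rng = np.random.default_rng(0)
accel = 0.02 * rng.normal(size=(3000, 3))   # ~100 s of quiet wear at 30 Hz
print(arm_use_metrics(accel))
```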
Software for Secondary-School Learning About Robotics
NASA Technical Reports Server (NTRS)
Shelton, Robert O.; Smith, Stephanie L.; Truong, Dat; Hodgson, Terry R.
2005-01-01
The ROVer Ranch is an interactive computer program designed to help secondary-school students learn about space-program robotics and related basic scientific concepts by involving the students in simplified design and programming tasks that exercise skills in mathematics and science. The tasks involve building simulated robots and then observing how they behave. The program furnishes (1) programming tools that a student can use to assemble and program a simulated robot and (2) a virtual three-dimensional mission simulator for testing the robot. First, the ROVer Ranch presents fundamental information about robotics, mission goals, and facts about the mission environment. On the basis of this information, and using the aforementioned tools, the student builds a simulated robot to accomplish its mission by selecting parts from such subsystems as propulsion, navigation, and scientific instruments. Once the robot is built, it is programmed and then placed in a three-dimensional simulated environment. Success or failure in the simulation depends on the planning and design of the robot. Data and results of the mission are available in a summary log once the mission is concluded.
Research on the inspection robot for cable tunnel
NASA Astrophysics Data System (ADS)
Xin, Shihao
2017-03-01
The robot consists of a mechanical obstacle-crossing section, a dual-mode communication section, a remote control section, and monitoring software. The obstacle-crossing section uses a tracked mobile mechanism, with auxiliary swing arms to ease the design and installation of the robot. The communication section combines wired and wireless links, which greatly improves the robot's communication range: wired communication is used when the robot operates beyond wireless range, and wireless communication otherwise. The remote control section handles the inspection robot's locomotion, navigation, positioning, and control of the pan-tilt camera platform. To improve operational reliability, an industrial PC was selected as the control core, and a hierarchical program structure was adopted as the design basis for the mobile body. The monitoring software is the core of the robot: it provides basic fault diagnosis in place of manual judgment, so the robot acts as a remote actuator and staff can operate it entirely from a distance without being present at the scene. The four sections are independent of each other yet interrelated, achieving structural independence with functional coherence, which eases maintenance and coordinated operation. With real-time positioning and remote control, the robot greatly improves inspection work. Remote monitoring avoids direct contact between staff and the cable line, thereby reducing casualties from accidents, which has far-reaching significance for the safety of inspection work.
Robotic Anesthesia – A Vision for the Future of Anesthesia
Hemmerling, Thomas M; Taddei, Riccardo; Wehbe, Mohamad; Morse, Joshua; Cyr, Shantale; Zaouter, Cedrick
2011-01-01
Summary This narrative review describes a rationale for robotic anesthesia. It offers a first classification of robotic anesthesia by separating it into pharmacological robots and robots for aiding or replacing manual gestures. Developments in closed loop anesthesia are outlined. First attempts to perform manual tasks using robots are described. A critical analysis of the delayed development and introduction of robots in anesthesia is delivered. PMID:23905028
Task-specific ankle robotics gait training after stroke: a randomized pilot study.
Forrester, Larry W; Roy, Anindo; Hafer-Macko, Charlene; Krebs, Hermano I; Macko, Richard F
2016-06-02
An unsettled question in the use of robotics for post-stroke gait rehabilitation is whether task-specific locomotor training is more effective than targeting individual joint impairments to improve walking function. The paretic ankle is implicated in gait instability and fall risk, but is difficult to therapeutically isolate and refractory to recovery. We hypothesize that in chronic stroke, treadmill-integrated ankle robotics training is more effective at improving gait function than robotics focused on paretic ankle impairments. Participants with chronic hemiparetic gait were randomized to either six weeks of treadmill-integrated ankle robotics (n = 14) or dose-matched seated ankle robotics (n = 12) videogame training. Selected gait measures were collected at baseline, post-training, and six-week retention. Friedman and Wilcoxon signed-rank tests evaluated within-group differences across time, and Fisher's exact test evaluated between-group differences. Six weeks post-training, treadmill robotics proved more effective than seated robotics at increasing walking velocity, paretic single support, paretic push-off impulse, and active dorsiflexion range of motion. Treadmill robotics durably improved dorsiflexion swing angle, leading 6 of 7 participants who initially required ankle braces to discard them, while their unassisted paretic heel-first contacts increased from 44% to 99.6%; assistive device usage did not change (0/9) following seated robotics. Treadmill-integrated, but not seated, ankle robotics training durably improves gait biomechanics, reversing foot drop, restoring walking propulsion, and establishing safer foot landing in chronic stroke, which may reduce reliance on assistive devices. These findings support a task-specific approach integrating adaptive ankle robotics with locomotor training to optimize mobility recovery. NCT01337960. https://clinicaltrials.gov/ct2/show/NCT01337960?term=NCT01337960&rank=1.
I want what you've got: Cross platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble, Ph.D.*.; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed, but more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configuration, and the platform-specific nature of research robot development environments are a few factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy that combines with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach a level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent to the conduct of human factors testing of variable autonomy control architectures and across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment that contains challenging real-world tasks, and the implications for system acceptance and trust of autonomous robotic systems and for how humans and robots interact in true interactive teams.
A novel teaching system for industrial robots.
Lin, Hsien-I; Lin, Yu-Hsiang
2014-03-27
The most important tool for controlling an industrial robotic arm is a teach pendant, which controls the robotic arm movement in work spaces and accomplishes teaching tasks. A good teaching tool should be easy to operate and able to complete teaching tasks rapidly and effortlessly. In this study, a new teaching system is proposed for enabling users to operate robotic arms and accomplish teaching tasks easily. The proposed teaching system consists of the teach pen, optical markers on the pen, a motion capture system, and the pen tip estimation algorithm. With the marker positions captured by the motion capture system, the pose of the teach pen is accurately calculated by the pen tip algorithm and used to control the robot tool frame. In addition, Fitts' Law is adopted to verify the usefulness of this new system, and the results show that the system provides high accuracy, excellent operation performance, and a stable error rate. The system maintains superior performance even when users work on platforms with different inclination angles.
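The Fitts' Law check used to validate the teaching system can be sketched as follows: an index of difficulty from target distance and width, and throughput from movement time. The Shannon formulation and the sample values are standard textbook choices, not details from the paper.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation: ID = log2(D / W + 1), in bits."""
    return math.log2(distance / width + 1)

def throughput(distance, width, movement_time):
    """Bits per second for one pointing movement."""
    return index_of_difficulty(distance, width) / movement_time

print(throughput(distance=200.0, width=20.0, movement_time=1.2))  # ~2.9 bits/s
```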
Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks
NASA Astrophysics Data System (ADS)
Meng, Jianjun; Zhang, Shuying; Bekyo, Angeliki; Olsoe, Jaron; Baxter, Bryan; He, Bin
2016-12-01
Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology.
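A heavily hedged sketch of the sensorimotor-rhythm idea behind such BCIs: mu-band (8-12 Hz) power over left versus right motor-cortex channels mapped to a lateral velocity command. The channel indices, band limits, Welch estimator, and tanh squashing are illustrative assumptions only; the study's actual decoder is not described here.

```python
import numpy as np
from scipy.signal import welch

def mu_power(x, fs=250.0):
    """Mean power in the 8-12 Hz band of one EEG channel."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs))
    band = (f >= 8.0) & (f <= 12.0)
    return pxx[band].mean()

def horizontal_command(eeg, left_ch=0, right_ch=1, gain=1.0):
    """Lateralized mu-power difference -> signed velocity command in [-1, 1]."""
    diff = mu_power(eeg[right_ch]) - mu_power(eeg[left_ch])
    return gain * np.tanh(diff)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(2, 1000))   # 2 channels, 4 s at 250 Hz (toy data)
print(horizontal_command(eeg))
```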
Using mixed-initiative human-robot interaction to bound performance in a search task
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; Douglas A. Few; Devin S. Athey
2008-12-01
Mobile robots are increasingly used in dangerous domains because they can keep humans out of harm’s way. Despite their advantages in hazardous environments, their general acceptance in other, less dangerous domains has not been apparent and, even in dangerous environments, robots are often viewed as a “last-possible choice.” In order to increase the utility and acceptance of robots in hazardous domains, researchers at the Idaho National Laboratory have both developed and tested novel mixed-initiative solutions that support the human-robot interactions. In a recent “dirty-bomb” experiment, participants exhibited different search strategies, making it difficult to determine any performance benefits. This paper presents a method for categorizing the search patterns and shows that the mixed-initiative solution decreased the time to complete the task and decreased the performance spread between participants, independent of prior training and of individual strategies used to accomplish the task.
An efficient temporal logic for robotic task planning
NASA Technical Reports Server (NTRS)
Becker, Jeffrey M.
1989-01-01
Computations required for temporal reasoning can be prohibitively expensive if fully general representations are used. Overly simple representations, such as totally ordered sequence of time points, are inadequate for use in a nonlinear task planning system. A middle ground is identified which is general enough to support a capable nonlinear task planner, but specialized enough that the system can support online task planning in real time. A Temporal Logic System (TLS) was developed during the Intelligent Task Automation (ITA) project to support robotic task planning. TLS is also used within the ITA system to support plan execution, monitoring, and exception handling.
Uniform task level definitions for robotic system performance comparisons
NASA Technical Reports Server (NTRS)
Price, Charles; Tesar, Delbert
1989-01-01
A series of ten task levels of increasing difficulty was compiled for use in comparative performance evaluations of available and future robotics technology. Each level has a breakdown of ten additional levels of difficulty to provide a layering of 100 levels. It is assumed that each level of task performance must be achieved by the system before it can be appropriately considered for the next level.
Controlling robots in the home: Factors that affect the performance of novice robot operators.
McGinn, Conor; Sena, Aran; Kelly, Kevin
2017-11-01
For robots to successfully integrate into everyday life, it is important that they can be effectively controlled by laypeople. However, the task of manually controlling mobile robots can be challenging due to demanding cognitive and sensorimotor requirements. This research explores the effect that the built environment has on the manual control of domestic service robots. In this study, a virtual reality simulation of a domestic robot control scenario was developed. The performance of fifty novice users was evaluated, and their subjective experiences recorded through questionnaires. Through quantitative and qualitative analysis, it was found that untrained operators frequently perform poorly at navigation-based robot control tasks. The study found that passing through doorways accounted for the largest number of collisions, and was consistently identified as a very difficult operation to perform. These findings suggest that homes and other human-orientated settings present significant challenges to robot control. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Bradley, Arthur; Dubowsky, Steven; Quinn, Roger; Marzwell, Neville
2005-01-01
Robots that operate independently of one another will not be adequate to accomplish the future exploration tasks of long-distance autonomous navigation, habitat construction, resource discovery, and material handling. Such activities will require that systems widely share information, plan and divide complex tasks, share common resources, and physically cooperate to manipulate objects. Recognizing the need for interoperable robots to accomplish the new exploration initiative, NASA's Office of Exploration Systems Research & Technology recently funded the development of the Joint Technical Architecture for Robotic Systems (JTARS). The JTARS charter is to identify the interface standards necessary to achieve interoperability among space robots. A JTARS working group (JTARS-WG) has been established, comprising recognized leaders in the field of space robotics, including representatives from seven NASA centers along with academia and private industry. The working group's early accomplishments include addressing key issues required for interoperability, defining which systems are within the project's scope, and framing the JTARS manuals around classes of robotic systems.
ERIC Educational Resources Information Center
Bacon-Mace, Nadege; Kirchner, Holle; Fabre-Thorpe, Michele; Thorpe, Simon J.
2007-01-01
Using manual responses, human participants are remarkably fast and accurate at deciding if a natural scene contains an animal, but recent data show that they are even faster to indicate with saccadic eye movements which of 2 scenes contains an animal. How could it be that 2 images can apparently be processed faster than a single image? To better…
ERIC Educational Resources Information Center
Cappelleri, D. J.; Vitoroulis, N.
2013-01-01
This paper presents a series of novel project-based learning labs for an introductory robotics course that are developed into a semester-long Robotic Decathlon. The last three events of the Robotic Decathlon are used as three final one-week-long project tasks; these replace a previous course project that was a semester-long robotics competition.…
A macro-micro robot for precise force applications
NASA Technical Reports Server (NTRS)
Marzwell, Neville I.; Wang, Yulun
1993-01-01
This paper describes an 8-degree-of-freedom macro-micro robot capable of performing tasks that require accurate force control. Applications such as polishing, finishing, grinding, deburring, and cleaning are a few examples of tasks that need this capability. Currently these tasks are either performed manually or with dedicated machinery because of the lack of a flexible and cost-effective tool, such as a programmable force-controlled robot. The basic design and control of the macro-micro robot is described in this paper. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load-balanced for maximum execution speed on the multiprocessor system.
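The impedance-control method the controller builds on can be sketched as a stiffness-damping law on the tracking error at the tip. The 2-DOF state layout and gain values below are illustrative assumptions.

```python
import numpy as np

K = np.diag([800.0, 800.0])   # stiffness (N/m), assumed values
D = np.diag([40.0, 40.0])     # damping (N·s/m), assumed values

def impedance_force(x, xd, x_ref, xd_ref, f_ext):
    """Tip force command: F = K(x_ref - x) + D(xd_ref - xd) - f_ext."""
    return K @ (x_ref - x) + D @ (xd_ref - xd) - f_ext

f_cmd = impedance_force(x=np.zeros(2), xd=np.zeros(2),
                        x_ref=np.array([0.01, 0.0]), xd_ref=np.zeros(2),
                        f_ext=np.zeros(2))
print(f_cmd)   # gentle restoring force toward the reference
```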
NASA Astrophysics Data System (ADS)
Dağlarli, Evren; Temeltaş, Hakan
2007-04-01
This paper presents an autonomous robot control architecture based on an artificial emotional system. A hidden Markov model provides the mathematical background for stochastic emotional and behavioral transitions. The motivation module of the architecture acts as a behavioral gain-effect generator for achieving multi-objective robot tasks. According to the emotional and behavioral state transition probabilities, artificial emotions determine sequences of behaviors. The motivational gain effects of the proposed architecture can also be observed on the executing behaviors during simulation.
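The stochastic emotion-to-behavior machinery described above can be sketched as a Markov chain over emotional states with a behavior-emission matrix, i.e. the generative half of an HMM. All states and probabilities below are invented for illustration.

```python
import numpy as np

EMOTIONS = ["calm", "curious", "fearful"]
BEHAVIORS = ["explore", "approach", "retreat"]

T = np.array([[0.7, 0.25, 0.05],    # transition P(next emotion | emotion)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
E = np.array([[0.5, 0.4, 0.1],      # emission P(behavior | emotion)
              [0.3, 0.6, 0.1],
              [0.05, 0.05, 0.9]])

rng = np.random.default_rng(0)
state = 0                            # start calm
for _ in range(5):
    behavior = rng.choice(len(BEHAVIORS), p=E[state])
    print(EMOTIONS[state], "->", BEHAVIORS[behavior])
    state = rng.choice(len(EMOTIONS), p=T[state])
```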
High degree-of-freedom dynamic manipulation
NASA Astrophysics Data System (ADS)
Murphy, Michael P.; Stephens, Benjamin; Abe, Yeuhi; Rizzi, Alfred A.
2012-06-01
The creation of high-degree-of-freedom dynamic mobile manipulation techniques and behaviors will allow robots to accomplish difficult tasks in the field. We are investigating the use of the body and legs of legged robots to improve the strength, velocity, and workspace of an integrated manipulator to accomplish dynamic manipulation. This is an especially challenging task, as all of the degrees of freedom are active at all times, the dynamic forces generated are high, and the legged system must maintain robust balance throughout the duration of the tasks. To accomplish this goal, we are utilizing trajectory optimization techniques to generate feasible open-loop behaviors for our 28-DOF quadruped robot (BigDog) by planning the trajectories in a 13-dimensional space. Covariance Matrix Adaptation techniques are utilized to optimize for several criteria, such as payload capability and task completion speed, while also obeying constraints such as torque and velocity limits, kinematic limits, and center-of-pressure location. These open-loop behaviors are then used to generate feed-forward terms, which are subsequently used online to improve tracking and maintain low controller gains. Some initial results on one of our existing balancing quadruped robots with an additional human-arm-like manipulator are demonstrated on robot hardware, including dynamic lifting and throwing of heavy objects (16.5 kg cinder blocks), using motions that resemble a human athlete more than typical robotic motions. Increased payload capacity is accomplished through coordinated body motion.
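A sketch of the optimization loop described above, using a simple cross-entropy method as a lighter stand-in for full Covariance Matrix Adaptation (CMA-ES adapts a full covariance; the diagonal version below keeps the same sample-and-rank-update spirit). The 13-dimensional decision vector, cost terms, and penalty weights are placeholders, not BigDog's real model.

```python
import numpy as np

def cost(params):
    """Placeholder task cost plus a soft penalty for exceeding assumed limits."""
    limit_penalty = np.sum(np.maximum(np.abs(params) - 1.0, 0.0) ** 2)
    return np.sum((params - 0.4) ** 2) + 10.0 * limit_penalty

rng = np.random.default_rng(0)
mean, std = np.zeros(13), np.ones(13)    # 13-D trajectory parameterization
for generation in range(50):
    samples = rng.normal(mean, std, size=(64, 13))
    elite = samples[np.argsort([cost(s) for s in samples])[:8]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
print("best cost:", cost(mean))
```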
Using qualitative maps to direct reactive robots
NASA Technical Reports Server (NTRS)
Bertin, Randolph; Pendleton, Tom
1992-01-01
The principal advantage of mobile robots is that they are able to go to specific locations to perform useful tasks rather than have the tasks brought to them. It is important therefore that the robot be used to reach desired locations efficiently and reliably. A mobile robot whose environment extends significantly beyond its sensory horizon must maintain a representation of the environment, a map, in order to attain these efficiency and reliability requirements. We believe that qualitative mapping methods provide useful and robust representation schemes and that such maps may be used to direct the actions of a reactively controlled robot. In this paper we describe our experience in employing qualitative maps to direct, through the selection of desired control strategies, a reactive-behavior based robot. This mapping capability represents the development of one aspect of a successful deliberative/reactive hybrid control architecture.
Physiological and subjective evaluation of a human-robot object hand-over task.
Dehais, Frédéric; Sisbot, Emrah Akin; Alami, Rachid; Causse, Mickaël
2011-11-01
In the context of task sharing between a robot companion and its human partners, the notions of safe and compliant hardware are not enough. It is necessary to guarantee ergonomic robot motions. Therefore, we have developed Human Aware Manipulation Planner (Sisbot et al., 2010), a motion planner specifically designed for human-robot object transfer by explicitly taking into account the legibility, the safety and the physical comfort of robot motions. The main objective of this research was to define precise subjective metrics to assess our planner when a human interacts with a robot in an object hand-over task. A second objective was to obtain quantitative data to evaluate the effect of this interaction. Given the short duration, the "relative ease" of the object hand-over task and its qualitative component, classical behavioral measures based on accuracy or reaction time were unsuitable to compare our gestures. In this perspective, we selected three measurements based on the galvanic skin conductance response, the deltoid muscle activity and the ocular activity. To test our assumptions and validate our planner, an experimental set-up involving Jido, a mobile manipulator robot, and a seated human was proposed. For the purpose of the experiment, we have defined three motions that combine different levels of legibility, safety and physical comfort values. After each robot gesture the participants were asked to rate them on a three dimensional subjective scale. It has appeared that the subjective data were in favor of our reference motion. Eventually the three motions elicited different physiological and ocular responses that could be used to partially discriminate them. Copyright © 2011 Elsevier Ltd and the Ergonomics Society. All rights reserved.
Three main paradigms of simultaneous localization and mapping (SLAM) problem
NASA Astrophysics Data System (ADS)
Imani, Vandad; Haataja, Keijo; Toivanen, Pekka
2018-04-01
Simultaneous Localization and Mapping (SLAM) is one of the most challenging research areas within computer and machine vision for automated scene commentary and explanation. The SLAM technique has been a developing research area in the robotics context during recent years. By utilizing SLAM, a robot can estimate its position at distinct points in time, which yields the robot's trajectory, and simultaneously generate a map of the environment. SLAM's distinctive trait is this joint estimation of robot location and map construction, and it is effective in many types of environment: indoor, outdoor, aerial, underwater, underground, and space. Several approaches have been investigated for applying SLAM in these distinct environments. The purpose of this paper is to provide an accurate, perceptive review of the history of SLAM based on laser/ultrasonic sensors and cameras as perception input. In addition, we focus on three paradigms of the SLAM problem with their pros and cons. In future work, intelligent methods and new ideas will be applied to visual SLAM to estimate the motion of an intelligent underwater robot and build a feature map of the marine environment.
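As a concrete taste of the filtering paradigm of SLAM, the sketch below implements an EKF predict/update cycle for a planar robot observing a known landmark; this is the localization core that EKF-SLAM extends by appending landmark positions to the state. Noise matrices and measurements are illustrative.

```python
import numpy as np

def predict(mu, P, v, w, dt, Q):
    """Motion update for state mu = [x, y, theta] under velocity commands."""
    x, y, th = mu
    mu = np.array([x + v * dt * np.cos(th), y + v * dt * np.sin(th), th + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    return mu, F @ P @ F.T + Q

def update(mu, P, z, landmark, R):
    """Range-bearing correction against a known landmark."""
    dx, dy = landmark - mu[:2]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [dy / q, -dx / q, -1]])
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing error
    return mu + K @ innov, (np.eye(3) - K @ H) @ P

mu, P = np.zeros(3), np.eye(3) * 0.01
Q, R = np.diag([0.02, 0.02, 0.01]), np.diag([0.1, 0.05])
mu, P = predict(mu, P, v=1.0, w=0.1, dt=0.1, Q=Q)
mu, P = update(mu, P, z=np.array([5.0, 0.2]), landmark=np.array([5.0, 1.0]), R=R)
print(mu)
```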
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
NASA Technical Reports Server (NTRS)
Firby, R. James
1990-01-01
High-level robot control research must confront the limitations imposed by real sensors if robots are to be controlled effectively in the real world. In particular, sensor limitations make it impossible to maintain a complete, detailed world model of the situation surrounding the robot. To address the problems involved in planning with the resulting incomplete and uncertain world models, traditional robot control architectures must be altered significantly. Task-directed sensing and control is suggested as a way of coping with world model limitations by focusing sensing and analysis resources on only those parts of the world relevant to the robot's active goals. The RAP adaptive execution system is used as an example of a control architecture designed to deploy sensing resources in this way to accomplish both action and knowledge goals.
A task control architecture for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid; Mitchell, Tom
1990-01-01
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
Research on the man in the loop control system of the robot arm based on gesture control
NASA Astrophysics Data System (ADS)
Xiao, Lifeng; Peng, Jinbao
2017-03-01
This research takes as its background man-in-the-loop control of a robot arm by gesture in complex real-world environments, which requires the operator to continuously control and adjust a remote manipulator, and takes the complete human-in-the-loop system accomplishing a specific mission as its research object. The paper puts forward a gesture-based man-in-the-loop control system for a robot arm in which gesture control is combined with virtual-reality scene feedback to enhance the operator's immersion and integration, so that the operator truly becomes part of the whole control loop. It expounds how to construct such a system, which is a complex human-computer cooperative control system and also falls within the problem area of human-in-the-loop control. The new system addresses the shortcomings of traditional methods: no sense of immersion, unnatural joystick operation, long adjustment times, and data gloves that are uncomfortable to wear and expensive.
ERIC Educational Resources Information Center
Reed, Dean; Harden, Thomas K.
Robots are mechanical devices that can be programmed to perform some task of manipulation or locomotion under automatic control. This paper discusses: (1) early developments of the robotics industry in the United States; (2) the present structure of the industry; (3) noneconomic factors related to the use of robots; (4) labor considerations…
A robotic vision system to measure tree traits
USDA-ARS?s Scientific Manuscript database
The autonomous measurement of tree traits, such as branching structure, branch diameters, branch lengths, and branch angles, is required for tasks such as robotic pruning of trees as well as structural phenotyping. We propose a robotic vision system called the Robotic System for Tree Shape Estimati...
Colour agnosia impairs the recognition of natural but not of non-natural scenes.
Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F
2007-03-01
Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.
Serendipitous Offline Learning in a Neuromorphic Robot.
Stewart, Terrence C; Kleinhans, Ashley; Mundy, Andrew; Conradt, Jörg
2016-01-01
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
NASA Astrophysics Data System (ADS)
Schmerwitz, Sven; Többen, Helmut; Lorenz, Bernd; Iijima, Tomoko; Kuritz-Kaiser, Anthea
2006-05-01
Pathway-in-the-sky displays enable pilots to accurately fly difficult trajectories. However, these displays may drive pilots' attention to the aircraft guidance task at the expense of other tasks particularly when the pathway display is located head-down. A pathway HUD may be a viable solution to overcome this disadvantage. Moreover, the pathway may mitigate the perceptual segregation between the static near domain and the dynamic far domain and hence, may improve attention switching between both sources. In order to more comprehensively overcome the perceptual near-to-far domain disconnect alphanumeric symbols could be attached to the pathway leading to a HUD design concept called 'scene-linking'. Two studies are presented that investigated this concept. The first study used a simplified laboratory flight experiment. Pilots (N=14) flew a curved trajectory through mountainous terrain and had to detect display events (discrete changes in a command speed indicator to be matched with current speed) and outside scene events (hostile SAM station on ground). The speed indicators were presented in superposition to the scenery either in fixed position or scene-linked to the pathway. Outside scene event detection was found improved with scene linking, however, flight-path tracking was markedly deteriorated. In the second study a scene-linked pathway concept was implemented on a monocular retinal scanning HMD and tested in real flights on a Do228 involving 5 test pilots. The flight test mainly focused at usability issues of the display in combination with an optical head tracker. Visual and instrument departure and approach tasks were evaluated comparing HMD navigation with standard instrument or terrestrial navigation. The study revealed limitations of the HMD regarding its see-through capability, field of view, weight and wearing comfort that showed to have a strong influence on pilot acceptance rather than rebutting the approach of the display concept as such.
Cultural differences in attention: Eye movement evidence from a comparative visual search task.
Alotaibi, Albandri; Underwood, Geoffrey; Smith, Alastair D
2017-10-01
Individual differences in visual attention have been linked to thinking style: analytic thinking (common in individualistic cultures) is thought to promote attention to detail and focus on the most important part of a scene, whereas holistic thinking (common in collectivist cultures) promotes attention to the global structure of a scene and the relationship between its parts. However, this theory is primarily based on relatively simple judgement tasks. We compared groups from Great Britain (an individualist culture) and Saudi Arabia (a collectivist culture) on a more complex comparative visual search task, using simple natural scenes. A higher overall number of fixations for Saudi participants, along with longer search times, indicated less efficient search behaviour than British participants. Furthermore, intra-group comparisons of scan-path for Saudi participants revealed less similarity than within the British group. Together, these findings suggest that there is a positive relationship between an analytic cognitive style and controlled attention. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Luo, Xiongbiao; McLeod, A. Jonathan; Jayarathne, Uditha L.; Pautler, Stephen E.; Schlacta, Christopher M.; Peters, Terry M.
2016-03-01
Three-dimensional (3-D) scene reconstruction from stereoscopic binocular laparoscopic videos is an effective way to expand the limited surgical field and augment the structure visualization of the organ being operated on in minimally invasive surgery. However, currently available reconstruction approaches are limited by image noise, occlusions, and textureless and blurred structures. In particular, an endoscope inside the body has only a limited light source, resulting in illumination non-uniformities in the visualized field. These limitations unavoidably deteriorate the stereo image quality and hence lead to low-resolution and inaccurate disparity maps, resulting in blurred edge structures in 3-D scene reconstruction. This paper proposes an improved stereo correspondence framework that integrates cost-volume filtering with joint upsampling for robust disparity estimation. Joint bilateral upsampling, joint geodesic upsampling, and tree filtering upsampling were compared to enhance the disparity accuracy. The experimental results demonstrate that joint upsampling provides an effective way to boost the disparity estimation and hence to improve surgical endoscopic scene 3-D reconstruction. Moreover, bilateral upsampling generally outperforms the other two upsampling methods in disparity estimation.
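The joint bilateral upsampling step compared above can be sketched as follows: each full-resolution pixel averages coarse disparities with weights combining spatial distance on the coarse grid and range similarity in the full-resolution guide image. The naive loops, kernel radius, and sigmas are illustrative assumptions.

```python
import numpy as np

def joint_bilateral_upsample(disp_lo, guide, scale, radius=2,
                             sigma_s=1.0, sigma_r=0.1):
    """disp_lo: coarse (h, w) disparity; guide: fine (h*scale, w*scale) image in [0, 1]."""
    h, w = disp_lo.shape
    guide_lo = guide[::scale, ::scale]          # guide sampled at the coarse grid
    H, W = guide.shape
    out = np.zeros_like(guide, dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = min(i // scale, h - 1), min(j // scale, w - 1)
            acc = norm = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    qi, qj = ci + di, cj + dj
                    if 0 <= qi < h and 0 <= qj < w:
                        ws = np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))
                        wr = np.exp(-(guide[i, j] - guide_lo[qi, qj])**2
                                    / (2 * sigma_r**2))
                        acc += ws * wr * disp_lo[qi, qj]
                        norm += ws * wr
            out[i, j] = acc / norm
    return out

rng = np.random.default_rng(0)
disp_hi = joint_bilateral_upsample(rng.random((8, 8)), rng.random((32, 32)), scale=4)
```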
Figure-ground segmentation can occur without attention.
Kimchi, Ruth; Peterson, Mary A
2008-07-01
The question of whether or not figure-ground segmentation can occur without attention is unresolved. Early theorists assumed it can, but the evidence is scant and open to alternative interpretations. Recent research indicating that attention can influence figure-ground segmentation raises the question anew. We examined this issue by asking participants to perform a demanding change-detection task on a small matrix presented on a task-irrelevant scene of alternating regions organized into figures and grounds by convexity. Independently of any change in the matrix, the figure-ground organization of the scene changed or remained the same. Changes in scene organization produced congruency effects on target-change judgments, even though, when probed with surprise questions, participants could report neither the figure-ground status of the region on which the matrix appeared nor any change in that status. When attending to the scene, participants reported figure-ground status and changes to it highly accurately. These results clearly demonstrate that figure-ground segmentation can occur without focal attention.
ERIC Educational Resources Information Center
Barak, Moshe; Assal, Muhammad
2018-01-01
This study presents the case of development and evaluation of a STEM-oriented 30-h robotics course for junior high school students (n = 32). Class activities were designed according to the P3 Task Taxonomy, which included: (1) practice-basic closed-ended tasks and exercises; (2) problem solving--small-scale open-ended assignments in which the…
2010-10-01
of the modes of communication and target types central to the experimental tasks, as well as a task demonstration. The Soldier then practiced the...experimental tasks with each communication modality on a separate training course with targets. Practice was structured such that a communication... practice structure across all three communication conditions. Radio was always introduced first, then chat, and lastly tactor. The exception to this
Robust Behavior-Based Control for Distributed Multi-Robot Collection Tasks
2000-01-01
Problems and research issues associated with the hybrid control of force and displacement
NASA Technical Reports Server (NTRS)
Paul, R. P.
1987-01-01
The hybrid control of force and position is basic to the science of robotics but is only poorly understood. Before much progress can be made in robotics, this problem needs to be solved in a robust manner. However, the use of hybrid control implies the existence of a model of the environment: not an exact model (as the function of hybrid control is to accommodate model errors), but a model appropriate for planning and reasoning. The monitored forces in position control are interpreted in terms of a model of the task, as are the monitored displacements in force control. The reaction forces of the task of writing are far different from those of hammering. The programming of actions in such a modeled world becomes more complicated, and systems of task-level programming need to be developed. Sensor-based robotics, of which force sensing is the most basic, implies an entirely new level of technology. Indeed, robot force sensors, no matter how compliant they may be, must be protected from accidental collisions. This implies other sensors to monitor task execution and, again, the use of a world model. This new level of technology is the task level, in which task actions are specified, not the actions of individual sensors and manipulators.
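The hybrid force/position idea under discussion is conventionally written with a selection matrix that routes each task axis to either the position loop or the force loop; the sketch below shows that structure. Gains, the contact-normal axis, and the velocity-command form are illustrative assumptions.

```python
import numpy as np

S = np.diag([1, 1, 0])   # x, y position-controlled; z (contact normal) force-controlled
Kp = 100.0               # position gain (assumed)
Kf = 0.02                # force gain (assumed)

def hybrid_command(x, x_ref, f, f_ref):
    """Cartesian velocity command mixing position and force errors."""
    v_pos = Kp * (x_ref - x)        # position loop on the selected axes
    v_force = Kf * (f_ref - f)      # force loop on the complementary axes
    return S @ v_pos + (np.eye(3) - S) @ v_force

cmd = hybrid_command(x=np.zeros(3), x_ref=np.array([0.1, 0.0, 0.0]),
                     f=np.array([0.0, 0.0, 2.0]), f_ref=np.array([0.0, 0.0, 5.0]))
print(cmd)   # moves along x while pressing harder along z
```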
Baxter, Mark G; Gaffan, David; Kyriazis, Diana A; Mitchell, Anna S
2007-10-17
The orbital prefrontal cortex is thought to be involved in behavioral flexibility in primates, and human neuroimaging studies have identified orbital prefrontal activation during episodic memory encoding. The goal of the present study was to ascertain whether deficits in strategy implementation and episodic memory that occur after ablation of the entire prefrontal cortex can be ascribed to damage to the orbital prefrontal cortex. Rhesus monkeys were preoperatively trained on two behavioral tasks, the performance of both of which is severely impaired by the disconnection of frontal cortex from inferotemporal cortex. In the strategy implementation task, monkeys were required to learn about two categories of objects, each associated with a different strategy that had to be performed to obtain food reward. The different strategies had to be applied flexibly to optimize the rate of reward delivery. In the scene memory task, monkeys learned 20 new object-in-place discrimination problems in each session. Monkeys were tested on both tasks before and after bilateral ablation of orbital prefrontal cortex. These lesions impaired new scene learning but had no effect on strategy implementation. This finding supports a role for the orbital prefrontal cortex in memory but places limits on the involvement of orbital prefrontal cortex in the representation and implementation of behavioral goals and strategies.
Robots as Language Learning Tools
ERIC Educational Resources Information Center
Collado, Ericka
2017-01-01
Robots are machines that resemble different forms, usually those of humans or animals, that can perform preprogrammed or autonomous tasks (Robot, n.d.). With the emergence of STEM programs, there has been a rise in the use of robots in educational settings. STEM programs are those where students study science, technology, engineering and…
Retention of fundamental surgical skills learned in robot-assisted surgery.
Suh, Irene H; Mukherjee, Mukul; Shah, Bhavin C; Oleynikov, Dmitry; Siu, Ka-Chun
2012-12-01
Evaluation of the learning curve for robotic surgery has shown reduced errors and decreased task completion and training times compared with regular laparoscopic surgery. However, most training evaluations of robotic surgery have only addressed short-term retention after the completion of training. Our goal was to investigate the amount of surgical skill retained after 3 months of training with the da Vinci™ Surgical System. Seven medical students without any surgical experience were recruited. Participants were trained with a 4-day training program of robotic surgical skills and underwent a series of retention tests at 1 day, 1 week, 1 month, and 3 months post-training. Data analysis included time to task completion, speed, distance traveled, and movement curvature by the instrument tip. Performance of the participants was graded using the modified Objective Structured Assessment of Technical Skills (OSATS) for robotic surgery. Participants filled out a survey after each training session by answering a set of questions. Time to task completion and movement curvature decreased from pre- to post-training, and the performance was retained at all the corresponding retention periods: 1 day, 1 week, 1 month, and 3 months. The modified OSATS showed improvement from pre-test to post-test, and this improvement was maintained during all the retention periods. Participants increased in self-confidence and mastery in performing robotic surgical tasks after training. Our novel comprehensive training program improved robot-assisted surgical performance and learning. All trainees retained their fundamental surgical skills for 3 months after receiving the training program.
Motion generation of robotic surgical tasks: learning from expert demonstrations.
Reiley, Carol E; Plaku, Erion; Hager, Gregory D
2010-01-01
Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from a panel of expert surgeons performing three surgical tasks on Intuitive Surgical's da Vinci Surgical System are recorded. The trials are decomposed into subtasks, or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory for reproducing the task. The approach is evaluated through an automated skill assessment measurement. Results suggest that this paper presents a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for the reproduction of three common medical tasks.
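The GMM/GMR step of such a pipeline is straightforward to prototype. Below is a minimal sketch, not the authors' implementation: it fits a Gaussian mixture to time-indexed position samples pooled from synthetic stand-in "demonstrations" (assumed already aligned) and conditions each component on time to regress a smooth reference trajectory. The use of scikit-learn and all data values are assumptions for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-ins for aligned demonstrations: columns = [time, x, y].
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
demos = [np.column_stack([t,
                          np.sin(2 * np.pi * t) + 0.02 * rng.standard_normal(100),
                          np.cos(2 * np.pi * t) + 0.02 * rng.standard_normal(100)])
         for _ in range(5)]
data = np.vstack(demos)  # pool all (time, position) samples

gmm = GaussianMixture(n_components=6, covariance_type='full',
                      random_state=0).fit(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: E[position | time] per query time."""
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    out = np.zeros((len(t_query), means.shape[1] - 1))
    for i, tq in enumerate(t_query):
        # responsibility of each component for this time value
        h = np.array([w[k] * np.exp(-0.5 * (tq - means[k, 0])**2 / covs[k, 0, 0])
                      / np.sqrt(covs[k, 0, 0]) for k in range(len(w))])
        h /= h.sum()
        for k in range(len(w)):
            # conditional mean of position given time, for component k
            cond = means[k, 1:] + covs[k, 1:, 0] / covs[k, 0, 0] * (tq - means[k, 0])
            out[i] += h[k] * cond
    return out

reference_traj = gmr(gmm, t)  # smooth reference trajectory for playback
```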
Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo
2011-01-01
This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and recovery/restoration of motor function in patients with upper limb sensorimotor impairment through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the intended action, derived from the user's current gaze (measured through eye-tracking), brain activation (measured through BCI) and force sensor measurements. © 2011 IEEE
NASA Astrophysics Data System (ADS)
Thomaz, Andrea; Breazeal, Cynthia
2008-06-01
We present a learning system, socially guided exploration, in which a social robot learns new tasks through a combination of self-exploration and social interaction. The system's motivational drives, along with social scaffolding from a human partner, bias behaviour to create learning opportunities for a hierarchical reinforcement learning mechanism. The robot is able to learn on its own, but can flexibly take advantage of the guidance of a human teacher. We report the results of an experiment that analyses what the robot learns on its own as compared to being taught by human subjects. We also analyse the video of these interactions to understand human teaching behaviour and the social dynamics of the human-teacher/robot-learner system. With respect to learning performance, human guidance results in a task set that is significantly more focused and efficient at the tasks the human was trying to teach, whereas self-exploration results in a more diverse set. Analysis of human teaching behaviour reveals insights into the social coupling between human teacher and robot learner, different teaching styles, strong consistency in the kinds and frequency of scaffolding acts across teachers, and nuances in the communicative intent behind positive and negative feedback.
NASA Astrophysics Data System (ADS)
Altschuler, Bruce R.; Monson, Keith L.
1998-03-01
Representation of crime scenes as virtual reality 3D computer displays promises to become a useful and important tool for law enforcement evaluation and analysis, forensic identification and pathological study and archival presentation during court proceedings. Use of these methods for assessment of evidentiary materials demands complete accuracy of reproduction of the original scene, both in data collection and in its eventual virtual reality representation. The recording of spatially accurate information as soon as possible after first arrival of law enforcement personnel is advantageous for unstable or hazardous crime scenes and reduces the possibility that either inadvertent measurement error or deliberate falsification may occur or be alleged concerning processing of a scene. Detailed measurements and multimedia archiving of critical surface topographical details in a calibrated, uniform, consistent and standardized quantitative 3D coordinate method are needed. These methods would afford professional personnel in initial contact with a crime scene the means for remote, non-contacting, immediate, thorough and unequivocal documentation of the contents of the scene. Measurements of the relative and absolute global positions of objects and victims, and their dispositions within the scene before their relocation and detailed examination, could be made. Resolution must be sufficient to map both small and large objects. Equipment must be able to map regions at varied resolution as collected from different perspectives. Progress is presented in devising methods for collecting and archiving 3D spatial numerical data from crime scenes, sufficient for law enforcement needs, by remote laser structured light and video imagery. Two types of simulation studies were done. One study evaluated the potential of 3D topographic mapping and 3D telepresence using a robotic platform for explosive ordnance disassembly. The second study involved using the laser mapping system on a fixed optical bench with simulated crime scene models of the people and furniture to assess feasibility, requirements and utility of such a system for crime scene documentation and analysis.
Association of Individual Characteristics with Teleoperation Performance.
Pan, Dan; Zhang, Yijing; Li, Zhizhong; Tian, Zhiqiang
2016-09-01
A number of space activities (e.g., extravehicular astronaut rescue, cooperation in satellite services, space station supplies, and assembly) are implemented directly or assisted by remote robotic arms. Our study aimed to reveal those individual characteristics which could positively influence or even predict teleoperation performance of such a space robotic arm. There were 64 male volunteers without robot operation experience recruited for the study. Their individual characteristics were assessed, including spatial cognitive ability, cognitive style, and personality traits. The experimental tasks were three abstracted teleoperation tasks of a simulated space robotic arm: point aiming, line alignment, and obstacle avoidance. Teleoperation performance was measured from two aspects: task performance (completion time, extra distance moved, operation slips) and safety performance (collisions, joint limitations reached). The Pearson coefficients between individual characteristics and teleoperation performance were examined along with performance prediction models. It was found that the subjects with relatively high mental rotation ability or low neuroticism had both better task and safety performance (|r| = 0.212 ∼ 0.381). Subjects with relatively high perspective taking ability or high agreeableness had better task performance (r = -0.253; r = -0.249). Imagery subjects performed better than verbal subjects regarding both task and safety performance (|r| = 0.236 ∼ 0.290). Compared with analytic subjects, wholist subjects had better safety performance (r = 0.300). Additionally, extraverted subjects had better task performance (r = -0.259), but worse safety performance (r = 0.230). Those with high spatial cognitive ability, imagery and wholist cognitive style, low neuroticism, and high agreeableness were seen to have more advantages in working with the remote robotic arm. These results could be helpful to astronaut selection and training for space station missions. Pan D, Zhang Y, Li Z, Tian Z. Association of individual characteristics with teleoperation performance. Aerosp Med Hum Perform. 2016; 87(9):772-780.
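As a concrete illustration of the correlational analysis reported above, a Pearson coefficient between an ability score and a performance measure can be computed with SciPy. The variable names and values below are synthetic stand-ins, not the study's data; only the sample size (n = 64) follows the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins: per-subject mental rotation scores and task
# completion times for n = 64 subjects, as in the reported study design.
rng = np.random.default_rng(1)
mental_rotation = rng.normal(20.0, 5.0, 64)
completion_time = 300.0 - 4.0 * mental_rotation + rng.normal(0.0, 30.0, 64)

r, p = stats.pearsonr(mental_rotation, completion_time)
print(f"r = {r:.3f}, p = {p:.4f}")  # negative r: higher ability, faster task
```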
A situated reasoning architecture for space-based repair and replace tasks
NASA Technical Reports Server (NTRS)
Bloom, Ben; Mcgrath, Debra; Sanborn, Jim
1989-01-01
Space-based robots need low level control for collision detection and avoidance, short-term load management, fine-grained motion, and other physical tasks. In addition, higher level control is required to focus strategic decision making as missions are assigned and carried out. Reasoning and control must be responsive to ongoing changes in the environment. Research aimed at bridging the gap between high level artificial intelligence (AI) planning techniques and task-level robot programming for telerobotic systems is described. Situated reasoning is incorporated into AI and robotics systems in order to coordinate a robot's activity within its environment. An integrated system under development in a component maintenance domain is described. It is geared towards replacing worn and/or failed Orbital Replacement Units (ORUs) designed for use aboard NASA's Space Station Freedom, based on the collection of components available at a given time. High level control reasons in component space in order to maximize the number of operational component-cells over time, while the task level controls sensors and effectors, detects collisions, and carries out pick and place tasks in physical space. Situated reasoning is used throughout the system to cope with component failures, imperfect information, and unexpected events.
NASA Astrophysics Data System (ADS)
Vestrand, W. T.; Theiler, J.; Wozniak, P. R.
2004-10-01
The existence of rapidly slewing robotic telescopes and fast alert distribution via the Internet is revolutionizing our capability to study the physics of fast astrophysical transients. But the salient challenge that optical time domain surveys must conquer is mining the torrent of data to recognize important transients in a scene full of normal variations. Humans simply do not have the attention span, memory, or reaction time required to recognize fast transients and rapidly respond. Autonomous robotic instrumentation with the ability to extract pertinent information from the data stream in real time will therefore be essential for recognizing transients and commanding rapid follow-up observations while the ephemeral behavior is still present. Here we discuss how the development and integration of three technologies: (1) robotic telescope networks; (2) machine learning; and (3) advanced database technology, can enable the construction of smart robotic telescopes, which we loosely call "thinking" telescopes, capable of mining the sky in real time.
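One building block of such a real-time transient pipeline is flagging detections with no counterpart in a reference catalog. The sketch below is a plausible simplified step, not the authors' system: it cross-matches hypothetical (RA, Dec) detections against a reference list with a k-d tree and flags unmatched sources as transient candidates. All coordinates and the 2-arcsecond match radius are invented, and the flat-sky distance ignores the cos(Dec) correction a production pipeline would apply.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical catalogs: rows of (RA, Dec) in degrees. A real pipeline
# would also use brightness history and machine-learned candidate vetting.
reference = np.array([[150.01, 2.20], [150.05, 2.22], [150.11, 2.19]])
detections = np.array([[150.010, 2.2001],   # matches a known source
                       [150.080, 2.2500]])  # no counterpart: candidate

tree = cKDTree(reference)
dist, _ = tree.query(detections)              # nearest catalog neighbor
candidates = detections[dist > 2.0 / 3600.0]  # no match within ~2 arcsec
print(candidates)                             # -> transient candidates
```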
Hadfield, Helms and Voss work on the SSRMS controls in Destiny
2001-04-28
S100-E-5884 (28 April 2001) --- Some of the principal participants of an historical event are pictured in the Destiny laboratory aboard the International Space Station (ISS). From left to right are astronauts Chris A. Hadfield, STS-100 mission specialist, and astronauts Susan J. Helms and James S. Voss, Expedition Two flight engineers. A Canadian “handshake in space” occurred at 4:02 p.m. (CDT), April 28, 2001, as the Canadian-built space station robotic arm – operated by Helms – transferred its launch cradle over to Endeavour’s robotic arm, with Canadian Space Agency astronaut Hadfield at the controls. In this scene, Hadfield has temporarily vacated his post on Endeavour's aft flight deck and was having a brief strategy meeting with the Expedition Two crew on the docked station. The exchange of the pallet from station arm to shuttle arm marked the first ever robotic-to-robotic transfer in space. This image was recorded with a digital still camera.
Expedition Two Voss at SSRMS controls with Hadfield and Helms in Destiny module
2001-04-22
ISS002-303-036 (28 April 2001) --- Some of the principal participants of an historical event are pictured in the Destiny laboratory aboard the International Space Station (ISS). In the foreground is astronaut James S. Voss, with astronaut Chris A. Hadfield, STS-100 mission specialist, at center, and astronaut Susan J. Helms in the background. Voss and Helms are Expedition Two flight engineers. A Canadian "handshake in space" occurred at 4:02 p.m. (CDT), April 28, 2001, as the Canadian-built space station robotic arm -- operated by Helms -- transferred its launch cradle over to Endeavour's robotic arm, with Canadian Space Agency astronaut Hadfield at the controls. In this scene, Hadfield had temporarily vacated his post on Endeavour's aft flight deck and was having a brief strategy meeting with the Expedition Two crew on the docked station. The exchange of the pallet from station arm to shuttle arm marked the first ever robotic-to-robotic transfer in space.
Timmermans, Annick A A; Lemmens, Ryanne J M; Monfrance, Maurice; Geers, Richard P J; Bakx, Wilbert; Smeets, Rob J E M; Seelen, Henk A M
2014-03-31
Over fifty percent of stroke patients experience chronic arm hand performance problems, compromising independence in daily life activities and quality of life. Task-oriented training may improve arm hand performance after stroke, whereby augmented therapy may lead to a better treatment outcome. Technology-supported training holds opportunities for increasing training intensity. However, the effects of robot-supported task-oriented training with real life objects in stroke patients are not known to date. The aim of the present study was to investigate the effectiveness and added value of the Haptic Master robot combined with task-oriented arm hand training in chronic stroke patients. In a single-blind randomized controlled trial, 22 chronic stroke patients were randomly allocated to receive either task-oriented robot-assisted arm-hand training (experimental group) or task-oriented non-robotic arm-hand training (control group). For training, the T-TOAT (Technology-supported Task-Oriented Arm Training) method was applied. Training was provided during 8 weeks, 4 times/week, 2 × 30 min/day. A significant improvement after training on the Action Research Arm Test (ARAT) was demonstrated in the experimental group (p = 0.008). Results were maintained until 6 months after cessation of the training. On the perceived performance measure (Motor Activity Log (MAL)), both the experimental and the control group improved significantly after training (control group p = 0.008; experimental group p = 0.013). The improvements on the MAL in both groups were maintained until 6 months after cessation of the training. With regard to quality of life, a significant improvement after training was found only in the control group (EuroQol-5D p = 0.015, SF-36 physical p = 0.01). However, the improvement on the SF-36 in the control group was not maintained (p = 0.012). No between-group differences could be demonstrated on any of the outcome measures. Arm hand performance improved in chronic stroke patients after eight weeks of task-oriented training. The use of a Haptic Master robot in support of task-oriented arm training did not show added value over video-instructed task-oriented exercises in highly functional stroke patients. Current Controlled Trials ISRCTN82787126.
Working safely with robot workers: Recommendations for the new workplace.
Murashov, Vladimir; Hearl, Frank; Howard, John
2016-01-01
The increasing use of robots in performing tasks alongside or together with human co-workers raises novel occupational safety and health issues. The new 21st century workplace will be one in which occupational robotics plays an increasing role. This article describes the increasing complexity of robots and proposes a number of recommendations for the practice of safe occupational robotics.
An egocentric vision based assistive co-robot.
Zhang, Jingzhe; Zhuang, Lishuo; Wang, Yang; Zhou, Yameng; Meng, Yan; Hua, Gang
2013-06-01
We present the prototype of an egocentric vision based assistive co-robot system. In this co-robot system, the user wears a pair of glasses with a forward-looking camera and is actively engaged in the control loop of the robot in navigational tasks. The egocentric vision glasses serve two purposes. First, they provide a source of visual input for requesting the robot to find a certain object in the environment. Second, the motion patterns computed from the egocentric video, associated with a specific set of head movements, are exploited to guide the robot to find the object. These are especially helpful for quadriplegic individuals who lack the hand functionality needed for interaction and control with other modalities (e.g., a joystick). In our co-robot system, when the robot does not fulfill the object finding task within a pre-specified time window, it actively solicits user control for guidance. The user can then use the egocentric vision based gesture interface to orient the robot towards the direction of the object, after which the robot automatically navigates towards the object until it finds it. Our experiments validated the efficacy of the closed-loop design in engaging the human in the loop.
Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.
Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne
2016-01-01
Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers.
Modeling Leadership Styles in Human-Robot Team Dynamics
NASA Technical Reports Server (NTRS)
Cruz, Gerardo E.
2005-01-01
The recent proliferation of robotic systems in our society has placed questions regarding interaction between humans and intelligent machines at the forefront of robotics research. In response, our research attempts to understand the context in which particular types of interaction optimize efficiency in tasks undertaken by human-robot teams. It is our conjecture that applying previous research results regarding leadership paradigms in human organizations will lead us to a greater understanding of the human-robot interaction space. In doing so, we adapt four leadership styles prevalent in human organizations to human-robot teams. By noting which leadership style is more appropriately suited to which situation, as given by previous research, a mapping is created between the adapted leadership styles and human-robot interaction scenarios, a mapping which will presumably maximize efficiency in task completion for a human-robot team. In this research we test this mapping with two adapted leadership styles: directive and transactional. For testing, we have taken a virtual 3D interface and integrated it with a genetic algorithm for use in tele-operation of a physical robot. By developing team efficiency metrics, we can determine whether this mapping indeed prescribes interaction styles that will maximize efficiency in the teleoperation of a robot.
Visual search in scenes involves selective and non-selective pathways
Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R
2010-01-01
How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global/statistical information. PMID:21227734
Fixation and saliency during search of natural scenes: the case of visual agnosia.
Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey
2009-07-01
Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
Three small deployed satellites
2012-10-04
ISS033-E-009282 (4 Oct. 2012) --- Several tiny satellites are featured in this image photographed by an Expedition 33 crew member on the International Space Station. The satellites were released outside the Kibo laboratory using a Small Satellite Orbital Deployer attached to the Japanese module’s robotic arm on Oct. 4, 2012. Japan Aerospace Exploration Agency astronaut Aki Hoshide, flight engineer, set up the satellite deployment gear inside the lab and placed it in the Kibo airlock. The Japanese robotic arm then grappled the deployment system and its satellites from the airlock for deployment. Earth’s horizon and the blackness of space provide the backdrop for the scene.
JEMRMS Small Satellite Deployment Observation
2012-10-04
ISS033-E-009315 (4 Oct. 2012) --- Several tiny satellites are featured in this image photographed by an Expedition 33 crew member on the International Space Station. The satellites were released outside the Kibo laboratory using a Small Satellite Orbital Deployer attached to the Japanese module’s robotic arm on Oct. 4, 2012. Japan Aerospace Exploration Agency astronaut Aki Hoshide, flight engineer, set up the satellite deployment gear inside the lab and placed it in the Kibo airlock. The Japanese robotic arm then grappled the deployment system and its satellites from the airlock for deployment. A blue and white part of Earth provides the backdrop for the scene.
3D display for enhanced tele-operation and other applications
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Pezzaniti, J. Larry; Vaden, Justin; Hyatt, Brian; Morris, James; Chenault, David; Bodenhamer, Andrew; Pettijohn, Bradley; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Kingston, David; Newell, Scott
2010-04-01
In this paper, we report on the use of a 3D vision field upgrade kit for TALON robot consisting of a replacement flat panel stereoscopic display, and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. A replacement display, replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.
The robot's eyes - Stereo vision system for automated scene analysis
NASA Technical Reports Server (NTRS)
Williams, D. S.
1977-01-01
Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Makinde, O A; Mpofu, K; Vrabic, R; Ramatsetse, B I
2017-01-01
The development of a robotic-driven maintenance solution capable of automatically maintaining a reconfigurable vibrating screen (RVS) machine utilized in dangerous and hazardous underground mining environments has called for the design of a multifunctional robotic end-effector capable of carrying out all the maintenance tasks on the RVS machine. In view of this, the paper presents a bio-inspired approach which unfolds the design of a novel multifunctional robotic end-effector embedded with mechanical and control mechanisms capable of automatically maintaining the RVS machine. To achieve this, therblig and morphological methodologies (which classify the motions as well as the actions required by the robotic end-effector in carrying out RVS machine maintenance tasks), obtained from a detailed analogy of how a human being (i.e., a machine maintenance manager) would carry out different maintenance tasks on the RVS machine, were used to obtain the maintenance objective functions or goals of the multifunctional robotic end-effector as well as the maintenance activity constraints of the RVS machine that must be adhered to by the multifunctional robotic end-effector during machine maintenance. The results of the therblig and morphological analyses of five (5) different maintenance tasks capture and classify one hundred and thirty-four (134) repetitive motions and fifty-four (54) functions required in automating the maintenance tasks of the RVS machine. Based on these findings, a worm-gear mechanism embedded with fingers extruded with hexagonal-shaped heads capable of carrying out the "gripping and ungrasping" and "loosening and bolting" functions of the robotic end-effector and an electric cylinder actuator module capable of carrying out the "unpinning and hammering" functions of the robotic end-effector were integrated together to produce the customized multifunctional robotic end-effector capable of automatically maintaining the RVS machine. The axial forces ([Formula: see text] and [Formula: see text]), normal forces ([Formula: see text]) and total load [Formula: see text] acting on the teeth of the worm-gear module of the multifunctional robotic end-effector during the gripping of worn-out or new RVS machine subsystems, which are 978.547, 1245.06 and 1016.406 N, respectively, were satisfactory. The nominal bending and torsional stresses acting on the shoulder of the socket module of the multifunctional robotic end-effector during the loosening and tightening of bolts, which are 1450.72 and 179.523 MPa, respectively, were satisfactory. The hammering and unpinning forces utilized by the electric cylinder actuator module of the multifunctional robotic end-effector during the unpinning and hammering of screen panel pins out of and into the screen panels were satisfactory.
Coordinating teams of autonomous vehicles: an architectural perspective
NASA Astrophysics Data System (ADS)
Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo
2005-05-01
In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).
Impact of 2D and 3D vision on performance of novice subjects using da Vinci robotic system.
Blavier, A; Gaudissart, Q; Cadière, G B; Nyssen, A S
2006-01-01
The aim of this study was to evaluate the impact of 3D and 2D vision on the performance of novice subjects using the da Vinci robotic system. 224 nurses without any surgical experience were divided into two groups; one group executed a motor task with the robotic system in 2D and the other with the robotic system in 3D. Time to perform the task was recorded. Our data showed significantly better time performance in the 3D view (24.67 +/- 11.2) than in the 2D view (40.26 +/- 17.49, P < 0.001). Our findings emphasize the advantage of 3D over 2D vision in performing a surgical task, encouraging the development of efficient and less expensive 3D systems in order to improve the accuracy of surgical gestures, resident training, and operating time.
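For readers who want to reproduce this style of group comparison, the sketch below runs Welch's two-sample t-test on synthetic samples drawn with the reported group means and standard deviations. It is illustrative only: the raw data are not available here, and the assumed group size of 112 simply splits the 224 participants evenly.

```python
import numpy as np
from scipy import stats

# Stand-in samples using the reported summary statistics
# (3D: 24.67 +/- 11.2 s; 2D: 40.26 +/- 17.49 s), not the study's raw data.
rng = np.random.default_rng(2)
t3d = rng.normal(24.67, 11.20, 112)
t2d = rng.normal(40.26, 17.49, 112)

t_stat, p = stats.ttest_ind(t3d, t2d, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p:.2e}")
```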
Maximizing Efficiency and Reducing Robotic Surgery Costs Using the NASA Task Load Index.
Walters, Carrie; Webb, Paula J
2017-10-01
Perioperative leaders at our facility were struggling to meet efficiency targets for robotic surgery procedures while also maintaining the satisfaction of the surgical team. We developed a human resources time and motion study tool and used it in conjunction with the NASA Task Load Index to observe and analyze the required workload of personnel assigned to 25 robotic surgery procedures. The time and motion study identified opportunities to enlist the help of nonlicensed support personnel to ensure safe patient care and improve OR efficiency. Using the NASA Task Load Index demonstrated that high temporal, effort, and physical demands existed for personnel assisting with and performing robotic surgery. We believe that this process could be used to develop cost-effective staffing models, resulting in safe and efficient care for all surgical patients. Copyright © 2017 AORN, Inc. Published by Elsevier Inc. All rights reserved.
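The NASA Task Load Index combines six subscale ratings into a weighted workload score. A minimal sketch of the standard weighted scoring follows; the ratings and pairwise-comparison weights are invented for illustration and are not the study's data.

```python
# Weighted NASA-TLX score: six subscales rated 0-100, each weighted by
# how often it was chosen across the 15 pairwise comparisons.
# All numbers below are illustrative stand-ins.
ratings = {"mental": 70, "physical": 80, "temporal": 85,
           "performance": 40, "effort": 75, "frustration": 35}
pair_wins = {"mental": 2, "physical": 3, "temporal": 4,
             "performance": 1, "effort": 4, "frustration": 1}  # sums to 15

overall = sum(ratings[s] * pair_wins[s] for s in ratings) / 15.0
print(f"Weighted TLX = {overall:.1f}")  # higher = greater workload
```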
Extended Task Space Control for Robotic Manipulators
NASA Technical Reports Server (NTRS)
Backes, Paul G. (Inventor); Long, Mark K. (Inventor)
1996-01-01
The invention is a method of operating a robot in successive sampling intervals to perform a task, the robot having joints and joint actuators with actuator control loops, by decomposing the task into behavior forces, accelerations, velocities and positions of plural behaviors to be exhibited by the robot simultaneously; computing actuator accelerations of the joint actuators for the current sampling interval from both the behavior forces, accelerations, velocities and positions of the current sampling interval and the actuator velocities and positions of the previous sampling interval; computing actuator velocities and positions of the joint actuators for the current sampling interval from the actuator velocities and positions of the previous sampling interval; and, finally, controlling the actuators in accordance with the actuator accelerations, velocities and positions of the current sampling interval. The actuator accelerations, velocities and positions of the current sampling interval are stored for use during the next sampling interval.
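The per-interval update described in this claim reduces to integrating commanded actuator accelerations into velocities and positions that are stored for the next interval. The sketch below shows only that integration step (semi-implicit Euler) under assumed values; the behavior decomposition, joint count, and loop rate are placeholders, not the patented method.

```python
import numpy as np

def control_step(q_prev, qd_prev, qdd_cmd, dt):
    """One sampling interval: integrate commanded actuator accelerations
    into velocities and positions, which are stored for the next interval."""
    qd = qd_prev + qdd_cmd * dt  # actuator velocities, current interval
    q = q_prev + qd * dt         # actuator positions, current interval
    return q, qd

# Illustrative example: 6-joint arm, 1 kHz loop, constant test acceleration.
q, qd = np.zeros(6), np.zeros(6)
for _ in range(1000):
    q, qd = control_step(q, qd, np.full(6, 0.1), dt=1e-3)
print(q)  # joint positions after 1 s
```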
Interrupted Visual Searches Reveal Volatile Search Memory
ERIC Educational Resources Information Center
Shen, Y. Jeremy; Jiang, Yuhong V.
2006-01-01
This study investigated memory from interrupted visual searches. Participants conducted a change detection search task on polygons overlaid on scenes. Search was interrupted by various disruptions, including unfilled delay, passive viewing of other scenes, and additional search on new displays. Results showed that performance was unaffected by…
Auditory Scene Analysis: An Attention Perspective
ERIC Educational Resources Information Center
Sussman, Elyse S.
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…
Stallkamp, J; Schraft, R D
2005-01-01
In minimally invasive surgery, a higher degree of accuracy is required by surgeons both for current and for future applications. This could be achieved using either a manipulator or a robot which would undertake selected tasks during surgery. However, a manually-controlled manipulator cannot fully exploit the maximum accuracy and feasibility of three-dimensional motion sequences. Therefore, apart from being used to perform simple positioning tasks, manipulators will probably be replaced more and more by robot systems in the future. However, in order to use a robot, accurate, up-to-date and extensive data is required which cannot yet be acquired by typical sensors such as CT, MRI, US or common x-ray machines. This paper deals with a new sensor and a concept for its application in robot-assisted minimally invasive surgery on soft tissue, which could be a solution for data acquisition in the future. Copyright 2005 Robotic Publications Ltd.
Brain-controlled telepresence robot by motor-disabled people.
Tonin, Luca; Carlson, Tom; Leeb, Robert; del R Millán, José
2011-01-01
In this paper we present the first results of users with disabilities mentally controlling a telepresence robot, a rather complex task as the robot is continuously moving and the user must control it for a long period of time (over 6 minutes) to traverse the whole path. These two users drove the telepresence robot from their clinic, more than 100 km away. Remarkably, although the patients had never visited the location where the telepresence robot was operating, they achieved performance similar to that of a group of four healthy users who were familiar with the environment. In particular, the experimental results reported in this paper demonstrate the benefits of shared control for brain-controlled telepresence robots: it allows all subjects (including novel BMI subjects such as our users with disabilities) to complete a complex task in a similar time and with a similar number of commands to those required by manual control.
Attitudes towards health-care robots in a retirement village.
Broadbent, Elizabeth; Tamagawa, Rie; Patience, Anna; Knock, Brett; Kerse, Ngaire; Day, Karen; MacDonald, Bruce A
2012-06-01
This study investigated the attitudes and preferences of staff, residents and relatives of residents in a retirement village towards a health-care robot. Focus groups were conducted with residents, managers and caregivers, and questionnaires were collected from 32 residents, 30 staff and 27 relatives of residents. The most popular robot tasks were detection of falls and calling for help, lifting, and monitoring location. Robot functionality was more important than appearance. Concerns included the loss of jobs and personal care, while perceived benefits included allowing staff to spend quality time with residents, and helping residents with self-care. Residents showed a more positive attitude towards robots than both staff and relatives. These results provide an initial guide for the tasks and appearance appropriate for a robot to provide assistance in aged care facilities and highlight concerns. © 2011 The Authors. Australasian Journal on Ageing © 2011 ACOTA.
Framework and Method for Controlling a Robotic System Using a Distributed Computer Network
NASA Technical Reports Server (NTRS)
Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)
2015-01-01
A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.
Reprogramming the articulated robotic arm for glass handling by using Arduino microcontroller
NASA Astrophysics Data System (ADS)
Razali, Zol Bahri; Kader, Mohamed Mydin M. Abdul; Kadir, Mohd Asmadi Akmal; Daud, Mohd Hisam
2017-09-01
The application of articulated robotic arms in industry has risen with the expansion of using robots to replace humans in tasks, especially harmful ones. However, a few problems arise with the program used to schedule the arm. Thus the purpose of this project is to design, fabricate and integrate an articulated robotic arm, using an Arduino microcontroller, for handling a glass sorting system. This project was designed to segregate glass and non-glass waste, which would be a pioneering step for recycling. The robotic arm has four servo motors to operate as a whole: three for the body and one for the holding mechanism. This intelligent system is controlled by the Arduino microcontroller and built with an optical sensor to distinguish the objects to be handled. A Solidworks model was used to produce the detailed design of the robotic arm and to analyze its mechanical properties using CAD software.
A Drastic Change in Background Luminance or Motion Degrades the Preview Benefit.
Osugi, Takayuki; Murakami, Ikuya
2017-01-01
When some distractors (old items) precede some others (new items) in an inefficient visual search task, the search is restricted to new items, and yields a phenomenon termed the preview benefit. It has recently been demonstrated that, in this preview search task, the onset of repetitive changes in the background disrupts the preview benefit, whereas a single transient change in the background does not. In the present study, we explored this effect with dynamic background changes occurring in the context of realistic scenes, to examine the robustness and usefulness of visual marking. We examined whether preview benefit in a preview search task survived through task-irrelevant changes in the scene, namely a luminance change and the initiation of coherent motion, both occurring in the background. Luminance change of the background disrupted preview benefit if it was synchronized with the onset of the search display. Furthermore, although the presence of coherent background motion per se did not affect preview benefit, its synchronized initiation with the onset of the search display did disrupt preview benefit if the motion speed was sufficiently high. These results suggest that visual marking can be destroyed by a transient event in the scene if that event is sufficiently drastic.
Hashem, Joseph; Schneider, Erich; Pryor, Mitch; ...
2017-01-01
Our paper describes how to use MCNP to evaluate the rate of material damage in a robot incurred by exposure to a neutron flux. The example used in this work is that of a robotic manipulator installed in a high intensity, fast, and collimated neutron radiography beam port at the University of Texas at Austin's TRIGA Mark II research reactor. Our effort includes taking robotic technologies and using them to automate non-destructive imaging tasks in nuclear facilities where the robotic manipulator acts as the motion control system for neutron imaging tasks. Simulated radiation tests are used to analyze the radiation damage to the robot. Once the neutron damage is calculated using MCNP, several possible shielding materials are analyzed to determine the most effective way of minimizing the neutron damage. Furthermore, neutron damage predictions provide users the means to simulate geometrical and material changes, thus saving time, money, and energy in determining the optimal setup for a robotic system installed in a radiation environment.
Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Large scale operational areas often require multiple service robots for coverage and task parallelism. In such scenarios, each robot keeps its individual map of the environment and serves specific areas of the map at different times. We propose a knowledge sharing mechanism for multiple robots in which one robot can inform other robots about the changes in map, like path blockage, or new static obstacles, encountered at specific areas of the map. This symbiotic information sharing allows the robots to update remote areas of the map without having to explicitly navigate those areas, and plan efficient paths. A node representation of paths is presented for seamless sharing of blocked path information. The transience of obstacles is modeled to track obstacles which might have been removed. A lazy information update scheme is presented in which only relevant information affecting the current task is updated for efficiency. The advantages of the proposed method for path planning are discussed against traditional method with experimental results in both simulation and real environments. PMID:28678193
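A minimal sketch of the blocked-path sharing idea follows, using networkx as a stand-in for the node representation of paths. The grid map, the broadcast step, and the "lazy" relevance check are simplified assumptions for illustration rather than the paper's implementation.

```python
import networkx as nx

# Each robot keeps its own graph of path nodes; 5x5 grids are stand-ins.
robot_a = nx.grid_2d_graph(5, 5)
robot_b = nx.grid_2d_graph(5, 5)

blocked_edge = ((2, 2), (2, 3))     # robot A observes a blockage here
robot_a.remove_edge(*blocked_edge)  # A updates its own map, then broadcasts

# Lazy update on robot B: apply the shared knowledge only if the blocked
# edge actually lies on the path of B's current task.
path = nx.shortest_path(robot_b, (0, 0), (4, 4))
edges_on_path = set(zip(path, path[1:]))
if blocked_edge in edges_on_path or blocked_edge[::-1] in edges_on_path:
    robot_b.remove_edge(*blocked_edge)
    path = nx.shortest_path(robot_b, (0, 0), (4, 4))  # replan around it
print(path)
```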
Selective attention during scene perception: evidence from negative priming.
Gordon, Robert D
2006-10-01
In two experiments, we examined the role of semantic scene content in guiding attention during scene viewing. In each experiment, performance on a lexical decision task was measured following the brief presentation of a scene. The lexical decision stimulus named an object that was either present or not present in the scene. The results of Experiment 1 revealed no priming from inconsistent objects (whose identities conflicted with the scene in which they appeared), but negative priming from consistent objects. The results of Experiment 2 indicated that negative priming from consistent objects occurs only when inconsistent objects are present in the scenes. Together, the results suggest that observers are likely to attend to inconsistent objects, and that representations of consistent objects are suppressed in the presence of an inconsistent object. Furthermore, the data suggest that inconsistent objects draw attention because they are relatively difficult to identify in an inappropriate context.
Figure-Ground Organization in Visual Cortex for Natural Scenes
2016-01-01
Figure-ground organization and border-ownership assignment are essential for understanding natural scenes. It has been shown that many neurons in the macaque visual cortex signal border-ownership in displays of simple geometric shapes such as squares, but how well these neurons resolve border-ownership in natural scenes is not known. We studied area V2 neurons in behaving macaques with static images of complex natural scenes. We found that about half of the neurons were border-ownership selective for contours in natural scenes, and this selectivity originated from the image context. The border-ownership signals emerged within 70 ms after stimulus onset, only ∼30 ms after response onset. A substantial fraction of neurons were highly consistent across scenes. Thus, the cortical mechanisms of figure-ground organization are fast and efficient even in images of complex natural scenes. Understanding how the brain performs this task so fast remains a challenge. PMID:28058269
3D change detection in staggered voxels model for robotic sensing and navigation
NASA Astrophysics Data System (ADS)
Liu, Ruixu; Hampshire, Brandon; Asari, Vijayan K.
2016-05-01
3D scene change detection is a challenging problem in robotic sensing and navigation, with several unpredictable aspects. A change detection method that can support various applications under varying environmental conditions is proposed. Point cloud models are acquired from an RGB-D sensor, which provides the required color and depth information, and change detection is performed on the robot-view point cloud model. A bilateral filter smooths the surface and fills holes while preserving edge details in the depth image. Registration of the point cloud model is implemented using the Random Sample Consensus (RANSAC) algorithm, with surface normals used in a preliminary stage to estimate the ground and walls. After preprocessing the data, we create a point voxel model that labels each voxel as surface or free space, and a color model that assigns each occupied voxel the mean color value of all points falling within it. Preliminary change detection is obtained by an XOR subtraction on the point voxel model. Next, the eight neighbors of each center voxel are examined: if they are neither all 'changed' voxels nor all 'unchanged' voxels, a histogram of location and hue-channel color is estimated. The experimental evaluations performed to assess the capability of our algorithm show promising results for novel change detection, indicating all the changing objects with a very limited false alarm rate.
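The voxel-differencing stage lends itself to a compact sketch. The code below occupies a boolean grid from each registered point cloud and XORs the grids so that voxels occupied in exactly one model are flagged as changes; the grid size, resolution, and synthetic clouds are assumptions, and the registration, filtering, and color-histogram refinement stages are omitted.

```python
import numpy as np

def voxelize(points, res=0.05, dims=(64, 64, 64)):
    """Mark voxels containing at least one point as occupied (surface)."""
    grid = np.zeros(dims, dtype=bool)
    idx = np.floor(points / res).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(dims)), axis=1)
    grid[tuple(idx[keep].T)] = True
    return grid

# Synthetic stand-in clouds: 'after' removes some points and adds others.
rng = np.random.default_rng(3)
cloud_before = rng.uniform(0.0, 3.0, (5000, 3))
cloud_after = np.vstack([cloud_before[:-500],              # object removed
                         rng.uniform(0.0, 3.0, (500, 3))])  # object added

changes = voxelize(cloud_before) ^ voxelize(cloud_after)  # XOR subtraction
print(int(changes.sum()), "changed voxels")
```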
Evaluation of a novel flexible snake robot for endoluminal surgery.
Patel, Nisha; Seneci, Carlo A; Shang, Jianzhong; Leibrandt, Konrad; Yang, Guang-Zhong; Darzi, Ara; Teare, Julian
2015-11-01
Endoluminal therapeutic procedures such as endoscopic submucosal dissection are increasingly attractive given the shift in surgical paradigm towards minimally invasive surgery. This novel three-channel articulated robot was developed to overcome the limitations of the flexible endoscope which poses a number of challenges to endoluminal surgery. The device enables enhanced movement in a restricted workspace, with improved range of motion and with the accuracy required for endoluminal surgery. To evaluate a novel flexible robot for therapeutic endoluminal surgery. Bench-top studies. Research laboratory. Targeting and navigation tasks of the robot were performed to explore the range of motion and retroflexion capabilities. Complex endoluminal tasks such as endoscopic mucosal resection were also simulated. Successful completion, accuracy and time to perform the bench-top tasks were the main outcome measures. The robot ranges of movement, retroflexion and navigation capabilities were demonstrated. The device showed significantly greater accuracy of targeting in a retroflexed position compared to a conventional endoscope. Bench-top study and small study sample. We were able to demonstrate a number of simulated endoscopy tasks such as navigation, targeting, snaring and retroflexion. The improved accuracy of targeting whilst in a difficult configuration is extremely promising and may facilitate endoluminal surgery which has been notoriously challenging with a conventional endoscope.
McWhinney, S R; Tremblay, A; Boe, S G; Bardouille, T
2018-02-01
Neurofeedback training teaches individuals to modulate brain activity by providing real-time feedback and can be used for brain-computer interface control. The present study aimed to optimize training by maximizing engagement through goal-oriented task design. Participants were shown either a visual display or a robot, where each was manipulated using motor imagery (MI)-related electroencephalography signals. Those with the robot were instructed to quickly navigate grid spaces, as the potential for goal-oriented design to strengthen learning was central to our investigation. Both groups were hypothesized to show increased magnitude of these signals across 10 sessions, with the greatest gains being seen in those navigating the robot due to increased engagement. Participants demonstrated the predicted increase in magnitude, with no differentiation between hemispheres. Participants navigating the robot showed stronger left-hand MI increases than those with the computer display. This is likely due to success being reliant on maintaining strong MI-related signals. While older participants showed stronger signals in early sessions, this trend later reversed, suggesting greater natural proficiency but reduced flexibility. These results demonstrate capacity for modulating neurofeedback using MI over a series of training sessions, using tasks of varied design. Importantly, the more goal-oriented robot control task resulted in greater improvements.
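The feedback quantity in MI neurofeedback is commonly band power in the mu rhythm (roughly 8-12 Hz) over sensorimotor channels. Whether this study used exactly that measure is not stated, so the sketch below is a generic assumption with a synthetic signal standing in for real C3/C4 EEG.

```python
import numpy as np
from scipy.signal import welch

# Synthetic 4 s "EEG" segment with a 10 Hz component plus noise.
fs = 250.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(4)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Welch PSD, then average power in the mu band (8-12 Hz).
f, psd = welch(eeg, fs=fs, nperseg=int(fs))
mu_power = psd[(f >= 8) & (f <= 12)].mean()
print(f"mu-band power: {mu_power:.3f}")  # feedback value shown to the user
```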
NASA Technical Reports Server (NTRS)
Sweet, Barbara T.; Kaiser, Mary K.
2013-01-01
Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.
Effects of Imperfect Automation on Operator’s Supervisory Control of Multiple Robots
2011-08-01
Participants completed the … Survey, the Ishihara Color Vision Test, and the Cube Comparison test. Participants then received training and practice on the tasks they were about to perform, including exercises for completing various tasks, several mini-exercises for practicing the steps, and exercises for performing the robotic control tasks. The type and …
The Human-Robot Interaction Operating System
NASA Technical Reports Server (NTRS)
Fong, Terrence; Kunz, Clayton; Hiatt, Laura M.; Bugajska, Magda
2006-01-01
In order for humans and robots to work effectively together, they need to be able to converse about abilities, goals and achievements. Thus, we are developing an interaction infrastructure called the "Human-Robot Interaction Operating System" (HRI/OS). The HRI/OS provides a structured software framework for building human-robot teams, supports a variety of user interfaces, enables humans and robots to engage in task-oriented dialogue, and facilitates integration of robots through an extensible API.
Sample Return Robot Centennial Challenge
2012-06-16
Children visiting the Worcester Polytechnic Institute (WPI) "TouchTomorrow" education and outreach event try to catch basketballs being thrown by a robot from FIRST Robotics at Burncoat High School (Mass.) on Saturday, June 16, 2012 at WPI in Worcester, Mass. The TouchTomorrow event was held in tandem with the NASA-WPI Sample Return Robot Centennial Challenge. The NASA-WPI challenge tasked robotic teams to build autonomous robots that can identify, collect and return samples. NASA needs autonomous robotic capability for future planetary exploration. Photo Credit: (NASA/Bill Ingalls)
Improved configuration control for redundant robots
NASA Technical Reports Server (NTRS)
Seraji, H.; Colbaugh, R.
1990-01-01
This article presents a singularity-robust task-prioritized reformulation of the configuration control scheme for redundant robot manipulators. This reformulation suppresses large joint velocities near singularities, at the expense of small task trajectory errors. This is achieved by optimally reducing the joint velocities, modifying the task trajectories so that the errors induced in task performance are minimal. Furthermore, the same framework provides a means for assignment of priorities between the basic task of end-effector motion and the user-defined additional task for utilizing redundancy. This allows automatic relaxation of the additional task constraints in favor of the desired end-effector motion, when both cannot be achieved exactly. The improved configuration control scheme is illustrated for a variety of additional tasks, and extensive simulation results are presented.
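For readers who want the flavor of the scheme in code, the following is a minimal numpy sketch of a singularity-robust, task-prioritized resolution of joint rates: a damped least-squares pseudoinverse bounds joint velocities near singularities at the cost of small task errors, and the additional task is projected into the null space of the basic end-effector task so it is relaxed automatically when both cannot be met. The damping factor and the two-task structure are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def damped_pinv(J, lam=0.1):
        # Damped least-squares pseudoinverse: the lam**2 term keeps joint
        # rates bounded near singularities, inducing a small task error.
        m = J.shape[0]
        return J.T @ np.linalg.inv(J @ J.T + (lam ** 2) * np.eye(m))

    def prioritized_rates(J1, dx1, J2, dx2, lam=0.1):
        # Basic task (end-effector motion) has priority; the additional
        # task acts only in its null space, so it is relaxed automatically
        # when the two tasks conflict.
        J1p = damped_pinv(J1, lam)
        q1 = J1p @ dx1
        N1 = np.eye(J1.shape[1]) - J1p @ J1
        q2 = damped_pinv(J2 @ N1, lam) @ (dx2 - J2 @ q1)
        return q1 + N1 @ q2

    # Two-joint example with a nearly singular basic-task Jacobian.
    J1 = np.array([[1.0, 1.0], [0.0, 1e-4]])
    J2 = np.array([[1.0, 0.0]])
    print(prioritized_rates(J1, np.array([0.1, 0.0]), J2, np.array([0.05])))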
Downey, John E; Weiss, Jeffrey M; Muelling, Katharina; Venkatraman, Arun; Valois, Jean-Sebastien; Hebert, Martial; Bagnell, J Andrew; Schwartz, Andrew B; Collinger, Jennifer L
2016-03-18
Recent studies have shown that brain-machine interfaces (BMIs) offer great potential for restoring upper limb function. However, grasping objects is a complicated task and the signals extracted from the brain may not always be capable of driving these movements reliably. Vision-guided robotic assistance is one possible way to improve BMI performance. We describe a method of shared control where the user controls a prosthetic arm using a BMI and receives assistance with positioning the hand when it approaches an object. Two human subjects with tetraplegia used a robotic arm to complete object transport tasks with and without shared control. The shared control system was designed to provide a balance between BMI-derived intention and computer assistance. An autonomous robotic grasping system identified and tracked objects and defined stable grasp positions for these objects. The system identified when the user intended to interact with an object based on the BMI-controlled movements of the robotic arm. Using shared control, BMI-controlled movements and autonomous grasping commands were blended to ensure secure grasps. Both subjects were more successful on object transfer tasks when using shared control compared to BMI control alone. Movements made using shared control were more accurate, more efficient, and less difficult. One participant attempted a task with multiple objects and successfully lifted one of two closely spaced objects in 92% of trials, demonstrating the potential for users to accurately execute their intention while using shared control. Integration of BMI control with vision-guided robotic assistance led to improved performance on object transfer tasks. Providing assistance while maintaining generalizability will make BMI systems more attractive to potential users. Trial registration: NCT01364480 and NCT01894802.
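The abstract does not give the blending law, so the sketch below assumes a simple proximity-based linear blend between the BMI-decoded velocity and the autonomous controller's velocity; the distances d_near and d_far are hypothetical parameters chosen only to illustrate how assistance can ramp up as the hand approaches an object.

    import numpy as np

    def blend_command(v_bmi, v_auto, dist_to_object, d_near=0.05, d_far=0.25):
        # alpha ramps from 0 (pure BMI control, far from the object) to 1
        # (mostly autonomous grasp alignment, close to the object).
        alpha = np.clip((d_far - dist_to_object) / (d_far - d_near), 0.0, 1.0)
        return (1.0 - alpha) * np.asarray(v_bmi) + alpha * np.asarray(v_auto)

    print(blend_command([0.10, 0.0, 0.0], [0.02, 0.05, 0.0], dist_to_object=0.30))
    print(blend_command([0.10, 0.0, 0.0], [0.02, 0.05, 0.0], dist_to_object=0.06))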
Quantum robots and environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benioff, P.
1998-08-01
Quantum robots and their interactions with environments of quantum systems are described, and their study justified. A quantum robot is a mobile quantum system that includes an on-board quantum computer and needed ancillary systems. Quantum robots carry out tasks whose goals include specified changes in the state of the environment, or carrying out measurements on the environment. Each task is a sequence of alternating computation and action phases. Computation phase activities include determination of the action to be carried out in the next phase, and recording of information on neighborhood environmental system states. Action phase activities include motion of the quantum robot and changes in the neighborhood environment system states. Models of quantum robots and their interactions with environments are described using discrete space and time. A unitary step operator T that gives the single time step dynamics is associated with each task. T = T_a + T_c is a sum of action phase and computation phase step operators. Conditions that T_a and T_c should satisfy are given, along with a description of the evolution as a sum over paths of completed phase input and output states. A simple example of a task (carrying out a measurement on a very simple environment) is analyzed in detail. A decision tree for the task is presented and discussed in terms of the sums over phase paths. It is seen that no definite times or durations are associated with the phase steps in the tree, and that the tree describes the successive phase steps in each path in the sum over phase paths.
Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya
2016-01-01
To work cooperatively with humans by using language, robots must not only acquire a mapping between language and their behavior but also autonomously utilize the mapping in appropriate contexts of interactive tasks online. To this end, we propose a novel learning method linking language to robot behavior by means of a recurrent neural network. In this method, the network learns from correct examples of the imposed task that are given not as explicitly separated sets of language and behavior but as sequential data constructed from the actual temporal flow of the task. By doing this, the internal dynamics of the network models both language-behavior relationships and the temporal patterns of interaction. Here, "internal dynamics" refers to the time development of the system defined on the fixed-dimensional space of the internal states of the context layer. Thus, in the execution phase, by constantly representing where in the interaction context it is as its current state, the network autonomously switches between recognition and generation phases without any explicit signs and utilizes the acquired mapping in appropriate contexts. To evaluate our method, we conducted an experiment in which a robot generates appropriate behavior in response to a human's linguistic instruction. After learning, the network actually formed an attractor structure representing both language-behavior relationships and the task's temporal pattern in its internal dynamics. In these dynamics, language-behavior mapping was achieved by a branching structure, repetition of the human's instructions and the robot's behavioral responses was represented as a cyclic structure, and waiting for a subsequent instruction was represented as a fixed-point attractor. Thanks to this structure, the robot was able to interact online with a human concerning the given task by autonomously switching phases.
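As an architecture sketch only (the paper's dimensions, training procedure, and data are not reproduced here), the kind of recurrent network described can be pictured as an Elman-style net whose context layer carries the interaction state across time, with language and behavior channels simply concatenated into one input vector per time step.

    import numpy as np

    class ElmanRNN:
        def __init__(self, n_in, n_ctx, n_out, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0, 0.1, (n_ctx, n_in))
            self.W_ctx = rng.normal(0, 0.1, (n_ctx, n_ctx))
            self.W_out = rng.normal(0, 0.1, (n_out, n_ctx))
            self.c = np.zeros(n_ctx)  # internal state: where in the interaction we are

        def step(self, x):
            self.c = np.tanh(self.W_in @ x + self.W_ctx @ self.c)
            return self.W_out @ self.c

    # One time step: 5 language dims plus 4 behavior dims in, 4 behavior dims out.
    net = ElmanRNN(n_in=9, n_ctx=16, n_out=4)
    x_t = np.concatenate([np.array([1, 0, 0, 0, 0]), np.zeros(4)])  # instruction, no motion yet
    print(net.step(x_t))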
Mentoring console improves collaboration and teaching in surgical robotics.
Hanly, Eric J; Miller, Brian E; Kumar, Rajesh; Hasser, Christopher J; Coste-Maniere, Eve; Talamini, Mark A; Aurora, Alexander A; Schenkman, Noah S; Marohn, Michael R
2006-10-01
One of the most significant limitations of surgical robots has been their inability to allow multiple surgeons and surgeons-in-training to engage in collaborative control of robotic surgical instruments. We report the initial experience with a novel two-headed da Vinci surgical robot that has two collaborative modes: the "swap" mode allows two surgeons to simultaneously operate and actively swap control of the robot's four arms, and the "nudge" mode allows them to share control of two of the robot's arms. The utility of the mentoring console operating in its two collaborative modes was evaluated through a combination of dry laboratory exercises and animal laboratory surgery. The results from surgeon-resident collaborative performance of complex three-handed surgical tasks were compared to results from single-surgeon and single-resident performance. Statistical significance was determined using Student's t-test. Collaborative surgeon-resident swap control reduced the time to completion of complex three-handed surgical tasks by 25% compared to single-surgeon operation of a four-armed da Vinci (P < 0.01) and by 34% compared to single-resident operation (P < 0.001). While swap mode was found to be most helpful during parts of surgical procedures that require multiple hands (such as isolation and division of vessels), nudge mode was particularly useful for guiding a resident's hands during crucially precise steps of an operation (such as proper placement of stitches). The da Vinci mentoring console greatly facilitates surgeon collaboration during robotic surgery and improves the performance of complex surgical tasks. The mentoring console has the potential to improve resident participation in surgical robotics cases, enhance resident education in surgical training programs engaged in surgical robotics, and improve patient safety during robotic surgery.
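For the kind of comparison reported (completion times compared with Student's t-test), a worked example with entirely made-up timing data looks like this; only the shape of the analysis, not the numbers, reflects the study.

    import numpy as np
    from scipy import stats

    # Hypothetical completion times (minutes) for a three-handed task.
    single_surgeon = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.5])
    swap_mode_team = np.array([9.0, 9.6, 8.8, 9.3, 9.9, 8.7])

    t, p = stats.ttest_ind(swap_mode_team, single_surgeon)
    reduction = 1 - swap_mode_team.mean() / single_surgeon.mean()
    print(f"t = {t:.2f}, p = {p:.4f}, time reduction = {reduction:.0%}")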
A Radio-Controlled Car Challenge
ERIC Educational Resources Information Center
Roman, Harry T.
2010-01-01
Watching a radio-controlled car zip along a sidewalk or street has become a common sight. Within this toy are the basic ingredients of a mobile robot, used by industry for a variety of important and potentially dangerous tasks. In this challenge, students consider modifying an off-the-shelf, radio-controlled car, adapting it for a robotic task.
Trust-based Task Assignment in Military Tactical Networks
2012-06-01
…"Decentralized Auctions for Robust Task Allocation," IEEE Trans. on Robotics, vol. 25, no. 4, pp. 912-926, Aug. 2009. [10] B.B. Choudhury, B.B. Biswal, and D… Choi et al. [9] and Choudhury et al. [10] investigated market-based task allocation algorithms for multi-robot systems. Choi et al. [9] proposed a consensus-based bundle algorithm for rapid conflict-free matching between tasks and robots. Choudhury et al. [10] conducted an empirical study on a…
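The excerpt cites market-based, auction-style task allocation. The sketch below is a much-simplified sequential single-item auction in that spirit (not the consensus-based bundle algorithm itself): each unassigned task is awarded to the robot that bids the lowest cost, with bids taken to be plain travel distances.

    import numpy as np

    def auction_allocate(robot_pos, task_pos):
        # Greedy single-item auction: repeatedly award the task with the
        # best (lowest) bid; a robot's bid is its distance to the task.
        robot_pos = [np.asarray(p, float) for p in robot_pos]
        unassigned = set(range(len(task_pos)))
        assignment = {}
        while unassigned:
            bids = [(np.linalg.norm(robot_pos[r] - np.asarray(task_pos[t])), r, t)
                    for r in range(len(robot_pos)) for t in unassigned]
            cost, r, t = min(bids)
            assignment[t] = r
            robot_pos[r] = np.asarray(task_pos[t], float)  # winner moves to the task
            unassigned.remove(t)
        return assignment

    print(auction_allocate([(0, 0), (10, 0)], [(1, 1), (9, 1), (5, 5)]))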
Hiding the system from the user: Moving from complex mental models to elegant metaphors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis W. Nielsen; David J. Bruemmer
2007-08-01
In previous work, increased complexity of robot behaviors and the accompanying interface design often led to operator confusion and/or a fight for control between the robot and operator. We believe the reason for the conflict was that the design of the interface and interactions presented too much of the underlying robot design model to the operator. Since the design model includes the implementation of sensors, behaviors, and sophisticated algorithms, the result was that the operator's cognitive efforts were focused on understanding the design of the robot system as opposed to focusing on the task at hand. This paper illustrates how this very problem emerged at the INL and how the implementation of new metaphors for interaction has allowed us to hide the design model from the user and allow the user to focus more on the task at hand. Supporting the user's focus on the task rather than on the design model allows increased use of the system and significant performance improvement in a search task with novice users.
Real-time augmented feedback benefits robotic laparoscopic training.
Judkins, Timothy N; Oleynikov, Dmitry; Stergiou, Nick
2006-01-01
Robotic laparoscopic surgery has revolutionized minimally invasive surgery for treatment of abdominal pathologies. However, current training techniques rely on subjective evaluation. There is a lack of research on the type of tasks that should be used for training. Robotic surgical systems also do not currently have the ability to provide feedback to the surgeon regarding success of performing tasks. We trained medical students on three laparoscopic tasks and provided real-time feedback of performance during training. We found that real-time feedback can benefit training if the feedback provides information that is not available through other means (grip force). Subjects that received grip force feedback applied less force when the feedback was removed. Other forms of feedback (speed and relative phase) did not aid or impede training. Secondly, a relatively short training period (10 trials for each task) significantly improved most objective measures of performance. We also showed that robotic surgical performance can be quantitatively measured and evaluated. Providing grip force feedback can make the surgeon more aware of the forces being applied to delicate tissue during surgery.
Using multiple sensors for printed circuit board insertion
NASA Technical Reports Server (NTRS)
Sood, Deepak; Repko, Michael C.; Kelley, Robert B.
1989-01-01
As more and more activities are performed in space, there will be a greater demand placed on the information handling capacity of people who are to direct and accomplish these tasks. A promising alternative to full-time human involvement is the use of semi-autonomous, intelligent robot systems. To automate tasks such as assembly, disassembly, repair and maintenance, the issues presented by environmental uncertainties need to be addressed. These uncertainties are introduced by variations in the computed position of the robot at different locations in its work envelope, variations in part positioning, and tolerances of part dimensions. As a result, the robot system may not be able to accomplish the desired task without the help of sensor feedback. Measurements on the environment allow real time corrections to be made to the process. A design and implementation of an intelligent robot system which inserts printed circuit boards into a card cage are presented. Intelligent behavior is accomplished by coupling the task execution sequence with information derived from three different sensors: an overhead three-dimensional vision system, a fingertip infrared sensor, and a six degree of freedom wrist-mounted force/torque sensor.
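A minimal sketch of the sensor-coupled execution idea described here: vision supplies a coarse board pose, and the force/torque reading drives a small compliant correction during insertion. The gain, threshold, and toy force model are illustrative stand-ins for the three real sensors in the paper.

    import numpy as np

    def insert_with_feedback(vision_pose, read_force, k_f=0.002, max_steps=50):
        # Start from the vision estimate; nudge the lateral position against
        # the sensed reaction force until contact force is negligible.
        pose = np.asarray(vision_pose, float)
        for _ in range(max_steps):
            f = np.asarray(read_force(pose))
            if np.linalg.norm(f) < 0.5:      # newtons: free to insert
                return pose
            pose[:2] -= k_f * f[:2]          # comply: move away from the contact force
        return pose

    # Toy force model: reaction force grows with lateral misalignment from the slot.
    true_slot = np.array([1.0, 2.0])
    read_force = lambda p: 100.0 * np.append(p[:2] - true_slot, 0.0)
    print(insert_with_feedback([1.02, 1.97, 0.0], read_force))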
Configuration-Control Scheme Copes With Singularities
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Colbaugh, Richard D.
1993-01-01
Improved configuration-control scheme for robotic manipulator having redundant degrees of freedom suppresses large joint velocities near singularities, at expense of small trajectory errors. Provides means to enforce order of priority of tasks assigned to robot. Basic concept of configuration control of redundant robot described in "Increasing The Dexterity Of Redundant Robots" (NPO-17801).
Learning to Explain: The Role of Educational Robots in Science Education
ERIC Educational Resources Information Center
Datteri, Edoardo; Zecca, Luisa; Laudisa, Federico; Castiglioni, Marco
2013-01-01
Educational robotics laboratories typically involve building and programming robotic systems to perform particular tasks or solve problems. In this paper we explore the potential educational value of a form of robot-supported educational activity that has been little discussed in the literature. During these activities, primary school children are…
Flinn, Nancy A; Smith, Jennifer L; Tripp, Christopher J; White, Matthew W
2009-01-01
The objective of the study was to examine the results of robotic therapy in a single client. A 48-year-old female client, 15 months post-stroke with right hemiparesis, received robotic therapy as an outpatient in a large Midwestern rehabilitation hospital. Robotic therapy was provided three times a week for 6 weeks and consisted of goal-directed, robot-aided reaching tasks to exercise the hemiparetic shoulder and elbow. No other therapeutic intervention for the affected upper extremity was provided during the study or the 3-month follow-up period. The outcome measures included the Fugl-Meyer assessment, the graded Wolf motor function test (GWMFT), the motor activity log, active range of motion, and the Canadian occupational performance measure. The participant made gains in active movement, in performance of and satisfaction with functional tasks, on the GWMFT, and in functional use. Limitations of this study relate to the generalizability of the small single-case sample, medication effects, the expense of robotic technologies, and the impact of aphasia. Future research should incorporate functional use training along with robotic therapy.
Thubagere, Anupama J; Li, Wei; Johnson, Robert F; Chen, Zibo; Doroudi, Shayan; Lee, Yae Lim; Izatt, Gregory; Wittman, Sarah; Srinivas, Niranjan; Woods, Damien; Winfree, Erik; Qian, Lulu
2017-09-15
Two critical challenges in the design and synthesis of molecular robots are modularity and algorithm simplicity. We demonstrate three modular building blocks for a DNA robot that performs cargo sorting at the molecular level. A simple algorithm encoding recognition between cargos and their destinations allows for a simple robot design: a single-stranded DNA with one leg and two foot domains for walking, and one arm and one hand domain for picking up and dropping off cargos. The robot explores a two-dimensional testing ground on the surface of DNA origami, picks up multiple cargos of two types that are initially at unordered locations, and delivers them to specified destinations until all molecules are sorted into two distinct piles. The robot is designed to perform a random walk without any energy supply. Exploiting this feature, a single robot can repeatedly sort multiple cargos. Localization on DNA origami allows for distinct cargo-sorting tasks to take place simultaneously in one test tube or for multiple robots to collectively perform the same task. Copyright © 2017, American Association for the Advancement of Science.
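A toy simulation of the scheme's core idea, a single walker on a grid performing an unbiased random walk, picking up whatever cargo it steps on and dropping it only at that cargo type's destination pile, can be written in a few lines; the grid size, step rule, and counts are illustrative, not the origami geometry.

    import random

    def sort_cargo(cargos, goals, n=8, seed=1):
        random.seed(seed)
        cargos = dict(cargos)                 # position -> cargo type
        x, y, held, steps = 0, 0, None, 0
        while cargos or held is not None:
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), n - 1)    # unbiased walk, clipped to the grid
            y = min(max(y + dy, 0), n - 1)
            steps += 1
            if held is None and (x, y) in cargos:
                held = cargos.pop((x, y))     # pick up on encounter
            elif held is not None and (x, y) == goals[held]:
                held = None                   # drop off at the matching pile
        return steps

    cargos = {(2, 3): 'A', (5, 1): 'B', (6, 6): 'A'}
    goals = {'A': (0, 0), 'B': (7, 7)}
    print("steps to sort all cargo:", sort_cargo(cargos, goals))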
An octopus-bioinspired solution to movement and manipulation for soft robots.
Calisti, M; Giorelli, M; Levy, G; Mazzolai, B; Hochner, B; Laschi, C; Dario, P
2011-09-01
Soft robotics is a challenging and promising branch of robotics. It can drive significant improvements across various fields of traditional robotics, and contribute solutions to basic problems such as locomotion and manipulation in unstructured environments. A challenging task for soft robotics is to build and control soft robots able to exert effective forces. In recent years, biology has inspired several solutions to such complex problems. This study investigates the smart solution that Octopus vulgaris adopts to perform a crawling movement, using the same limbs for grasping and manipulation. An ad hoc robot was designed and built, taking as its reference a biological hypothesis about crawling. A silicone arm with embedded cables replicating the functionality of the octopus's arm muscles was built. This novel arm is capable of pushing-based locomotion and object grasping, mimicking the movements that octopuses adopt when crawling. The results support the biological observations and clearly show a suitable way to build a more complex soft robot that, with minimal control, can perform diverse tasks.
Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO.
Hernandez-Vicen, Juan; Martinez, Santiago; Garcia-Haro, Juan Miguel; Balaguer, Carlos
2018-03-25
New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perceptions. One of the key perceptions in robotics is vision. However, some problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained. The need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion on images in only one computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a Fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of Neuro-Fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tried out experimentally in robot transportation tasks on the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.
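The neuro-fuzzy filter itself is not reproduced here; as a stand-in for the underlying idea of learning a one-step correction from calibration data, the sketch below fits a plain polynomial regressor that maps raw, distortion-affected angle readings to ground-truth angles. The synthetic distortion model and all parameters are assumptions for illustration only.

    import numpy as np

    # Synthetic calibration set: raw angle readings vs. ground truth, with a
    # smooth distortion plus noise standing in for camera and lens effects.
    rng = np.random.default_rng(0)
    true_angle = np.linspace(-30, 30, 200)                 # degrees
    raw_angle = true_angle + 0.004 * true_angle ** 2 - 1.5 + rng.normal(0, 0.2, 200)

    # One-step learned correction: raw reading -> corrected angle.
    correct = np.poly1d(np.polyfit(raw_angle, true_angle, deg=3))
    print(correct(10.0))   # corrected estimate for a raw 10-degree reading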
Yun, M H; Cannon, D; Freivalds, A; Thomas, G
1997-10-01
Hand posture and force, which define aspects of the way an object is grasped, are features of robotic manipulation. A means for specifying these grasping "flavors" has been developed that uses an instrumented glove equipped with joint and force sensors. The new grasp specification system will be used at the Pennsylvania State University (Penn State) in a Virtual Reality based Point-and-Direct (VR-PAD) robotics implementation. Here, an operator gives directives to a robot in the same natural way that one human may direct another. Phrases such as "put that there" cause the robot to define a grasping strategy and motion strategy to complete the task on its own. In the VR-PAD concept, pointing is done using virtual tools such that an operator can appear to graphically grasp real items in live video. Rather than requiring full duplication of forces and kinesthetic movement throughout a task as is required in manual telemanipulation, hand posture and force are now specified only once. The grasp parameters then become object flavors. The robot maintains the specified force and hand posture flavors for an object throughout the task in handling the real workpiece or item of interest. In the Computer Integrated Manufacturing (CIM) Laboratory at Penn State, hand posture and force data were collected for manipulating bricks and other items that require varying amounts of force at multiple pressure points. The feasibility of measuring desired grasp characteristics was demonstrated for a modified Cyberglove with Force-Sensitive Resistor (FSR) pressure sensors in the fingertips. A joint/force model relating the parameters of finger articulation and pressure to various lifting tasks was validated for the instrumented "wired" glove. Operators using such a modified glove may ultimately be able to configure robot grasping tasks in environments involving hazardous waste remediation, flexible manufacturing, space operations and other flexible robotics applications. In each case, the VR-PAD approach will finesse the computational and delay problems of real-time multiple-degree-of-freedom force feedback telemanipulation.
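A minimal sketch of specifying a grasp "flavor" once and reusing it: joint angles and fingertip forces captured from the instrumented glove become a named specification that the robot holds for the whole task. Field names and dimensions are illustrative assumptions.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GraspFlavor:
        name: str
        joint_angles: np.ndarray      # glove joint sensors (radians)
        fingertip_forces: np.ndarray  # FSR readings per fingertip (newtons)

    def capture_flavor(name, glove_joints, glove_forces):
        # Specified once by the operator, then maintained by the robot,
        # instead of continuous force-feedback telemanipulation.
        return GraspFlavor(name, np.asarray(glove_joints), np.asarray(glove_forces))

    brick = capture_flavor("brick", [0.6, 0.8, 0.4, 0.7], [3.1, 2.8, 2.9, 1.2])
    print(brick)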
NASA Astrophysics Data System (ADS)
Nagai, Yukie; Asada, Minoru; Hosoda, Koh
This paper presents a developmental learning model for joint attention between a robot and a human caregiver. The basic idea of the proposed model comes from the insight from cognitive developmental science that development can help task learning. The model consists of a learning mechanism based on evaluation and two kinds of developmental mechanisms: the robot's development and the caregiver's. The former means that the sensing and actuating capabilities of the robot change from immaturity to maturity. The latter is defined as a process in which the caregiver changes the task from an easy situation to a difficult one. These two developments are triggered by the learning progress. The experimental results show that the proposed model can accelerate the learning of joint attention owing to the caregiver's development. Furthermore, it is observed that the robot's development can improve the final task performance by reducing the internal representation in the learned neural network. The mechanisms that bring these effects to the learning are analyzed in line with cognitive developmental science.
History of Reading Struggles Linked to Enhanced Learning in Low Spatial Frequency Scenes
Schneps, Matthew H.; Brockmole, James R.; Sonnert, Gerhard; Pomplun, Marc
2012-01-01
People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school. PMID:22558210
Semantic Categorization Precedes Affective Evaluation of Visual Scenes
ERIC Educational Resources Information Center
Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.
2010-01-01
We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…
Yang, Kun; Perez, Manuela; Hossu, Gabriela; Hubert, Nicolas; Perrenot, Cyril; Hubert, Jacques
2017-01-01
In robotic surgery, the professional ergonomic habit of using an armrest reduces operator fatigue and increases the precision of motion. We designed and validated a pressure surveillance system (PSS) based on force sensors to investigate armrest use. The objective was to evaluate whether adding an alarm to the PSS could shorten ergonomic training and improve performance. Twenty robot- and simulator-naïve participants were recruited and randomized into two groups (A and B). The PSS was installed on a robotic simulator, the dV-Trainer, to detect contact with the armrest. Group A completed three tasks on the dV-Trainer without the alarm, making 15 attempts at each task. Group B practiced the first two tasks with the alarm and then completed the final task without it. The simulator provided an overall score reflecting the trainees' performance, and we used the new concept of an "armrest load" score to describe the ergonomic habit of using the armrest. Group B had a significantly higher performance score (p < 0.001) and armrest load score (p < 0.001) than Group A from the fifth attempt of the first task to the end of the experiment. Consistent with a conditioned-reflex effect, the alarm associated with the PSS corrected ergonomic errors and accelerated the acquisition of professional ergonomic habits. The combination of the PSS and alarm is effective in significantly shortening the learning curve in robotic training.
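The abstract introduces the "armrest load" score without defining it, so the sketch below assumes one plausible reading, the fraction of task time during which the force sensors register contact above a small threshold; the sampling rate, threshold, and trace are hypothetical.

    import numpy as np

    def armrest_load_score(force_samples, threshold=0.5):
        # Fraction of samples in which the forearm actually rests on the
        # armrest (contact force above threshold, in newtons).
        return float(np.mean(np.asarray(force_samples) > threshold))

    # Hypothetical 1 kHz force trace over a short task segment.
    trace = np.concatenate([np.full(600, 2.0), np.full(400, 0.0)])
    print(armrest_load_score(trace))   # 0.6: armrest used 60% of the time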
A Practical Solution Using A New Approach To Robot Vision
NASA Astrophysics Data System (ADS)
Hudson, David L.
1984-01-01
Up to now, robot vision systems have been designed to serve both application development and operational needs in inspection, assembly and material handling. This universal approach to robot vision is too costly for many practical applications. A new industrial vision system separates the function of application program development from on-line operation. A Vision Development System (VDS) is equipped with facilities designed to simplify and accelerate the application program development process. A complementary but lower cost Target Application System (TASK) runs the application program developed with the VDS. This concept is presented in the context of an actual robot vision application that improves inspection and assembly for a manufacturer of electronic terminal keyboards. Applications developed with a VDS experience lower development cost when compared with conventional vision systems. Since the TASK processor is not burdened with development tools, it can be installed at a lower cost than comparable "universal" vision systems that are intended to be used for both development and on-line operation. The VDS/TASK approach opens more industrial applications to robot vision that previously were not practical because of the high cost of vision systems. Although robot vision is a new technology, it has been applied successfully to a variety of industrial needs in inspection, manufacturing, and material handling. New developments in robot vision technology are creating practical, cost-effective solutions for a variety of industrial needs. A year or two ago, researchers and robot manufacturers interested in implementing a robot vision application could take one of two approaches. The first approach was to purchase all the necessary vision components from various sources. That meant buying an image processor from one company, a camera from another, and lenses and light sources from yet others. The user then had to assemble the pieces, and in most instances he had to write all of his own software to test, analyze and process the vision application. The second and most common approach was to contract with the vision equipment vendor for the development and installation of a turnkey inspection or manufacturing system. The robot user and his company paid a premium for their vision system in an effort to assure the success of the system. Since 1981, emphasis on robotics has skyrocketed. New groups have been formed in many manufacturing companies with the charter to learn about, test and initially apply new robot and automation technologies. Machine vision is one of the new technologies being tested and applied. This focused interest has created a need for a robot vision system that makes it easy for manufacturing engineers to learn about, test, and implement a robot vision application. A newly developed vision system addresses those needs. The Vision Development System (VDS) is a complete hardware and software product for the development and testing of robot vision applications. A complementary, low-cost Target Application System (TASK) runs the application program developed with the VDS. An actual robot vision application that demonstrates inspection and pre-assembly for keyboard manufacturing is used to illustrate the VDS/TASK approach.
Evolution of Self-Organized Task Specialization in Robot Swarms
Ferrante, Eliseo; Turgut, Ali Emre; Duéñez-Guzmán, Edgar; Dorigo, Marco; Wenseleers, Tom
2015-01-01
Division of labor is ubiquitous in biological systems, as evidenced by various forms of complex task specialization observed in both animal societies and multicellular organisms. Although clearly adaptive, the way in which division of labor first evolved remains enigmatic, as it requires the simultaneous co-occurrence of several complex traits to achieve the required degree of coordination. Recently, evolutionary swarm robotics has emerged as an excellent test bed to study the evolution of coordinated group-level behavior. Here we use this framework for the first time to study the evolutionary origin of behavioral task specialization among groups of identical robots. The scenario we study involves an advanced form of division of labor, common in insect societies and known as “task partitioning”, whereby two sets of tasks have to be carried out in sequence by different individuals. Our results show that task partitioning is favored whenever the environment has features that, when exploited, reduce switching costs and increase the net efficiency of the group, and that an optimal mix of task specialists is achieved most readily when the behavioral repertoires aimed at carrying out the different subtasks are available as pre-adapted building blocks. Nevertheless, we also show for the first time that self-organized task specialization could be evolved entirely from scratch, starting only from basic, low-level behavioral primitives, using a nature-inspired evolutionary method known as Grammatical Evolution. Remarkably, division of labor was achieved merely by selecting on overall group performance, and without providing any prior information on how the global object retrieval task was best divided into smaller subtasks. We discuss the potential of our method for engineering adaptively behaving robot swarms and interpret our results in relation to the likely path that nature took to evolve complex sociality and task specialization. PMID:26247819
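As a toy illustration of selecting on overall group performance alone (far simpler than the Grammatical Evolution method used in the paper), the genetic algorithm below evolves per-robot genes that determine which stage of a two-stage, task-partitioned pipeline each robot works; group fitness is just the throughput of the slower stage, and an even mix of specialists emerges without any per-robot reward.

    import random

    GROUP, GENS, POP = 10, 60, 30

    def fitness(genome):
        # Group-level score only: a two-stage pipeline runs at the rate
        # of its slower stage.
        stage1 = sum(1 for g in genome if g < 0.5)
        return min(stage1, GROUP - stage1)

    def evolve(seed=0):
        random.seed(seed)
        pop = [[random.random() for _ in range(GROUP)] for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=fitness, reverse=True)
            elite = pop[:POP // 2]
            pop = elite + [[g + random.gauss(0, 0.1) for g in random.choice(elite)]
                           for _ in range(POP - len(elite))]
        return fitness(max(pop, key=fitness))

    print("best group score:", evolve())   # optimum is 5: an even specialist mix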
Control of free-flying space robot manipulator systems
NASA Technical Reports Server (NTRS)
Cannon, Robert H., Jr.
1989-01-01
Control techniques for self-contained, autonomous free-flying space robots are being tested and developed. Free-flying space robots are envisioned as a key element of any successful long-term presence in space. These robots must be capable of performing the assembly, maintenance, inspection, and repair tasks that currently require astronaut extra-vehicular activity (EVA). Use of robots will provide economic savings as well as improved astronaut safety by reducing, and in many cases eliminating, the need for human EVA. The focus of the work is to develop and carry out a set of research projects using laboratory models of satellite robots. These devices use air-cushion-vehicle (ACV) technology to simulate in two dimensions the drag-free, zero-g conditions of space. Current work is divided into six major projects or research areas. Fixed-base cooperative manipulation work represents our initial entry into multiple-arm cooperation and high-level control with a sophisticated user interface. The floating-base cooperative manipulation project strives to transfer some of the technologies developed in the fixed-base work onto a floating base. The global control and navigation experiment seeks to demonstrate simultaneous control of the robot manipulators and the robot base position so that tasks can be accomplished while the base is undergoing a controlled motion. The multiple-vehicle cooperation project's goal is to demonstrate multiple free-floating robots working in teams to carry out tasks too difficult or complex for a single robot to perform. The Location Enhancement Arm Push-off (LEAP) activity's goal is to provide a viable alternative to expendable gas thrusters for vehicle propulsion, wherein the robot uses its manipulators to throw itself from place to place. Because the successful execution of the LEAP technique requires an accurate model of the robot and payload mass properties, it was deemed an attractive testbed for adaptive control technology.
Sale, Patrizio; Infarinato, Francesco; Del Percio, Claudio; Lizio, Roberta; Babiloni, Claudio; Foti, Calogero; Franceschini, Marco
2015-12-01
Stroke is the leading cause of permanent disability in developed countries; its effects may include sensory, motor, and cognitive impairment as well as a reduced ability to perform self-care and participate in social and community activities. A number of studies have shown that the use of robotic systems in upper limb motor rehabilitation programs provides safe and intensive treatment to patients with motor impairments because of a neurological injury. Furthermore, robot-aided therapy was shown to be well accepted and tolerated by all patients; however, it is not known whether a specific robot-aided rehabilitation can induce beneficial cortical plasticity in stroke patients. Here, we present a procedure to study neural underpinning of robot-aided upper limb rehabilitation in stroke patients. Neurophysiological recordings use the following: (a) 10-20 system electroencephalographic (EEG) electrode montage; (b) bipolar vertical and horizontal electrooculographies; and (c) bipolar electromyography from the operating upper limb. Behavior monitoring includes the following: (a) clinical data and (b) kinematic and dynamic of the operant upper limb movements. Experimental conditions include the following: (a) resting state eyes closed and eyes open, and (b) robotic rehabilitation task (maximum 80 s each block to reach 4-min EEG data; interblock pause of 1 min). The data collection is performed before and after a program of 30 daily rehabilitation sessions. EEG markers include the following: (a) EEG power density in the eyes-closed condition; (b) reactivity of EEG power density to eyes opening; and (c) reactivity of EEG power density to robotic rehabilitation task. The above procedure was tested on a subacute patient (29 poststroke days) and on a chronic patient (21 poststroke months). After the rehabilitation program, we observed (a) improved clinical condition; (b) improved performance during the robotic task; (c) reduced delta rhythms (1-4 Hz) and increased alpha rhythms (8-12 Hz) during the resting state eyes-closed condition; (d) increased alpha desynchronization to eyes opening; and (e) decreased alpha desynchronization during the robotic rehabilitation task. We conclude that the present procedure is suitable for evaluation of the neural underpinning of robot-aided upper limb rehabilitation.
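For the EEG markers described (power density in the delta and alpha bands, and reactivity as a change in those powers across conditions), a minimal sketch with a synthetic signal is given below; the sampling rate and the signal itself are illustrative.

    import numpy as np
    from scipy.signal import welch

    def band_power(x, fs, lo, hi):
        # Welch power spectral density, integrated over the band.
        f, pxx = welch(x, fs=fs, nperseg=fs * 2)
        mask = (f >= lo) & (f <= hi)
        return np.trapz(pxx[mask], f[mask])

    fs = 256
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(0)
    # Synthetic eyes-closed EEG: a strong 10 Hz alpha rhythm over noise.
    eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

    delta = band_power(eeg, fs, 1, 4)
    alpha = band_power(eeg, fs, 8, 12)
    print(f"delta power {delta:.2f}, alpha power {alpha:.2f}")

Reactivity indices, such as alpha desynchronization to eyes opening, would then be differences or ratios of such band powers computed per condition.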
Cornelissen, Tim H W; Võ, Melissa L-H
2017-01-01
People have an amazing ability to identify objects and scenes with only a glimpse. How automatic is this scene and object identification? Are scene and object semantics, let alone their semantic congruity, processed to a degree that modulates ongoing gaze behavior even if they are irrelevant to the task at hand? Objects that do not fit the semantics of the scene (e.g., a toothbrush in an office) are typically fixated longer and more often than objects that are congruent with the scene context. In this study, we overlaid a letter T onto photographs of indoor scenes and instructed participants to search for it. Some of these background images contained scene-incongruent objects. Despite their lack of relevance to the search, we found that participants spent more time in total looking at semantically incongruent compared to congruent objects in the same position of the scene. Subsequent tests of explicit and implicit memory showed that participants did not remember many of the inconsistent objects, and no more of the consistent objects. We argue that when we view natural environments, scene and object relationships are processed obligatorily, such that irrelevant semantic mismatches between scene and object identity can modulate ongoing eye-movement behavior.
A 2.5D Map-Based Mobile Robot Localization via Cooperation of Aerial and Ground Robots
Nam, Tae Hyeon; Shim, Jae Hong; Cho, Young Im
2017-01-01
Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the current location of the robot on the map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red, Green and Blue-Depth) sensor and a 2D laser sensor mounted on an aerial robot. The 2.5D elevation map is formed by projecting the height information of obstacles, obtained from the RGB-D sensor's depth data, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of accuracy of location recognition and computing speed. PMID:29186843
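The height-projection step of the map building can be sketched compactly: 3D points from the depth sensor are binned into grid cells, keeping the maximum height per cell. The cell size, map extent, and toy point cloud are illustrative; the laser-based grid and scan matching are assumed to be handled elsewhere.

    import numpy as np

    def elevation_map(points, cell=0.05, size=4.0):
        # Project 3D points (x, y, z) onto a grid, keeping the maximum
        # height per cell: a 2.5D elevation map.
        n = int(size / cell)
        grid = np.zeros((n, n))
        for x, y, z in points:
            i, j = int(x / cell), int(y / cell)
            if 0 <= i < n and 0 <= j < n:
                grid[i, j] = max(grid[i, j], z)
        return grid

    # Toy cloud: a 0.4 m tall box on an otherwise flat floor.
    rng = np.random.default_rng(0)
    floor = np.c_[rng.uniform(0, 4, 500), rng.uniform(0, 4, 500), np.zeros(500)]
    box = np.c_[rng.uniform(1.0, 1.4, 200), rng.uniform(2.0, 2.4, 200), np.full(200, 0.4)]
    print("max height on map:", elevation_map(np.vstack([floor, box])).max())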
Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel
2013-01-01
To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real-world experiments, which show the good performance of our proposal. PMID:23271604
Control of complex physically simulated robot groups
NASA Astrophysics Data System (ADS)
Brogan, David C.
2001-10-01
Actuated systems such as robots take many forms and sizes but each requires solving the difficult task of utilizing available control inputs to accomplish desired system performance. Coordinated groups of robots provide the opportunity to accomplish more complex tasks, to adapt to changing environmental conditions, and to survive individual failures. Similarly, groups of simulated robots, represented as graphical characters, can test the design of experimental scenarios and provide autonomous interactive counterparts for video games. The complexity of writing control algorithms for these groups currently hinders their use. A combination of biologically inspired heuristics, search strategies, and optimization techniques serve to reduce the complexity of controlling these real and simulated characters and to provide computationally feasible solutions.
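The abstract names biologically inspired heuristics for group control without specifying them; the canonical example of that family is the separation/alignment/cohesion flocking rule set, sketched below with illustrative weights.

    import numpy as np

    def flock_step(pos, vel, r=2.0, w_sep=0.15, w_ali=0.05, w_coh=0.01, dt=0.1):
        # One separation/alignment/cohesion update for N agents in 2D.
        new_vel = vel.copy()
        for i in range(len(pos)):
            d = pos - pos[i]
            dist = np.linalg.norm(d, axis=1)
            nbr = (dist > 0) & (dist < r)
            if not nbr.any():
                continue
            sep = -(d[nbr] / dist[nbr, None] ** 2).sum(axis=0)  # push away from close neighbors
            ali = vel[nbr].mean(axis=0) - vel[i]                # match neighbor velocity
            coh = pos[nbr].mean(axis=0) - pos[i]                # steer toward neighbor centroid
            new_vel[i] += w_sep * sep + w_ali * ali + w_coh * coh
        return pos + dt * new_vel, new_vel

    rng = np.random.default_rng(0)
    pos, vel = rng.uniform(0, 5, (20, 2)), rng.normal(0, 0.5, (20, 2))
    for _ in range(100):
        pos, vel = flock_step(pos, vel)
    print("group spread after 100 steps:", np.ptp(pos, axis=0))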