Sample records for eye robot agent

  1. My thoughts through a robot's eyes: an augmented reality-brain-machine interface.

    PubMed

    Kansaku, Kenji; Hata, Naoki; Takano, Kouji

    2010-02-01

    A brain-machine interface (BMI) uses neurophysiological signals from the brain to control external devices, such as robot arms or computer cursors. Combining augmented reality with a BMI, we show that the user's brain signals successfully controlled an agent robot and operated devices in the robot's environment. The user's thoughts became reality through the robot's eyes, enabling the augmentation of real environments outside the anatomy of the human body.

  2. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  3. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

    PubMed Central

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human–robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human–human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human–robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human–robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles. PMID:29046651

  4. Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social.

    PubMed

    Wiese, Eva; Metta, Giorgio; Wykowska, Agnieszka

    2017-01-01

    Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

  5. Robot-Assisted Retinal Vein Cannulation with Force-Based Puncture Detection: Micron vs. the Steady-Hand Eye Robot*

    PubMed Central

    Gonenc, Berk; Tran, Nhat; Gehlbach, Peter; Taylor, Russell H.; Iordachita, Iulian

    2018-01-01

    Retinal vein cannulation is a demanding procedure in which therapeutic agents are injected into occluded retinal veins. The feasibility of this treatment is limited by challenges in identifying the moment of venous puncture, achieving cannulation, and maintaining it throughout the drug delivery period. In this study, we integrate a force-sensing microneedle with two distinct robotic systems: the handheld micromanipulator Micron, and the cooperatively controlled Steady-Hand Eye Robot (SHER). The sensed tool-to-tissue interaction forces are used to detect venous puncture and to extend the robots’ standard control schemes with a new position holding mode (PHM) that assists the operator in holding the needle position fixed, maintaining cannulation for a longer time with less trauma to the vasculature. We evaluate the resulting systems comparatively in a dry phantom consisting of stretched vinyl membranes. Results show that modulating the admittance control gain of SHER alone is not a very effective solution for preventing undesired tool motion after puncture. However, with puncture detection and PHM, the deviation from the puncture point is significantly reduced, by 65% with Micron and by 95% with SHER, representing a potential advantage over freehand operation for both. PMID:28269417
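
    The force-based puncture detection described above can be illustrated with a toy detector: venous puncture typically shows up as a sudden drop of the sensed tool-tip force from its running maximum, at which point a position-holding mode would freeze the commanded tool position. This is a minimal sketch of the principle, not the authors' actual algorithm; the threshold value is illustrative.

```python
import numpy as np

def detect_puncture(forces, drop_thresh=0.3):
    """Index of the first sample whose force has fallen more than
    drop_thresh below the running maximum (the sudden force release
    that can signal membrane puncture), or None if no drop occurs."""
    drops = np.maximum.accumulate(forces) - forces
    idx = int(np.argmax(drops > drop_thresh))
    return idx if drops[idx] > drop_thresh else None

# Synthetic force trace: ramp while pressing on the membrane,
# then a sharp release at sample 60.
t = np.arange(100)
forces = np.where(t < 60, 0.01 * t, 0.1)

assert detect_puncture(forces) == 60        # puncture detected at the drop
assert detect_puncture(0.01 * t) is None    # monotone ramp: no puncture
```

In a control loop, the detection instant would trigger the switch from the standard admittance/micromanipulation scheme to the position-holding mode.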

  6. Babybot: a biologically inspired developing robotic agent

    NASA Astrophysics Data System (ADS)

    Metta, Giorgio; Panerai, Francesco M.; Sandini, Giulio

    2000-10-01

    The study of development, either artificial or biological, can highlight the mechanisms underlying learning and adaptive behavior. We shall consider whether developmental studies might provide a different and potentially interesting perspective both on how to build an artificial adaptive agent and on understanding how the brain solves sensory, motor, and cognitive tasks. It is our opinion that the acquisition of the proper behavior might indeed be facilitated because, within an ecological context, the agent, its adaptive structure, and the environment dynamically interact, thus constraining the otherwise difficult learning problem. In very general terms we describe the proposed approach and supporting biological facts. To further analyze these aspects from the modeling point of view, we demonstrate how a twelve-degrees-of-freedom baby humanoid robot acquires orienting and reaching behaviors, and what advantages the proposed framework might offer. In particular, the experimental setup consists of a five degrees-of-freedom (dof) robot head and an off-the-shelf six-dof robot manipulator, both mounted on a rotating base, i.e. the torso. From the sensory point of view, the robot is equipped with two space-variant cameras, an inertial sensor simulating the vestibular system, and proprioceptive information through motor encoders. The biological parallel is exploited at many implementation levels. It is worth mentioning, for example, the space-variant eyes, exploiting foveal and peripheral vision in a single arrangement, and the inertial sensor providing efficient image stabilization (the vestibulo-ocular reflex).

  7. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    PubMed Central

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  8. A neural network-based exploratory learning and motor planning system for co-robots

    PubMed Central

    Galbraith, Byron V.; Guenther, Frank H.; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or “learning by doing,” an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object. PMID:26257640

  9. A neural network-based exploratory learning and motor planning system for co-robots.

    PubMed

    Galbraith, Byron V; Guenther, Frank H; Versace, Massimiliano

    2015-01-01

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degrees-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
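
    The motor-babbling idea, issuing random commands and recording the sensory outcomes they produce, can be illustrated with a toy planar two-link arm. The lookup-based inverse model below is a hedged sketch of the principle only, not the Calliope's neural-network controller; the link lengths, sample count, and target are arbitrary.

```python
import numpy as np

L1, L2 = 1.0, 0.8   # link lengths of a toy planar 2-link arm (arbitrary units)

def forward(q):
    """Forward kinematics: joint angles (q1, q2) -> hand position (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

rng = np.random.default_rng(0)
# Motor babbling: issue random joint commands and record the hand positions
# the "body" reports back, building a sensorimotor map.
Q = rng.uniform([-np.pi, 0.1], [np.pi, np.pi - 0.1], size=(2000, 2))
X = np.array([forward(q) for q in Q])

def reach(target):
    """Inverse model by lookup: replay the babbled command whose observed
    outcome was closest to the requested target."""
    return Q[np.argmin(np.linalg.norm(X - target, axis=1))]

target = np.array([1.2, 0.5])
err = np.linalg.norm(forward(reach(target)) - target)
assert err < 0.15   # coarse but workable reaching after 2000 babbling samples
```

A learning system such as the one in the paper would replace the lookup with a trained network and add the wheel degrees of freedom, but the babbling-then-exploit structure is the same.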

  10. Parameterizations for reducing camera reprojection error for robot-world hand-eye calibration

    USDA-ARS's Scientific Manuscript database

    Accurate robot-world, hand-eye calibration is crucial to automation tasks. In this paper, we discuss the robot-world, hand-eye calibration problem, which has been modeled as the linear relationship AX = ZB, where X and Z are the unknown calibration matrices composed of rotation and translation ...
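
    The AX = ZB relationship can be made concrete with synthetic rigid transforms. The NumPy sketch below, with hypothetical ground-truth X (hand-eye) and Z (robot-base-to-world), only verifies the data model that both records above assume; it is not a calibration solver.

```python
import numpy as np

def rand_transform(rng):
    """Random 4x4 homogeneous rigid transform (QR-based random rotation)."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))        # fix the sign convention of the factors
    if np.linalg.det(Q) < 0:
        Q[:, 2] *= -1               # ensure a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = Q, rng.standard_normal(3)
    return T

rng = np.random.default_rng(1)
X = rand_transform(rng)  # hand-eye: camera pose in the end-effector frame
Z = rand_transform(rng)  # robot base pose in the world frame

# For each robot pose B_i, the camera measurement A_i must satisfy A_i X = Z B_i.
B = [rand_transform(rng) for _ in range(5)]
A = [Z @ Bi @ np.linalg.inv(X) for Bi in B]
assert all(np.allclose(Ai @ X, Z @ Bi) for Ai, Bi in zip(A, B))
```

A real calibration goes the other way: given noisy (A_i, B_i) pairs, estimate X and Z.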

  11. Technical vision for robots

    NASA Astrophysics Data System (ADS)

    1985-01-01

    A new invention by scientists who have copied the structure of the human eye will help replace a telescope-watching human astronomer with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, an artificial eye with dimensions close to those of a human eye discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited to the role of the coat of the eye, the lens, and the vitreous body were applied. The retina has been replaced with a bundle of the finest glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.

  12. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  13. Design and development of miniature parallel robot for eye surgery.

    PubMed

    Sakai, Tomoya; Harada, Kanako; Tanaka, Shinichi; Ueta, Takashi; Noda, Yasuo; Sugita, Naohiko; Mitsuishi, Mamoru

    2014-01-01

    A five degree-of-freedom (DOF) miniature parallel robot has been developed to precisely and safely remove the thin internal limiting membrane in the eye ground during vitreoretinal surgery. A simulator has been developed to determine the design parameters of this robot. The developed robot's size is 85 mm × 100 mm × 240 mm, and its weight is 770 g. This robot incorporates an emergency instrument retraction function to quickly remove the instrument from the eye in case of sudden intraoperative complications such as bleeding. Experiments were conducted to evaluate the robot's performance in the master-slave configuration, and the results demonstrated that it had a tracing accuracy of 40.0 μm.

  14. Believing androids - fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents.

    PubMed

    Özdem, Ceylan; Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann; Brass, Marcel; Van Overwalle, Frank

    2017-10-01

    Attributing mind to interaction partners has been shown to increase the social relevance we ascribe to others' actions and to modulate the amount of attention dedicated to them. However, it remains unclear how the relationship between higher-order mind attribution and lower-level attention processes is established in the brain. In this neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same (valid cue) or opposite location (invalid cue). Independently, participants' beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as under human control or preprogrammed. As expected, we observed a validity effect behaviorally and neurologically (increased response times and activation in the invalid vs. valid condition). More importantly, we observed that this effect was more pronounced for the condition in which the robot's behavior was believed to be controlled by a human, as opposed to be preprogrammed. This interaction effect between cue validity and belief was, however, only found at the neural level and was manifested as a significant increase of activation in bilateral anterior temporoparietal junction.

  15. Solving the robot-world, hand-eye(s) calibration problem with iterative methods

    USDA-ARS's Scientific Manuscript database

    Robot-world, hand-eye calibration is the problem of determining the transformation between the robot end effector and a camera, as well as the transformation between the robot base and the world coordinate system. This relationship has been modeled as AX = ZB, where X and Z are unknown homogeneous ...
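
    The manuscript above refers to iterative methods, but the AX = ZB structure also admits a simple separable linear solution that is easy to sketch: the rotation equations are linear in the entries of the two unknown rotations (via a Kronecker-product formulation whose null vector is the stacked solution up to scale), and the translations then follow from ordinary least squares. The NumPy code below recovers hypothetical ground-truth X and Z from noiseless simulated measurements; it is an illustrative sketch, not the manuscript's method.

```python
import numpy as np

rng = np.random.default_rng(2)
I3 = np.eye(3)

def rand_rot(rng):
    """Random rotation via QR with sign fix."""
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q *= np.sign(np.diag(R))
    if np.linalg.det(Q) < 0:
        Q[:, 2] *= -1
    return Q

# Hypothetical ground truth: X = hand-eye, Z = robot-base-to-world.
Rx_t, tx_t = rand_rot(rng), rng.standard_normal(3)
Rz_t, tz_t = rand_rot(rng), rng.standard_normal(3)

# Simulated measurement pairs (A_i, B_i) satisfying A_i X = Z B_i exactly.
meas = []
for _ in range(5):
    Rb, tb = rand_rot(rng), rng.standard_normal(3)
    Ra = Rz_t @ Rb @ Rx_t.T
    ta = Rz_t @ tb + tz_t - Ra @ tx_t
    meas.append((Ra, ta, Rb, tb))

# Rotation step: R_Ai Rx = Rz R_Bi is linear in the entries of Rx and Rz.
# With row-major flattening, vec(A X B) = (A kron B^T) vec(X), so the
# stacked system's null vector is [vec(Rx); vec(Rz)] up to scale.
K = np.vstack([np.hstack([np.kron(Ra, I3), -np.kron(I3, Rb.T)])
               for Ra, _, Rb, _ in meas])
v = np.linalg.svd(K)[2][-1]
Mx, Mz = v[:9].reshape(3, 3), v[9:].reshape(3, 3)
if np.linalg.det(Mx) < 0:        # resolve the global sign of the null vector
    Mx, Mz = -Mx, -Mz

def nearest_rot(M):
    """Project a scaled 3x3 matrix onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

Rx, Rz = nearest_rot(Mx), nearest_rot(Mz)

# Translation step: R_Ai tx - tz = Rz t_Bi - t_Ai, stacked least squares.
C = np.vstack([np.hstack([Ra, -I3]) for Ra, _, _, _ in meas])
d = np.concatenate([Rz @ tb - ta for _, ta, _, tb in meas])
tx, tz = np.split(np.linalg.lstsq(C, d, rcond=None)[0], 2)

assert np.allclose(Rx, Rx_t, atol=1e-6) and np.allclose(tx, tx_t, atol=1e-6)
assert np.allclose(Rz, Rz_t, atol=1e-6) and np.allclose(tz, tz_t, atol=1e-6)
```

With real, noisy data the null-space step becomes a smallest-singular-vector fit and typically seeds the kind of iterative refinement the manuscript studies.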

  16. [Optimization of end-tool parameters based on robot hand-eye calibration].

    PubMed

    Zhang, Lilong; Cao, Tong; Liu, Da

    2017-04-01

    A new one-time registration method was developed in this research for hand-eye calibration of a surgical robot, in order to simplify the operation process and reduce the preparation time. A practical method is also introduced to optimize the end-tool parameters of the surgical robot, based on an analysis of the error sources in this registration method. In the one-time registration method, a marker on the end-tool of the robot is first recognized by a fixed binocular camera, and the orientation and position of the marker are calculated from the joint parameters of the robot. The relationship between the camera coordinate system and the robot base coordinate system can then be established to complete the hand-eye calibration. Because of manufacturing and assembly errors of the robot end-tool, an error equation was established with the transformation matrix between the robot end coordinate system and the robot end-tool coordinate system as the variable, and numerical optimization was employed to optimize the end-tool parameters of the robot. The experimental results showed that the one-time registration method could significantly improve the efficiency of robot hand-eye calibration compared with existing methods, and that the parameter optimization method could significantly improve its absolute positioning accuracy. The absolute positioning accuracy of the one-time registration method can meet the requirements of clinical surgery.

  17. Infant discrimination of humanoid robots

    PubMed Central

    Matsuda, Goh; Ishiguro, Hiroshi; Hiraki, Kazuo

    2015-01-01

    Recently, extremely humanlike robots called “androids” have been developed, some of which are already being used in the field of entertainment. In the context of psychological studies, androids are expected to be used in the future as fully controllable human stimuli to investigate human nature. In this study, we used an android to examine infant discrimination ability between human beings and non-human agents. Participants (N = 42 infants) were assigned to three groups based on their age, i.e., 6- to 8-month-olds, 9- to 11-month-olds, and 12- to 14-month-olds, and took part in a preferential looking paradigm. Of three types of agents involved in the paradigm—a human, an android modeled on the human, and a mechanical-looking robot made from the android—two at a time were presented side-by-side as they performed a grasping action. Infants’ looking behavior was measured using an eye tracking system, and the amount of time spent focusing on each of three areas of interest (face, goal, and body) was analyzed. Results showed that all age groups predominantly looked at the robot and at the face area, and that infants aged over 9 months watched the goal area for longer than the body area. There was no difference in looking times and areas focused on between the human and the android. These findings suggest that 6- to 14-month-olds are unable to discriminate between the human and the android, although they can distinguish the mechanical robot from the human. PMID:26441772

  18. Dynamic eye colour as an honest signal of aggression.

    PubMed

    Heathcote, Robert J P; Darden, Safi K; Troscianko, Jolyon; Lawson, Michael R M; Brown, Antony M; Laker, Philippa R; Naisbett-Jones, Lewis C; MacGregor, Hannah E A; Ramnarine, Indar; Croft, Darren P

    2018-06-04

    Animal eyes are some of the most widely recognisable structures in nature. Due to their salience to predators and prey, most research has focused on how animals hide or camouflage their eyes [1]. However, across all vertebrate Classes, many species actually express brightly coloured or conspicuous eyes, suggesting they may have also evolved a signalling function. Nevertheless, perhaps due to the difficulty with experimentally manipulating eye appearance, very few species beyond humans [2] have been experimentally shown to use eyes as signals [3]. Using staged behavioural trials we show that Trinidadian guppies (Poecilia reticulata), which can rapidly change their iris colour, predominantly express conspicuous eye colouration when performing aggressive behaviours towards smaller conspecifics. Furthermore, using a novel, visually realistic robotic system to create a mismatch between signal and relative competitive ability, we show that eye colour is used to honestly signal aggressive motivation. Specifically, robotic 'cheats' (that is, smaller, less-competitive robotic fish that display aggressive eye colouration when defending a food patch) attracted greater food competition from larger real fish. Our study suggests that eye colour may be an under-appreciated aspect of signalling in animals, shows the utility of our biomimetic robotic system for investigating animal behaviour, and provides experimental evidence that socially mediated costs towards low-quality individuals may maintain the honesty of dynamic colour signals. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.

  20. Electroencephalographic markers of robot-aided therapy in stroke patients for the evaluation of upper limb rehabilitation.

    PubMed

    Sale, Patrizio; Infarinato, Francesco; Del Percio, Claudio; Lizio, Roberta; Babiloni, Claudio; Foti, Calogero; Franceschini, Marco

    2015-12-01

    Stroke is the leading cause of permanent disability in developed countries; its effects may include sensory, motor, and cognitive impairment as well as a reduced ability to perform self-care and participate in social and community activities. A number of studies have shown that the use of robotic systems in upper limb motor rehabilitation programs provides safe and intensive treatment to patients with motor impairments because of a neurological injury. Furthermore, robot-aided therapy was shown to be well accepted and tolerated by all patients; however, it is not known whether a specific robot-aided rehabilitation can induce beneficial cortical plasticity in stroke patients. Here, we present a procedure to study the neural underpinning of robot-aided upper limb rehabilitation in stroke patients. Neurophysiological recordings use the following: (a) 10-20 system electroencephalographic (EEG) electrode montage; (b) bipolar vertical and horizontal electrooculographies; and (c) bipolar electromyography from the operating upper limb. Behavior monitoring includes the following: (a) clinical data and (b) kinematics and dynamics of the operant upper limb movements. Experimental conditions include the following: (a) resting state with eyes closed and eyes open, and (b) robotic rehabilitation task (maximum 80 s each block to reach 4-min EEG data; interblock pause of 1 min). The data collection is performed before and after a program of 30 daily rehabilitation sessions. EEG markers include the following: (a) EEG power density in the eyes-closed condition; (b) reactivity of EEG power density to eyes opening; and (c) reactivity of EEG power density to the robotic rehabilitation task. The above procedure was tested on a subacute patient (29 poststroke days) and on a chronic patient (21 poststroke months). After the rehabilitation program, we observed (a) improved clinical condition; (b) improved performance during the robotic task; (c) reduced delta rhythms (1-4 Hz) and increased alpha rhythms (8-12 Hz) during the resting state eyes-closed condition; (d) increased alpha desynchronization to eyes opening; and (e) decreased alpha desynchronization during the robotic rehabilitation task. We conclude that the present procedure is suitable for evaluating the neural underpinning of robot-aided upper limb rehabilitation.
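
    The EEG power-density markers above reduce, in the simplest case, to spectral power integrated over the delta (1-4 Hz) and alpha (8-12 Hz) bands. A minimal periodogram-based sketch follows; the 250 Hz sampling rate and the synthetic alpha-dominated epoch are assumptions for illustration, not the study's recording parameters.

```python
import numpy as np

fs = 250  # Hz; an assumed EEG sampling rate, for illustration only

def band_power(x, fs, lo, hi):
    """Periodogram power integrated over the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

# Synthetic 4-s "eyes closed" epoch: a 10 Hz alpha oscillation in weak noise.
rng = np.random.default_rng(3)
t = np.arange(0, 4.0, 1.0 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(epoch, fs, 8, 12)   # alpha band, 8-12 Hz
delta = band_power(epoch, fs, 1, 4)    # delta band, 1-4 Hz
assert alpha > delta                   # alpha dominates this synthetic epoch
```

The reactivity markers in the procedure compare such band powers across conditions, for example eyes-closed rest versus eyes-open rest or the robotic task.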

  1. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    PubMed Central

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  2. Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.

    PubMed

    Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo

    2017-07-01

    Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they precede body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis, including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, as well as in amputees. Despite this, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients, largely because of poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location in the workspace by simply looking at the target and winking once. This purely eye-tracking-based system leaves the end-user free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This fully automated calibration procedure yields several thousand calibration points, versus the dozen or so points of standard approaches, resulting in beyond state-of-the-art 3D accuracy and precision.
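    To give a flavour of a dense calibration trajectory, the sketch below generates a continuous path that visits every cell of a 3D grid exactly once: a simple serpentine stand-in for the Peano curve the robot traces (the actual curve, workspace dimensions, and point density in the paper differ):

    ```python
    def serpentine_path(nx, ny, nz):
        """Continuous path visiting every cell of an nx*ny*nz grid exactly
        once; consecutive points differ by a single grid step, so an eye
        tracker can smoothly pursue the target along the whole path."""
        pts = []
        for k in range(nz):
            for j in range(ny):
                # Reverse row order on alternate layers, and column order on
                # alternate rows (counted globally), to keep the path continuous.
                jj = j if k % 2 == 0 else ny - 1 - j
                ascending = (k * ny + j) % 2 == 0
                cols = range(nx) if ascending else range(nx - 1, -1, -1)
                for i in cols:
                    pts.append((i, jj, k))
        return pts
    ```

    Sampling gaze while the robot follows such a path yields thousands of (gaze, target) pairs for fitting the gaze-to-workspace mapping, rather than the dozen fixation points of standard calibration.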

  3. Market-Based Coordination and Auditing Mechanisms for Self-Interested Multi-Robot Systems

    ERIC Educational Resources Information Center

    Ham, MyungJoo

    2009-01-01

    We propose market-based coordinated task allocation mechanisms, which allocate complex tasks requiring the synchronized, collaborative services of multiple robot agents, and an auditing mechanism, which ensures proper behavior of robot agents by verifying inter-agent activities, for self-interested, fully-distributed, and…
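    As a loose illustration of market-based allocation (not this paper's mechanism, which handles synchronized multi-robot tasks and adds auditing), a greedy single-item auction can be sketched as follows; the agents, tasks, and cost function are hypothetical:

    ```python
    def auction_allocate(tasks, agents, cost):
        """Greedy single-item auctions: each task in sequence goes to the
        lowest-bidding (lowest-cost) free agent. A minimal sketch of the
        market-based idea, ignoring synchronization and auditing."""
        assignment = {}
        free = set(agents)
        for task in tasks:
            if not free:
                break
            winner = min(free, key=lambda a: cost(a, task))
            assignment[task] = winner
            free.discard(winner)
        return assignment
    ```

    With distance-like costs, each task is won by the agent that can serve it most cheaply among those still unassigned.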

  4. Grounding language in action and perception: From cognitive agents to humanoid robots

    NASA Astrophysics Data System (ADS)

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of humanoid robotic platform, and specifically of the open source iCub platform, for the study of embodied cognition.

  5. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study.

    PubMed

    Simut, Ramona E; Vanderfaeillie, Johan; Peca, Andreea; Van de Perre, Greet; Vanderborght, Bram

    2016-01-01

    Social robots are thought to be motivating tools in play tasks with children with autism spectrum disorders. Thirty children with autism were included in a repeated-measures design. We investigated whether the children's interaction with a human differed from their interaction with a social robot during a play task, and whether the two conditions differed in their ability to elicit interaction with a human accompanying the child during the task. Interaction of the children with the two partners did not differ, except for eye contact: participants made more eye contact with the social robot than with the human. The conditions did not differ in the interaction elicited with the human accompanying the child.

  6. Computing Optic Flow with ArduEye Vision Sensor

    DTIC Science & Technology

    2013-01-01

    …processing algorithm that can be applied to the flight control of other robotic platforms. Subject terms: optical flow, ArduEye, vision-based … Figure 2: ArduEye vision chip on a Stonyman breakout board connected to an Arduino Mega (left), and the Stonyman vision chips. … There is a significant need for small, light, less power-hungry sensors and sensory data processing algorithms in order to control robotic platforms.

  7. Dynamic electronic institutions in agent oriented cloud robotic systems.

    PubMed

    Nagrath, Vineet; Morel, Olivier; Malik, Aamir; Saad, Naufal; Meriaudeau, Fabrice

    2015-01-01

    The dot-com bubble burst in the year 2000, followed by a swift movement towards resource virtualization and the cloud computing business model. Cloud computing emerged not as a new form of computing or network technology but as a remoulding of existing technologies to suit a new business model. Cloud robotics is understood as the adaptation of cloud computing ideas for robotic applications. Current efforts in cloud robotics stress developing robots that utilize the computing and service infrastructure of the cloud, without debating the underlying business model. HTM5 is an OMG MDA-based meta-model for agent-oriented development of cloud robotic systems. The trade view of HTM5 promotes peer-to-peer trade amongst software agents. HTM5 agents represent various cloud entities and implement their business logic on cloud interactions. Trade in a peer-to-peer cloud robotic system is based on relationships and contracts amongst several agent subsets. Electronic Institutions are associations of heterogeneous intelligent agents which interact with each other following predefined norms. In Dynamic Electronic Institutions (DEIs), the process of formation, reformation, and dissolution of institutions is automated, leading to run-time adaptations in groups of agents. DEIs in agent-oriented cloud robotic ecosystems bring order and group intellect. This article presents DEI implementations through the HTM5 methodology.

  8. Grounding language in action and perception: from cognitive agents to humanoid robots.

    PubMed

    Cangelosi, Angelo

    2010-06-01

    In this review we concentrate on a grounded approach to the modeling of cognition through the methodologies of cognitive agents and developmental robotics. This work will focus on the modeling of the evolutionary and developmental acquisition of linguistic capabilities based on the principles of symbol grounding. We review cognitive agent and developmental robotics models of the grounding of language to demonstrate their consistency with the empirical and theoretical evidence on language grounding and embodiment, and to reveal the benefits of such an approach in the design of linguistic capabilities in cognitive robotic agents. In particular, three different models will be discussed, where the complexity of the agent's sensorimotor and cognitive system gradually increases: from a multi-agent simulation of language evolution, to a simulated robotic agent model for symbol grounding transfer, to a model of language comprehension in the humanoid robot iCub. The review also discusses the benefits of the use of humanoid robotic platform, and specifically of the open source iCub platform, for the study of embodied cognition. Copyright 2010 Elsevier B.V. All rights reserved.

  9. Machine Vision Giving Eyes to Robots. Resources in Technology.

    ERIC Educational Resources Information Center

    Technology Teacher, 1990

    1990-01-01

    This module introduces machine vision, which can be used for inspection, robot guidance and part sorting. The future for machine vision will include new technology and will bring vision systems closer to the ultimate vision processor, the human eye. Includes a student quiz, outcomes, and activities. (JOW)

  10. Proposed Methodology for Application of Human-like gradual Multi-Agent Q-Learning (HuMAQ) for Multi-robot Exploration

    NASA Astrophysics Data System (ADS)

    Narayan Ray, Dip; Majumder, Somajyoti

    2014-07-01

    Several attempts have been made by researchers around the world to develop autonomous exploration techniques for robots, but designing algorithms for unstructured and unknown environments has always been a central issue. Human-like gradual Multi-agent Q-learning (HuMAQ) is a technique developed for autonomous robotic exploration in unknown (and even unimaginable) environments. It has been successfully implemented in a multi-agent single-robot system. HuMAQ uses the concept of Subsumption architecture, a well-known behaviour-based architecture, to prioritize the agents of the multi-agent system, and executes only the most common action out of all the actions recommended by the different agents. Instead of starting from a new state-action table (Q-table) each time, HuMAQ reuses the immediate past table for efficient and faster exploration. The proof of learning has been established both theoretically and practically. HuMAQ has the potential to be used in different and difficult situations and applications. The same architecture has been modified for multi-robot exploration of an environment. Apart from the existing agents used in the single-robot system, agents for inter-robot communication and coordination/cooperation with other similar robots have been introduced in the present research. The current work uses a series of indigenously developed identical autonomous robotic systems communicating with each other through the ZigBee protocol.
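    The core ingredients named above, a tabular Q-update plus most-common-action arbitration across agents, can be sketched as follows. The learning rate, discount, and action names are illustrative values, not the paper's:

    ```python
    from collections import Counter

    def q_update(Q, s, a, r, s2, actions, alpha=0.1, gamma=0.9):
        """One tabular Q-learning step:
        Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
        best_next = max(Q.get((s2, b), 0.0) for b in actions)
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

    def arbitrate(recommendations):
        """Execute the single most common action across the agents'
        recommendations, echoing the subsumption-style rule above."""
        return Counter(recommendations).most_common(1)[0][0]
    ```

    Reusing the Q-table across episodes, rather than reinitializing it, is what the abstract describes as using "the immediate past table" for faster exploration.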

  11. You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human-Robot Interaction.

    PubMed

    Abubshait, Abdulaziz; Wiese, Eva

    2017-01-01

    Gaze following occurs automatically in social interactions, but the degree to which gaze is followed depends on whether an agent is perceived to have a mind, making its behavior socially more relevant for the interaction. Mind perception also modulates the attitudes we have toward others, and determines the degree of empathy, prosociality, and morality invested in social interactions. Seeing mind in others is not exclusive to human agents, but mind can also be ascribed to non-human agents like robots, as long as their appearance and/or behavior allows them to be perceived as intentional beings. Previous studies have shown that human appearance and reliable behavior induce mind perception to robot agents, and positively affect attitudes and performance in human-robot interaction. What has not been investigated so far is whether different triggers of mind perception have an independent or interactive effect on attitudes and performance in human-robot interaction. We examine this question by manipulating agent appearance (human vs. robot) and behavior (reliable vs. random) within the same paradigm and examine how congruent (human/reliable vs. robot/random) versus incongruent (human/random vs. robot/reliable) combinations of these triggers affect performance (i.e., gaze following) and attitudes (i.e., agent ratings) in human-robot interaction. The results show that both appearance and behavior affect human-robot interaction but that the two triggers seem to operate in isolation, with appearance more strongly impacting attitudes, and behavior more strongly affecting performance. The implications of these findings for human-robot interaction are discussed.

  12. Autonomous Shepherding Behaviors of Multiple Target Steering Robots.

    PubMed

    Lee, Wonki; Kim, DaeEun

    2017-11-25

    This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity to each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots' position relative to the flock. We define several independent behavior rules; each agent calculates to what extent it will move by summarizing each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach.
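    The nearest-neighbor rules described above resemble classic flocking with an added shepherd-avoidance term. A minimal 2D sketch with invented weights and sensing radii (the paper's control law and parameters differ):

    ```python
    import math

    def flock_step(sheep, shepherds, r_sense=5.0, r_sep=1.0,
                   w_coh=0.05, w_sep=0.2, w_avoid=0.5):
        """One synchronous position update for each sheep agent from three
        local rules: cohesion toward neighbours, separation from very close
        neighbours, and avoidance of nearby shepherd agents."""
        new = []
        for i, (x, y) in enumerate(sheep):
            vx = vy = 0.0
            nbrs = [(px, py) for j, (px, py) in enumerate(sheep)
                    if j != i and math.hypot(px - x, py - y) < r_sense]
            if nbrs:
                cx = sum(p[0] for p in nbrs) / len(nbrs)
                cy = sum(p[1] for p in nbrs) / len(nbrs)
                vx += w_coh * (cx - x); vy += w_coh * (cy - y)  # cohesion
                for px, py in nbrs:
                    d = math.hypot(px - x, py - y)
                    if 0 < d < r_sep:  # separation from crowding neighbours
                        vx -= w_sep * (px - x) / d; vy -= w_sep * (py - y) / d
            for sx, sy in shepherds:
                d = math.hypot(sx - x, sy - y)
                if 0 < d < r_sense:  # flee a sensed shepherd
                    vx += w_avoid * (x - sx) / d; vy += w_avoid * (y - sy) / d
            new.append((x + vx, y + vy))
        return new
    ```

    The avoidance term is what the steering agents exploit: positioning a shepherd behind the flock pushes the sheep agents toward the goal without any centralized coordination.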

  13. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review.

    PubMed

    Rasheed, Nadia; Amin, Shamsudin H M

    2016-01-01

    Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue.

  14. Developmental and Evolutionary Lexicon Acquisition in Cognitive Agents/Robots with Grounding Principle: A Short Review

    PubMed Central

    Rasheed, Nadia; Amin, Shamsudin H. M.

    2016-01-01

    Grounded language acquisition is an important issue, particularly to facilitate human-robot interactions in an intelligent and effective way. The evolutionary and developmental language acquisition are two innovative and important methodologies for the grounding of language in cognitive agents or robots, the aim of which is to address current limitations in robot design. This paper concentrates on these two main modelling methods with the grounding principle for the acquisition of linguistic ability in cognitive agents or robots. This review not only presents a survey of the methodologies and relevant computational cognitive agents or robotic models, but also highlights the advantages and progress of these approaches for the language grounding issue. PMID:27069470

  15. Maintaining Limited-Range Connectivity Among Second-Order Agents

    DTIC Science & Technology

    2016-07-07

    We consider ad-hoc networks of robotic agents with double integrator dynamics. For such networks, the connectivity maintenance problems are: (i) … ad-hoc networks of mobile autonomous agents. This loose terminology refers to groups of robotic agents with limited mobility and communication … connectivity can be preserved. Networks of robotic agents with second-order dynamics and the connectivity maintenance problem: we begin by …

  16. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
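    The gaze-based positioning described above amounts to steering the camera so that the surgeon's gaze point returns to the image centre. A hedged sketch of such a proportional controller, with invented gain and deadband values (the Aesop control interface itself is not public):

    ```python
    def camera_command(gaze_px, frame_w, frame_h, gain=0.01, deadband=40):
        """Proportional pan/tilt command (degrees) that re-centres the
        gaze point; no motion inside a small deadband, to avoid chasing
        normal fixational jitter."""
        dx = gaze_px[0] - frame_w / 2
        dy = gaze_px[1] - frame_h / 2
        pan = gain * dx if abs(dx) > deadband else 0.0
        tilt = -gain * dy if abs(dy) > deadband else 0.0
        return pan, tilt
    ```

    The deadband is what lets the camera "work around the surgeon": it only moves when the gaze region drifts well away from the centre of the video monitor.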

  17. A cognitive operating system (COGNOSYS) for JPL's robot, phase 1 report

    NASA Technical Reports Server (NTRS)

    Mathur, F. P.

    1972-01-01

    The most important software requirement for any robot development is the COGNitive Operating SYStem (COGNOSYS). This report describes the Stanford University Artificial Intelligence Laboratory's hand-eye software system from the point of view of developing a cognitive operating system for JPL's robot. In Phase 1 of the JPL robot COGNOSYS task, the installation of a SAIL compiler and a FAIL assembler on Caltech's PDP-10 was accomplished, and guidelines were prepared for the implementation of a Stanford University-type hand-eye software system on the JPL-Caltech computing facility. The alternatives offered by using RAND-USC's PDP-10 Tenex operating system are also considered.

  18. Meal assistance robot with ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Kodani, Yasuhiro; Tanaka, Kanya; Wakasa, Yuji; Akashi, Takuya; Oka, Masato

    2007-12-01

    In this paper, we construct a robot that helps people with disabilities of the upper extremities, including patients with advanced-stage amyotrophic lateral sclerosis (ALS), to eat using their residual abilities. Many people with advanced-stage ALS use a pacemaker and therefore need to avoid electromagnetic waves, so we adopt ultrasonic motors, which do not generate electromagnetic waves, as the driving sources. We also address the problems of conventional meal-assistance robots. Moreover, we introduce an eye-movement interface so that users who cannot use their extremities can also operate the system: the user operates the robot not with the hands or feet but with eye movements.

  19. Autonomous Shepherding Behaviors of Multiple Target Steering Robots

    PubMed Central

    Lee, Wonki; Kim, DaeEun

    2017-01-01

    This paper presents a distributed coordination methodology for multi-robot systems, based on nearest-neighbor interactions. Among many interesting tasks that may be performed using swarm robots, we propose a biologically-inspired control law for a shepherding task, whereby a group of external agents drives another group of agents to a desired location. First, we generated sheep-like robots that act like a flock. We assume that each agent is capable of measuring the relative location and velocity to each of its neighbors within a limited sensing area. Then, we designed a control strategy for shepherd-like robots that have information regarding where to go and a steering ability to control the flock, according to the robots’ position relative to the flock. We define several independent behavior rules; each agent calculates to what extent it will move by summarizing each rule. The flocking sheep agents detect the steering agents and try to avoid them; this tendency leads to movement of the flock. Each steering agent only needs to focus on guiding the nearest flocking agent to the desired location. Without centralized coordination, multiple steering agents produce an arc formation to control the flock effectively. In addition, we propose a new rule for collecting behavior, whereby a scattered flock or multiple flocks are consolidated. From simulation results with multiple robots, we show that each robot performs actions for the shepherding behavior, and only a few steering agents are needed to control the whole flock. The results are displayed in maps that trace the paths of the flock and steering robots. Performance is evaluated via time cost and path accuracy to demonstrate the effectiveness of this approach. PMID:29186836

  20. The Resurrection of Malthus: space as the final escape from the law of diminishing returns

    NASA Astrophysics Data System (ADS)

    Sommers, J.; Beldavs, V.

    2017-09-01

    If there is a self-sustaining space economy, which is the goal of the International Lunar Decade, then it is a subject of economic analysis. The immediate challenge for space economics is to demonstrate conceptually how a space economy could emerge and function where markets do not yet exist and few human agents may be involved; indeed, human agents may transact with either human or robotic agents, and robotic agents may transact with other robotic agents.

  1. Live video monitoring robot controlled by web over internet

    NASA Astrophysics Data System (ADS)

    Lokanath, M.; Akhil Sai, Guruju

    2017-11-01

    The future is all about robots: robots can perform tasks where humans cannot, and they have wide applications in military and industrial settings for lifting heavy weights, placing objects accurately, and repeating the same task many times, where humans are not efficient. In general, a robot is a mix of electronic, electrical, and mechanical engineering and can perform tasks automatically on its own or under the supervision of humans. The camera is the robot's eye (so-called robovision); it helps in security monitoring and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web interface to drive the robot left, right, forward, and backward while streaming video. As we move towards smart environments and the IoT (Internet of Things), the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B acts as the heart of the robot; the motors and an R Pi 2 surveillance camera are connected to the Raspberry Pi.
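    One way such a web-controlled robot can turn browser requests into motor commands is a simple dispatch from a query parameter to motor-driver pin states. This is an illustrative sketch, not the paper's code; the URL format and the pin map are hypothetical, and the actual wiring depends on the motor driver board used with the Raspberry Pi:

    ```python
    from urllib.parse import urlparse, parse_qs

    # Hypothetical motor-driver input states (IN1..IN4) for each direction.
    PINS = {"front": (1, 0, 1, 0), "back": (0, 1, 0, 1),
            "left": (0, 1, 1, 0), "right": (1, 0, 0, 1),
            "stop": (0, 0, 0, 0)}

    def handle_command(path):
        """Map a browser request such as '/move?dir=left' to pin states;
        anything unrecognized falls back safely to 'stop'."""
        q = parse_qs(urlparse(path).query)
        direction = q.get("dir", ["stop"])[0]
        return PINS.get(direction, PINS["stop"])
    ```

    On the real robot the returned tuple would be written to GPIO outputs; defaulting to "stop" on unknown input is a simple safety choice.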

  2. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
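    A velocity-threshold classifier in the spirit of the method above can be sketched as follows. The paper derives its thresholds from the reconstructed ocular kinematics; the threshold values here are generic illustrative numbers, not the fitted ones:

    ```python
    def classify_gaze(angles, timestamps, sacc_thresh=100.0, pursuit_thresh=10.0):
        """Label each inter-sample interval from angular velocity (deg/s):
        above sacc_thresh -> saccade, between the thresholds -> smooth
        pursuit, below pursuit_thresh -> fixation."""
        labels = []
        for i in range(1, len(angles)):
            dt = timestamps[i] - timestamps[i - 1]
            v = abs(angles[i] - angles[i - 1]) / dt
            if v >= sacc_thresh:
                labels.append("saccade")
            elif v >= pursuit_thresh:
                labels.append("pursuit")
            else:
                labels.append("fixation")
        return labels
    ```

    Onsets and offsets of each event are then the boundaries between runs of identical labels; the paper's contribution is computing the angles themselves correctly when stimuli lie at variable depth in the transverse plane.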

  3. Simulation-based intelligent robotic agent for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Biegl, Csaba A.; Springfield, James F.; Cook, George E.; Fernandez, Kenneth R.

    1990-01-01

    A robot control package is described which utilizes on-line structural simulation of robot manipulators and objects in their workspace. The model-based controller is interfaced with a high level agent-independent planner, which is responsible for the task-level planning of the robot's actions. Commands received from the agent-independent planner are refined and executed in the simulated workspace, and upon successful completion, they are transferred to the real manipulators.

  4. GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.

    PubMed

    Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin

    2014-02-01

    We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves tongue movement; 2) electrooculogram (EOG), which involves eye movement; and 3) electromyogram (EMG), which involves teeth clenching. Each potential has been individually used for assistive interfacing to provide persons with limb motor disabilities, or even complete quadriplegia, an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.

  5. A Biologically Inspired Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Tom; Craft, Mike; ONeil, Daniel; Howell, Joe T. (Technical Monitor)

    2002-01-01

    A prototype cooperative multi-robot control architecture suitable for the eventual construction of large space structures has been developed. In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. The prototype control architecture emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  6. A Stigmergic Cooperative Multi-Robot Control Architecture

    NASA Technical Reports Server (NTRS)

    Howsman, Thomas G.; O'Neil, Daniel; Craft, Michael A.

    2004-01-01

    In nature, there are numerous examples of complex architectures constructed by relatively simple insects, such as termites and wasps, which cooperatively assemble their nests. A prototype cooperative multi-robot control architecture which may be suitable for the eventual construction of large space structures has been developed which emulates this biological model. Actions of each of the autonomous robotic construction agents are only indirectly coordinated, thus mimicking the distributed construction processes of various social insects. The robotic construction agents perform their primary duties stigmergically, i.e., without direct inter-agent communication and without a preprogrammed global blueprint of the final design. Communication and coordination between individual agents occurs indirectly through the sensed modifications that each agent makes to the structure. The global stigmergic building algorithm prototyped during the initial research assumes that the robotic builders only perceive the current state of the structure under construction. Simulation studies have established that an idealized form of the proposed architecture was indeed capable of producing representative large space structures with autonomous robots. This paper will explore the construction simulations in order to illustrate the multi-robot control architecture.

  7. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold : to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  8. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on the Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information, serving two objectives: to evaluate the performance of the object tracking system for PKD, and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues from humans.
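The gaze-conversion step in the records above, combining an eye-in-head direction from eye tracking with a head orientation from motion capture to obtain a world-frame gaze ray, can be sketched with plain rotation math. The yaw/pitch convention and function names are assumptions, not the authors' implementation.

```python
# Sketch of gaze reconstruction: rotate the unit eye-in-head direction by the
# head orientation to get the world-frame gaze direction. Yaw-then-pitch is an
# assumed convention for illustration.
import math

def rot_yaw_pitch(yaw, pitch):
    """3x3 rotation matrix Rz(yaw) @ Ry(pitch), angles in radians."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    return [
        [cy * cp, -sy, cy * sp],
        [sy * cp,  cy, sy * sp],
        [-sp,     0.0, cp],
    ]

def world_gaze(head_yaw, head_pitch, eye_dir):
    """Rotate the unit eye-in-head direction into the world frame."""
    r = rot_yaw_pitch(head_yaw, head_pitch)
    return tuple(sum(r[i][j] * eye_dir[j] for j in range(3)) for i in range(3))

# Eyes looking straight ahead while the head is turned 90 degrees left:
g = world_gaze(math.pi / 2, 0.0, (1.0, 0.0, 0.0))
```

With the head turned left and the eyes centered, the resulting gaze ray points left in the world frame, which is the behavior the gaze-information step relies on.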

  9. Brain Computer Interfaces for Enhanced Interaction with Mobile Robot Agents

    DTIC Science & Technology

    2016-07-27

    synergistic and complementary way. This project focused on acquiring a mobile robotic agent platform that can be used to explore these interfaces...providing a test environment where the human control of a robot agent can be experimentally validated. Final report covering 17-Sep-2013 to 16-Sep-2014.

  10. Development and validation of a low-cost mobile robotics testbed

    NASA Astrophysics Data System (ADS)

    Johnson, Michael; Hayes, Martin J.

    2012-03-01

    This paper considers the design, construction, and validation of a low-cost experimental robotic testbed that allows for the localisation and tracking of multiple robotic agents in real time. The testbed system is suitable for research and education in a range of mobile robotic applications, and for validating theoretical as well as practical research work in digital control, mobile robotics, graphical programming, and video tracking systems. It provides a reconfigurable floor space for mobile robotic agents to operate within, while tracking the position of multiple agents in real time using an overhead vision system. The overall system provides a highly cost-effective solution to the topical problem of providing students with practical robotics experience within severe budget constraints. Several problems encountered in the design and development of the mobile robotic testbed and its associated tracking system, such as radial lens distortion and the selection of robot identifier templates, are clearly addressed. The testbed performance is quantified, and several experiments involving LEGO Mindstorms NXT and Merlin Systems MiaBot robots are discussed.
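The radial lens distortion mentioned above is conventionally handled with the polynomial r² model; a minimal sketch, with made-up k1/k2 coefficients rather than the testbed's actual calibration, might look like:

```python
# Standard radial distortion model on normalized image coordinates:
# x' = x * (1 + k1*r^2 + k2*r^4). Coefficients below are invented examples.
def distort(x, y, k1, k2):
    """Apply the forward radial distortion model to a normalized point."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

def undistort(xd, yd, k1, k2, iters=10):
    """Invert the model by fixed-point iteration -- the usual approach,
    since the polynomial has no closed-form inverse."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / scale, yd / scale
    return x, y

# Round-trip a point through the model and its iterative inverse:
xd, yd = distort(0.3, 0.2, k1=-0.2, k2=0.05)
x, y = undistort(xd, yd, k1=-0.2, k2=0.05)
```

For the moderate distortion typical of overhead cameras, a handful of fixed-point iterations recovers the undistorted point to well below pixel precision.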

  11. Agent independent task planning

    NASA Technical Reports Server (NTRS)

    Davis, William S.

    1990-01-01

    Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.

  12. Surgeons' physical discomfort and symptoms during robotic surgery: a comprehensive ergonomic survey study.

    PubMed

    Lee, G I; Lee, M R; Green, I; Allaf, M; Marohn, M R

    2017-04-01

    It is commonly believed that robotic surgery systems provide surgeons with an ergonomically sound work environment; however, the actual experience of surgeons practicing robotic surgery (RS) has not been thoroughly researched. In this ergonomics survey study, we investigated surgeons' physical symptom reports and their association with factors including demographics, specialties, and robotic systems. Four hundred and thirty-two surgeons regularly practicing RS completed this comprehensive survey comprising 20 questions in four categories: demographics, systems, ergonomics, and physical symptoms. Chi-square and multinomial logistic regression analyses were used for statistical analysis. Two hundred and thirty-six surgeons (56.1%) reported physical symptoms or discomfort. Among those symptoms, neck stiffness and finger and eye fatigue were the most common. With the newest robot, the rate of eye symptoms was considerably reduced, while neck and finger symptoms did not improve significantly. A high rate of lower back stiffness was correlated with higher annual robotic case volume, and eye symptoms were more common with more years of robotic surgery practice (p < 0.05). The symptom report rate from urology surgeons was significantly higher than that of other specialties (p < 0.05). Noticeably, surgeons with higher confidence in, and perceived helpfulness of, their ergonomic settings reported symptoms at lower rates. Symptoms were not correlated with age or gender. Although RS provides relatively better ergonomics, this study demonstrates that 56.1% of regularly practicing robotic surgeons still experience related physical symptoms or discomfort. In addition to system improvement, surgeon education in optimizing ergonomic settings may be necessary to maximize the ergonomic benefits of RS.

  13. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses the sensory input of the robot and performs the calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine to decide which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, our prediction market-based information aggregation technique improves the accuracy of object classification compared with two other commonly used techniques.
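A minimal sketch of the belief-aggregation idea, assuming a confidence-weighted average as a stand-in for the paper's actual prediction-market pricing rule, with invented thresholds for the decision-maker agent:

```python
# Illustrative stand-in for the market mechanism: aggregate per-robot beliefs
# into one "price" interpreted as P(landmine), then let a decision-maker agent
# act on it. Weights and thresholds are assumptions, not the paper's values.
def aggregate_beliefs(reports):
    """reports: list of (belief, weight) pairs, belief in [0, 1].
    Returns the weighted-average aggregate belief."""
    total = sum(w for _, w in reports)
    return sum(b * w for b, w in reports) / total

def decide(reports, deploy_threshold=0.4, confirm_threshold=0.9):
    """Decision-maker agent: confirm the landmine, deploy more robots
    with other sensor types to the location, or dismiss the object."""
    p = aggregate_beliefs(reports)
    if p >= confirm_threshold:
        return "confirm"
    if p >= deploy_threshold:
        return "deploy-more-sensors"
    return "dismiss"

# Three robots with different sensor types weigh in on one object:
reports = [(0.8, 2.0), (0.5, 1.0), (0.7, 1.0)]
action = decide(reports)
```

An aggregate belief in the middle band triggers deployment of additional robots with complementary sensors, which is the coordination loop the abstract describes.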

  14. Can robots be responsible moral agents? And why should we care?

    NASA Astrophysics Data System (ADS)

    Sharkey, Amanda

    2017-07-01

    This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents; (ii) whether it is sufficient to design robots to comply with existing laws and human rights; and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.

  15. Algorithms of walking and stability for an anthropomorphic robot

    NASA Astrophysics Data System (ADS)

    Sirazetdinov, R. T.; Devaev, V. M.; Nikitina, D. V.; Fadeev, A. Y.; Kamalov, A. R.

    2017-09-01

    Autonomous movement of an anthropomorphic robot is considered as a superposition of a set of typical elements of movement, so-called patterns, each of which can be considered an agent of some multi-agent system [1]. To control the AP-601 robot, an information and communication infrastructure has been created that forms a multi-agent system, allowing algorithms for individual movement patterns to be developed and run in the system as a set of independently executing, interacting agents. Algorithms for lateral movement of the AP-601 series anthropomorphic robot, with active stability provided by the stability pattern, are presented.

  16. A new neural net approach to robot 3D perception and visuo-motor coordination

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan

    1992-01-01

    A novel neural network approach to robot hand-eye coordination is presented. The approach provides a true sense of visual error servoing, redundant arm configuration control for collision avoidance, and invariant visuo-motor learning under gazing control. A 3-D perception network is introduced to represent the robot internal 3-D metric space in which visual error servoing and arm configuration control are performed. The arm kinematic network performs the bidirectional association between 3-D space arm configurations and joint angles, and enforces the legitimate arm configurations. The arm kinematic net is structured by a radial-based competitive and cooperative network with hierarchical self-organizing learning. The main goal of the present work is to demonstrate that the neural net representation of the robot 3-D perception net serves as an important intermediate functional block connecting robot eyes and arms.

  17. Navigation system for a mobile robot with a visual sensor using a fish-eye lens

    NASA Astrophysics Data System (ADS)

    Kurata, Junichi; Grattan, Kenneth T. V.; Uchiyama, Hironobu

    1998-02-01

    Various position sensing and navigation systems have been proposed for the autonomous control of mobile robots. Some of these systems have been installed with an omnidirectional visual sensor system that proved very useful in obtaining information on the environment around the mobile robot for position reckoning. In this article, this type of navigation system is discussed. The sensor is composed of one TV camera with a fish-eye lens, using a reference target on a ceiling and hybrid image processing circuits. The position of the robot, with respect to the floor, is calculated by integrating the information obtained from a visual sensor and a gyroscope mounted in the mobile robot, and the use of a simple algorithm based on PTP control for guidance is discussed. An experimental trial showed that the proposed system was both valid and useful for the navigation of an indoor vehicle.

  18. Combining psychological and engineering approaches to utilizing social robots with children with autism.

    PubMed

    Dickstein-Fischer, Laurie; Fischer, Gregory S

    2014-01-01

    It is estimated that Autism Spectrum Disorder (ASD) affects 1 in 68 children. Early identification of an ASD is exceedingly important to the introduction of an intervention. We are developing a robot-assisted approach that will serve as an improved diagnostic and early intervention tool for children with autism. The robot, named PABI® (Penguin for Autism Behavioral Interventions), is a compact humanoid robot with an expressive cartoon-like embodiment. The robot is affordable, durable, and portable, so that it can be used in various settings including schools, clinics, and the home, enabling significantly enhanced and more readily available diagnosis and continuation of care. Through facial expressions, body motion, verbal cues, stereo vision-based tracking, and a tablet computer, the robot is capable of interacting meaningfully with an autistic child. Initial implementations of the robot, as part of a comprehensive treatment model (CTM), include Applied Behavioral Analysis (ABA) therapy, in which the child interacts with a tablet computer wirelessly interfaced with the robot. At the same time, the robot makes meaningful expressions and utterances and uses stereo cameras in its eyes to track the child, maintain eye contact, and collect data such as affect and gaze direction for charting of progress. In this paper we present the clinical justification, anticipated usage with corresponding requirements, prototype development of the robotic system, and demonstration of a sample application for robot-assisted ABA therapy.

  19. Fast and robust curve skeletonization for real-world elongated objects

    USDA-ARS?s Scientific Manuscript database

    These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...

  20. Do infants perceive the social robot Keepon as a communicative partner?

    PubMed

    Peca, Andreea; Simut, Ramona; Cao, Hoang-Long; Vanderborght, Bram

    2016-02-01

    This study investigates if infants perceive an unfamiliar agent, such as the robot Keepon, as a social agent after observing an interaction between the robot and a human adult. Twenty-three infants, aged 9-17 months, were exposed, in a first phase, to either a contingent interaction between the active robot and an active human adult, or to an interaction between an active human adult and the non-active robot. In a second phase, infants were offered the opportunity to initiate a turn-taking interaction with Keepon. The measured variables were (1) the number of social initiations the infant directed toward the robot, and (2) the number of anticipatory orientations of attention to the agent that follows in the conversation. The results indicate a significantly higher level of initiations in the interactive robot condition than in the non-active robot condition, while the difference between the frequencies of anticipations of turn-taking behaviors was not significant.

  1. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause denial of service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  2. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of robotic platforms that use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause denial of service (DoS), leaving the robot unresponsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  3. How to Build an Intentional Android: Infants' Imitation of a Robot's Goal-Directed Actions

    ERIC Educational Resources Information Center

    Itakura, Shoji; Ishida, Hiraku; Kanda, Takayuki; Shimada, Yohko; Ishiguro, Hiroshi; Lee, Kang

    2008-01-01

    This study examined whether young children are able to imitate a robot's goal-directed actions. Children (24-35 months old) viewed videos showing a robot attempting to manipulate an object (e.g., putting beads inside a cup) but failing to achieve its goal (e.g., beads fell outside the cup). In 1 video, the robot made eye contact with a human…

  4. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots.

    PubMed

    Strait, Megan K; Floerke, Victoria A; Ju, Wendy; Maddox, Keith; Remedios, Jessica D; Jung, Malte F; Urry, Heather L

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an "uncanny valley"-a phenomenon in which highly humanlike entities provoke aversion in human observers-has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and of the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity and, more so, atypicalities provoke aversive responding, shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationship between people's pre-existing attitudes toward humanlike robots and their aversive responding, suggesting that positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness.

  5. Modeling of the First Layers in the Fly's Eye

    NASA Technical Reports Server (NTRS)

    Moya, J. A.; Wilcox, M. J.; Donohoe, G. W.

    1997-01-01

    Increased autonomy of robots would yield significant advantages in the exploration of space. The shortfalls of computer vision can, however, pose significant limitations on a robot's potential. At the same time, simple insects which are largely hard-wired have effective visual systems. The understanding of insect vision systems thus may lead to improved approaches to visual tasks. A good starting point for the study of a vision system is its eye. In this paper, a model of the sensory portion of the fly's eye is presented. The effectiveness of the model is briefly addressed by a comparison of its performance to experimental data.

  6. A Decentralized Framework for Multi-Agent Robotic Systems

    PubMed Central

    2018-01-01

    Over the past few years, decentralization of multi-agent robotic systems has become an important research area. These systems do not depend on a central control unit, which enables the control and assignment of distributed, asynchronous and robust tasks. However, in some cases, the network communication process between robotic agents is overlooked, and this creates a dependency for each agent to maintain a permanent link with nearby units to be able to fulfill its goals. This article describes a communication framework, where each agent in the system can leave the network or accept new connections, sending its information based on the transfer history of all nodes in the network. To this end, each agent needs to comply with four processes to participate in the system, plus a fifth process for data transfer to the nearest nodes that is based on Received Signal Strength Indicator (RSSI) and data history. To validate this framework, we use differential robotic agents and a monitoring agent to generate a topological map of an environment with the presence of obstacles. PMID:29389849
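The fifth, data-transfer process described above can be sketched as choosing a target node from RSSI and transfer history; the scoring rule, penalty constant, and field names below are assumptions for illustration, not the framework's actual policy.

```python
# Illustrative neighbor selection: prefer the strongest RSSI, but penalize
# nodes we transferred to recently so data spreads through the network.
def choose_target(neighbors, history):
    """neighbors: dict node_id -> RSSI in dBm (less negative = closer).
    history: node_ids already sent to, most recent last."""
    def key(node):
        # A node's penalty grows the more recently it appears in the history.
        recency = len(history) - history.index(node) if node in history else 0
        return (neighbors[node] - 5 * recency, node)  # node id breaks ties
    return max(neighbors, key=key)

# r2 has the strongest signal, but we just sent to it, so r4 wins:
neighbors = {"r2": -40, "r3": -55, "r4": -42}
target = choose_target(neighbors, history=["r2"])
```

With an empty history the rule degenerates to plain nearest-by-RSSI selection, which is the behavior a robot joining the network would start from.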

  7. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms.

    PubMed

    Rutkowski, Tomasz M

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the User Datagram Protocol (UDP), which constitutes an Internet of Things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
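The UDP link between the BCI output and the robot can be sketched with standard sockets; the loopback address and message format below are illustrative, not the authors' protocol.

```python
# Minimal sketch of the BCI-to-robot UDP link: the decoded intention is sent
# as a datagram to the robot's control port, here simulated over loopback.
import socket

# Robot side: a listener bound to an ephemeral loopback port.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
listener.settimeout(2.0)
robot_addr = listener.getsockname()

# BCI side: transmit the decoded intention as a datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"MOVE_FORWARD", robot_addr)

# The robot receives the thought-based command and would act on it.
command, _ = listener.recvfrom(1024)
sender.close()
listener.close()
```

UDP's connectionless datagrams fit this IoT-style scenario: the BCI side fires commands without holding a session, and a lost datagram simply means the robot waits for the next decoded intention.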

  8. Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms

    PubMed Central

    Rutkowski, Tomasz M.

    2016-01-01

    The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in application to direct brain–robot and brain–virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the User Datagram Protocol (UDP), which constitutes an Internet of Things (IoT) control scenario. Results obtained from healthy users reproducing simple brain–robot and brain–virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538

  9. Robot-assisted intraocular surgery: development of the IRISS and feasibility studies in an animal model

    PubMed Central

    Rahimy, E; Wilson, J; Tsao, T-C; Schwartz, S; Hubschman, J-P

    2013-01-01

    Purpose The aim of this study is to develop a novel robotic surgical platform, the IRISS (Intraocular Robotic Interventional and Surgical System), capable of performing both anterior and posterior segment intraocular surgery, and assess its performance in terms of range of motion, speed of motion, accuracy, and overall capacities. Patients and methods To test the feasibility of performing 'bimanual' intraocular surgical tasks using the IRISS, we defined four steps out of typical anterior (phacoemulsification) and posterior (pars plana vitrectomy (PPV)) segment surgery. Selected phacoemulsification steps included construction of a continuous curvilinear capsulorhexis and cortex removal in infusion-aspiration (I/A) mode. Vitrectomy steps consisted of performing a core PPV, followed by aspiration of the posterior hyaloid with the vitreous cutter to induce a posterior vitreous detachment (PVD) assisted with triamcinolone, and simulation of the microcannulation of a temporal retinal vein. For each evaluation, the duration and the successful completion of the task with or without complications or involuntary events was assessed. Results Intraocular procedures were successfully performed on 16 porcine eyes. Four eyes underwent creation of a round, curvilinear anterior capsulorhexis without radialization. Four eyes had I/A of lens cortical material completed without posterior capsular tear. Four eyes completed 23-gauge PPV followed by successful PVD induction without any complications. Finally, simulation of microcannulation of a temporal retinal vein was successfully achieved in four eyes without any retinal tears/perforations noted. Conclusion Robotic-assisted intraocular surgery with the IRISS may be technically feasible in humans. Further studies are pending to improve this particular surgical platform. PMID:23722720

  10. Robot computer problem solving system

    NASA Technical Reports Server (NTRS)

    Becker, J. D.; Merriam, E. W.

    1974-01-01

    The conceptual, experimental, and practical aspects of the development of a robot computer problem solving system were investigated. The distinctive characteristics of the approach taken were formulated in relation to various studies of cognition and robotics. Vehicle and eye control systems were structured, and the information to be generated by the visual system was defined.

  11. Robotic-assisted surgery in ophthalmology.

    PubMed

    de Smet, Marc D; Naus, Gerrit J L; Faridpooya, Koorosh; Mura, Marco

    2018-05-01

    Provide an overview of the current landscape of robotics in ophthalmology, including the pros and cons of system designs, the clinical development path, and the likely future direction of the field. Robots designed for eye surgery should meet certain basic requirements. Three designs are currently being developed: smart surgical tools such as the steady hand, comanipulation devices, and telemanipulators using either a fixed or virtual remote center of motion. Successful human intraocular surgery is being performed using the Preceyes surgical system. Another telemanipulation robot, the da Vinci Surgical System, has been used to perform a pterygium repair in humans and was successful in ex-vivo corneal surgery despite its nonophthalmic design. Apart from the Preceyes BV research platform, none of the current eye-specific systems has reached a commercial stage. Systems are likely to evolve from robotic assistance during specific procedural steps to semiautonomous surgery, as smart sensors are introduced to enhance the basic functionalities of robotic systems. Robotics is still in its infancy in ophthalmology but is rapidly reaching a stage at which it will be introduced into everyday ophthalmic practice. It will most likely be introduced first for demanding vitreo-retinal procedures, followed by anterior segment applications.

  12. A novel EOG/EEG hybrid human-machine interface adopting eye movements and ERPs: application to robot control.

    PubMed

    Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi

    2015-03-01

    This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like the P300. While eye movements and ERPs have each been used separately to implement assistive interfaces that help patients with motor disabilities perform daily tasks, the proposed hybrid interface integrates the two so that they complement each other, providing better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements: blink, wink, gaze, and frown. In addition, an oddball paradigm with inverted-face stimuli is used to evoke multiple ERP components, including the P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out: one controlling a multifunctional humanoid robot, and the other controlling four mobile robots. In both experiments, the subjects complete the tasks effectively using the proposed interface, and the best completion times are relatively short, close to those achieved by manual operation.
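The four-class threshold idea can be sketched as a simple decision rule on the peak amplitudes of the horizontal and vertical EOG channels. This is a hypothetical illustration: the channel layout, threshold values, and the wink test are assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of a threshold rule for classifying eye movements
# from two EOG channels, in the spirit of the four-class detector above.
# Thresholds (in microvolts) and channel layout are illustrative only.

def classify_eog(h_amp, v_amp, h_amp_left=None):
    """Classify one detected EOG event from peak channel amplitudes.

    h_amp: signed peak on the horizontal channel (+ right, - left)
    v_amp: signed peak on the vertical channel (+ up)
    h_amp_left: optional second horizontal electrode, used to tell a
        one-eyed wink from a two-eyed blink.
    """
    BLINK_V = 100.0   # vertical deflection typical of a blink
    GAZE_H = 60.0     # horizontal deflection typical of a lateral gaze
    FROWN_V = -80.0   # downward deflection typical of a frown

    if v_amp >= BLINK_V:
        # A wink moves mainly one eye; with two horizontal electrodes
        # the two sides disagree strongly.
        if h_amp_left is not None and abs(h_amp - h_amp_left) > GAZE_H:
            return "wink"
        return "blink"
    if v_amp <= FROWN_V:
        return "frown"
    if abs(h_amp) >= GAZE_H:
        return "gaze_right" if h_amp > 0 else "gaze_left"
    return "none"
```

In a full system a rule like this would run on events segmented from the continuous EOG stream, with the EEG/ERP mode handling selections that eye movements alone cannot express.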

  13. Multiagent robotic systems' ambient light sensor

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maslennikov, Oleg S.; Komarov, Igor I.

    2017-05-01

    Swarm robotics is one of the fastest-growing areas of modern technology. As a subclass of multi-agent systems, it inherits much of the scientific and methodological apparatus for constructing and operating practically useful complexes that consist of relatively autonomous, independent agents. Ambient light sensors (ALS) are widely used in robotics. In swarm robotics, however, a developing technology with many specific features, it is important to use sensors on each robot not only to help it get directionally oriented, but also to let it follow light emitted by a leader robot or find the goal more easily. Key words: ambient light sensor, swarm system, multiagent system, robotic system, robotic complexes, simulation modelling

  14. An Intelligent Agent-Controlled and Robot-Based Disassembly Assistant

    NASA Astrophysics Data System (ADS)

    Jungbluth, Jan; Gerke, Wolfgang; Plapper, Peter

    2017-09-01

    One key to successful and fluent human-robot collaboration in disassembly processes is equipping the robot system with greater autonomy and intelligence. In this paper, we present an informed software agent that controls the robot behavior to form an intelligent robot assistant for disassembly purposes. Since the disassembly process depends first on the product structure, we inform the agent through a generic approach based on product models. The product model is then transformed into a directed graph and used to build, share, and define a coarse disassembly plan. To refine the workflow, we formulate the problem of loosening a connection and distributing the work as a search problem. The resulting detailed plan consists of a sequence of actions that are used to call, parametrize, and execute robot programs that carry out the assistance. The aim of this research is to equip robot systems with the knowledge and skills they need to perform their assistance autonomously and thereby improve the ergonomics of disassembly workstations.

  15. Social skills training for children with autism spectrum disorder using a robotic behavioral intervention system.

    PubMed

    Yun, Sang-Seok; Choi, JongSuk; Park, Sung-Kee; Bong, Gui-Young; Yoo, HeeJeong

    2017-07-01

    We designed a robot system that assisted in behavioral intervention programs for children with autism spectrum disorder (ASD). The eight-session intervention program was based on the discrete trial teaching protocol and focused on two basic social skills: eye contact and facial emotion recognition. The robotic interactions occurred in four modules: training element query, recognition of human activity, coping-mode selection, and follow-up action. Children with ASD who were between 4 and 7 years old and had verbal IQ ≥ 60 were recruited and randomly assigned to the treatment group (TG, n = 8, 5.75 ± 0.89 years) or control group (CG, n = 7, 6.32 ± 1.23 years). The therapeutic robot facilitated the treatment intervention in the TG, and a human assistant facilitated it in the CG; the intervention procedures were identical in both groups. The primary outcome measures included parent-completed questionnaires, the Autism Diagnostic Observation Schedule (ADOS), and frequency of eye contact, which was measured with the partial interval recording method. After treatment, the eye contact percentages were significantly increased in both groups. For facial emotion recognition, the percentages of correct answers increased in similar patterns in both groups compared to baseline, with no difference between the TG and CG (P > 0.05). Problems with the subjects' play skills and their general behavioral and emotional symptoms were significantly diminished after treatment (P < 0.05). These results showed that the robot-facilitated and human-facilitated behavioral interventions had similar positive effects on eye contact and facial emotion recognition, suggesting that robots are useful mediators of social skills training for children with ASD. Autism Res 2017, 10: 1306-1323. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  16. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning.

    PubMed

    Chung, Michael Jae-Yoon; Friesen, Abram L; Fox, Dieter; Meltzoff, Andrew N; Rao, Rajesh P N

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration.
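The goal-inference step can be illustrated with a toy discrete model: the robot's self-learned action model plays the role of the likelihood P(action | goal), and Bayes' rule yields a posterior over goals; a low-confidence posterior triggers a request for human assistance. All goals, actions, probabilities, and the confidence threshold below are invented for illustration.

```python
# Toy sketch of Bayesian goal inference from a self-learned action model,
# in the spirit of the approach above. Values are illustrative, not the paper's.

def infer_goal(observed_actions, likelihoods, prior):
    """Return the posterior P(goal | observed actions) via Bayes' rule."""
    posterior = dict(prior)
    for a in observed_actions:
        for g in posterior:
            posterior[g] *= likelihoods[g].get(a, 1e-6)
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# Hypothetical self-learned model: two goals, each making some actions likely.
likelihoods = {
    "stack_blocks": {"reach": 0.5, "grasp": 0.3, "lift": 0.2},
    "clear_table":  {"reach": 0.3, "grasp": 0.2, "push": 0.5},
}
prior = {"stack_blocks": 0.5, "clear_table": 0.5}

post = infer_goal(["reach", "grasp"], likelihoods, prior)
best = max(post, key=post.get)
# Seek human assistance when the inferred goal is too uncertain.
decision = "imitate " + best if post[best] > 0.7 else "ask_for_help"
```

The same posterior can drive goal-based imitation even when the robot's own actuators differ from the human's, since the robot imitates the inferred goal rather than the observed motion.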

  17. A Bayesian Developmental Approach to Robotic Goal-Based Imitation Learning

    PubMed Central

    Chung, Michael Jae-Yoon; Friesen, Abram L.; Fox, Dieter; Meltzoff, Andrew N.; Rao, Rajesh P. N.

    2015-01-01

    A fundamental challenge in robotics today is building robots that can learn new skills by observing humans and imitating human actions. We propose a new Bayesian approach to robotic learning by imitation inspired by the developmental hypothesis that children use self-experience to bootstrap the process of intention recognition and goal-based imitation. Our approach allows an autonomous agent to: (i) learn probabilistic models of actions through self-discovery and experience, (ii) utilize these learned models for inferring the goals of human actions, and (iii) perform goal-based imitation for robotic learning and human-robot collaboration. Such an approach allows a robot to leverage its increasing repertoire of learned behaviors to interpret increasingly complex human actions and use the inferred goals for imitation, even when the robot has very different actuators from humans. We demonstrate our approach using two different scenarios: (i) a simulated robot that learns human-like gaze following behavior, and (ii) a robot that learns to imitate human actions in a tabletop organization task. In both cases, the agent learns a probabilistic model of its own actions, and uses this model for goal inference and goal-based imitation. We also show that the robotic agent can use its probabilistic model to seek human assistance when it recognizes that its inferred actions are too uncertain, risky, or impossible to perform, thereby opening the door to human-robot collaboration. PMID:26536366

  18. EEG theta and Mu oscillations during perception of human and robot actions

    PubMed Central

    Urgen, Burcu A.; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P.

    2013-01-01

    The perception of others’ actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8–13 Hz) and frontal theta (4–8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other. PMID:24348375
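The mu-suppression index reported in such studies is essentially a band-power ratio. The sketch below, using illustrative synthetic signals, computes 8-13 Hz power during action observation relative to a baseline period as a log ratio (negative values indicating suppression); it is a minimal stand-in for the spectral estimators actually used.

```python
import numpy as np

# Minimal sketch of a mu-suppression index: band power in 8-13 Hz during
# action observation relative to baseline, as a log ratio. The sampling
# rate and synthetic signals are illustrative, not experimental data.

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Mean power of `signal` in the [lo, hi] Hz band via the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def mu_suppression(observation, baseline, fs):
    """Log ratio of mu-band power; values below 0 indicate suppression."""
    return np.log(band_power(observation, fs) / band_power(baseline, fs))

# Synthetic demo: a strong 10 Hz rhythm at baseline, attenuated during
# "action observation".
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
observation = 0.4 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
```

The same computation with a 4-8 Hz band would give the frontal theta measure discussed above.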

  19. EEG theta and Mu oscillations during perception of human and robot actions.

    PubMed

    Urgen, Burcu A; Plank, Markus; Ishiguro, Hiroshi; Poizner, Howard; Saygin, Ayse P

    2013-01-01

    The perception of others' actions supports important skills such as communication, intention understanding, and empathy. Are mechanisms of action processing in the human brain specifically tuned to process biological agents? Humanoid robots can perform recognizable actions, but can look and move differently from humans, and as such, can be used in experiments to address such questions. Here, we recorded EEG as participants viewed actions performed by three agents. In the Human condition, the agent had biological appearance and motion. The other two conditions featured a state-of-the-art robot in two different appearances: Android, which had biological appearance but mechanical motion, and Robot, which had mechanical appearance and motion. We explored whether sensorimotor mu (8-13 Hz) and frontal theta (4-8 Hz) activity exhibited selectivity for biological entities, in particular for whether the visual appearance and/or the motion of the observed agent was biological. Sensorimotor mu suppression has been linked to the motor simulation aspect of action processing (and the human mirror neuron system, MNS), and frontal theta to semantic and memory-related aspects. For all three agents, action observation induced significant attenuation in the power of mu oscillations, with no difference between agents. Thus, mu suppression, considered an index of MNS activity, does not appear to be selective for biological agents. Observation of the Robot resulted in greater frontal theta activity compared to the Android and the Human, whereas the latter two did not differ from each other. Frontal theta thus appears to be sensitive to visual appearance, suggesting agents that are not sufficiently biological in appearance may result in greater memory processing demands for the observer. Studies combining robotics and neuroscience such as this one can allow us to explore the neural basis of action processing on the one hand, and inform the design of social robots on the other.

  20. Modelling of robotic work cells using agent based-approach

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.

    2016-08-01

    In the case of modern manufacturing systems, the requirements, regarding both the scope and the characteristics of technical procedures, change dynamically. As a result, the organization of a production system cannot keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of the generated organizational solutions. One tool that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge so that, in a self-organizing agent environment, they represent the components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints connected with the characteristics of this production unit. It is possible to distinguish several groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.
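The agent taxonomy above can be sketched as a small message-passing hierarchy, with one agent class per group of structural components. The class and message names are illustrative only, not taken from the paper:

```python
# Minimal sketch of the agent taxonomy described above: each structural
# component of the work cell is an agent with its own mailbox, coordinated
# by an organizational agent at the top of the holarchy.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []           # messages received from other agents

    def send(self, other, message):
        other.inbox.append((self.name, message))

class MachineToolAgent(Agent): pass
class RobotAgent(Agent): pass
class TransportAgent(Agent): pass

class OrganizationalAgent(Agent):
    """Top of the holarchy: dispatches a task to the component agents."""
    def dispatch(self, task, agents):
        for a in agents:
            self.send(a, task)

# A toy work cell with one representative of three agent groups.
cell = [MachineToolAgent("lathe"), RobotAgent("robot1"), TransportAgent("agv")]
boss = OrganizationalAgent("scheduler")
boss.dispatch("machine part 42", cell)
```

A real model would add the auxiliary equipment and data/knowledge base agents and give each class its own domain-specific behavior, but the communication skeleton stays the same.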

  1. Searching Dynamic Agents with a Team of Mobile Robots

    PubMed Central

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment in a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach. PMID:23012519
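The predict/update cycle of the grid-based filter can be sketched as follows, assuming a toy square grid: prediction spreads each cell's probability mass over the cells a target could reach at its maximum speed, and the update clears cells the robots currently observe to be empty, then renormalizes. Grid size and parameters are illustrative.

```python
import numpy as np

# Sketch of a grid-based Bayesian filter for moving targets, in the spirit
# of the algorithm above. Grid size and step size are illustrative.

def predict(belief, max_step=1):
    """Diffuse each cell's mass uniformly over cells reachable in one step."""
    n, m = belief.shape
    out = np.zeros_like(belief)
    for i in range(n):
        for j in range(m):
            i0, i1 = max(0, i - max_step), min(n, i + max_step + 1)
            j0, j1 = max(0, j - max_step), min(m, j + max_step + 1)
            out[i0:i1, j0:j1] += belief[i, j] / ((i1 - i0) * (j1 - j0))
    return out

def update(belief, observed_empty):
    """Zero cells the robots have just seen to be empty, then renormalize."""
    b = belief * (1.0 - observed_empty)
    return b / b.sum()

belief = np.full((5, 5), 1 / 25.0)         # uniform prior over a 5x5 grid
seen = np.zeros((5, 5)); seen[:2, :] = 1.0  # sensors cover the top two rows
belief = update(predict(belief), seen)
```

The search layer would then command each robot toward the high-probability regions of `belief`, using the tree-of-regions decomposition for coordination.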

  2. Searching dynamic agents with a team of mobile robots.

    PubMed

    Juliá, Miguel; Gil, Arturo; Reinoso, Oscar

    2012-01-01

    This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment in a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach.

  3. Autonomy in robots and other agents.

    PubMed

    Smithers, T

    1997-06-01

    The word "autonomous" has become widely used in artificial intelligence, robotics, and, more recently, artificial life, and is typically used to qualify types of systems, agents, or robots: we see terms like "autonomous systems," "autonomous agents," and "autonomous robots." Its use in these fields is, however, both weak, with no distinctions being made that are not better and more precisely made with other existing terms, and varied, with no single underlying concept being involved. This ill-disciplined usage contrasts strongly with the use of the same term in other fields such as biology, philosophy, ethics, law, and human rights, for example. In all these quite different areas the concept of autonomy is essentially the same, though the language used and the aspects and issues of concern, of course, differ. In all these cases the underlying notion is one of self-law making and the closely related concept of self-identity. In this paper I argue that the loose and varied use of the term autonomous in artificial intelligence, robotics, and artificial life has effectively robbed these fields of an important concept: one essentially the same as we find in biology, philosophy, ethics, and law, and one that is needed to distinguish a particular kind of agent or robot from those developed and built so far. I suggest that robots and other agents will have to be autonomous, i.e., self-law making, not just self-regulating, if they are to deal effectively with the kinds of environments in which we live and work: environments which have significant large-scale spatial and temporal invariant structure, but which also have large amounts of local spatial and temporal dynamic variation and unpredictability, and which lead to the frequent occurrence of previously unexperienced situations for the agents that interact with them.

  4. Research on wheelchair robot control system based on EOG

    NASA Astrophysics Data System (ADS)

    Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo

    2018-04-01

    The paper describes an intelligent wheelchair control system based on EOG that can help disabled people improve their ability to live independently. The system acquires the EOG signal from the user, detects the number of blinks and the direction of glancing, and then sends commands to the wheelchair robot via RS-232 to control it. The system combines EOG signal processing with human-computer interaction technology, achieving the goal of using conscious eye movements to control a wheelchair robot.
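The event-to-command mapping such a system needs can be sketched as a lookup table from recognized eye events to RS-232 command frames. The specific events and command strings below are assumptions for illustration, not the authors' protocol.

```python
# Hypothetical mapping from recognized EOG events (blink counts, glance
# directions) to command frames that would be written to the RS-232 port.
# Event names and command strings are invented for illustration.

COMMANDS = {
    ("blink", 2): b"STOP\r\n",
    ("blink", 3): b"GO\r\n",
    ("glance", "left"): b"TURN_LEFT\r\n",
    ("glance", "right"): b"TURN_RIGHT\r\n",
}

def eog_event_to_command(kind, value):
    """Map one recognized EOG event to a serial command frame, or None."""
    return COMMANDS.get((kind, value))
```

Unmapped events return `None`, so spurious detections (e.g. a single natural blink) issue no command.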

  5. On the Utilization of Social Animals as a Model for Social Robotics

    PubMed Central

    Miklósi, Ádám; Gácsi, Márta

    2012-01-01

    Social robotics is a thriving field in building artificial agents. The possibility to construct agents that can engage in meaningful social interaction with humans presents new challenges for engineers. In general, social robotics has been inspired primarily by psychologists with the aim of building human-like robots. Only a small subcategory of “companion robots” (also referred to as robotic pets) was built to mimic animals. In this opinion essay we argue that all social robots should be seen as companions and more conceptual emphasis should be put on the inter-specific interaction between humans and social robots. This view is underlined by the means of an ethological analysis and critical evaluation of present day companion robots. We suggest that human–animal interaction provides a rich source of knowledge for designing social robots that are able to interact with humans under a wide range of conditions. PMID:22457658

  6. Understanding the Uncanny: Both Atypical Features and Category Ambiguity Provoke Aversion toward Humanlike Robots

    PubMed Central

    Strait, Megan K.; Floerke, Victoria A.; Ju, Wendy; Maddox, Keith; Remedios, Jessica D.; Jung, Malte F.; Urry, Heather L.

    2017-01-01

    Robots intended for social contexts are often designed with explicit humanlike attributes in order to facilitate their reception by (and communication with) people. However, observation of an “uncanny valley”—a phenomenon in which highly humanlike entities provoke aversion in human observers—has led some to caution against this practice. Both of these contrasting perspectives on the anthropomorphic design of social robots find some support in empirical investigations to date. Yet, owing to outstanding empirical limitations and theoretical disputes, the uncanny valley and its implications for human-robot interaction remain poorly understood. We thus explored the relationship between human similarity and people's aversion toward humanlike robots via manipulation of the agents' appearances. To that end, we employed a picture-viewing task (N = 60 agents) to conduct an experimental test (N = 72 participants) of the uncanny valley's existence and the visual features that cause certain humanlike robots to be unnerving. Across the levels of human similarity, we further manipulated agent appearance on two dimensions, typicality (prototypic, atypical, and ambiguous) and agent identity (robot, person), and measured participants' aversion using both subjective and behavioral indices. Our findings were as follows: (1) Further substantiating its existence, the data show a clear and consistent uncanny valley in the current design space of humanoid robots. (2) Both category ambiguity, and more so, atypicalities provoke aversive responding, thus shedding light on the visual factors that drive people's discomfort. (3) Use of the Negative Attitudes toward Robots Scale did not reveal any significant relationships between people's pre-existing attitudes toward humanlike robots and their aversive responding—suggesting positive exposure and/or additional experience with robots is unlikely to affect the occurrence of an uncanny valley effect in humanoid robotics. This work furthers our understanding of both the uncanny valley and the visual factors that contribute to an agent's uncanniness. PMID:28912736

  7. Application of the HeartLander Crawling Robot for Injection of a Thermally Sensitive Anti-Remodeling Agent for Myocardial Infarction Therapy

    PubMed Central

    Chapman, Michael P.; López González, Jose L.; Goyette, Brina E.; Fujimoto, Kazuro L.; Ma, Zuwei; Wagner, William R.; Zenati, Marco A.; Riviere, Cameron N.

    2011-01-01

    The injection of a mechanical bulking agent into the left ventricular (LV) wall of the heart has shown promise as a therapy for maladaptive remodeling of the myocardium after myocardial infarct (MI). The HeartLander robotic crawler presented itself as an ideal vehicle for minimally-invasive, highly accurate epicardial injection of such an agent. Use of the optimal bulking agent, a thermosetting hydrogel developed by our group, presents a number of engineering obstacles, including cooling of the miniaturized injection system while the robot is navigating in the warm environment of a living patient. We present herein a demonstration of an integrated miniature cooling and injection system in the HeartLander crawling robot, that is fully biocompatible and capable of multiple injections of a thermosetting hydrogel into dense animal tissue while the entire system is immersed in a 37°C water bath. PMID:21096276

  8. Analyzing the multiple-target-multiple-agent scenario using optimal assignment algorithms

    NASA Astrophysics Data System (ADS)

    Kwok, Kwan S.; Driessen, Brian J.; Phillips, Cynthia A.; Tovey, Craig A.

    1997-09-01

    This work considers the problem of maximum utilization of a set of mobile robots with limited sensor-range capabilities and limited travel distances. The robots are initially in random positions. A set of robots properly guards or covers a region if every point within the region is within the effective sensor range of at least one vehicle. We wish to move the vehicles into surveillance positions so as to guard or cover a region, while minimizing the maximum distance traveled by any vehicle. This problem can be formulated as an assignment problem, in which we must optimally decide which robot to assign to which slot of a desired matrix of grid points. The cost function is the maximum distance traveled by any robot. Assignment problems can be solved very efficiently. Solution times for one hundred robots took only seconds on a Silicon Graphics Crimson workstation. The initial positions of all the robots can be sampled by a central base station and their newly assigned positions communicated back to the robots. Alternatively, the robots can establish their own coordinate system with the origin fixed at one of the robots and orientation determined by the compass bearing of another robot relative to this robot. This paper presents example solutions to the multiple-target-multiple-agent scenario using a matching algorithm. Two separate cases with one hundred agents in each were analyzed using this method. We have found these mobile robot problems to be a very interesting application of network optimization methods, and we expect this to be a fruitful area for future research.
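The objective described here is a bottleneck assignment: minimize the maximum robot-to-slot distance. Efficient network algorithms solve this at scale, as the paper notes; the exhaustive search below merely illustrates the objective on a toy three-robot instance.

```python
from itertools import permutations

# Illustrative brute-force bottleneck assignment: choose which robot goes
# to which grid slot so that the maximum distance traveled by any robot is
# minimized. Only suitable for tiny instances; the paper uses efficient
# assignment algorithms instead.

def bottleneck_assignment(robots, slots):
    """Return (best max distance, tuple of slot indices, one per robot)."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(slots))):
        cost = max(dist(robots[i], slots[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

robots = [(0, 0), (5, 0), (0, 5)]
slots = [(1, 0), (0, 4), (4, 0)]
cost, perm = bottleneck_assignment(robots, slots)
```

Here each robot ends up one unit from its slot, so the bottleneck cost is 1.0; note that minimizing the *maximum* distance can differ from minimizing the *total* distance, which is the classical linear assignment objective.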

  9. Calibration of a flexible measurement system based on industrial articulated robot and structured light sensor

    NASA Astrophysics Data System (ADS)

    Mu, Nan; Wang, Kun; Xie, Zexiao; Ren, Ping

    2017-05-01

    To realize online rapid measurement for complex workpieces, a flexible measurement system based on an articulated industrial robot with a structured light sensor mounted on the end-effector is developed. A method for calibrating the system parameters is proposed in which the hand-eye transformation parameters and the robot kinematic parameters are synthesized in the calibration process. An initial hand-eye calibration is first performed using a standard sphere as the calibration target. By applying the modified complete and parametrically continuous method, we establish a synthesized kinematic model that combines the initial hand-eye transformation and distal link parameters as a whole, with the sensor coordinate system as the tool frame. According to the synthesized kinematic model, an error model is constructed based on the spheres' center-to-center distance errors. Consequently, the error model parameters can be identified in a calibration experiment using a three-standard-sphere target. Furthermore, the redundancy of the error model parameters is eliminated to ensure the accuracy and robustness of the parameter identification. Calibration and measurement experiments are carried out based on an ER3A-C60 robot. The experimental results show that the proposed calibration method achieves high measurement accuracy, and this efficient and flexible system is suitable for online measurement in industrial settings.
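The center-to-center distance error model can be sketched as a residual function over candidate hand-eye parameters: sphere centers measured in the sensor frame at two robot poses are mapped into the base frame, and the reconstructed distance is compared to the certified distance of the standard-sphere target. The pose representation and function names below are illustrative; the actual method additionally folds the distal-link kinematic parameters into the model.

```python
import numpy as np

# Sketch of a center-to-center distance error model for hand-eye
# calibration. Homogeneous 4x4 transforms; values are illustrative.

def to_base(T_base_ee, T_ee_sensor, p_sensor):
    """Map a point from the sensor frame into the robot base frame."""
    p = np.append(p_sensor, 1.0)
    return (T_base_ee @ T_ee_sensor @ p)[:3]

def distance_residuals(T_ee_sensor, measurements, certified_dists):
    """Each measurement views two sphere centers from two robot poses:
    (T_base_ee_a, center_a, T_base_ee_b, center_b). The residual is the
    reconstructed distance minus the certified distance."""
    res = []
    for (Ta, pa, Tb, pb), d in zip(measurements, certified_dists):
        a = to_base(Ta, T_ee_sensor, pa)
        b = to_base(Tb, T_ee_sensor, pb)
        res.append(np.linalg.norm(a - b) - d)
    return np.array(res)
```

A nonlinear least-squares solver would then adjust the hand-eye (and kinematic) parameters to drive these residuals toward zero.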

  10. Caregivers' requirements for in-home robotic agent for supporting community-living elderly subjects with cognitive impairment.

    PubMed

    Faucounau, Véronique; Wu, Ya-Huei; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie

    2009-01-01

    Older people are an important and growing sector of the population. This demographic change raises the profile of frailty and disability within the world's population. In such conditions, many older people need assistance to perform daily activities. Most of this support is given by family members, who are now a new target in the therapeutic approach. With advances in technology, robotics is becoming increasingly important as a means of supporting older people at home. To ensure the technology is appropriate, 30 caregivers filled out a self-administered questionnaire including questions on their needs in supporting their proxy and their requirements concerning the robotic agent's functions and modes of action. This paper points out the functions to be integrated into the robot in order to support caregivers in the care of their proxy. The results also show that caregivers have a positive attitude towards robotic agents.

  11. A remote assessment system with a vision robot and wearable sensors.

    PubMed

    Zhang, Tong; Wang, Jue; Ren, Yumiao; Li, Jianjun

    2004-01-01

    This paper describes a remote rehabilitation assessment system under development that uses a six-degree-of-freedom binocular vision robot to capture visual information and a group of wearable sensors to acquire biomechanical signals. A server computer fixed on the robot provides services to the robot's controller and all the sensors. The robot is connected to the Internet by a wireless channel, as are the sensors to the robot. Rehabilitation professionals can semi-automatically conduct an assessment program via the Internet. Preliminary results show that the smart device, comprising the robot and the sensors, can improve the quality of remote assessment and reduce the complexity of operating at a distance.

  12. Gaze-contingent control for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Darzi, Ara; Yang, Guang Zhong

    2006-09-01

    Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.
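The geometric core of resolving a 3D fixation point from binocular gaze can be sketched as finding the point closest, in least squares, to the two gaze rays. The eye positions and gaze directions below are illustrative; a real system obtains them from per-subject calibration of the eye tracker.

```python
import numpy as np

# Illustrative triangulation of a 3D fixation point from two gaze rays:
# the estimate is the point minimizing the summed squared distance to the
# rays (least-squares ray intersection). Geometry values are toy numbers.

def closest_point_to_rays(origins, directions):
    """Least-squares point closest to a set of rays (origin + direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two eyes 6 cm apart, both fixating a point 40 cm ahead on the midline.
left, right = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.4])
fix = closest_point_to_rays([left, right], [target - left, target - right])
```

With noisy gaze directions the two rays no longer intersect, and this least-squares point gives the depth estimate used for gaze-contingent reconstruction.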

  13. Full High-definition three-dimensional gynaecological laparoscopy--clinical assessment of a new robot-assisted device.

    PubMed

    Tuschy, Benjamin; Berlit, Sebastian; Brade, Joachim; Sütterlin, Marc; Hornemann, Amadeus

    2014-01-01

    To investigate the clinical assessment of a full high-definition (HD) three-dimensional robot-assisted laparoscopic device in gynaecological surgery. This study included 70 women who underwent gynaecological laparoscopic procedures. Demographic parameters, type and duration of surgery and perioperative complications were analyzed. Fifteen surgeons were postoperatively interviewed regarding their assessment of this new system with a standardized questionnaire. The clinical assessment revealed that three-dimensional full-HD visualisation is comfortable and improves spatial orientation and hand-to-eye coordination. The majority of the surgeons stated they would prefer a three-dimensional system to a conventional two-dimensional device and stated that the robotic camera arm led to more relaxed working conditions. Three-dimensional laparoscopy is feasible, comfortable and well-accepted in daily routine. The three-dimensional visualisation improves surgeons' hand-to-eye coordination, intracorporeal suturing and fine dissection. The combination of full-HD three-dimensional visualisation with the robotic camera arm results in very high image quality and stability.

  14. The problem with multiple robots

    NASA Technical Reports Server (NTRS)

    Huber, Marcus J.; Kenny, Patrick G.

    1994-01-01

    The issues that can arise in research associated with multiple, robotic agents are discussed. Two particular multi-robot projects are presented as examples. This paper was written in the hope that it might ease the transition from single to multiple robot research.

  15. Physical Scaffolding Accelerates the Evolution of Robot Behavior.

    PubMed

    Buckingham, David; Bongard, Josh

    2017-01-01

    In some evolutionary robotics experiments, evolved robots are transferred from simulation to reality, while sensor/motor data flows back from reality to improve the next transferral. We envision a generalization of this approach: a simulation-to-reality pipeline. In this pipeline, increasingly embodied agents flow up through a sequence of increasingly physically realistic simulators, while data flows back down to improve the next transferral between neighboring simulators; physical reality is the last link in this chain. As a first proof of concept, we introduce a two-link chain: a fast yet low-fidelity (lo-fi) simulator hosts minimally embodied agents, which gradually evolve controllers and morphologies to colonize a slow yet high-fidelity (hi-fi) simulator. The agents are thus physically scaffolded. We show here that, given the same computational budget, these physically scaffolded robots reach higher performance in the hi-fi simulator than do robots that only evolve in the hi-fi simulator, but only for a sufficiently difficult task. These results suggest that a simulation-to-reality pipeline may strike a good balance between accelerating evolution in simulation while anchoring the results in reality, free the investigator from having to prespecify the robot's morphology, and pave the way to scalable, automated, robot-generating systems.

  16. Brain Response to a Humanoid Robot in Areas Implicated in the Perception of Human Emotional Gestures

    PubMed Central

    Chaminade, Thierry; Zecca, Massimiliano; Blakemore, Sarah-Jayne; Takanishi, Atsuo; Frith, Chris D.; Micera, Silvestro; Dario, Paolo; Rizzolatti, Giacomo; Gallese, Vittorio; Umiltà, Maria Alessandra

    2010-01-01

    Background The humanoid robot WE4-RII was designed to express human emotions in order to improve human-robot interaction. We can read the emotions depicted in its gestures, yet we might utilize different neural processes than those used for reading the emotions in human agents. Methodology Here, fMRI was used to assess how brain areas activated by the perception of human basic emotions (facial expression of Anger, Joy, Disgust) and silent speech respond to a humanoid robot impersonating the same emotions, while participants were instructed to attend either to the emotion or to the motion depicted. Principal Findings Increased responses to robot compared to human stimuli in the occipital and posterior temporal cortices suggest additional visual processing when perceiving a mechanical anthropomorphic agent. In contrast, activity in cortical areas endowed with mirror properties, like left Broca's area for the perception of speech, and in the processing of emotions like the left anterior insula for the perception of disgust and the orbitofrontal cortex for the perception of anger, is reduced for robot stimuli, suggesting lesser resonance with the mechanical agent. Finally, instructions to explicitly attend to the emotion significantly increased the response to robot, but not human, facial expressions in the anterior part of the left inferior frontal gyrus, a neural marker of motor resonance. Conclusions Motor resonance towards a humanoid robot, but not a human, display of facial emotion is increased when attention is directed towards judging emotions. Significance Artificial agents can be used to assess how factors like anthropomorphism affect neural response to the perception of human actions. PMID:20657777

  17. Towards the Verification of Human-Robot Teams

    NASA Technical Reports Server (NTRS)

    Fisher, Michael; Pearce, Edward; Wooldridge, Mike; Sierhuis, Maarten; Visser, Willem; Bordini, Rafael H.

    2005-01-01

    Human-Agent collaboration is increasingly important. Not only do high-profile activities such as NASA missions to Mars intend to employ such teams, but our everyday activities involving interaction with computational devices fall into this category. In many of these scenarios, we are expected to trust that the agents will do what we expect and that the agents and humans will work together as expected. But how can we be sure? In this paper, we bring together previous work on the verification of multi-agent systems with work on the modelling of human-agent teamwork. Specifically, we target human-robot teamwork. This paper provides an outline of the way we are using formal verification techniques in order to analyse such collaborative activities. A particular application is the analysis of human-robot teams intended for use in future space exploration.

  18. Human-machine interfaces based on EMG and EEG applied to robotic systems.

    PubMed

    Ferreira, Andre; Celeste, Wanderley C; Cheein, Fernando A; Bastos-Filho, Teodiano F; Sarcinelli-Filho, Mario; Carelli, Ricardo

    2008-03-26

    Two different Human-Machine Interfaces (HMIs) were developed, both based on electro-biological signals: one on the EMG signal and the other on the EEG signal. Two major features of such interfaces are their relatively simple data acquisition and processing systems, which require few hardware and software resources, making them low-cost solutions both computationally and financially. Both interfaces were applied to robotic systems, and their performances are analyzed here. The EMG-based HMI was tested on a mobile robot, while the EEG-based HMI was tested on both a mobile robot and a robotic manipulator. Experiments using the EMG-based HMI were carried out by eight individuals, who were asked to perform ten eye blinks with each eye in order to test the eye-blink detection algorithm. An average success rate of about 95%, achieved by individuals able to blink both eyes, supports the conclusion that the system could be used to command devices. Experiments with EEG consisted of inviting 25 people (some of whom had suffered from meningitis or epilepsy) to test the system. All of them managed to operate the HMI after only one training session, and most of them learnt how to use the HMI in less than 15 minutes; the minimum and maximum training times observed were 3 and 50 minutes, respectively. These works are the initial parts of a system to help people with neuromotor diseases, including those with severe dysfunctions. The next steps are to convert a commercial wheelchair into an autonomous mobile vehicle; to implement the HMI onboard the resulting autonomous wheelchair to assist people with motor diseases; and to explore the potential of EEG signals, making the EEG-based HMI more robust and faster, with the aim of helping individuals with severe motor dysfunctions.
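    The abstract does not specify the blink-detection algorithm. A common approach, sketched here purely as an assumption, thresholds the rectified, smoothed EMG envelope and enforces a refractory period so each blink is counted once:

```python
import numpy as np

def detect_blinks(emg, fs, threshold=None, min_gap=0.2):
    """Detect eye blinks in an EMG trace by thresholding its rectified,
    smoothed envelope (hypothetical sketch, not the paper's detector).

    emg: 1-D signal; fs: sampling rate (Hz); min_gap: refractory period
    in seconds. Returns sample indices of detected blink onsets.
    """
    win = int(0.05 * fs)                                   # 50 ms smoothing window
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    if threshold is None:
        # adaptive fallback: mean + 3 standard deviations of the envelope
        threshold = envelope.mean() + 3 * envelope.std()
    above = envelope > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1   # rising edges
    keep, last = [], -np.inf
    for i in onsets:                                       # refractory filtering
        if i - last >= min_gap * fs:
            keep.append(int(i))
            last = i
    return keep
```

    With a fixed threshold this is deterministic; the adaptive fallback is only a convenience for signals whose baseline noise level is unknown.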

  19. All-Terrain Intelligent Robot Braves Battlefront to Save Lives

    NASA Technical Reports Server (NTRS)

    2005-01-01

    As NASA's lead center for creating robotic spacecraft and rovers, the Jet Propulsion Laboratory (JPL) builds smart machines that can perform very complicated tasks, far, far away from the homeland. JPL's robotic proficiency is making an impact millions of miles away on Mars, where two rovers are presently unlocking the secrets of the Red Planet's rugged terrain, and thousands of miles away in the embattled regions of Iraq and Afghanistan, where robots sown from the seeds of JPL machines have been deployed to be the "eyes and ears" of humans on the front line. This commercial offspring, known as the PackBot Tactical Mobile Robot, is manufactured by iRobot, Inc., of Burlington, Massachusetts.

  20. Self-development of visual space perception by learning from the hand

    NASA Astrophysics Data System (ADS)

    Chung, Jae-Moon; Ohnishi, Noboru

    1998-10-01

    Animals are thought to develop the ability to interpret the images captured on their retinas gradually from birth, without any external supervisor. We believe this visual function is acquired together with the development of hand reaching and grasping, which are executed through active interaction with the environment. From the viewpoint that the hand teaches the eye, this paper shows how visual space perception develops in a simulated robot. The robot has a simplified human-like structure used for hand-eye coordination. The experimental results suggest that the method can describe how visual space perception develops in biological systems. In addition, the description offers a way to self-calibrate the vision of an intelligent robot in a learn-by-doing manner, without external supervision.
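    The "hand teaches eye" idea can be illustrated in a few lines: the robot's own proprioception (hand position from forward kinematics) supervises a map from image coordinates to 3D position, with no external teacher. This sketch assumes a hypothetical two-camera affine imaging model, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical affine cameras (units: pixels per metre, pixel offsets)
A1 = np.array([[120.0, 5.0, -3.0], [4.0, 118.0, 2.0]])
b1 = np.array([320.0, 240.0])
A2 = np.array([[115.0, -4.0, 60.0], [2.0, 117.0, -50.0]])
b2 = np.array([300.0, 250.0])

def observe(xyz):
    """Stack both cameras' pixel coordinates for a 3D point (or batch)."""
    return np.concatenate([xyz @ A1.T + b1, xyz @ A2.T + b2], axis=-1)

# Self-generated reaching movements: proprioception gives hand_xyz "for free"
hand_xyz = rng.uniform(-0.3, 0.3, size=(200, 3))
pixels = observe(hand_xyz) + rng.normal(0.0, 0.3, size=(200, 4))  # noisy images

# Learn the inverse map (pixels -> xyz) by least squares on homogeneous pixels
X = np.hstack([pixels, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(X, hand_xyz, rcond=None)

# The self-calibrated "eye" now localises a novel hand pose from images alone
test_xyz = np.array([0.1, -0.2, 0.05])
pred_xyz = np.append(observe(test_xyz), 1.0) @ W
```

    The point of the sketch is that every training pair is generated by the robot's own movements, so no external calibration target or supervisor is needed.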

  1. Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie

    2003-01-01

    A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.

  2. View of the Cupola RWS taken with Fish-Eye Lens

    NASA Image and Video Library

    2010-05-08

    ISS023-E-039983 (8 May 2010) --- A fish-eye lens attached to an electronic still camera was used by an Expedition 23 crew member to capture this image of the robotic workstation in the Cupola of the International Space Station.

  3. Model-free learning on robot kinematic chains using a nested multi-agent topology

    NASA Astrophysics Data System (ADS)

    Karigiannis, John N.; Tzafestas, Costas S.

    2016-11-01

    This paper proposes a model-free learning scheme for the developmental acquisition of robot kinematic control and dexterous manipulation skills. The approach is based on a nested-hierarchical multi-agent architecture that intuitively encapsulates the topology of robot kinematic chains, where the activity of each independent degree-of-freedom (DOF) is finally mapped onto a distinct agent. Each one of those agents progressively evolves a local kinematic control strategy in a game-theoretic sense, that is, based on a partial (local) view of the whole system topology, which is incrementally updated through a recursive communication process according to the nested-hierarchical topology. Learning is thus approached not through demonstration and training but through an autonomous self-exploration process. A fuzzy reinforcement learning scheme is employed within each agent to enable efficient exploration in a continuous state-action domain. This paper in fact constitutes a proof of concept, demonstrating that global dexterous manipulation skills can indeed evolve through such a distributed iterative learning of local agent sensorimotor mappings. The main motivation behind the development of such an incremental multi-agent topology is to enhance system modularity, to facilitate extensibility to more complex problem domains and to improve robustness with respect to structural variations including unpredictable internal failures. These attributes of the proposed system are assessed in this paper through numerical experiments in different robot manipulation task scenarios, involving both single and multi-robot kinematic chains. The generalisation capacity of the learning scheme is experimentally assessed and robustness properties of the multi-agent system are also evaluated with respect to unpredictable variations in the kinematic topology. Furthermore, these numerical experiments demonstrate the scalability properties of the proposed nested-hierarchical architecture, where new agents can be recursively added in the hierarchy to encapsulate individual active DOFs. The results presented in this paper demonstrate the feasibility of such a distributed multi-agent control framework, showing that the solutions which emerge are plausible and near-optimal. Numerical efficiency and computational cost issues are also discussed.
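    The core idea, one learning agent per DOF acting on a local view, can be sketched with a toy planar arm. The paper employs fuzzy reinforcement learning; this illustration substitutes a much simpler epsilon-greedy one-step improvement per joint, so it shows only the per-DOF agent decomposition, not the actual algorithm:

```python
import math
import random

L1, L2 = 1.0, 0.8                      # link lengths of a planar 2-DOF arm
target = (1.2, 0.9)                    # reachable task-space goal

def end_effector(q):
    """Forward kinematics of the planar 2-link arm."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def dist(q):
    """Global cost: end-effector distance to the target."""
    x, y = end_effector(q)
    return math.hypot(x - target[0], y - target[1])

random.seed(1)
q = [0.0, 0.0]                         # joint angles, one per DOF agent
step = 0.05
for _ in range(500):
    for dof in (0, 1):                 # each DOF agent acts in turn
        best_a, best_d = 0.0, dist(q)
        for a in (-step, step):        # agent evaluates its two local actions
            q[dof] += a
            d = dist(q)
            q[dof] -= a
            if d < best_d:
                best_a, best_d = a, d
        if random.random() < 0.05:     # occasional exploration
            best_a = random.choice((-step, step))
        q[dof] += best_a
```

    Even with each agent seeing only its own joint and a shared scalar reward, the arm converges near the target, which is the distributed-learning claim of the paper in miniature.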

  4. New robotics: design principles for intelligent systems.

    PubMed

    Pfeifer, Rolf; Iida, Fumiya; Bongard, Josh

    2005-01-01

    New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only "nice to have" but is in fact a necessary tool for designing embodied agents.

  5. Human-Robot Teaming in a Multi-Agent Space Assembly Task

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of humans with the survivability and physical capabilities of robots is proposed and illustrated by example. Such teams are useful for large-scale, complex missions requiring dispersed manipulation, locomotion and sensing capabilities. To study collaboration modalities within a multi-agent EVA team, a 1-g test is conducted with humans and robots working together in various supporting roles.

  6. Biobotic insect swarm based sensor networks for search and rescue

    NASA Astrophysics Data System (ADS)

    Bozkurt, Alper; Lobaton, Edgar; Sichitiu, Mihail; Hedrick, Tyson; Latif, Tahmid; Dirafzoon, Alireza; Whitmire, Eric; Verderber, Alexander; Marin, Juan; Xiong, Hong

    2014-06-01

    The potential benefits of distributed robotics systems in applications requiring situational awareness, such as search-and-rescue in emergency situations, are indisputable. The efficiency of such systems requires robotic agents capable of coping with uncertain and dynamic environmental conditions. For example, after an earthquake, a tremendous effort is spent over days to reach surviving victims; robotic swarms or other distributed robotic systems could play a great role in achieving this faster. However, current technology falls short of offering centimeter-scale mobile agents that can function effectively under such conditions. Insects, the inspiration for many robotic swarms, exhibit an unmatched ability to navigate through such environments while successfully maintaining control and stability. We have benefitted from recent developments in neural engineering and neuromuscular stimulation research to fuse the locomotory advantages of insects with the latest developments in wireless networking technologies, enabling biobotic insect agents to function as search-and-rescue agents. Our research efforts towards this goal include development of biobot electronic backpack technologies; establishment of biobot tracking testbeds to evaluate locomotion control efficiency; investigation of biobotic control strategies with Gromphadorhina portentosa cockroaches and Manduca sexta moths; establishment of a localization and communication infrastructure; modeling and controlling collective motion by learning deterministic and stochastic motion models; topological motion modeling based on these models; and the development of a swarm robotic platform to be used as a testbed for our algorithms.

  7. Control Architecture for Robotic Agent Command and Sensing

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance; Aghazarian, Hrand; Estlin, Tara; Gaines, Daniel

    2008-01-01

    Control Architecture for Robotic Agent Command and Sensing (CARACaS) is a recent product of a continuing effort to develop architectures for controlling either a single autonomous robotic vehicle or multiple cooperating but otherwise autonomous robotic vehicles. CARACaS is potentially applicable to diverse robotic systems that could include aircraft, spacecraft, ground vehicles, surface water vessels, and/or underwater vessels. CARACaS includes an integral combination of three coupled agents: a dynamic planning engine, a behavior engine, and a perception engine. The perception and dynamic planning engines are also coupled with a memory in the form of a world model. CARACaS is intended to satisfy the need for two major capabilities essential for proper functioning of an autonomous robotic system: a capability for deterministic reaction to unanticipated occurrences and a capability for re-planning in the face of changing goals, conditions, or resources. The behavior engine incorporates the multi-agent control architecture, called CAMPOUT, described in An Architecture for Controlling Multiple Robots (NPO-30345), NASA Tech Briefs, Vol. 28, No. 11 (November 2004), page 65. CAMPOUT is used to develop behavior-composition and -coordination mechanisms. Real-time process algebra operators are used to compose a behavior network for any given mission scenario. These operators afford a capability for producing a formally correct kernel of behaviors that guarantee predictable performance. By use of a method based on multi-objective decision theory (MODT), recommendations from multiple behaviors are combined to form a set of control actions that represents their consensus. In this approach, all behaviors contribute simultaneously to the control of the robotic system in a cooperative rather than a competitive manner. This approach guarantees a solution that is good enough with respect to resolution of complex, possibly conflicting goals within the constraints of the mission to be accomplished by the vehicle(s).
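    The cooperative (rather than competitive) fusion of behavior recommendations can be illustrated as a weighted consensus over candidate actions. The abstract does not give the MODT formulation, so the behaviors, scores, and weights below are purely illustrative:

```python
import numpy as np

# Candidate discrete steering actions for the vehicle
actions = ["hard_left", "left", "straight", "right", "hard_right"]

# Each behavior scores every candidate action in [0, 1] (illustrative values)
preferences = {
    "avoid_obstacle": np.array([0.9, 0.8, 0.1, 0.3, 0.4]),
    "seek_goal":      np.array([0.1, 0.3, 0.9, 0.6, 0.2]),
    "stay_in_lane":   np.array([0.2, 0.6, 0.9, 0.6, 0.2]),
}
weights = {"avoid_obstacle": 2.0, "seek_goal": 1.0, "stay_in_lane": 0.5}

# Consensus: every behavior contributes to every action's score; the chosen
# action maximises the weighted sum -- no single behavior simply "wins"
score = sum(weights[b] * p for b, p in preferences.items())
consensus = actions[int(np.argmax(score))]
```

    Here the consensus is a moderate left turn: a compromise that the obstacle-avoidance behavior alone (which prefers a hard left) and the goal-seeking behavior alone (which prefers going straight) would each have overridden under winner-take-all arbitration.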

  8. Friendship with a robot: Children's perception of similarity between a robot's physical and virtual embodiment that supports diabetes self-management.

    PubMed

    Sinoo, Claudia; van der Pal, Sylvia; Blanson Henkemans, Olivier A; Keizer, Anouk; Bierman, Bert P B; Looije, Rosemarijn; Neerincx, Mark A

    2018-07-01

    The PAL project develops a conversational agent with a physical (robot) and virtual (avatar) embodiment to support diabetes self-management of children ubiquitously. This paper assesses 1) the effect of perceived similarity between robot and avatar on children's friendship towards the avatar, and 2) the effect of this friendship on (a) the usability of a self-management application containing the avatar and (b) children's motivation to play with it. During a four-day diabetes camp in the Netherlands, 21 children participated in interactions with both agent embodiments. Questionnaires measured perceived similarity, friendship, motivation to play with the app, and its usability. Children felt stronger friendship towards the physical robot than towards the avatar. The more children perceived the robot and its avatar as the same agent, the stronger their friendship with the avatar was. The stronger their friendship with the avatar, the more they were motivated to play with the app and the higher the app scored on usability. The combination of physical and virtual embodiments seems to provide a unique opportunity for building ubiquitous, long-term child-agent friendships. An avatar complementing a physical robot in health care could increase children's motivation and adherence to use self-management support systems. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Nonuniform Deployment of Autonomous Agents in Harbor-Like Environments

    DTIC Science & Technology

    2014-11-12


  10. Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2003-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, robotic assistant, crew in a local habitat, and mission support team. Software processes ('agents') implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.

  11. Da Vinci Xi Robot-Assisted Penetrating Keratoplasty.

    PubMed

    Chammas, Jimmy; Sauer, Arnaud; Pizzuto, Joëlle; Pouthier, Fabienne; Gaucher, David; Marescaux, Jacques; Mutter, Didier; Bourcier, Tristan

    2017-06-01

    This study aims (1) to investigate the feasibility of robot-assisted penetrating keratoplasty (PK) using the new Da Vinci Xi Surgical System and (2) to report what we believe to be the first use of this system in experimental eye surgery. Robot-assisted PK procedures were performed on human corneal transplants using the Da Vinci Xi Surgical System. After an 8-mm corneal trephination, four interrupted sutures and one 10.0 monofilament running suture were made. For each procedure, duration and successful completion of the surgery as well as any unexpected events were assessed. The depth of the corneal sutures was checked postoperatively using spectral-domain optical coherence tomography (SD-OCT). Robot-assisted PK was successfully performed on 12 corneas. The Da Vinci Xi Surgical System provided the necessary dexterity to perform the different steps of surgery. The mean duration of the procedures was 43.4 ± 8.9 minutes (range: 28.5-61.1 minutes). There were no unexpected intraoperative events. SD-OCT confirmed that the sutures were placed at the appropriate depth. We confirm the feasibility of robot-assisted PK with the new Da Vinci Surgical System and report the first use of the Xi model in experimental eye surgery. Operative time of robot-assisted PK surgery is now close to that of conventional manual surgery due to both improvement of the optical system and the presence of microsurgical instruments. Experimentations will allow the advantages of robot-assisted microsurgery to be identified while underlining the improvements and innovations necessary for clinical use.

  12. Software for Automation of Real-Time Agents, Version 2

    NASA Technical Reports Server (NTRS)

    Fisher, Forest; Estlin, Tara; Gaines, Daniel; Schaffer, Steve; Chouinard, Caroline; Engelhardt, Barbara; Wilklow, Colette; Mutz, Darren; Knight, Russell; Rabideau, Gregg

    2005-01-01

    Version 2 of Closed Loop Execution and Recovery (CLEaR) has been developed. CLEaR is an artificial intelligence computer program for use in planning and execution of actions of autonomous agents, including, for example, Deep Space Network (DSN) antenna ground stations, robotic exploratory ground vehicles (rovers), robotic aircraft (UAVs), and robotic spacecraft. CLEaR automates the generation and execution of command sequences, monitoring the sequence execution, and modifying the command sequence in response to execution deviations and failures as well as new goals for the agent to achieve. The development of CLEaR has focused on the unification of planning and execution to increase the ability of the autonomous agent to perform under tight resource and time constraints, coupled with uncertainty in the time and resources required to perform a task. This unification is realized by extending the traditional three-tier robotic control architecture by increasing the interaction between the software components that perform deliberation and reactive functions. The increase in interaction reduces the need to replan, enables earlier detection of the need to replan, and enables replanning to occur before an agent enters a state of failure.

  13. Eye-in-Hand Manipulation for Remote Handling: Experimental Setup

    NASA Astrophysics Data System (ADS)

    Niu, Longchuan; Suominen, Olli; Aref, Mohammad M.; Mattila, Jouni; Ruiz, Emilio; Esque, Salvador

    2018-03-01

    A prototype for eye-in-hand manipulation in the context of remote handling in the International Thermonuclear Experimental Reactor (ITER) is presented in this paper. The setup consists of an industrial robot manipulator with a modified open control architecture and equipped with a pair of stereoscopic cameras, a force/torque sensor, and pneumatic tools. It is controlled through a haptic device in a mock-up environment. The industrial robot controller has been replaced by a single industrial PC running Xenomai that has a real-time connection to both the robot controller and another Linux PC running as the controller for the haptic device. The new remote handling control environment enables further development of advanced control schemes for autonomous and semi-autonomous manipulation tasks. This setup benefits from a stereovision system for accurate tracking of the target objects with irregular shapes. The overall environmental setup successfully demonstrates the required robustness and precision that remote handling tasks need.

  14. Towards a new modality-independent interface for a robotic wheelchair.

    PubMed

    Bastos-Filho, Teodiano Freire; Cheein, Fernando Auat; Müller, Sandra Mara Torres; Celeste, Wanderley Cardoso; de la Cruz, Celso; Cavalieri, Daniel Cruz; Sarcinelli-Filho, Mário; Amaral, Paulo Faria Santos; Perez, Elisa; Soria, Carlos Miguel; Carelli, Ricardo

    2014-05-01

    This work presents the development of a robotic wheelchair that can be commanded by users in a supervised way or by a fully automatic unsupervised navigation system. It provides flexibility to choose different modalities to command the wheelchair, in addition to being suitable for people with different levels of disability. Users can command the wheelchair based on their eye blinks, eye movements, head movements, by sip-and-puff, and through brain signals. The wheelchair can also operate like an auto-guided vehicle, following metallic tapes, or in an autonomous way. The system is provided with an easy-to-use, flexible graphical user interface onboard a personal digital assistant, which allows users to choose commands to be sent to the robotic wheelchair. Several experiments were carried out with people with disabilities, and the results validate the developed system as an assistive tool for people with distinct levels of disability.

  15. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery.

    PubMed

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2011-06-01

    This paper presents the design of a tele-robotic microsurgical platform designed for development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information.

  16. Prototyping a Hybrid Cooperative and Tele-robotic Surgical System for Retinal Microsurgery

    PubMed Central

    Balicki, Marcin; Xia, Tian; Jung, Min Yang; Deguet, Anton; Vagvolgyi, Balazs; Kazanzides, Peter; Taylor, Russell

    2013-01-01

    This paper presents the design of a tele-robotic microsurgical platform intended for the development of cooperative and tele-operative control schemes, sensor-based smart instruments, user interfaces, and new surgical techniques, with eye surgery as the driving application. The system is built using the distributed component-based cisst libraries and the Surgical Assistant Workstation framework. It includes a cooperatively controlled EyeRobot2, a da Vinci Master manipulator, and a remote stereo visualization system. We use constrained-optimization-based virtual fixture control to provide a Virtual Remote-Center-of-Motion (vRCM) and haptic feedback. Such a system can be used in a hybrid setup, combining local cooperative control with remote tele-operation, where an experienced surgeon can provide hand-over-hand tutoring to a novice user. In another scheme, the system can provide haptic feedback based on virtual fixtures constructed from real-time force and proximity sensor information. PMID:24398557

  17. Robotic surgery and hemostatic agents in partial nephrectomy: a high rate of success without vascular clamping.

    PubMed

    Morelli, Luca; Morelli, John; Palmeri, Matteo; D'Isidoro, Cristiano; Kauffmann, Emanuele Federico; Tartaglia, Dario; Caprili, Giovanni; Pisano, Roberta; Guadagni, Simone; Di Franco, Gregorio; Di Candio, Giulio; Mosca, Franco

    2015-09-01

    Robot-assisted partial nephrectomy has been proposed as a technique to overcome technical challenges of laparoscopic partial nephrectomy. We prospectively collected and analyzed data from 31 patients who underwent robotic partial nephrectomy with systematic use of hemostatic agents, between February 2009 and October 2014. Thirty-three renal tumors were treated in 31 patients. There were no conversions to open surgery, intraoperative complications, or blood transfusions. The mean size of the resected tumors was 27 mm (median 20 mm, range 5-40 mm). Twenty-seven of 33 lesions (82%) did not require vascular clamping and therefore were treated in the absence of ischemia. All margins were negative. The high partial nephrectomy success rate without vascular clamping suggests that robotic nephron-sparing surgery with systematic use of hemostatic agents may be a safe, effective method to completely avoid ischemia in the treatment of selected renal masses.

  18. A Face Attention Technique for a Robot Able to Interpret Facial Expressions

    NASA Astrophysics Data System (ADS)

    Simplício, Carlos; Prado, José; Dias, Jorge

    Automatic recognition of facial expressions using vision is an important subject in human-robot interaction. Proposed here are a human-face focus-of-attention technique and a facial expression classifier (a Dynamic Bayesian Network) to be incorporated in an autonomous mobile agent whose hardware comprises a robotic platform and a robotic head. The focus-of-attention technique is based on the symmetry presented by human faces. Using the output of this module, the autonomous agent always keeps the human face targeted frontally: the robot platform performs an arc centered at the human, and the robotic head, when necessary, moves in synchronization. In the proposed probabilistic classifier, information is propagated from the previous instant, in a lower level of the network, to the current instant. Moreover, to recognize facial expressions, not only positive evidence but also negative evidence is used.

  19. A 3-DOF parallel robot with spherical motion for the rehabilitation and evaluation of balance performance.

    PubMed

    Patanè, Fabrizio; Cappa, Paolo

    2011-04-01

    In this paper a novel electrically actuated parallel robot with three degrees of freedom (3 DOF) for dynamic postural studies is presented. The design is described, the solution to the inverse kinematics is found, and a numerical solution for the direct kinematics is proposed. The workspace of the implemented robot is characterized by an angular range of motion of about ±10° for roll and pitch when yaw is in the range ±15°. The robot was constructed and its orientation accuracy was tested by means of an optoelectronic system, imposing a sinusoidal input with a frequency of 1 Hz and an amplitude of 10° along the three axes in sequence. The collected data indicated a phase delay of 1° and an amplitude error of 0.5%-1.5%; similar values were observed for cross-axis sensitivity errors. We also conducted a clinical application on a group of normal subjects standing in equilibrium on the robot base with eyes open (EO) and eyes closed (EC), while the base was rotated with a tri-axial sinusoidal trajectory with a frequency of 0.5 Hz and amplitudes of 5° for roll and pitch and 10° for yaw. The postural configuration of the subjects was recorded with an optoelectronic system. However, due to the mainly technical nature of this paper, only initial validation outcomes are reported here. The clinical application showed that only the tilt and displacement in the sagittal plane of the head, trunk, and pelvis in the trials conducted with eyes closed were affected by drift, and that the reduction of the yaw rotation and of the mediolateral translation was not a controlled parameter, as it was for the other anatomical directions.

  20. Construction of multi-agent mobile robots control system in the problem of persecution with using a modified reinforcement learning method based on neural networks

    NASA Astrophysics Data System (ADS)

    Patkin, M. L.; Rogachev, G. N.

    2018-02-01

    A method for constructing a multi-agent control system for mobile robots based on reinforcement learning with deep neural networks is considered. The control system is synthesized by reinforcement learning with a modified Actor-Critic method, in which the Actor module is divided into an Action Actor and a Communication Actor in order to simultaneously control the mobile robots and communicate with partners. Communication is carried out by sending partners, at each step, a vector of real numbers that is appended to their observation vectors and affects their behaviour. The functions of the Actors and the Critic are approximated by deep neural networks. The Critic's value function is trained using the TD-error method and the Actor's function using DDPG. The Communication Actor's neural network is trained through gradients received from partner agents. An environment featuring cooperative multi-agent interaction was developed, and a computer simulation of the method was carried out on the control problem of two robots pursuing two goals.
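
    The split-actor idea can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the linear maps, dimensions, and tanh squashing stand in for the deep networks, and the TD-error/DDPG training loop is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class Agent:
    """Sketch of the split-actor idea: an Action Actor head emits the
    control action, while a Communication Actor head emits a real-valued
    message that is appended to the partner's observation at the next
    step. Linear maps stand in for the deep networks; training omitted."""

    def __init__(self, obs_dim, act_dim, msg_dim):
        in_dim = obs_dim + msg_dim          # own observation + partner message
        self.W_action = rng.normal(0.0, 0.1, (act_dim, in_dim))
        self.W_comm = rng.normal(0.0, 0.1, (msg_dim, in_dim))

    def step(self, obs, partner_msg):
        # Both heads read the same augmented observation.
        x = np.concatenate([obs, partner_msg])
        return np.tanh(self.W_action @ x), np.tanh(self.W_comm @ x)

obs_dim, act_dim, msg_dim = 4, 2, 3
a = Agent(obs_dim, act_dim, msg_dim)
b = Agent(obs_dim, act_dim, msg_dim)
msg_a = np.zeros(msg_dim)
msg_b = np.zeros(msg_dim)
for _ in range(5):                          # two pursuers exchanging messages
    obs_a = rng.normal(size=obs_dim)
    obs_b = rng.normal(size=obs_dim)
    act_a, msg_a = a.step(obs_a, msg_b)
    act_b, msg_b = b.step(obs_b, msg_a)
```

    In the paper's scheme, gradients would flow from each agent's Critic back through the partner's Communication Actor weights, which is what makes the messages useful rather than arbitrary.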

  1. Cooperative Robot Localization Using Event-Triggered Estimation

    NASA Astrophysics Data System (ADS)

    Iglesias Echevarria, David I.

    It is known that multiple-robot systems that need to cooperate to perform certain activities or tasks incur high energy costs that hinder their autonomous functioning and limit the benefits these kinds of platforms provide to humans. This work presents a communications-based method for cooperative robot localization. Implementing concepts from event-triggered estimation, used with success in wireless sensor networks but rarely for robot localization, agents send measurements to their neighbors only when the expected novelty of this information is high. Since all agents know the condition that triggers whether a measurement is sent, the lack of a measurement is itself informative and is fused into state estimates. When agents receive neither direct nor indirect measurements of all others, they employ a covariance intersection fusion rule to keep the local covariance error metric bounded. A comprehensive analysis of the proposed algorithm and its estimation performance in a variety of scenarios is performed, and the algorithm is compared to similar cooperative localization approaches. Extensive simulations illustrate the effectiveness of this method.
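
    The covariance intersection rule mentioned above can be sketched as follows. The grid search over the weight ω is one common way to choose it; the work's exact choice of ω is an assumption here.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb):
    """Fuse two estimates whose cross-correlation is unknown.

    For any weight omega in [0, 1] the fused covariance is guaranteed
    to be consistent (never overconfident); omega is chosen here by a
    coarse grid search minimizing the trace of the fused covariance."""
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    candidates = np.linspace(0.0, 1.0, 101)
    traces = [np.trace(np.linalg.inv(w * Pa_inv + (1.0 - w) * Pb_inv))
              for w in candidates]
    omega = candidates[int(np.argmin(traces))]
    # Convex combination in information (inverse-covariance) space.
    P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P
```

    Unlike a Kalman update, this never assumes independence between the two estimates, which is exactly why it keeps the local covariance bounded when cross-correlations between robots are untracked.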

  2. Simultaneous Deployment and Tracking Multi-Robot Strategies with Connectivity Maintenance

    PubMed Central

    Tardós, Javier; Aragues, Rosario; Sagüés, Carlos; Rubio, Carlos

    2018-01-01

    Multi-robot teams composed of ground and aerial vehicles have gained attention during the last few years. We present a scenario where both types of robots must monitor the same area from different viewpoints. In this paper, we propose two Lloyd-based tracking strategies to allow the ground robots (agents) to follow the aerial ones (targets) while keeping connectivity between the agents. The first strategy establishes density functions on the environment so that the targets acquire more importance than other zones, while the second iteratively modifies the virtual limits of the working area depending on the positions of the targets. We consider connectivity maintenance because coverage tasks tend to spread the agents as much as possible; this is addressed by restricting their motions so that they keep the links of a minimum spanning tree of the communication graph. We provide a thorough parametric study of the performance of the proposed strategies under several simulated scenarios. In addition, the methods are implemented and tested using realistic robotic simulation environments and real experiments. PMID:29558446
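
    The minimum spanning tree of the communication graph can be computed with Prim's algorithm. This sketch assumes a complete distance-weighted graph over robot positions; the paper's own graph construction may differ.

```python
import numpy as np

def mst_links(positions):
    """Prim's algorithm on the complete graph of inter-robot distances.
    Returns the n-1 links (i, j) of a minimum spanning tree; restricting
    motion so these links stay within communication range keeps the
    whole team connected."""
    pts = np.asarray(positions, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    in_tree = {0}
    links = []
    while len(in_tree) < n:
        best = None
        # Cheapest edge crossing from the tree to a robot not yet in it.
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    if best is None or dist[i, j] < dist[best[0], best[1]]:
                        best = (i, j)
        links.append(best)
        in_tree.add(best[1])
    return links
```

    Preserving only these n-1 links is the least restrictive connectivity constraint: any other edge of the communication graph may be broken by the coverage motion without disconnecting the team.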

  3. Observation and imitation of actions performed by humans, androids, and robots: an EMG study

    PubMed Central

    Hofree, Galit; Urgen, Burcu A.; Winkielman, Piotr; Saygin, Ayse P.

    2015-01-01

    Understanding others' actions is essential for functioning in the physical and social world. In the past two decades research has shown that action perception involves the motor system, supporting theories that we understand others' behavior via embodied motor simulation. Recently, the empirical approach to action perception has been facilitated by well-controlled artificial stimuli, such as robots. One broad question this approach can address is which aspects of similarity between the observer and the observed agent facilitate motor simulation. Since humans have evolved among other humans and animals, using artificial stimuli such as robots allows us to probe whether our social perceptual systems are specifically tuned to process other biological entities. In this study, we used humanoid robots with different degrees of human-likeness in appearance and motion, along with electromyography (EMG) to measure muscle activity in participants' arms while they either observed or imitated videos of three agents producing actions with their right arm. The agents were a Human (biological appearance and motion), a Robot (mechanical appearance and motion), and an Android (biological appearance and mechanical motion). Right-arm muscle activity increased when participants imitated all agents. Increased muscle activation was also found in the stationary arm, both during imitation and observation. Furthermore, muscle activity was sensitive to motion dynamics: activity was significantly stronger for imitation of the human than of both mechanical agents. There was also a relationship between the dynamics of the muscle activity and the motion dynamics in the stimuli. Overall, our data indicate that motor simulation is not limited to observation and imitation of agents with a biological appearance, but is also found for robots. However, we also found sensitivity to human motion in the EMG responses. Combining data from multiple methods allows us to obtain a more complete picture of action understanding and the underlying neural computations. PMID:26150782

  4. Investigations Into Internal and External Aspects of Dynamic Agent-Environment Couplings

    NASA Astrophysics Data System (ADS)

    Dautenhahn, Kerstin

    This paper originates from my work on `social agents'. An issue which I consider important to this kind of research is the dynamic coupling of an agent with its social and non-social environment. I hypothesize `internal dynamics' inside an agent as a basic step towards understanding. The paper therefore focuses on the internal and external dynamics which couple an agent to its environment. The issue of embodiment in animals and artifacts and its relation to `social dynamics' is discussed first. I argue that embodiment is linked to a concept of a body and is not necessarily given when running a control program on robot hardware. I stress the individual characteristics of an embodied cognitive system, as well as its social embeddedness. I outline the framework of a physical-psychological state space which changes dynamically in a self-modifying way as a holistic approach towards embodied human and artificial cognition. This framework is meant to discuss internal and external dynamics of an embodied, natural or artificial agent. In order to stress the importance of a dynamic memory I introduce the concept of an `autobiographical agent'. The second part of the paper gives an example of the implementation of a physical agent, a robot, which is dynamically coupled to its environment by balancing on a seesaw. For the control of the robot a behavior-oriented approach using the dynamical systems metaphor is used. The problem is studied through building a complete and co-adapted robot-environment system. A seesaw which varies its orientation with one or two degrees of freedom is used as the artificial `habitat'. The problem of stabilizing the body axis by active motion on a seesaw is solved by using two inclination sensors and a parallel, behavior-oriented control architecture. Some experiments are described which demonstrate the exploitation of the dynamics of the robot-environment system.

  5. Designing and implementing transparency for real time inspection of autonomous robots

    NASA Astrophysics Data System (ADS)

    Theodorou, Andreas; Wortham, Robert H.; Bryson, Joanna J.

    2017-07-01

    The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering and expanding upon other prominent definitions found in the literature, we provide a robust definition of transparency as a mechanism to expose the decision-making of a robot. The paper continues by addressing potential design decisions developers need to consider when designing and developing transparent systems. Finally, we describe our new interactive intelligence editor, designed to visualise, develop and debug real-time intelligence.

  6. This "Ethical Trap" Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.

    PubMed

    Miller, Keith W; Wolf, Marty J; Grodzinsky, Frances

    2017-04-01

    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.

  7. Cable-driven elastic parallel humanoid head with face tracking for Autism Spectrum Disorder interventions.

    PubMed

    Su, Hao; Dickstein-Fischer, Laurie; Harrington, Kevin; Fu, Qiushi; Lu, Weina; Huang, Haibo; Cole, Gregory; Fischer, Gregory S

    2010-01-01

    This paper presents the development of a new prismatic actuation approach and its application in human-safe humanoid head design. To reduce actuator output impedance and mitigate unexpected external shocks, the prismatic actuation method uses cables to drive a piston with a preloaded spring. By leveraging the advantages of parallel manipulators and cable-driven mechanisms, the developed neck has a parallel manipulator embodiment with two cable-driven limbs embedded with preloaded springs and one passive limb. The eye mechanism is adapted for a low-cost webcam with a succinct "ball-in-socket" structure. Based on human head anatomy and biomimetics, the neck has 3 degrees of freedom (DOF) of motion: pan, tilt, and one decoupled roll, while each eye has independent pan and synchronous tilt motion (3-DOF eyes). A Kalman-filter-based face tracking algorithm is implemented to interact with the human. This neck and eye structure is translatable to other human-safe humanoid robots. The robot's appearance reflects the non-threatening image of a penguin, which can be translated into a possible therapeutic intervention for children with Autism Spectrum Disorders.
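
    A constant-velocity Kalman filter of the kind used for face tracking can be sketched as below. The state layout and noise levels are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_track(measurements, dt=1.0 / 30.0, q=50.0, r=4.0):
    """Constant-velocity Kalman filter over 2-D face-center detections.

    State is [x, y, vx, vy] in pixels and pixels/s; q and r set the
    process and measurement noise (illustrative values)."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                      # position += velocity * dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                     # we observe position only
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = 100.0 * np.eye(4)
    track = []
    for z in measurements:
        x = F @ x                               # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)  # update
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return track
```

    The filtered track smooths jittery per-frame detections and keeps predicting through brief detection dropouts, which is what makes it suitable for driving the eye/neck gaze motors.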

  8. Applying Biomimetic Algorithms for Extra-Terrestrial Habitat Generation

    NASA Technical Reports Server (NTRS)

    Birge, Brian

    2012-01-01

    The objective is to simulate and optimize distributed cooperation among a network of robots tasked with cooperative excavation on an extra-terrestrial surface, and additionally to examine the concept of directed Emergence among a group of limited artificially intelligent agents. Emergence is the concept of achieving complex results from very simple rules or interactions. For example, in a termite mound each individual termite does not carry a blueprint of its home in a global sense, but their interactions, based strictly on local desires, create a complex superstructure. Leveraging this Emergence concept in a simulation of cooperative agents (robots) allows an examination of how well a non-directed group strategy achieves specific results. Specifically, the simulation will be a testbed to evaluate population-based robotic exploration and cooperative strategies, leveraging the evolutionary teamwork approach in the face of uncertainty about the environment and partial loss of sensors. Checking against a cost function and 'social' constraints will optimize cooperation when excavating a simulated tunnel. Agents will act locally with non-local results. The rules by which the simulated robots interact will be optimized to the simplest possible for the desired result, leveraging Emergence. Sensor malfunction and line-of-sight issues will be incorporated into the simulation. This approach falls under Swarm Robotics, a subset of robot control concerned with finding ways to control large groups of robots. Swarm Robotics often contains biologically inspired approaches; research draws on observation of social insects as well as data from herding, schooling, and flocking animals. The application of biomimetic algorithms to manned space exploration is the method under consideration for further study.

  9. Incorporation of perception-based information in robot learning using fuzzy reinforcement learning agents

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiu; Meng, Qingchun; Guo, Zhongwen; Qu, Wiefen; Yin, Bo

    2002-04-01

    Robot learning in unstructured environments has proved to be an extremely challenging problem, mainly because of the many uncertainties always present in the real world. Human beings, on the other hand, seem to cope very well with uncertain and unpredictable environments, often relying on perception-based information. Furthermore, human beings can also utilize perceptions to guide their learning toward those parts of the perception-action space that are actually relevant to the task. Therefore, we conducted research aimed at improving robot learning through the incorporation of both perception-based and measurement-based information. To this end, a fuzzy reinforcement learning (FRL) agent is proposed in this paper. Based on a neural-fuzzy architecture, different kinds of information can be incorporated into the FRL agent to initialise its action network, critic network and evaluation feedback module so as to accelerate its learning. By making use of the global optimisation capability of GAs (genetic algorithms), a GA-based FRL (GAFRL) agent is presented to solve the local minima problem in traditional actor-critic reinforcement learning. On the other hand, with the prediction capability of the critic network, GAs can perform a more effective global search. Different GAFRL agents are constructed and verified using the simulation model of a physical biped robot. The simulation analysis shows that the biped learning rate for dynamic balance can be improved by incorporating perception-based information on biped balancing and walking evaluation. The biped robot can find applications in ocean exploration, detection and sea rescue, as well as military maritime activity.

  10. Robotics technology discipline

    NASA Technical Reports Server (NTRS)

    Montemerlo, Melvin D.

    1990-01-01

    Viewgraphs on robotics technology discipline for Space Station Freedom are presented. Topics covered include: mechanisms; sensors; systems engineering processes for integrated robotics; man/machine cooperative control; 3D-real-time machine perception; multiple arm redundancy control; manipulator control from a movable base; multi-agent reasoning; and surfacing evolution technologies.

  11. [A plane-based hand-eye calibration method for surgical robots].

    PubMed

    Zeng, Bowei; Meng, Fanle; Ding, Hui; Liu, Wenbo; Wu, Di; Wang, Guangzhi

    2017-04-01

    In order to calibrate the hand-eye transformation between a surgical robot and a laser range finder (LRF), a calibration algorithm based on a planar template was designed. A mathematical model of the planar template was given and the approach to solving its equations was derived. To address measurement error in a practical system, we proposed a new algorithm for selecting coplanar data. This algorithm can effectively eliminate data with considerable measurement error and thereby improve the calibration accuracy. Furthermore, three orthogonal planes were used to improve the calibration accuracy further, with a nonlinear optimization for the hand-eye calibration. To verify the calibration precision, we used the LRF to measure fixed points from different directions as well as a cuboid's surfaces. Experimental results indicated that the precision of the single-planar-template method was (1.37±0.24) mm, and that of the three-orthogonal-planes method was (0.37±0.05) mm. Moreover, the mean FRE of three-dimensional (3D) points was 0.24 mm and the mean TRE was 0.26 mm. The maximum angle measurement error was 0.4 degrees. The experimental results show that the method presented in this paper is effective, with high accuracy, and can meet the precise localization requirements of surgical robots.
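
    At the core of plane-based calibration is fitting a plane to measured 3-D points. A standard SVD least-squares fit — an assumption for illustration; the paper's exact formulation may differ — looks like:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points via SVD.

    Returns a unit normal n and offset d such that n . p + d ~ 0 for
    every point p on the plane; the normal is the direction of least
    variance of the centered point cloud."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Rows of vt are principal directions; the last has least variance.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d
```

    The residuals |n·p + d| of candidate points against a fitted plane also give a natural criterion for the coplanar-data selection step described above: points with large residuals are likely measurement outliers.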

  12. The Dominant Robot: Threatening Robots Cause Psychological Reactance, Especially When They Have Incongruent Goals

    NASA Astrophysics Data System (ADS)

    Roubroeks, M. A. J.; Ham, J. R. C.; Midden, C. J. H.

    Persuasive technology can take the form of a social agent that persuades people to change behavior or attitudes. However, like any persuasive technology, persuasive social agents might trigger psychological reactance, which can lead to restoration behavior. The current study investigated whether interacting with a persuasive robot can cause psychological reactance. Additionally, we investigated whether goal congruency plays a role in psychological reactance. Participants programmed a washing machine while a robot gave threatening advice. Confirming expectations, participants experienced more psychological reactance when receiving high-threatening advice compared to low-threatening advice. Moreover, when the robot gave high-threatening advice and expressed an incongruent goal, participants reported the highest level of psychological reactance (on an anger measure). Finally, high-threatening advice led to more restoration, and this relationship was partially mediated by psychological reactance. Overall, results imply that under certain circumstances persuasive technology can trigger opposite effects, especially when people have incongruent goal intentions.

  13. Shape models of asteroids based on lightcurve observations with BlueEye600 robotic observatory

    NASA Astrophysics Data System (ADS)

    Ďurech, Josef; Hanuš, Josef; Brož, Miroslav; Lehký, Martin; Behrend, Raoul; Antonini, Pierre; Charbonnel, Stephane; Crippa, Roberto; Dubreuil, Pierre; Farroni, Gino; Kober, Gilles; Lopez, Alain; Manzini, Federico; Oey, Julian; Poncy, Raymond; Rinner, Claudine; Roy, René

    2018-04-01

    We present physical models, i.e. convex shapes, directions of the rotation axis, and sidereal rotation periods, of 18 asteroids, of which 10 are new models and 8 are refined models based on much larger data sets than in previous work. The models were reconstructed by the lightcurve inversion method from archived publicly available lightcurves and our new observations with the BlueEye600 robotic observatory. One of the new results is the shape model of asteroid (1663) van den Bos with a rotation period of 749 h, which makes it the slowest rotator with a known shape. We describe our strategy for target selection that aims at fast production of new models using the enormous potential of already available photometry stored in public databases. We also briefly describe the control software and scheduler of the robotic observatory, and we discuss the importance of building a database of asteroid models for studying asteroid physical properties in collisional families.

  14. Emergency response nurse scheduling with medical support robot by multi-agent and fuzzy technique.

    PubMed

    Kono, Shinya; Kitamura, Akira

    2015-08-01

    In this paper, a new co-operative re-scheduling method is described for medical support tasks whose time of occurrence cannot be predicted, assuming a robot can co-operate in medical activities with the nurse. A Multi-Agent-System (MAS) is used for the co-operative re-scheduling, in which a Fuzzy-Contract-Net (FCN) is applied to the robots' task assignment for the emergency tasks. Simulation results confirm that the re-scheduling produced by the proposed method can maintain patient satisfaction and decrease the workload of the nurse.

  15. Learning classifier systems for single and multiple mobile robots in unstructured environments

    NASA Astrophysics Data System (ADS)

    Bay, John S.

    1995-12-01

    The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture, i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.

  16. Dimensions of complexity in learning from interactive instruction [for robotic systems deployed in space]

    NASA Technical Reports Server (NTRS)

    Huffman, Scott B.; Laird, John E.

    1992-01-01

    Robot systems deployed in space must exhibit flexibility. In particular, an intelligent robotic agent should not have to be reprogrammed for each of the various tasks it may face during the course of its lifetime. However, pre-programming knowledge for all of the possible tasks that may be needed is extremely difficult. Therefore, a powerful notion is that of an instructible agent, one which is able to receive task-level instructions and advice from a human advisor. An agent must do more than simply memorize the instructions it is given (this would amount to programming). Rather, after mapping instructions into task constructs that it can reason with, it must determine each instruction's proper scope of applicability. In this paper, we will examine the characteristics of instruction, and the characteristics of agents, that affect learning from instruction. We find that in addition to a myriad of linguistic concerns, both the situatedness of the instructions (their placement within the ongoing execution of tasks) and the prior domain knowledge of the agent have an impact on what can be learned.

  17. Admittance Control for Robot Assisted Retinal Vein Micro-Cannulation under Human-Robot Collaborative Mode.

    PubMed

    Zhang, He; Gonenc, Berk; Iordachita, Iulian

    2017-10-01

    Retinal vein occlusion is one of the most common retinovascular diseases. Retinal vein cannulation is a potentially effective treatment for this condition, but it currently lies at the limits of human capability. In this work, the aim is to use robotic systems and advanced instrumentation to alleviate these challenges and to assist the procedure via a human-robot collaborative mode, based on our earlier work on the Steady-Hand Eye Robot and force-sensing instruments. An admittance control method is employed to stabilize the cannula relative to the vein and maintain it inside the lumen during the injection process. A pre-stress strategy is used to prevent the tip of the microneedle from slipping out of the vein during prolonged infusions, and the performance is verified through simulations.
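
    A one-dimensional admittance law of the general kind described — with illustrative parameter values, not ones from the paper — can be sketched as:

```python
def admittance_step(v, f_ext, f_ref=0.0, m=0.5, b=20.0, dt=0.001):
    """One Euler step of a 1-D admittance law  m*dv/dt + b*v = f_ext - f_ref.

    The tool yields to the measured force instead of rigidly holding
    position: a constant force settles at velocity (f_ext - f_ref) / b."""
    dv = (f_ext - f_ref - b * v) / m
    return v + dv * dt

# A constant 2 N hand force drives the tool toward 2/20 = 0.1 units/s.
v = 0.0
for _ in range(2000):                      # 2 s of simulated time
    v = admittance_step(v, f_ext=2.0)
```

    Raising the virtual damping b makes the tool stiffer and steadier near the vein; the pre-stress strategy corresponds to a nonzero reference force f_ref biasing the needle against the vessel wall.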

  18. Using parallel evolutionary development for a biologically-inspired computer vision system for mobile robots.

    PubMed

    Wright, Cameron H G; Barrett, Steven F; Pack, Daniel J

    2005-01-01

    We describe a new approach to attacking the problem of robust computer vision for mobile robots. The overall strategy is to mimic the biological evolution of animal vision systems. Our basic imaging sensor is based upon the eye of the common house fly, Musca domestica. The computational algorithms are a mix of traditional image processing, subspace techniques, and multilayer neural networks.

  19. Design of an integrated master-slave robotic system for minimally invasive surgery.

    PubMed

    Li, Jianmin; Zhou, Ningxin; Wang, Shuxin; Gao, Yuanqian; Liu, Dongchun

    2012-03-01

Minimally invasive surgery (MIS) robots are commonly used in hospitals and medical centres. However, currently available robotic systems are complex and bulky, which greatly raises system cost and the space requirements of operating rooms. These disadvantages have become major impediments to the wider adoption of MIS robots. An integrated MIS robotic system is proposed based on an analysis of the advantages and disadvantages of different MIS robots. In the proposed system, the master manipulators, slave manipulators, image display device and control system are designed as a whole. A modular design is adopted for the control system for easy maintenance and upgrade. The kinematic relations between the master and the slave are also investigated and embedded in software to realize intuitive movements of hand and instrument. Finally, animal experiments were designed to test the effectiveness of the robot. The robot realizes natural hand-eye movements between the master and the slave to facilitate MIS operations. The experimental results show that the robot can realize functions similar to those of current commercialized robots. The integrated design simplifies the robotic system and facilitates use of the robot. Compared with commercialized robots, the proposed MIS robot achieves similar functions and features but with a smaller size and less weight. Copyright © 2011 John Wiley & Sons, Ltd.

  20. Expedient range enhanced 3-D robot colour vision

    NASA Astrophysics Data System (ADS)

    Jarvis, R. A.

    1983-01-01

    Computer vision has been chosen, in many cases, as offering the richest form of sensory information which can be utilized for guiding robotic manipulation. The present investigation is concerned with the problem of three-dimensional (3D) visual interpretation of colored objects in support of robotic manipulation of those objects with a minimum of semantic guidance. The scene 'interpretations' are aimed at providing basic parameters to guide robotic manipulation rather than to provide humans with a detailed description of what the scene 'means'. Attention is given to overall system configuration, hue transforms, a connectivity analysis, plan/elevation segmentations, range scanners, elevation/range segmentation, higher level structure, eye in hand research, and aspects of array and video stream processing.
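
The "hue transforms" step above can be sketched with the standard RGB-to-hue conversion (a common formulation; the paper's exact transform may differ). Hue is largely invariant to brightness, which is what makes it useful for segmenting coloured objects under varying illumination:

```python
# RGB -> hue transform of the kind used for illumination-tolerant colour
# segmentation (standard hexcone formulation, shown for illustration).

def rgb_to_hue(r, g, b):
    """Return hue in degrees [0, 360), or None for achromatic pixels."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return None            # grey pixel: hue is undefined
    d = mx - mn
    if mx == r:
        h = (g - b) / d % 6    # modulo wraps negative values into [0, 6)
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return h * 60.0

if __name__ == "__main__":
    print(rgb_to_hue(255, 0, 0))   # -> 0.0 (pure red)
    print(rgb_to_hue(0, 255, 0))   # -> 120.0 (pure green)
```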

  1. Da Vinci Xi Robot–Assisted Penetrating Keratoplasty

    PubMed Central

    Chammas, Jimmy; Sauer, Arnaud; Pizzuto, Joëlle; Pouthier, Fabienne; Gaucher, David; Marescaux, Jacques; Mutter, Didier; Bourcier, Tristan

    2017-01-01

Purpose This study aims (1) to investigate the feasibility of robot-assisted penetrating keratoplasty (PK) using the new Da Vinci Xi Surgical System and (2) to report what we believe to be the first use of this system in experimental eye surgery. Methods Robot-assisted PK procedures were performed on human corneal transplants using the Da Vinci Xi Surgical System. After an 8-mm corneal trephination, four interrupted sutures and one 10-0 monofilament running suture were made. For each procedure, duration and successful completion of the surgery as well as any unexpected events were assessed. The depth of the corneal sutures was checked postoperatively using spectral-domain optical coherence tomography (SD-OCT). Results Robot-assisted PK was successfully performed on 12 corneas. The Da Vinci Xi Surgical System provided the necessary dexterity to perform the different steps of surgery. The mean duration of the procedures was 43.4 ± 8.9 minutes (range: 28.5–61.1 minutes). There were no unexpected intraoperative events. SD-OCT confirmed that the sutures were placed at the appropriate depth. Conclusions We confirm the feasibility of robot-assisted PK with the new Da Vinci Surgical System and report the first use of the Xi model in experimental eye surgery. Operative time of robot-assisted PK surgery is now close to that of conventional manual surgery due to both improvement of the optical system and the presence of microsurgical instruments. Translational Relevance Experimentations will allow the advantages of robot-assisted microsurgery to be identified while underlining the improvements and innovations necessary for clinical use. PMID:28660096

  2. Biomimetics and the Development of Humanlike Robots as the Ultimate Challenge

    NASA Technical Reports Server (NTRS)

    Bar-Cohen, Yoseph

    2011-01-01

Evolution has produced effective solutions to nature's challenges, refined over millions of years. Humans have always sought to use nature as a model for innovation and problem solving. These efforts have intensified in recent years, with systematic studies of nature aimed at better understanding and applying its more sophisticated capabilities. Making humanlike robots, in appearance, function and intelligence, poses the ultimate challenge to biomimetics. For many years, making such robots was considered science fiction, but as a result of significant advances in biologically inspired technologies, such robots are increasingly becoming an engineering reality. There are already humanlike robots that walk, talk, interpret speech, make eye contact and facial expressions, and perform many other humanlike functions. In this paper, the state of the art of humanlike robots, potential applications, and issues of concern are reviewed.

  3. Understanding of and applications for robot vision guidance at KSC

    NASA Technical Reports Server (NTRS)

    Shawaga, Lawrence M.

    1988-01-01

    The primary thrust of robotics at KSC is for the servicing of Space Shuttle remote umbilical docking functions. In order for this to occur, robots performing servicing operations must be capable of tracking a swaying Orbiter in Six Degrees of Freedom (6-DOF). Currently, in NASA KSC's Robotic Applications Development Laboratory (RADL), an ASEA IRB-90 industrial robot is being equipped with a real-time computer vision (hardware and software) system to allow it to track a simulated Orbiter interface (target) in 6-DOF. The real-time computer vision system effectively becomes the eyes for the lab robot, guiding it through a closed loop visual feedback system to move with the simulated Orbiter interface. This paper will address an understanding of this vision guidance system and how it will be applied to remote umbilical servicing at KSC. In addition, other current and future applications will be addressed.

  4. Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot.

    PubMed

    Greer, Joseph D; Morimoto, Tania K; Okamura, Allison M; Hawkes, Elliot W

    2017-01-01

    We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially round a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot's pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm for predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds.
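
The "approximately constant curvature" bending noted above admits closed-form forward kinematics. A planar sketch (the textbook constant-curvature model, not the authors' code): a segment of arc length L bent with curvature kappa ends at tip angle theta = kappa * L.

```python
import math

# Planar constant-curvature forward kinematics: a segment of arc length L
# bent with curvature kappa ends at (x, z) with tip angle theta = kappa*L.

def cc_tip(kappa, L):
    """Tip position (x, z) and orientation theta of a constant-curvature arc."""
    if abs(kappa) < 1e-9:                      # straight segment
        return 0.0, L, 0.0
    theta = kappa * L
    x = (1.0 - math.cos(theta)) / kappa        # lateral deflection
    z = math.sin(theta) / kappa                # height along the base axis
    return x, z, theta

if __name__ == "__main__":
    # The abstract's 42 cm robot bent into a quarter circle (theta = pi/2):
    L = 0.42
    kappa = (math.pi / 2) / L
    print(cc_tip(kappa, L))
```

For a quarter-circle bend, both coordinates equal the arc radius 2L/pi, a quick sanity check on the model.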

  5. Engineering Sensorial Delay to Control Phototaxis and Emergent Collective Behaviors

    NASA Astrophysics Data System (ADS)

    Mijalkov, Mite; McDaniel, Austin; Wehr, Jan; Volpe, Giovanni

    2016-01-01

    Collective motions emerging from the interaction of autonomous mobile individuals play a key role in many phenomena, from the growth of bacterial colonies to the coordination of robotic swarms. For these collective behaviors to take hold, the individuals must be able to emit, sense, and react to signals. When dealing with simple organisms and robots, these signals are necessarily very elementary; e.g., a cell might signal its presence by releasing chemicals and a robot by shining light. An additional challenge arises because the motion of the individuals is often noisy; e.g., the orientation of cells can be altered by Brownian motion and that of robots by an uneven terrain. Therefore, the emphasis is on achieving complex and tunable behaviors from simple autonomous agents communicating with each other in robust ways. Here, we show that the delay between sensing and reacting to a signal can determine the individual and collective long-term behavior of autonomous agents whose motion is intrinsically noisy. We experimentally demonstrate that the collective behavior of a group of phototactic robots capable of emitting a radially decaying light field can be tuned from segregation to aggregation and clustering by controlling the delay with which they change their propulsion speed in response to the light intensity they measure. We track this transition to the underlying dynamics of this system, in particular, to the ratio between the robots' sensorial delay time and the characteristic time of the robots' random reorientation. Supported by numerics, we discuss how the same mechanism can be applied to control active agents, e.g., airborne drones, moving in a three-dimensional space. 
Given the simplicity of this mechanism, the engineering of sensorial delay provides a potentially powerful tool to engineer and dynamically tune the behavior of large ensembles of autonomous mobile agents; furthermore, this mechanism might already be at work within living organisms such as chemotactic cells.
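
The tunable ingredient in the mechanism above is the lag between measuring the light field and changing propulsion speed. A toy sketch of that delayed sensing element and an intensity-dependent speed command (our illustrative names and dynamics, not the authors' robot firmware):

```python
from collections import deque

# Sketch of sensing with delay: the control input at step t is the
# intensity measured `delay_steps` earlier (illustrative code only).

class DelayedSensor:
    def __init__(self, delay_steps, initial=0.0):
        # Ring buffer pre-filled so early reads return the initial value.
        self.buf = deque([initial] * (delay_steps + 1), maxlen=delay_steps + 1)

    def read(self, current_measurement):
        self.buf.append(current_measurement)
        return self.buf[0]       # the measurement from `delay_steps` ago

def speed_command(delayed_intensity, v_max=1.0):
    # Propulsion speed decreases with (delayed) light intensity, so robots
    # slow down near bright regions -- the aggregation-side behaviour.
    return v_max / (1.0 + delayed_intensity)

if __name__ == "__main__":
    s = DelayedSensor(delay_steps=3)
    out = [s.read(r) for r in [0.0, 0.2, 0.4, 0.6, 0.8]]
    print(out)  # -> [0.0, 0.0, 0.0, 0.0, 0.2]: readings surface 3 steps late
```

Sweeping `delay_steps` against the agent's reorientation time is the knob the paper identifies for switching between segregation and aggregation.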

  6. Robotically assisted small animal MRI-guided mouse biopsy

    NASA Astrophysics Data System (ADS)

    Wilson, Emmanuel; Chiodo, Chris; Wong, Kenneth H.; Fricke, Stanley; Jung, Mira; Cleary, Kevin

    2010-02-01

    Small mammals, namely mice and rats, play an important role in biomedical research. Imaging, in conjunction with accurate therapeutic agent delivery, has tremendous value in small animal research since it enables serial, non-destructive testing of animals and facilitates the study of biomarkers of disease progression. The small size of organs in mice lends some difficulty to accurate biopsies and therapeutic agent delivery. Image guidance with the use of robotic devices should enable more accurate and repeatable targeting for biopsies and delivery of therapeutic agents, as well as the ability to acquire tissue from a pre-specified location based on image anatomy. This paper presents our work in integrating a robotic needle guide device, specialized stereotaxic mouse holder, and magnetic resonance imaging, with a long-term goal of performing accurate and repeatable targeting in anesthetized mice studies.

  7. Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes

    DTIC Science & Technology

    2016-06-01

...making (SDM) tasks in dynamic environments with simulated and physical robots. Subject terms: sequential decision making, lifelong learning, transfer... sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work... and ground robots in the presence of disturbances: we applied our methods to the problem of learning controllers for robots with novel disturbances in...

  8. Learning robotic eye-arm-hand coordination from human demonstration: a coupled dynamical systems approach.

    PubMed

    Lukic, Luka; Santos-Victor, José; Billard, Aude

    2014-04-01

We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized sequentially, with the obstacle acting as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide the basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single compact framework, mimicking the analogous control system found in humans. We validate our model for visuomotor control of a humanoid robot.

  9. Automated cross-modal mapping in robotic eye/hand systems using plastic radial basis function networks

    NASA Astrophysics Data System (ADS)

    Meng, Qinggang; Lee, M. H.

    2007-03-01

Advanced autonomous artificial systems will need incremental learning and adaptive abilities similar to those seen in humans. Knowledge from biology, psychology and neuroscience is now inspiring new approaches for systems that have sensory-motor capabilities and operate in complex environments. Eye/hand coordination is an important cross-modal cognitive function, and is also typical of many of the other coordinations that must be involved in the control and operation of embodied intelligent systems. This paper examines a biologically inspired approach for incrementally constructing compact mapping networks for eye/hand coordination. We present a simplified node-decoupled extended Kalman filter for radial basis function networks, and compare it with other learning algorithms. An experimental system consisting of a robot arm and a pan-and-tilt head with a colour camera is used to produce results and test the algorithms in this paper. We also present three approaches for adapting to structural changes during eye/hand coordination tasks, and the robustness of the algorithms under noise is investigated. The learning and adaptation approaches in this paper have similarities with current ideas about neural growth in the brains of humans and animals during tool use, and in infants during early cognitive development.
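
A minimal sketch of such an RBF mapping network, with fixed Gaussian centres and weights trained online; a plain LMS update stands in here for the paper's node-decoupled extended Kalman filter, and the mapping task is a toy one-dimensional stand-in for a gaze-to-arm coordinate map:

```python
import math

# Sketch of an RBF mapping network (e.g. gaze angle -> arm coordinate).
# Fixed Gaussian centres; weights learned online with an LMS rule, used
# here as a simple stand-in for the node-decoupled extended Kalman filter.

class RBFNet:
    def __init__(self, centers, width=0.1, lr=0.1):
        self.c, self.w = list(centers), [0.0] * len(centers)
        self.width, self.lr = width, lr

    def _phi(self, x):
        # Gaussian activations of every hidden node for input x.
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2)) for c in self.c]

    def predict(self, x):
        return sum(w * p for w, p in zip(self.w, self._phi(x)))

    def train(self, x, y):
        phi = self._phi(x)
        err = y - sum(w * p for w, p in zip(self.w, phi))
        for i, p in enumerate(phi):
            self.w[i] += self.lr * err * p   # LMS weight update

if __name__ == "__main__":
    net = RBFNet(centers=[i / 10 for i in range(11)])
    for _ in range(2000):                      # repeated online passes
        for x in [i / 20 for i in range(21)]:
            net.train(x, math.sin(x))          # toy target mapping
    print(round(net.predict(0.5), 3))          # close to sin(0.5) ~ 0.479
```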

  10. The Unified Behavior Framework for the Simulation of Autonomous Agents

    DTIC Science & Technology

    2015-03-01

...1980s, researchers have designed a variety of robot control architectures intending to imbue robots with some degree of autonomy. A recently developed... The development of autonomy has... room for research by utilizing methods like simulation and modeling that consume less time and fewer monetary resources. A recently developed reactive...

  11. Using the rear projection of the Socibot Desktop robot for creation of applications with facial expressions

    NASA Astrophysics Data System (ADS)

    Gîlcă, G.; Bîzdoacă, N. G.; Diaconu, I.

    2016-08-01

This article implements three practical applications using the Socibot Desktop social robot: creating a speech sequence using the Kiosk menu of the browser interface, creating a program in the Virtual Robot browser interface, and making a new guise to be loaded into the robot's memory and projected onto its face. The first application is created in the Compose submenu, which contains five file categories (audio, eyes, face, head, mood) that help in building the projected sequence. The second application is more complex, the completed program containing audio files, speeches (which can be created in over 20 languages), head movements, the robot's facial parameters as a function of the action units (AUs) of the facial muscles, its expressions, and its line of sight. The last application changes the robot's appearance using a guise that we created in Adobe Photoshop and then loaded into the robot's memory.

  12. Effect of Viscous Agents on Corneal Density in Dry Eye Disease.

    PubMed

    Wegener, Alfred R; Meyer, Linda M; Schönfeld, Carl-Ludwig

    2015-10-01

To investigate the effect of the viscous agents hydroxypropyl methylcellulose (HPMC), carbomer, povidone, and a combination of HPMC and povidone on corneal density in patients with dry eye disease. In total, 98 eyes of 49 patients suffering from dry eye and 65 eyes of 33 healthy age-matched individuals were included in this prospective, randomized study. Corneal morphology was documented with Scheimpflug photography and corneal density was analyzed in 5 anatomical layers (epithelium, Bowman's membrane, stroma, Descemet's membrane, and endothelium). Corneal density was evaluated for the active ingredients HPMC, carbomer, povidone, and a combination of HPMC and povidone as the viscous agents contained in the artificial tear formulations used by the dry eye patients. Data were compared to an age-matched healthy control group without medication. Corneal density in dry eye patients was reduced in all 5 anatomical layers compared to controls. Corneal density was highest, and very close to control, in patients treated with HPMC-containing ocular lubricants. Patients treated with lubricants containing carbomer as the viscous agent displayed a significant reduction of corneal density in layers 1 and 2 compared to control. HPMC-containing ocular lubricants can help to maintain physiological corneal density and may be beneficial in the treatment of dry eye disease.

  13. A bio-inspired swarm robot coordination algorithm for multiple target searching

    NASA Astrophysics Data System (ADS)

    Meng, Yan; Gan, Jing; Desai, Sachi

    2008-04-01

The coordination of a multi-robot system searching for multiple targets is challenging in a dynamic environment, since the multi-robot system demands both group coherence (agents need the incentive to work together faithfully) and group competence (agents need to know how to work together well). In our previously proposed bio-inspired coordination method, Local Interaction through Virtual Stigmergy (LIVS), one problem is the considerable randomness of robot movement during coordination, which may lead to higher power consumption and longer searching time. To address these issues, an adaptive LIVS (ALIVS) method is proposed in this paper, which not only considers travel cost and target weight but also predicts the target/robot ratio and potential robot redundancy with respect to the detected targets. Furthermore, dynamic weight adjustment is applied to improve searching performance. This new method is truly distributed: each robot makes its own decision based on its local sensing information and information from its neighbors. Essentially, each robot communicates only with its neighbors through a virtual stigmergy mechanism and makes its local movement decision based on a Particle Swarm Optimization (PSO) algorithm. The proposed ALIVS algorithm has been implemented on the embodied robot simulator Player/Stage in a target-searching scenario. The simulation results demonstrate efficiency and robustness in a power-efficient manner under real-world constraints.

  14. A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry.

    PubMed

    Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane

    2015-02-25

    In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using similar micro-movements to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE placed over naturally textured panels was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly at different light conditions as well as distance variations relative to the ground and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.

  15. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface.

    PubMed

    Wen, Rong; Tay, Wei-Liang; Nguyen, Binh P; Chng, Chin-Boon; Chui, Chee-Kong

    2014-09-01

    Radiofrequency (RF) ablation is a good alternative to hepatic resection for treatment of liver tumors. However, accurate needle insertion requires precise hand-eye coordination and is also affected by the difficulty of RF needle navigation. This paper proposes a cooperative surgical robot system, guided by hand gestures and supported by an augmented reality (AR)-based surgical field, for robot-assisted percutaneous treatment. It establishes a robot-assisted natural AR guidance mechanism that incorporates the advantages of the following three aspects: AR visual guidance information, surgeon's experiences and accuracy of robotic surgery. A projector-based AR environment is directly overlaid on a patient to display preoperative and intraoperative information, while a mobile surgical robot system implements specified RF needle insertion plans. Natural hand gestures are used as an intuitive and robust method to interact with both the AR system and surgical robot. The proposed system was evaluated on a mannequin model. Experimental results demonstrated that hand gesture guidance was able to effectively guide the surgical robot, and the robot-assisted implementation was found to improve the accuracy of needle insertion. This human-robot cooperative mechanism is a promising approach for precise transcutaneous ablation therapy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  16. Envisioning Cognitive Robots for Future Space Exploration

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terry; Stoica, Adrian

    2010-01-01

Cognitive robots in the context of space exploration are envisioned with advanced capabilities of model building, continuous planning/re-planning, and self-diagnosis, as well as the ability to exhibit a level of 'understanding' of new situations. An overview of some JPL components (e.g. CASPER, CAMPOUT) is presented, together with a description of the architecture CARACaS (Control Architecture for Robotic Agent Command and Sensing) that combines these in a cognitive robotic system operating in various scenarios. Finally, two examples of typical scenarios are given: a multi-robot construction mission and a human-robot mission involving direct collaboration with humans.

  17. Development and Evaluation of Sensor Concepts for Ageless Aerospace Vehicles: Report 6 - Development and Demonstration of a Self-Organizing Diagnostic System for Structural Health Monitoring

    NASA Technical Reports Server (NTRS)

    Batten, Adam; Edwards, Graeme; Gerasimov, Vadim; Hoschke, Nigel; Isaacs, Peter; Lewis, Chris; Moore, Richard; Oppolzer, Florien; Price, Don; Prokopenko, Mikhail; hide

    2010-01-01

    This report describes a significant advance in the capability of the CSIRO/NASA structural health monitoring Concept Demonstrator (CD). The main thrust of the work has been the development of a mobile robotic agent, and the hardware and software modifications and developments required to enable the demonstrator to operate as a single, self-organizing, multi-agent system. This single-robot system is seen as the forerunner of a system in which larger numbers of small robots perform inspection and repair tasks cooperatively, by self-organization. While the goal of demonstrating self-organized damage diagnosis was not fully achieved in the time available, much of the work required for the final element that enables the robot to point the video camera and transmit an image has been completed. A demonstration video of the CD and robotic systems operating will be made and forwarded to NASA.

  18. Triggering social interactions: chimpanzees respond to imitation by a humanoid robot and request responses from it.

    PubMed

    Davila-Ross, Marina; Hutchinson, Johanna; Russell, Jamie L; Schaeffer, Jennifer; Billard, Aude; Hopkins, William D; Bard, Kim A

    2014-05-01

    Even the most rudimentary social cues may evoke affiliative responses in humans and promote social communication and cohesion. The present work tested whether such cues of an agent may also promote communicative interactions in a nonhuman primate species, by examining interaction-promoting behaviours in chimpanzees. Here, chimpanzees were tested during interactions with an interactive humanoid robot, which showed simple bodily movements and sent out calls. The results revealed that chimpanzees exhibited two types of interaction-promoting behaviours during relaxed or playful contexts. First, the chimpanzees showed prolonged active interest when they were imitated by the robot. Second, the subjects requested 'social' responses from the robot, i.e. by showing play invitations and offering toys or other objects. This study thus provides evidence that even rudimentary cues of a robotic agent may promote social interactions in chimpanzees, like in humans. Such simple and frequent social interactions most likely provided a foundation for sophisticated forms of affiliative communication to emerge.

  19. Distance-Based Behaviors for Low-Complexity Control in Multiagent Robotics

    NASA Astrophysics Data System (ADS)

    Pierpaoli, Pietro

Several biological examples show that living organisms cooperate to collectively accomplish tasks impossible for single individuals. More importantly, this coordination is often achieved with a very limited set of information. Inspired by these observations, research on autonomous systems has focused on the development of distributed techniques for the control and guidance of groups of autonomous mobile agents, or robots. From an engineering perspective, when coordination and cooperation are sought in large ensembles of robotic vehicles, a reduction in hardware and algorithmic complexity becomes mandatory from the very early stages of the design. Solutions capable of lowering power consumption and cost while increasing reliability are thus worth investigating. In this work, we studied low-complexity techniques to achieve cohesion and control in swarms of autonomous robots. Starting from an inspiring two-agent example, we introduced the effects of neighbors' relative positions on the control of an autonomous agent. The extension of this intuition addressed the control of large ensembles of autonomous vehicles and was applied in the form of a herding-like technique. To this end, a low-complexity distance-based aggregation protocol was defined. We first showed that our protocol produced cohesive aggregation among the agents while avoiding inter-agent collisions. Then, a feedback leader-follower architecture was introduced for control of the swarm. We also described how proximity measures and the probability of collisions with neighbors can be used as sources of information in highly populated environments.
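
A generic distance-based interaction rule of the kind such protocols build on can be sketched in one dimension (illustrative gains, not the author's exact protocol): neighbours attract at long range and repel at short range, so the pair settles at a finite, collision-free spacing.

```python
# Distance-based attraction/repulsion sketch: attraction grows linearly
# with distance, repulsion diverges at contact, giving cohesive,
# collision-free aggregation. Gains are illustrative assumptions.

def pairwise_force(d, d_ref=1.0, k_att=1.0, k_rep=1.0):
    """Signed force between two agents at distance d > 0:
    positive = attraction, negative = repulsion (diverges as d -> 0)."""
    return k_att * (d - d_ref) - k_rep / d

def step(positions, dt=0.05):
    """One Euler step of the 1-D dynamics (assumes distinct positions)."""
    new = []
    for i, xi in enumerate(positions):
        f = 0.0
        for j, xj in enumerate(positions):
            if i != j:
                dist = abs(xj - xi)
                f += pairwise_force(dist) * (1.0 if xj > xi else -1.0)
        new.append(xi + dt * f)
    return new

if __name__ == "__main__":
    pos = [0.0, 3.0]
    for _ in range(500):
        pos = step(pos)
    # With unit gains the spacing settles at the positive root of
    # d**2 - d - 1 = 0, the golden ratio:
    print(round(pos[1] - pos[0], 3))  # -> 1.618
```

The equilibrium spacing is set purely by the gain ratio, which is why such rules need so little information exchange between agents.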

  20. Moving Just Like You: Motor Interference Depends on Similar Motility of Agent and Observer

    PubMed Central

    Kupferberg, Aleksandra; Huber, Markus; Helfer, Bartosz; Lenz, Claus; Knoll, Alois; Glasauer, Stefan

    2012-01-01

Recent findings in neuroscience suggest an overlap between brain regions involved in the execution of movement and perception of another’s movement. This so-called “action-perception coupling” is supposed to serve our ability to automatically infer the goals and intentions of others by internal simulation of their actions. A consequence of this coupling is motor interference (MI), the effect of movement observation on the trajectory of one’s own movement. Previous studies emphasized that various features of the observed agent determine the degree of MI, but could not clarify how human-like an agent has to be for its movements to elicit MI and, more importantly, what ‘human-like’ means in the context of MI. Thus, we investigated in several experiments how different aspects of appearance and motility of the observed agent influence MI. Participants performed arm movements in horizontal and vertical directions while observing videos of a human, a humanoid robot, or an industrial robot arm with either artificial (industrial) or human-like joint configurations. Our results show that, given a human-like joint configuration, MI was elicited by observing arm movements of both humanoid and industrial robots. However, if the joint configuration of the robot did not resemble that of the human arm, MI could no longer be demonstrated. Our findings present evidence for the importance of human-like joint configuration rather than other human-like features for perception-action coupling when observing inanimate agents. PMID:22761853

  1. Future of robotic surgery in urology.

    PubMed

    Rassweiler, Jens J; Autorino, Riccardo; Klein, Jan; Mottrie, Alex; Goezen, Ali Serdar; Stolzenburg, Jens-Uwe; Rha, Koon H; Schurr, Marc; Kaouk, Jihad; Patel, Vipul; Dasgupta, Prokar; Liatsikos, Evangelos

    2017-12-01

    To provide a comprehensive overview of the current status of the field of robotic systems for urological surgery and discuss future perspectives. A non-systematic literature review was performed using PubMed/Medline search electronic engines. Existing patents for robotic devices were researched using the Google search engine. Findings were also critically analysed taking into account the personal experience of the authors. The relevant patents for the first generation of the da Vinci platform will expire in 2019. New robotic systems are coming onto the stage. These can be classified according to type of console, arrangement of robotic arms, handles and instruments, and other specific features (haptic feedback, eye-tracking). The Telelap ALF-X robot uses an open console with eye-tracking, laparoscopy-like handles with haptic feedback, and arms mounted on separate carts; first clinical trials with this system were reported in 2016. The Medtronic robot provides an open console using three-dimensional high-definition video technology and three arms. The Avatera robot features a closed console with microscope-like oculars, four arms arranged on one cart, and 5-mm instruments with six degrees of freedom. The REVO-I consists of an open console and a four-arm arrangement on one cart; the first experiments with this system were published in 2016. Medicaroid uses a semi-open console and three robot arms attached to the operating table. Clinical trials of the SP 1098-platform using the da Vinci Xi for console-based single-port surgery were reported in 2015. The SPORT robot has been tested in animal experiments for single-port surgery. The SurgiBot represents a bedside solution for single-port surgery providing flexible tube-guided instruments. The Avicenna Roboflex has been developed for robotic flexible ureteroscopy, with promising early clinical results. 
Several console-based robots for laparoscopic multi- and single-port surgery are expected to come to market within the next 5 years. Future developments in the field of robotic surgery are likely to focus on the specific features of robotic arms, instruments, console, and video technology. The high technical standards of four da Vinci generations have set a high bar for upcoming devices. Ultimately, the implementation of these upcoming systems will depend on their clinical applicability and costs. How these technical developments will facilitate surgery and whether their use will translate into better outcomes for our patients remains to be determined. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.

  2. Series Pneumatic Artificial Muscles (sPAMs) and Application to a Soft Continuum Robot

    PubMed Central

    Greer, Joseph D.; Morimoto, Tania K.; Okamura, Allison M.; Hawkes, Elliot W.

    2017-01-01

We describe a new series pneumatic artificial muscle (sPAM) and its application as an actuator for a soft continuum robot. The robot consists of three sPAMs arranged radially around a tubular pneumatic backbone. Analogous to tendons, the sPAMs exert a tension force on the robot’s pneumatic backbone, causing bending that is approximately constant curvature. Unlike a traditional tendon-driven continuum robot, the robot is entirely soft and contains no hard components, making it safer for human interaction. Models of both the sPAM and soft continuum robot kinematics are presented and experimentally verified. We found a mean position accuracy of 5.5 cm when predicting the end-effector position of a 42 cm long robot with the kinematic model. Finally, closed-loop control is demonstrated using an eye-in-hand visual servo control law which provides a simple interface for operation by a human. The soft continuum robot with closed-loop control was found to have a step-response rise time and settling time of less than two seconds. PMID:29379672
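
    The abstract does not reproduce the kinematic model itself; for readers unfamiliar with constant-curvature continuum robots, the standard arc model can be sketched as follows. The mapping below (arc length, curvature, bending-plane angle to tip position) is the generic textbook form, not necessarily the authors' exact parameterization:

```python
import math

def cc_tip_position(length, kappa, phi):
    """Tip position of a single constant-curvature segment.

    length : backbone arc length (m)
    kappa  : curvature (1/m); kappa -> 0 recovers a straight segment
    phi    : bending-plane angle about the backbone axis (rad)
    """
    if abs(kappa) < 1e-9:                            # straight-segment limit
        return (0.0, 0.0, length)
    r = 1.0 / kappa                                  # bending radius
    in_plane = r * (1.0 - math.cos(kappa * length))  # offset within the bending plane
    height = r * math.sin(kappa * length)            # distance along the base axis
    return (in_plane * math.cos(phi), in_plane * math.sin(phi), height)

# A segment bent into a quarter circle: kappa * length = pi/2, so the tip sits
# at (r, 0, r) for bending radius r = 0.2 m.
tip = cc_tip_position(math.pi / 10, 5.0, 0.0)
```

    In the paper's robot, the sPAM tensions determine kappa and phi through the actuation model; the sketch above covers only the arc-to-task-space half of the kinematics.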

  3. A spherical parallel three degrees-of-freedom robot for ankle-foot neuro-rehabilitation.

    PubMed

    Malosio, Matteo; Negri, Simone Pio; Pedrocchi, Nicola; Vicentini, Federico; Caimmi, Marco; Molinari Tosatti, Lorenzo

    2012-01-01

The ankle is a fairly complex bone structure, with kinematics that hinder flawless robot-assisted recovery of foot motility in impaired subjects. The paper proposes a novel device for ankle-foot neuro-rehabilitation based on a mechatronic redesign of the remarkable Agile Eye spherical robot according to clinical requirements. The kinematic design allows the positioning of the ankle articular center close to the machine rotation center, with valuable benefits in terms of therapy functions. The prototype, named PKAnkle (Parallel Kinematic machine for Ankle rehabilitation), provides a six-axis load cell to measure subject interaction forces/torques, and it integrates a commercial EMG-acquisition system. The robot control provides active and passive therapeutic exercises.

  4. Lunar crane hook

    NASA Technical Reports Server (NTRS)

    Cash, John Wilson, III; Cone, Alan E.; Garolera, Frank J.; German, David; Lindabury, David Peter; Luckado, Marshall Cleveland; Murphey, Craig; Rowell, John Bryan; Wilkinson, Brad

    1988-01-01

The base and ball hook system is an attachment designed to be used on the lunar surface as an improved alternative to the common crane hook and eye system. The proposed design uses an omni-directional ball hook and base to overcome the design problems associated with a conventional crane hook. The base and ball hook is not sensitive to cable twist, which would otherwise render a robotic lunar crane useless, since there is little atmospheric resistance to damp the motion of an oscillating member. The symmetry of the ball hook and base eliminates the manual placement of the ball hook into the base commonly required by the typical hook and eye system. The major advantage of the base and ball hook system is its ease of coupling and uncoupling, which is advantageous during unmanned robotic lunar missions.

  5. An Intelligent Agent Approach for Teaching Neural Networks Using LEGO[R] Handy Board Robots

    ERIC Educational Resources Information Center

    Imberman, Susan P.

    2004-01-01

    In this article we describe a project for an undergraduate artificial intelligence class. The project teaches neural networks using LEGO[R] handy board robots. Students construct robots with two motors and two photosensors. Photosensors provide readings that act as inputs for the neural network. Output values power the motors and maintain the…

  6. The Potential of Peer Robots to Assist Human Creativity in Finding Problems and Problem Solving

    ERIC Educational Resources Information Center

    Okita, Sandra

    2015-01-01

    Many technological artifacts (e.g., humanoid robots, computer agents) consist of biologically inspired features of human-like appearance and behaviors that elicit a social response. The strong social components of technology permit people to share information and ideas with these artifacts. As robots cross the boundaries between humans and…

  7. Remote Control and Children's Understanding of Robots

    ERIC Educational Resources Information Center

    Somanader, Mark C.; Saylor, Megan M.; Levin, Daniel T.

    2011-01-01

    Children use goal-directed motion to classify agents as living things from early in infancy. In the current study, we asked whether preschoolers are flexible in their application of this criterion by introducing them to robots that engaged in goal-directed motion. In one case the robot appeared to move fully autonomously, and in the other case it…

  8. Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji

    2012-01-01

    This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…

  9. FBG-based sensorized light pipe for robotic intraocular illumination facilitates bimanual retinal microsurgery.

    PubMed

    Horise, Yuki; He, Xingchi; Gehlbach, Peter; Taylor, Russell; Iordachita, Iulian

    2015-01-01

    In retinal surgery, microsurgical instruments such as micro forceps, scissors and picks are inserted through the eye wall via sclerotomies. A handheld intraocular light source is typically used to visualize the tools during the procedure. Retinal surgery requires precise and stable tool maneuvers as the surgical targets are micro scale, fragile and critical to function. Retinal surgeons typically control an active surgical tool with one hand and an illumination source with the other. In this paper, we present a "smart" light pipe that enables true bimanual surgery via utilization of an active, robot-assisted source of targeted illumination. The novel sensorized smart light pipe measures the contact force between the sclerotomy and its own shaft, thereby accommodating the motion of the patient's eye. Forces at the point of contact with the sclera are detected by fiber Bragg grating (FBG) sensors on the light pipe. Our calibration and validation results demonstrate reliable measurement of the contact force as well as location of the sclerotomy. Preliminary experiments have been conducted to functionally evaluate robotic intraocular illumination.
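
    The abstract reports calibrating the FBG sensors so that wavelength shifts yield the contact force, but the procedure is not detailed. The usual linear least-squares identification can be sketched as follows; the sensor matrix, units, and load values here are invented for the demonstration:

```python
import numpy as np

# Hypothetical linear calibration of an FBG-sensorized shaft: the wavelength
# shifts of three gratings respond linearly to a 2-D transverse force at the
# contact point, delta_lambda = C @ f. C is identified from known loads by
# least squares and pseudo-inverted at run time. All numbers are illustrative.
rng = np.random.default_rng(0)
C_true = np.array([[ 4.0, -1.0],
                   [ 1.5,  3.5],
                   [-2.0,  2.0]])            # pm per mN, assumed sensor geometry
loads = rng.uniform(-5, 5, size=(50, 2))     # known calibration forces (mN)
shifts = loads @ C_true.T                    # simulated wavelength shifts (pm)

# Identification: solve loads @ X = shifts for X = C^T in the least-squares sense.
X, *_ = np.linalg.lstsq(loads, shifts, rcond=None)
C_est = X.T

# Run time: recover an unknown force from its measured wavelength shifts.
f_unknown = np.array([2.0, -1.0])
f_recovered = np.linalg.pinv(C_est) @ (C_true @ f_unknown)
```

    With real data the shifts would carry noise and temperature cross-sensitivity, which is why the paper's calibration and validation steps matter; the noiseless sketch only shows the algebra.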

  10. Human-directed local autonomy for motion guidance and coordination in an intelligent manufacturing system

    NASA Astrophysics Data System (ADS)

    Alford, W. A.; Kawamura, Kazuhiko; Wilkes, Don M.

    1997-12-01

This paper discusses the problem of integrating human intelligence and skills into an intelligent manufacturing system. Our center has joined the Holonic Manufacturing Systems (HMS) Project, an international consortium dedicated to developing holonic systems technologies. One of our contributions to this effort is in Work Package 6: flexible human integration. This paper focuses on one activity, namely, human integration into motion guidance and coordination. Much research on intelligent systems focuses on creating totally autonomous agents. At the Center for Intelligent Systems (CIS), we design robots that interact directly with a human user. We focus on using the natural intelligence of the user to simplify the design of a robotic system. The problem is finding ways for the user to interact with the robot that are efficient and comfortable. Manufacturing applications impose the additional constraint that the manufacturing process should not be disturbed; that is, frequent interaction with the user could degrade real-time performance. Our research in human-robot interaction is based on a concept called human-directed local autonomy (HuDL). Under this paradigm, the intelligent agent selects and executes a behavior or skill based upon directions from a human user. The user interacts with the robot via speech, gestures, or other media. Our control software is based on the intelligent machine architecture (IMA), an object-oriented architecture which facilitates cooperation and communication among intelligent agents. In this paper we describe our research testbed, a dual-arm humanoid robot and human user, and the use of this testbed for a human-directed sorting task. We also discuss some proposed experiments for evaluating the integration of the human into the robot system. At the time of this writing, the experiments have not been completed.

  11. A cognitive robotic system based on the Soar cognitive architecture for mobile robot navigation, search, and mapping missions

    NASA Astrophysics Data System (ADS)

    Hanford, Scott D.

    Most unmanned vehicles used for civilian and military applications are remotely operated or are designed for specific applications. As these vehicles are used to perform more difficult missions or a larger number of missions in remote environments, there will be a great need for these vehicles to behave intelligently and autonomously. Cognitive architectures, computer programs that define mechanisms that are important for modeling and generating domain-independent intelligent behavior, have the potential for generating intelligent and autonomous behavior in unmanned vehicles. The research described in this presentation explored the use of the Soar cognitive architecture for cognitive robotics. The Cognitive Robotic System (CRS) has been developed to integrate software systems for motor control and sensor processing with Soar for unmanned vehicle control. The CRS has been tested using two mobile robot missions: outdoor navigation and search in an indoor environment. The use of the CRS for the outdoor navigation mission demonstrated that a Soar agent could autonomously navigate to a specified location while avoiding obstacles, including cul-de-sacs, with only a minimal amount of knowledge about the environment. While most systems use information from maps or long-range perceptual capabilities to avoid cul-de-sacs, a Soar agent in the CRS was able to recognize when a simple approach to avoiding obstacles was unsuccessful and switch to a different strategy for avoiding complex obstacles. During the indoor search mission, the CRS autonomously and intelligently searches a building for an object of interest and common intersection types. While searching the building, the Soar agent builds a topological map of the environment using information about the intersections the CRS detects. The agent uses this topological model (along with Soar's reasoning, planning, and learning mechanisms) to make intelligent decisions about how to effectively search the building. 
Once the object of interest has been detected, the Soar agent uses the topological map to make decisions about how to efficiently return to the location where the mission began. Additionally, the CRS can send an email containing step-by-step directions using the intersections in the environment as landmarks that describe a direct path from the mission's start location to the object of interest. The CRS has displayed several characteristics of intelligent behavior, including reasoning, planning, learning, and communication of learned knowledge, while autonomously performing two missions. The CRS has also demonstrated how Soar can be integrated with common robotic motor and perceptual systems that complement the strengths of Soar for unmanned vehicles and is one of the few systems that use perceptual systems such as occupancy grid, computer vision, and fuzzy logic algorithms with cognitive architectures for robotics. The use of these perceptual systems to generate symbolic information about the environment during the indoor search mission allowed the CRS to use Soar's planning and learning mechanisms, which have rarely been used by agents to control mobile robots in real environments. Additionally, the system developed for the indoor search mission represents the first known use of a topological map with a cognitive architecture on a mobile robot. The ability to learn both a topological map and production rules allowed the Soar agent used during the indoor search mission to make intelligent decisions and behave more efficiently as it learned about its environment. While the CRS has been applied to two different missions, it has been developed with the intention that it be extended in the future so it can be used as a general system for mobile robot control. The CRS can be expanded through the addition of new sensors and sensor processing algorithms, development of Soar agents with more production rules, and the use of new architectural mechanisms in Soar.
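
    The route-planning step described here (using the learned topological map to find a direct path back to the start and to generate landmark-by-landmark directions) is not spelled out in the abstract; Soar's own mechanisms handle it in the CRS. A plain breadth-first search over a toy intersection graph illustrates the idea, with hypothetical node names:

```python
from collections import deque

# Toy topological map in the spirit of the CRS: nodes are detected intersections
# or places, edges are traversable corridors. Names and layout are illustrative.
topo_map = {
    "start": ["T-junction-1"],
    "T-junction-1": ["start", "4-way-1", "dead-end-1"],
    "4-way-1": ["T-junction-1", "object-of-interest", "corridor-2"],
    "corridor-2": ["4-way-1"],
    "dead-end-1": ["T-junction-1"],
    "object-of-interest": ["4-way-1"],
}

def route(graph, src, dst):
    """Breadth-first search: shortest landmark sequence from src to dst."""
    parent, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:      # walk parents back to the source
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None                          # dst unreachable from src
```

    The returned node sequence is exactly the kind of intersection-landmark list the CRS emails as step-by-step directions; the real system additionally learns the map incrementally and applies Soar's planning and learning on top of it.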

  12. Human Eye Phantom for Developing Computer and Robot-Assisted Epiretinal Membrane Peeling*

    PubMed Central

    Gupta, Amrita; Gonenc, Berk; Balicki, Marcin; Olds, Kevin; Handa, James; Gehlbach, Peter; Taylor, Russell H.; Iordachita, Iulian

    2014-01-01

    A number of technologies are being developed to facilitate key intraoperative actions in vitreoretinal microsurgery. There is a need for cost-effective, reusable benchtop eye phantoms to enable frequent evaluation of these developments. In this study, we describe an artificial eye phantom for developing intraocular imaging and force-sensing tools. We test four candidate materials for simulating epiretinal membranes using a handheld tremor-canceling micromanipulator with force-sensing micro-forceps tip and demonstrate peeling forces comparable to those encountered in clinical practice. PMID:25571573

  13. The Tactile Ethics of Soft Robotics: Designing Wisely for Human-Robot Interaction.

    PubMed

    Arnold, Thomas; Scheutz, Matthias

    2017-06-01

Soft robots promise an exciting design trajectory in the field of robotics and human-robot interaction (HRI), offering more adaptive, resilient movement within environments as well as a safer, more sensitive interface for the objects or agents the robot encounters. In particular, tactile HRI is a critical dimension for designers to consider, especially given the onrush of assistive and companion robots into our society. In this article, we surface an important set of ethical challenges for the field of soft robotics to meet. Tactile HRI strongly suggests that soft-bodied robots balance tactile engagement against emotional manipulation, model intimacy on bonding with a tool rather than with a person, and deflect users from the personally and socially destructive behavior that soft bodies and surfaces could otherwise entice.

  14. A Vision-Based Self-Calibration Method for Robotic Visual Inspection Systems

    PubMed Central

    Yin, Shibin; Ren, Yongjie; Zhu, Jigui; Yang, Shourui; Ye, Shenghua

    2013-01-01

A vision-based robot self-calibration method is proposed in this paper to evaluate the kinematic parameter errors of a robot using a visual sensor mounted on its end-effector. This approach can be performed in the industrial field without external, expensive apparatus or an elaborate setup. A robot Tool Center Point (TCP) is defined in the structural model of a line-structured laser sensor, and aligned to a reference point fixed in the robot workspace. A mathematical model is established to formulate the misalignment errors with kinematic parameter errors and TCP position errors. Based on the fixed-point constraints, the kinematic parameter errors and TCP position errors are identified with an iterative algorithm. Compared to conventional methods, the proposed method eliminates the need for robot base-frame and hand-to-eye calibrations, shortens the error propagation chain, and makes the calibration process more accurate and convenient. A validation experiment is performed on an ABB IRB2400 robot. An optimal configuration for the number and distribution of fixed points in the robot workspace is obtained based on the experimental results. Comparative experiments reveal a significant improvement in the measuring accuracy of the robotic visual inspection system. PMID:24300597
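
    The identification step (parameter errors recovered iteratively from the constraint that every pose reaches the same fixed point) can be illustrated on a toy problem. The 2-link planar arm, its joint offsets, and the reference point below are invented; the paper's method identifies a full kinematic parameter set on a 6-DOF industrial robot:

```python
import math

# Toy fixed-point identification: a 2-link planar arm touches one fixed reference
# point in two elbow configurations; the unknown joint offsets are recovered by
# Gauss-Newton iteration with a numerical Jacobian.
L1, L2 = 0.4, 0.3                       # link lengths (m), assumed known

def fk(q1, q2, d1, d2):
    """Tool position given joint readings (q1, q2) and joint offsets (d1, d2)."""
    a = q1 + d1
    b = a + q2 + d2
    return (L1 * math.cos(a) + L2 * math.cos(b),
            L1 * math.sin(a) + L2 * math.sin(b))

def ik(x, y, elbow):
    """Nominal (offset-free) inverse kinematics, elbow = +1 or -1."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    q2 = elbow * math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

P = (0.45, 0.25)                        # fixed reference point (taken as known here)
true_d = (0.03, -0.02)                  # offsets to be identified (rad)
# Simulated encoder readings: the true angles minus the unknown offsets.
readings = [tuple(q - o for q, o in zip(ik(*P, e), true_d)) for e in (1.0, -1.0)]

def residual(d):
    """Stacked tool-position errors of all poses relative to the fixed point."""
    r = []
    for q1, q2 in readings:
        p = fk(q1, q2, *d)
        r += [p[0] - P[0], p[1] - P[1]]
    return r

d = [0.0, 0.0]
for _ in range(8):                      # Gauss-Newton iterations
    r0 = residual(d)
    J = [[0.0, 0.0] for _ in r0]
    for j in range(2):                  # forward-difference Jacobian columns
        dd = list(d)
        dd[j] += 1e-6
        rj = residual(dd)
        for i in range(len(r0)):
            J[i][j] = (rj[i] - r0[i]) / 1e-6
    # Solve the 2x2 normal equations J^T J * step = -J^T r directly.
    a11 = sum(row[0] * row[0] for row in J)
    a12 = sum(row[0] * row[1] for row in J)
    a22 = sum(row[1] * row[1] for row in J)
    b1 = -sum(row[0] * r for row, r in zip(J, r0))
    b2 = -sum(row[1] * r for row, r in zip(J, r0))
    det = a11 * a22 - a12 * a12
    d[0] += (a22 * b1 - a12 * b2) / det
    d[1] += (a11 * b2 - a12 * b1) / det
```

    Two elbow configurations at a single fixed point suffice to make both offsets observable here; the paper's optimal-configuration study answers the analogous question (how many fixed points, placed where) for the full parameter set.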

  15. Performance Evaluation of a Pose Estimation Method based on the SwissRanger SR4000

    DTIC Science & Technology

    2012-08-01

    however, not suitable for navigating a small robot. Commercially available Flash LIDAR now has sufficient accuracy for robotic application. A...Flash LIDAR simultaneously produces intensity and range images of the scene at a video frame rate. It has the following advantages over stereovision...fully dense depth data across its field-of-view. The commercially available Flash LIDAR includes the SwissRanger [17] and TigerEye 3D [18

  16. Coordinating teams of autonomous vehicles: an architectural perspective

    NASA Astrophysics Data System (ADS)

    Czichon, Cary; Peterson, Robert W.; Mettala, Erik G.; Vondrak, Ivo

    2005-05-01

    In defense-related robotics research, a mission level integration gap exists between mission tasks (tactical) performed by ground, sea, or air applications and elementary behaviors enacted by processing, communications, sensors, and weaponry resources (platform specific). The gap spans ensemble (heterogeneous team) behaviors, automatic MOE/MOP tracking, and tactical task modeling/simulation for virtual and mixed teams comprised of robotic and human combatants. This study surveys robotic system architectures, compares approaches for navigating problem/state spaces by autonomous systems, describes an architecture for an integrated, repository-based modeling, simulation, and execution environment, and outlines a multi-tiered scheme for robotic behavior components that is agent-based, platform-independent, and extendable via plug-ins. Tools for this integrated environment, along with a distributed agent framework for collaborative task performance are being developed by a U.S. Army funded SBIR project (RDECOM Contract N61339-04-C-0005).

  17. Control and Guidance of Low-Cost Robots via Gesture Perception for Monitoring Activities in the Home

    PubMed Central

    Sempere, Angel D.; Serna-Leon, Arturo; Gil, Pablo; Puente, Santiago; Torres, Fernando

    2015-01-01

This paper describes the development of a low-cost mini-robot that is controlled by visual gestures. The prototype allows a person with disabilities to perform visual inspections indoors and in domestic spaces. Such a device could be used as the operator's eyes, obviating the need for the operator to move about. The robot is equipped with a motorised webcam that is also controlled by visual gestures. This camera is used to monitor tasks in the home using the mini-robot while the operator remains quiet and motionless. The prototype was evaluated through several experiments testing the ability to use the mini-robot’s kinematics and communication systems to make it follow certain paths. The mini-robot can be programmed with specific orders and can be tele-operated by means of 3D hand gestures to enable the operator to perform movements and monitor tasks from a distance. PMID:26690448

  18. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; de la Pena, Nonny; Slater, Mel

    2016-05-25

    We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  19. Beaming into the News: A System for and Case Study of Tele-Immersive Journalism.

    PubMed

    Kishore, Sameer; Navarro, Xavi; Dominguez, Eva; De La Pena, Nonny; Slater, Mel

    2018-03-01

We show how a combination of virtual reality and robotics can be used to beam a physical representation of a person to a distant location, and describe an application of this system in the context of journalism. Full body motion capture data of a person is streamed and mapped in real time, onto the limbs of a humanoid robot present at the remote location. A pair of cameras in the robot's 'eyes' stream stereoscopic video back to the HMD worn by the visitor, and a two-way audio connection allows the visitor to talk to people in the remote destination. By fusing the multisensory data of the visitor with the robot, the visitor's 'consciousness' is transformed to the robot's body. This system was used by a journalist to interview a neuroscientist and a chef 900 miles distant, about food for the brain, resulting in an article published in the popular press.

  20. Dynamic Routing and Coordination in Multi-Agent Networks

    DTIC Science & Technology

    2016-06-10

Supported by this project, we designed innovative routing, planning and coordination strategies for robotic networks and...tasks partitioned among robots, in what order are they to be performed, and along which deterministic routes or according to which stochastic rules do...individual robots move. The fundamental novelties and our recent breakthroughs supported by this project are manifold: (1) the application

  1. Fronto-parietal coding of goal-directed actions performed by artificial agents.

    PubMed

    Kupferberg, Aleksandra; Iacoboni, Marco; Flanagin, Virginia; Huber, Markus; Kasparbauer, Anna; Baumgartner, Thomas; Hasler, Gregor; Schmidt, Florian; Borst, Christoph; Glasauer, Stefan

    2018-03-01

With advances in technology, artificial agents such as humanoid robots will soon become a part of our daily lives. For safe and intuitive collaboration, it is important to understand the goals behind their motor actions. In humans, this process is mediated by changes in activity in fronto-parietal brain areas. The extent to which these areas are activated when observing artificial agents indicates how natural and easy the interaction is. Previous studies indicated that fronto-parietal activity does not depend on whether the agent is human or artificial. However, it is unknown whether this activity is modulated by the action goal when observing grasping (a self-related action) or pointing (an other-related action) performed by an artificial agent. Therefore, we designed an experiment in which subjects observed human and artificial agents perform pointing and grasping actions aimed at two different object categories suggesting different goals. We found a signal increase in the bilateral inferior parietal lobule and the premotor cortex when tool versus food items were pointed to or grasped by both agents, probably reflecting the association of hand actions with the functional use of tools. Our results show that goal attribution engages the fronto-parietal network not only when observing a human but also a robotic agent, for both self-related and social actions. The debriefing after the experiment showed that actions of human-like artificial agents can be perceived as being goal-directed. Therefore, humans will be able to interact with service robots intuitively in various domains such as education, healthcare, public service, and entertainment. © 2017 Wiley Periodicals, Inc.

  2. Hand-Eye Calibration of Robonaut

    NASA Technical Reports Server (NTRS)

    Nickels, Kevin; Huber, Eric

    2004-01-01

NASA's Human Space Flight program depends heavily on Extra-Vehicular Activities (EVAs) performed by human astronauts. EVA is a high-risk environment that requires extensive training and ground support. In collaboration with the Defense Advanced Research Projects Agency (DARPA), NASA is conducting a ground development project to produce a robotic astronaut's assistant, called Robonaut, that could help reduce human EVA time and workload. The project described in this paper designed and implemented a hand-eye calibration scheme for Robonaut, Unit A. The intent of this calibration scheme is to improve hand-eye coordination of the robot. The basic approach is to use kinematic and stereo vision measurements, namely the joint angles self-reported by the right arm and 3-D positions of a calibration fixture as measured by vision, to estimate the transformation from Robonaut's base coordinate system to its hand coordinate system and to its vision coordinate system. Two methods of gathering data sets have been developed, along with software to support each. In the first, the system observes the robotic arm and neck angles as the robot is operated under external control, measures the 3-D position of a calibration fixture using Robonaut's stereo cameras, and logs these data. In the second, the system drives the arm and neck through a set of pre-recorded configurations, and data are again logged. Two variants of the calibration scheme have been developed. The full calibration scheme is a batch procedure that estimates all relevant kinematic parameters of the arm and neck of the robot. The daily calibration scheme estimates only joint offsets for each rotational joint on the arm and neck, which are assumed to change from day to day. The schemes have been designed to be automatic and easy to use so that the robot can be fully recalibrated when needed, such as after repair or upgrade, and can be partially recalibrated after each power cycle.
The scheme has been implemented on Robonaut Unit A and has been shown to reduce the mismatch between kinematically derived positions and visually derived positions from a mean of 13.75 cm using the previous calibration to means of 1.85 cm using a full calibration and 2.02 cm using a suboptimal but faster daily calibration. This improved calibration has already enabled the robot to more accurately reach for and grasp objects that it sees within its workspace. The system has been used to support an autonomous wrench-grasping experiment and significantly improved the workspace positioning of the hand based on visually derived wrench position estimates.
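
    At its simplest, aligning kinematically derived and visually derived 3-D positions of a calibration fixture is a rigid point-set registration problem. A standard Kabsch/Procrustes sketch of that core step follows; the point sets and ground-truth transform are synthetic, and this is a generic illustration, not Robonaut's actual calibration code (which also estimates joint offsets through the kinematic chain):

```python
import numpy as np

def rigid_transform(A, B):
    """R, t minimizing sum ||R @ a_i + t - b_i||^2 over paired 3-D points (Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)                # 3x3 cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def rotz(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotx(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Synthetic check: fixture positions in the kinematic frame (A) and the same
# points as seen by vision (B), related by an invented ground-truth transform.
rng = np.random.default_rng(1)
A = rng.uniform(-0.5, 0.5, size=(8, 3))
R_true, t_true = rotz(0.4) @ rotx(-0.3), np.array([0.10, -0.05, 0.30])
B = A @ R_true.T + t_true
R_est, t_est = rigid_transform(A, B)
```

    With noise-free correspondences the transform is recovered exactly; real measurements make it a least-squares fit, and the residual after fitting is the kind of kinematic-versus-visual mismatch the abstract quotes in centimeters.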

  3. U.S. Unmanned Aerial Systems

    DTIC Science & Technology

    2012-01-03

that time, they have been called drones, robot planes, pilotless aircraft, RPVs (remotely piloted vehicles), RPAs (remotely piloted aircraft) and...Pilotless Aircraft Makers Seek...Phantom Eye Proposed by the Boeing Phantom Works, Phantom Eye would use hydrogen-fueled automobile engines to carry a 3,000-pound payload for ten days. A

  4. When a robot is social: spatial arrangements and multimodal semiotic engagement in the practice of social robotics.

    PubMed

    Alac, Morana; Movellan, Javier; Tanaka, Fumihide

    2011-12-01

    Social roboticists design their robots to function as social agents in interaction with humans and other robots. Although we do not deny that the robot's design features are crucial for attaining this aim, we point to the relevance of spatial organization and coordination between the robot and the humans who interact with it. We recover these interactions through an observational study of a social robotics laboratory and examine them by applying a multimodal interactional analysis to two moments of robotics practice. We describe the vital role of roboticists and of the group of preverbal infants, who are involved in a robot's design activity, and we argue that the robot's social character is intrinsically related to the subtleties of human interactional moves in laboratories of social robotics. This human involvement in the robot's social agency is not simply controlled by individual will. Instead, the human-machine couplings are demanded by the situational dynamics in which the robot is lodged.

  5. Design and Experiment of Electrooculogram (EOG) System and Its Application to Control Mobile Robot

    NASA Astrophysics Data System (ADS)

    Sanjaya, W. S. M.; Anggraeni, D.; Multajam, R.; Subkhi, M. N.; Muttaqien, I.

    2017-03-01

In this paper, we design and investigate the detection of a biological signal of eye movements (the electrooculogram). To detect the electrooculogram (EOG) signal, a four-stage amplifier chain is used: a differential instrumentation amplifier, a high-pass filter (HPF) with 3 filter stages, a low-pass filter (LPF) with 3 filter stages, and a level-shifter circuit. The total gain is 1000, over a frequency range of 0.5-30 Hz. The OP07 operational amplifier IC was used for all amplification stages. The EOG signal is read as an analog input by an Arduino microcontroller and sent over a serial connection to a PC monitor running Processing® software. The results of this research show distinct signal values for different eye movements. These EOG signal differences have been applied to navigation control of a mobile robot. In this research, all communication between devices uses a Bluetooth HC-05 module.
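
    The described analog chain (three high-pass stages at 0.5 Hz, three low-pass stages at 30 Hz) is easy to prototype digitally. The sketch below uses discretized first-order RC stages in the same arrangement; the 250 Hz sampling rate and the synthetic test signal are assumptions for the demo, not values from the paper:

```python
import math

fs = 250.0                                   # Hz, assumed ADC sampling rate
dt = 1.0 / fs

def hp1(x, fc):
    """One first-order high-pass stage (discretized RC filter) with corner fc."""
    rc = 1.0 / (2.0 * math.pi * fc)
    alpha = rc / (rc + dt)
    y, y_prev, x_prev = [], 0.0, x[0]
    for s in x:
        y_prev = alpha * (y_prev + s - x_prev)
        x_prev = s
        y.append(y_prev)
    return y

def lp1(x, fc):
    """One first-order low-pass stage (discretized RC filter) with corner fc."""
    rc = 1.0 / (2.0 * math.pi * fc)
    beta = dt / (rc + dt)
    y, y_prev = [], 0.0
    for s in x:
        y_prev = y_prev + beta * (s - y_prev)
        y.append(y_prev)
    return y

def eog_chain(x):
    """3 HPF stages at 0.5 Hz, then 3 LPF stages at 30 Hz, as in the abstract."""
    for _ in range(3):
        x = hp1(x, 0.5)
    for _ in range(3):
        x = lp1(x, 30.0)
    return x

# Synthetic input: in-band eye-movement component + electrode drift + mains noise.
ts = [i * dt for i in range(1000)]
eye = [math.sin(2 * math.pi * 5 * t) for t in ts]       # 5 Hz, inside the band
drift = [2.0 * math.sin(2 * math.pi * 0.05 * t) for t in ts]
mains = [0.5 * math.sin(2 * math.pi * 60 * t) for t in ts]
out = eog_chain([a + b + c for a, b, c in zip(eye, drift, mains)])
```

    After the transient settles, the output is dominated by the 5 Hz component while the drift and mains terms are strongly attenuated, mirroring what the analog HPF/LPF cascade does before the Arduino samples the signal.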

  6. Hybrid Exploration Agent Platform and Sensor Web System

    NASA Technical Reports Server (NTRS)

    Stoffel, A. William; VanSteenberg, Michael E.

    2004-01-01

A sensor web to collect the scientific data needed to further exploration is a major and efficient asset to any exploration effort. This is true not only for lunar and planetary environments, but also for interplanetary and liquid environments. Such a system would also have myriad direct commercial spin-off applications. The Hybrid Exploration Agent Platform and Sensor Web, or HEAP-SW, is, like the ANTS concept, a Sensor Web concept. The HEAP-SW is conceptually and practically a very different system. HEAP-SW is applicable to any environment and a huge range of exploration tasks. It is a very robust, low-cost, high-return solution to a complex problem. All of the technology for initial development and implementation is currently available. The HEAP Sensor Web, or HEAP-SW, consists of three major parts: the Hybrid Exploration Agent Platforms (HEAP), the Sensor Web (SW), and the immobile Data collection and Uplink units (DU). The HEAP-SW as a whole will refer to any group of mobile agents or robots where each robot is a mobile data collection unit that spends most of its time acting in concert with all other robots, DUs in the web, and the HEAP-SW's overall Command and Control (CC) system. Each DU and robot is, however, capable of acting independently. The three parts of the HEAP-SW system are discussed in this paper. The goals of the HEAP-SW system are: 1) to maximize the amount of exploration-enhancing science data collected; 2) to minimize data loss due to system malfunctions; 3) to minimize or, possibly, eliminate the risk of total system failure; 4) to minimize the size, weight, and power requirements of each HEAP robot; 5) to minimize HEAP-SW system costs. The rest of this paper discusses how these goals are attained.

  7. Seeing Minds in Others – Can Agents with Robotic Appearance Have Human-Like Preferences?

    PubMed Central

    Martini, Molly C.; Gonzalez, Christian A.; Wiese, Eva

    2016-01-01

    Ascribing mental states to non-human agents has been shown to increase their likeability and lead to better joint-task performance in human-robot interaction (HRI). However, it is currently unclear what physical features non-human agents need to possess in order to trigger mind attribution and whether different aspects of having a mind (e.g., feeling pain, being able to move) need different levels of human-likeness before they are readily ascribed to non-human agents. The current study addresses this issue by modeling how increasing the degree of human-like appearance (on a spectrum from mechanistic to humanoid to human) changes the likelihood that mind is attributed to non-human agents. We also test whether different internal states (e.g., being hungry, being alive) need different degrees of humanness before they are ascribed to non-human agents. The results suggest that the relationship between physical appearance and the degree to which mind is attributed to non-human agents is best described by a two-linear model, with no change in mind attribution on the spectrum from mechanistic to humanoid robot, but a significant increase in mind attribution as soon as human features are included in the image. There seems to be a qualitative difference in the perception of mindful versus mindless agents, given that increasing human-like appearance alone does not increase mind attribution until a certain threshold is reached; that is, agents need to be classified as having a mind first before the addition of more human-like features significantly increases the degree to which mind is attributed to that agent. PMID:26745500
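
    A two-linear (segmented) model of this kind can be fit by a simple breakpoint search. The sketch below is a generic illustration of the model class, not the authors' analysis code, and the sample ratings are hypothetical.

```python
def fit_two_linear(x, y):
    """Fit: y is flat (= b0) up to breakpoint c, then rises linearly with slope b1.

    Brute-force search over candidate breakpoints at observed x values,
    choosing the breakpoint with the smallest sum of squared errors.
    """
    best = None
    for c in x[1:-1]:
        left_y = [yi for xi, yi in zip(x, y) if xi <= c]
        b0 = sum(left_y) / len(left_y)
        right = [(xi - c, yi - b0) for xi, yi in zip(x, y) if xi > c]
        b1 = sum(dx * dy for dx, dy in right) / sum(dx * dx for dx, _ in right)
        pred = [b0 if xi <= c else b0 + b1 * (xi - c) for xi in x]
        sse = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
        if best is None or sse < best[0]:
            best = (sse, c, b0, b1)
    return best[1:]  # (breakpoint, flat level, slope after breakpoint)

# Hypothetical mind-attribution ratings: flat from mechanistic (0) to
# humanoid (3), rising once human features appear (4-6).
human_likeness = [0, 1, 2, 3, 4, 5, 6]
mind_rating = [0.2, 0.2, 0.2, 0.2, 1.0, 1.8, 2.6]
c, b0, b1 = fit_two_linear(human_likeness, mind_rating)
```

    The breakpoint plays the role of the threshold described in the abstract: below it, added human-likeness leaves mind attribution unchanged.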

  8. Multi-agent robotic systems and applications for satellite missions

    NASA Astrophysics Data System (ADS)

    Nunes, Miguel A.

    A revolution in the space sector is happening. It is expected that in the next decade there will be more satellites launched than in the previous sixty years of space exploration. Major challenges are associated with this growth of space assets such as the autonomy and management of large groups of satellites, in particular with small satellites. There are two main objectives for this work. First, a flexible and distributed software architecture is presented to expand the possibilities of spacecraft autonomy and in particular autonomous motion in attitude and position. The approach taken is based on the concept of distributed software agents, also referred to as multi-agent robotic system. Agents are defined as software programs that are social, reactive and proactive to autonomously maximize the chances of achieving the set goals. Part of the work is to demonstrate that a multi-agent robotic system is a feasible approach for different problems of autonomy such as satellite attitude determination and control and autonomous rendezvous and docking. The second main objective is to develop a method to optimize multi-satellite configurations in space, also known as satellite constellations. This automated method generates new optimal mega-constellations designs for Earth observations and fast revisit times on large ground areas. The optimal satellite constellation can be used by researchers as the baseline for new missions. The first contribution of this work is the development of a new multi-agent robotic system for distributing the attitude determination and control subsystem for HiakaSat. The multi-agent robotic system is implemented and tested on the satellite hardware-in-the-loop testbed that simulates a representative space environment. The results show that the newly proposed system for this particular case achieves an equivalent control performance when compared to the monolithic implementation. 
In terms of computational efficiency, it is found that the multi-agent robotic system has a consistently lower CPU load of 0.29 +/- 0.03, compared to 0.35 +/- 0.04 for the monolithic implementation, a 17.1% reduction. The second contribution of this work is the development of a multi-agent robotic system for the autonomous rendezvous and docking of multiple spacecraft. To compute the maneuvers, guidance, navigation and control algorithms are implemented as part of the multi-agent robotic system. The navigation and control functions use existing algorithms, but one important contribution of this section is the introduction of a new six-degrees-of-freedom guidance method as part of the guidance, navigation and control architecture. This new method is an explicit solution to the guidance problem and is particularly useful for real-time guidance in attitude and position, as opposed to typical guidance methods that are based on numerical solutions and are therefore computationally intensive. A simulation scenario is run for docking four CubeSats deployed radially from a launch vehicle. Considering fully actuated CubeSats, the simulations show docking maneuvers that are successfully completed within 25 minutes, approximately 30% of a full orbital period in low Earth orbit. The final section investigates the problem of optimizing satellite constellations for fast revisit time and introduces a new method to generate different constellation configurations that are evaluated with a genetic algorithm. Two case studies are presented. The first is the optimization of a constellation for rapid coverage of the oceans of the globe in 24 hours or less. Results show that for an 80 km sensor swath width, 50 satellites are required to cover the oceans with a 24-hour revisit time. The second constellation configuration study focuses on optimization for rapid coverage of the North Atlantic Tracks for air traffic monitoring in 3 hours or less. The results show that for a fixed swath width of 160 km and a 3-hour revisit time, 52 satellites are required.
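
    The genetic-algorithm search over constellation configurations can be sketched with a toy fitness function. The `revisit_hours` proxy below is a made-up stand-in (its constant is chosen only so the toy reproduces a ~50-satellite answer for an 80 km swath), not the orbital-propagation evaluation used in the thesis.

```python
import random

def revisit_hours(num_sats, swath_km):
    # Toy proxy: revisit time falls as total swath coverage grows.
    # The constant 4000 is arbitrary, chosen for illustration only.
    return 24.0 * 4000.0 / (num_sats * swath_km)

def fitness(num_sats, swath_km, max_revisit):
    r = revisit_hours(num_sats, swath_km)
    if r > max_revisit:
        # Infeasible: heavily penalized, but closer-to-feasible scores better.
        return -1000.0 - (r - max_revisit)
    return -float(num_sats)  # Feasible: fewer satellites is better.

def evolve(swath_km=80.0, max_revisit=24.0, pop_size=20, generations=60, seed=1):
    """Minimal GA: rank selection on the top half, then mutate to refill."""
    rng = random.Random(seed)
    pop = [rng.randint(1, 200) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda n: fitness(n, swath_km, max_revisit), reverse=True)
        parents = pop[: pop_size // 2]
        children = [max(1, rng.choice(parents) + rng.randint(-3, 3))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=lambda n: fitness(n, swath_km, max_revisit))

best = evolve()
```

    A real evaluation would replace `revisit_hours` with orbit propagation over the target ground area, but the select-mutate-evaluate loop has the same shape.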

  9. KENNEDY SPACE CENTER, FLA. - The Mars Exploration Rover 2 (MER-2) undergoes a weight and center of gravity determination in the Payload Hazardous Servicing Facility. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-09

    KENNEDY SPACE CENTER, FLA. - The Mars Exploration Rover 2 (MER-2) undergoes a weight and center of gravity determination in the Payload Hazardous Servicing Facility. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

  10. KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare the Mars Exploration Rover 2 (MER-2) for a weight and center of gravity determination. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-09

    KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare the Mars Exploration Rover 2 (MER-2) for a weight and center of gravity determination. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

  11. KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility are preparing to determine weight and center of gravity for the Mars Exploration Rover 2 (MER-2). NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-09

    KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility are preparing to determine weight and center of gravity for the Mars Exploration Rover 2 (MER-2). NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. Launch of MER-2 is scheduled for June 5 from Cape Canaveral Air Force Station.

  12. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers prepare to mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-23

    KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers prepare to mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.

  13. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, the Mars Exploration Rover 2 (MER-2) is moved to a spin table. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. The MER-2 is scheduled to launch June 5 from Launch Pad 17-A, Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-19

    KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, the Mars Exploration Rover 2 (MER-2) is moved to a spin table. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. The MER-2 is scheduled to launch June 5 from Launch Pad 17-A, Cape Canaveral Air Force Station.

  14. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-23

    KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers mate the Mars Exploration Rover-2 (MER-2) to the third stage of a Delta II rocket for launch on June 5. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-1 (MER-B) will launch June 25.

  15. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment.

    PubMed

    Yao, Yao; Storme, Veronique; Marchal, Kathleen; Van de Peer, Yves

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population.
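
    The flavour of a GRN-style controller can be sketched in a few lines. The network wiring, weights, and sigmoid update below are generic textbook choices for illustration, not the authors' actual genome encoding.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def grn_step(expression, weights, cue):
    """One synchronous update of a toy gene regulatory network.

    expression: current expression level of each gene, in [0, 1]
    weights[i][j]: regulatory effect of gene j on gene i (+ activates, - represses)
    cue: environmental input added to every gene's net regulation
    """
    n = len(expression)
    return [sigmoid(sum(weights[i][j] * expression[j] for j in range(n)) + cue)
            for i in range(n)]

# Two genes with hypothetical wiring: gene 0 activates gene 1,
# gene 1 represses gene 0. An environmental cue switches the network on.
w = [[0.0, -2.0],
     [3.0, 0.0]]
state = [0.5, 0.5]
for _ in range(10):
    state = grn_step(state, w, cue=1.0)
actuator_signal = state[1]  # e.g. drive an actuator from gene 1's expression
```

    In the paper's design, the genome statically encodes the network while an agent-based layer runs the active regulation; here both are collapsed into one update function for brevity.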

  16. Emergent adaptive behaviour of GRN-controlled simulated robots in a changing environment

    PubMed Central

    Yao, Yao; Storme, Veronique; Marchal, Kathleen

    2016-01-01

    We developed a bio-inspired robot controller combining an artificial genome with an agent-based control system. The genome encodes a gene regulatory network (GRN) that is switched on by environmental cues and, following the rules of transcriptional regulation, provides output signals to actuators. Whereas the genome represents the full encoding of the transcriptional network, the agent-based system mimics the active regulatory network and signal transduction system also present in naturally occurring biological systems. Using such a design that separates the static from the conditionally active part of the gene regulatory network contributes to a better general adaptive behaviour. Here, we have explored the potential of our platform with respect to the evolution of adaptive behaviour, such as preying when food becomes scarce, in a complex and changing environment and show through simulations of swarm robots in an A-life environment that evolution of collective behaviour likely can be attributed to bio-inspired evolutionary processes acting at different levels, from the gene and the genome to the individual robot and robot population. PMID:28028477

  17. Agent Based Intelligence in a Tetrahedral Rover

    NASA Technical Reports Server (NTRS)

    Phelps, Peter; Truszkowski, Walt

    2007-01-01

    A tetrahedron is a 4-node 6-strut pyramid structure which is being used by the NASA - Goddard Space Flight Center as the basic building block for a new approach to robotic motion. The struts are extendable; it is by the sequence of activities: strut-extension, changing the center of gravity and falling that the tetrahedron "moves". Currently, strut-extension is handled by human remote control. There is an effort underway to make the movement of the tetrahedron autonomous, driven by an attempt to achieve a goal. The approach being taken is to associate an intelligent agent with each node. Thus, the autonomous tetrahedron is realized as a constrained multi-agent system, where the constraints arise from the fact that between any two agents there is an extendible strut. The hypothesis of this work is that, by proper composition of such automated tetrahedra, robotic structures of various levels of complexity can be developed which will support more complex dynamic motions. This is the basis of the new approach to robotic motion which is under investigation. A Java-based simulator for the single tetrahedron, realized as a constrained multi-agent system, has been developed and evaluated. This paper reports on this project and presents a discussion of the structure and dynamics of the simulator.
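
    The center-of-gravity reasoning behind strut-driven motion can be illustrated with a few lines of geometry. The node coordinates and the one-dimensional tip test below are hypothetical simplifications of the actual tetrahedral walker.

```python
def center_of_gravity(nodes):
    """Centroid of equal-mass nodes, each node an (x, y, z) tuple."""
    n = len(nodes)
    return tuple(sum(p[i] for p in nodes) / n for i in range(3))

def will_tip(nodes, base_indices):
    """Crude 1-D tip test: the structure starts to fall when the centroid's x
    coordinate moves past the x-extent of the base nodes (a simplification of
    a full support-polygon check)."""
    cx = center_of_gravity(nodes)[0]
    xs = [nodes[i][0] for i in base_indices]
    return cx < min(xs) or cx > max(xs)

# Hypothetical tetrahedron: three base nodes and one apex. Extending a strut
# moves the apex sideways until the centroid leaves the base footprint.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 0.9, 0.0)]
apex_retracted = (0.5, 0.3, 0.8)
apex_extended = (3.5, 0.3, 0.8)
stable = will_tip(base + [apex_retracted], base_indices=[0, 1, 2])
tipping = will_tip(base + [apex_extended], base_indices=[0, 1, 2])
```

    This is the "strut-extension, shift of center of gravity, fall" sequence in miniature: an agent at each node would decide strut lengths so the centroid crosses the footprint in the desired direction.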

  18. The Role of Reciprocity in Verbally Persuasive Robots.

    PubMed

    Lee, Seungcheol Austin; Liang, Yuhua Jake

    2016-08-01

    The current research examines the persuasive effects of reciprocity in the context of human-robot interaction. This is an important theoretical and practical extension of persuasive robotics, testing (1) whether robots can utilize verbal requests and (2) whether robots can utilize persuasive mechanisms (e.g., reciprocity) to gain human compliance. Participants played a trivia game with a robot teammate. The ostensibly autonomous robot helped (or failed to help) the participants by providing the correct (vs. incorrect) trivia answers. Then, the robot directly asked participants to complete a 15-minute pattern-recognition task. Compared to no help, results showed that a robot's prior helping behavior significantly increased the likelihood of compliance (60 percent vs. 33 percent). Interestingly, participants' evaluations of the robot (i.e., competence, warmth, and trustworthiness) did not predict compliance. These results also provided an insightful comparison showing that participants complied at similar rates with the robot and with computer agents. This result documents clear, empirically powerful potential for the role of verbal messages in persuasive robotics.
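
    The reported compliance difference (60 percent vs. 33 percent) can be checked with a standard two-proportion z-test. The group sizes below (30 per condition) are hypothetical, since the abstract does not report them.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two independent proportions,
    using the pooled standard error."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 18/30 complied after the robot helped, 10/30 otherwise.
z = two_proportion_z(18, 30, 10, 30)
```

    With these illustrative counts the difference sits around z ≈ 2, i.e. near the conventional p < .05 boundary; the study's actual sample sizes would determine the real significance.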

  19. A pilot study: the efficacy of virgin coconut oil as ocular rewetting agent on rabbit eyes.

    PubMed

    Mutalib, Haliza Abdul; Kaur, Sharanjeet; Ghazali, Ahmad Rohi; Chinn Hooi, Ng; Safie, Nor Hasanah

    2015-01-01

    Purpose. An open-label pilot study of virgin coconut oil (VCO) was conducted to determine the safety of the agent as ocular rewetting eye drops on rabbits. Methods. Efficacy of the VCO was assessed by measuring NIBUT, anterior eye assessment, corneal staining, pH, and Schirmer value before instillation and at 30 min, 60 min, and two weeks after instillation. The Friedman test was used to analyse changes in all the measured variables over the period. Results. Only conjunctival redness with instillation of the saline agent showed a significant difference over the period (P < 0.05). However, further statistical analysis showed no significant difference at 30 min, 60 min, and two weeks compared to the initial measurement (P > 0.05). There were no changes in NIBUT, limbal redness, palpebral conjunctival redness, corneal staining, pH, or Schirmer value over the period for any agent (P > 0.05). Conclusion. VCO acts as a safe rewetting eye drop, having shown no significant difference in the measured parameters compared to a commercial brand of eye drops and saline. These data suggest that VCO is safe to use as an ocular rewetting agent in humans.
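
    The Friedman test used here compares ranks of repeated measurements within each subject. A minimal version of the statistic (assuming no tied values within a subject) can be sketched as follows, with made-up sample data rather than the study's measurements.

```python
def friedman_statistic(data):
    """Friedman chi-square statistic for data[subject][condition].

    Each subject's measurements are ranked across conditions (1 = smallest);
    ties are not handled in this sketch.
    """
    n = len(data)      # subjects
    k = len(data[0])   # repeated conditions (e.g. baseline, 30 min, 60 min)
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# Made-up Schirmer-style values for 3 rabbits over 3 time points.
sample = [[10, 12, 15],
          [9, 11, 14],
          [8, 13, 16]]
q = friedman_statistic(sample)
```

    The statistic is compared against a chi-square distribution with k - 1 degrees of freedom; in practice a library routine such as SciPy's `friedmanchisquare` would also handle ties and report the p-value.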

  20. A Pilot Study: The Efficacy of Virgin Coconut Oil as Ocular Rewetting Agent on Rabbit Eyes

    PubMed Central

    Mutalib, Haliza Abdul; Kaur, Sharanjeet; Ghazali, Ahmad Rohi; Chinn Hooi, Ng; Safie, Nor Hasanah

    2015-01-01

    Purpose. An open-label pilot study of virgin coconut oil (VCO) was conducted to determine the safety of the agent as ocular rewetting eye drops on rabbits. Methods. Efficacy of the VCO was assessed by measuring NIBUT, anterior eye assessment, corneal staining, pH, and Schirmer value before instillation and at 30 min, 60 min, and two weeks after instillation. The Friedman test was used to analyse changes in all the measured variables over the period. Results. Only conjunctival redness with instillation of the saline agent showed a significant difference over the period (P < 0.05). However, further statistical analysis showed no significant difference at 30 min, 60 min, and two weeks compared to the initial measurement (P > 0.05). There were no changes in NIBUT, limbal redness, palpebral conjunctival redness, corneal staining, pH, or Schirmer value over the period for any agent (P > 0.05). Conclusion. VCO acts as a safe rewetting eye drop, having shown no significant difference in the measured parameters compared to a commercial brand of eye drops and saline. These data suggest that VCO is safe to use as an ocular rewetting agent in humans. PMID:25802534

  1. Advancements in anti-inflammatory therapy for dry eye syndrome.

    PubMed

    McCabe, Erin; Narayanan, Srihari

    2009-10-01

    The goal of this literature review is to discuss recent discoveries in the pathophysiology of dry eye and the subsequent evolution of diagnostic and management techniques. The mechanisms of various anti-inflammatory treatments are reviewed, and the efficacy of common pharmacologic agents is assessed. Anti-inflammatory therapy is evaluated in terms of its primary indications, target population, and utility within a clinical setting. The Medline PubMed database and the World Wide Web were searched for current information regarding dry eye prevalence, pathogenesis, diagnosis, and management. After an analysis of the literature, major concepts were integrated to generate an updated portrayal of the status of dry eye syndrome. Inflammation appears to play a key role in perpetuating and sustaining dry eye. Discoveries of inflammatory markers found within the corneal and conjunctival epithelium of dry eye patients have triggered recent advancements in therapy. Pharmacologic anti-inflammatory therapy for dry eye includes 2 major categories: corticosteroids and immunomodulatory agents. Fatty acid and androgen supplementation and oral antibiotics have also shown promise in dry eye therapy because of their anti-inflammatory effects. Anti-inflammatory pharmacologic agents have shown great success in patients with moderate to severe dry eye when compared with alternative treatment modalities. A deeper understanding of the link between inflammation and dry eye validates the utilization of anti-inflammatory therapy in everyday optometric practice.

  2. Dry eye disease caused by viral infection: review.

    PubMed

    Alves, Monica; Angerami, Rodrigo Nogueira; Rocha, Eduardo Melani

    2013-01-01

    Dry eye disease and ocular surface disorders may be caused or worsened by viral agents. There are several known and suspected viruses associated with ocular surface diseases. The possible pathogenic mechanisms of virus-related dry eye disease are presented herein. This review serves to reinforce the importance of ophthalmologists as healthcare professionals able to diagnose a potentially large number of patients infected with highly prevalent viral agents.

  3. Cellular proliferation and regeneration following tissue damage. Progress report. [Eyes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, C.V.

    1976-10-01

    Results are reported from a study of wound healing in tissues of the eye, particularly lens, cornea, and surrounding tissues. The reactions of these tissues to mechanical injuries, as well as injuries induced by chemotoxic agents were studied. It is postulated that a better understanding of the basic reactions of the eye to injurious agents may be of importance in the evaluation of potential environmental hazards.

  4. The Fourth Law of Robotics.

    ERIC Educational Resources Information Center

    Markoff, John

    1994-01-01

    Discusses intelligent software agents, or knowledge robots (knowbots), and the impact they have on the Internet. Topics addressed include ethical dilemmas; problems created by rapid growth on the Internet; new technologies that are amplifying growth; and a shift to a market economy and resulting costs. (LRW)

  5. Proceedings 3rd NASA/IEEE Workshop on Formal Approaches to Agent-Based Systems (FAABS-III)

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael (Editor); Rash, James (Editor); Truszkowski, Walt (Editor); Rouff, Christopher (Editor)

    2004-01-01

    These proceedings contain 18 papers and 4 poster presentations, covering topics such as multi-agent systems, agent-based control, formalism, and norms, as well as physical and biological models of agent-based systems. Applications presented in the proceedings include systems analysis, software engineering, computer networks, and robot control.

  6. Brahms Mobile Agents: Architecture and Field Tests

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron

    2002-01-01

    We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.

  7. Intrinsically motivated reinforcement learning for human-robot interaction in the real-world.

    PubMed

    Qureshi, Ahmed Hussain; Nakamura, Yutaka; Yoshikawa, Yuichiro; Ishiguro, Hiroshi

    2018-03-26

    For natural social human-robot interaction, it is essential for a robot to learn human-like social skills. However, learning such skills is notoriously hard due to the limited availability of direct instruction from people to teach a robot. In this paper, we propose an intrinsically motivated reinforcement learning framework in which an agent receives intrinsic motivation-based rewards through an action-conditional predictive model. Using the proposed method, the robot learned social skills from human-robot interaction experiences gathered in real, uncontrolled environments. The results indicate that the robot not only acquired human-like social skills but also made more human-like decisions, on a test dataset, than a robot which received direct rewards for task achievement. Copyright © 2018 Elsevier Ltd. All rights reserved.
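
    The intrinsic reward idea can be sketched as a curiosity-style signal derived from the predictive model's error. This is one common formulation, used here purely for illustration; the paper's exact reward definition may differ.

```python
def intrinsic_reward(predicted_next_state, actual_next_state):
    """Curiosity-style intrinsic reward: squared error of the action-conditional
    prediction of the next state. High error = novel situation = high reward."""
    return sum((p - a) ** 2 for p, a in zip(predicted_next_state, actual_next_state))

def total_reward(extrinsic, predicted, actual, beta=0.1):
    """Blend the task reward with the intrinsic bonus (beta is a hypothetical weight)."""
    return extrinsic + beta * intrinsic_reward(predicted, actual)

# A surprising transition earns a larger bonus than a well-predicted one.
r_novel = total_reward(0.0, predicted=[0.0, 0.0], actual=[1.0, 1.0])
r_known = total_reward(0.0, predicted=[1.0, 1.0], actual=[1.0, 1.0])
```

    As the predictive model improves on familiar interactions, the bonus fades there and pushes exploration toward less predictable social situations.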

  8. Rhythm Patterns Interaction - Synchronization Behavior for Human-Robot Joint Action

    PubMed Central

    Mörtl, Alexander; Lorenz, Tamara; Hirche, Sandra

    2014-01-01

    Interactive behavior among humans is governed by the dynamics of movement synchronization in a variety of repetitive tasks. This requires the interaction partners to perform, for example, rhythmic limb swinging or even goal-directed arm movements. Inspired by that essential feature of human interaction, we present a novel concept and design methodology to synthesize goal-directed synchronization behavior for robotic agents in repetitive joint action tasks. The agents’ tasks are described by closed movement trajectories and interpreted as limit cycles, for which instantaneous phase variables are derived based on oscillator theory. Events segmenting the trajectories into multiple primitives are introduced as anchoring points for enhanced synchronization modes. Utilizing both continuous phases and discrete events in a unifying view, we design a continuous dynamical process synchronizing the derived modes. Inverse to the derivation of phases, we also address the generation of goal-directed movements from the behavioral dynamics. The developed concept is implemented on an anthropomorphic robot. To evaluate the concept, an experiment is designed and conducted in which the robot performs a prototypical pick-and-place task jointly with human partners. The effectiveness of the designed behavior is successfully evidenced by objective measures of phase and event synchronization. Feedback gathered from the participants of our exploratory study suggests a subjectively pleasant sense of interaction created by the interactive behavior. The results highlight potential applications of the synchronization concept both in motor coordination among robotic agents and in enhanced social interaction between humanoid agents and humans. PMID:24752212
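
    The continuous phase-synchronization dynamics can be illustrated with two coupled phase oscillators (a Kuramoto-style model). The coupling gain and step size below are arbitrary illustrative values, not the gains used on the robot.

```python
import math

def step_phases(phi, omega, k, dt):
    """One Euler step of two sinusoidally coupled phase oscillators.

    phi: current phases (rad), omega: natural frequencies (rad/s),
    k: coupling gain pulling the phases together.
    """
    d0 = omega[0] + k * math.sin(phi[1] - phi[0])
    d1 = omega[1] + k * math.sin(phi[0] - phi[1])
    return [phi[0] + dt * d0, phi[1] + dt * d1]

# Same natural frequency, phases initially 1 rad apart:
# the coupling pulls the two cycles into phase.
phi = [0.0, 1.0]
for _ in range(1000):
    phi = step_phases(phi, omega=[2.0, 2.0], k=1.0, dt=0.01)
phase_error = abs(phi[0] - phi[1])
```

    In the paper's richer scheme the phases are derived from closed task trajectories and anchored at discrete events, but the underlying attraction toward zero phase error has this form.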

  9. Generating and Describing Affective Eye Behaviors

    NASA Astrophysics Data System (ADS)

    Mao, Xia; Li, Zheng

    The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate, and saccade). A rule-based approach to generating primary emotions (joyful, sad, angry, afraid, disgusted, and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions), utilizing the MPEG-4 FAPs (facial animation parameters), is introduced. Meanwhile, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language), which enables authors to describe and generate emotional eye movement of virtual agents, is proposed.

  10. Flocking algorithm for autonomous flying robots.

    PubMed

    Virágh, Csaba; Vásárhelyi, Gábor; Tarcai, Norbert; Szörényi, Tamás; Somorjai, Gergő; Nepusz, Tamás; Vicsek, Tamás

    2014-06-01

    Animal swarms displaying a variety of typical flocking patterns would not exist without the underlying safe, optimal and stable dynamics of the individuals. The emergence of these universal patterns can be efficiently reconstructed with agent-based models. If we want to reproduce these patterns with artificial systems, such as autonomous aerial robots, agent-based models can also be used in their control algorithms. However, finding the proper algorithms and thus understanding the essential characteristics of the emergent collective behaviour requires thorough and realistic modeling of the robot and also the environment. In this paper, we first present an abstract mathematical model of an autonomous flying robot. The model takes into account several realistic features, such as time delay and locality of communication, inaccuracy of the on-board sensors and inertial effects. We present two decentralized control algorithms. One is based on a simple self-propelled flocking model of animal collective motion, the other is a collective target tracking algorithm. Both algorithms contain a viscous friction-like term, which aligns the velocities of neighbouring agents parallel to each other. We show that this term can be essential for reducing the inherent instabilities of such a noisy and delayed realistic system. We discuss simulation results on the stability of the control algorithms, and perform real experiments to show the applicability of the algorithms on a group of autonomous quadcopters. In our case, bio-inspiration works in two ways. On the one hand, the whole idea of trying to build and control a swarm of robots comes from the observation that birds tend to flock to optimize their behaviour as a group. On the other hand, by using a realistic simulation framework and studying the group behaviour of autonomous robots we can learn about the major factors influencing the flight of bird flocks.
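
    The viscous-friction-like alignment term that both control algorithms share can be sketched as a neighbour-velocity averaging rule; this is a generic flocking-model fragment with illustrative names and gains, not the authors' exact control law:

```python
def alignment_term(velocities, neighbor_ids, i, gain):
    """Viscous-friction-like term: accelerate agent i toward the mean
    velocity of its neighbours, damping relative velocity differences."""
    if not neighbor_ids:
        return (0.0, 0.0)
    mvx = sum(velocities[j][0] for j in neighbor_ids) / len(neighbor_ids)
    mvy = sum(velocities[j][1] for j in neighbor_ids) / len(neighbor_ids)
    return (gain * (mvx - velocities[i][0]),
            gain * (mvy - velocities[i][1]))

# Agent 0 flies east while its only neighbour flies north; the term steers
# agent 0's velocity toward its neighbour's, reducing their difference.
accel = alignment_term([(1.0, 0.0), (0.0, 1.0)], neighbor_ids=[1], i=0, gain=0.5)
```

    Because the term vanishes when neighbouring velocities are equal, it damps oscillations caused by noise and delay without biasing the flock's common velocity.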

  11. Acquisition of Autonomous Behaviors by Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Peters, R. A., II; Sarkar, N.; Bodenheimer, R. E.; Brown, E.; Campbell, C.; Hambuchen, K.; Johnson, C.; Koku, A. B.; Nilas, P.; Peng, J.

    2005-01-01

    Our research achievements under the NASA-JSC grant contributed significantly to the following areas. Multi-agent robot control architecture, the Intelligent Machine Architecture (IMA): the Vanderbilt team received a Space Act Award for this research from NASA JSC in October 2004. Cognitive control and the Self Agent: cognitive control in humans is the ability to consciously manipulate thoughts and behaviors, using attention to deal with conflicting goals and demands. We have been updating the IMA Self Agent toward this goal. If the opportunity arises, we would like to work with NASA to empower Robonaut to perform cognitive control. Applications: 1. SES for Robonaut; 2. Robonaut fault diagnostic system; 3. ISAC behavior generation and learning; 4. Segway research.

  12. Robot transparency, trust and utility

    NASA Astrophysics Data System (ADS)

    Wortham, Robert H.; Theodorou, Andreas

    2017-07-01

    As robot reasoning becomes more complex, debugging based solely on observable behaviour becomes increasingly hard, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.

  13. A flexible 3D laser scanning system using a robotic arm

    NASA Astrophysics Data System (ADS)

    Fei, Zixuan; Zhou, Xiang; Gao, Xiaofei; Zhang, Guanliang

    2017-06-01

    In this paper, we present a flexible 3D scanning system based on a MEMS scanner mounted on an industrial arm with a turntable. This system has 7 degrees of freedom and is able to conduct a full-field scan from any angle, making it suitable for scanning objects with complex shapes. Existing non-contact 3D scanning systems usually use a laser scanner that projects a fixed stripe, mounted on a coordinate measuring machine (CMM) or industrial robot, and they cannot perform path planning without CAD models. The 3D scanning system presented in this paper can scan objects without CAD models, and we introduce the corresponding path-planning method. We also propose a practical approach to calibrating the hand-eye system based on binocular stereo vision, and we analyze the errors of the hand-eye calibration.

  14. Towards Vision-Based Control of a Handheld Micromanipulator for Retinal Cannulation in an Eyeball Phantom

    PubMed Central

    Becker, Brian C.; Yang, Sungwook; MacLachlan, Robert A.; Riviere, Cameron N.

    2012-01-01

    Injecting clot-busting drugs such as t-PA into tiny vessels thinner than a human hair in the eye is a challenging procedure, especially since the vessels lie directly on top of the delicate and easily damaged retina. Various robotic aids have been proposed with the goal of increasing safety by removing tremor and increasing precision with motion scaling. We have developed a fully handheld micromanipulator, Micron, that has demonstrated reduced tremor when cannulating porcine retinal veins in an “open sky” scenario. In this paper, we present work towards handheld robotic cannulation with the goal of vision-based virtual fixtures guiding the tip of the cannula to the vessel. Using a realistic eyeball phantom, we address sclerotomy constraints, eye movement, and non-planar retina. Preliminary results indicate a handheld micromanipulator aided by visual control is a promising solution to retinal vessel occlusion. PMID:24649479

  15. Self-Organizing Map With Time-Varying Structure to Plan and Control Artificial Locomotion.

    PubMed

    Araujo, Aluizio F R; Santana, Orivaldo V

    2015-08-01

    This paper presents an algorithm, the self-organizing map-state trajectory generator (SOM-STG), to plan and control legged robot locomotion. SOM-STG is based on an SOM with a time-varying structure, characterized by autonomously constructing closed state trajectories from an arbitrary number of robot postures. Each trajectory represents a cyclical movement of the limbs of an animal. SOM-STG was designed to possess important features of a central pattern generator, such as rhythmic pattern generation, synchronization between limbs, and swapping between gaits on a single command. Data acquisition for SOM-STG is based on learning by demonstration, in which the data are obtained from different demonstrator agents. SOM-STG can construct one or more gaits for a simulated six-legged robot, control the robot with any of the learned gaits, and smoothly swap gaits. In addition, SOM-STG can learn to construct a state trajectory from observing an animal in locomotion; in this paper, a dog is the demonstrator agent.

  16. How Albot0 finds its way home: a novel approach to cognitive mapping using robots.

    PubMed

    Yeap, Wai K

    2011-10-01

    Much of what we know about cognitive mapping comes from observing how biological agents behave in their physical environments, and several of these ideas have been implemented on robots imitating such a process. In this paper a novel approach to cognitive mapping is presented whereby robots are treated as a species of their own and their cognitive mapping is investigated. Such robots are referred to as Albots. The design of the first Albot, Albot0, is presented. Albot0 computes an imprecise map and employs a novel method to find its way home. Both the map and the return-home algorithm exhibit characteristics commonly found in biological agents. What we have learned from Albot0's cognitive mapping is discussed. One major lesson is that the spatiality in a cognitive map affords us rich and useful information; this argues against recent suggestions that the notion of a cognitive map is not a useful one. Copyright © 2011 Cognitive Science Society, Inc.

  17. A probabilistic model of overt visual attention for cognitive robots.

    PubMed

    Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G

    2010-10-01

    Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which is accompanied by head and eye movements of the robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of the content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement, and their effects are therefore not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
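
    As a toy illustration of Bayesian focus selection (not the paper's particle-filter model, whose structure is not given here), the next focus of attention can be chosen as the maximum of a discrete posterior over candidate stimulus locations:

```python
def select_focus(prior, likelihood):
    """Fuse a top-down prior with bottom-up evidence over candidate stimulus
    locations; return the MAP location and the normalized posterior."""
    posterior = {loc: prior[loc] * likelihood.get(loc, 0.0) for loc in prior}
    z = sum(posterior.values())
    posterior = {loc: p / z for loc, p in posterior.items()}
    return max(posterior, key=posterior.get), posterior

# A visually salient location ('left') wins over an equal-probability prior.
focus, post = select_focus({'left': 0.5, 'right': 0.5},
                           {'left': 0.8, 'right': 0.2})
```

    A particle filter generalizes this step to continuous camera coordinates and propagates the posterior through each head movement's coordinate transformation.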

  18. Robust Kalman filtering cooperated Elman neural network learning for vision-sensing-based robotic manipulation with global stability.

    PubMed

    Zhong, Xungao; Zhong, Xunyu; Peng, Xiafu

    2013-10-08

    In this paper, a global-state-space visual servoing scheme is proposed for uncalibrated, model-independent robotic manipulation. The scheme is based on robust Kalman filtering (KF) in conjunction with Elman neural network (ENN) learning techniques. The global mapping between the vision space and the robotic workspace is learned using an ENN; this learned mapping is shown to be an approximate estimate of the Jacobian in global space. In the testing phase, the desired Jacobian is obtained by using a robust KF to improve the ENN learning result, so as to achieve precise convergence of the robot to the desired pose. Meanwhile, the ENN weights are updated (re-trained) using a new input-output data pair (obtained from the KF cycle) to ensure globally stable manipulation. Thus, our method, which requires neither camera nor model parameters, avoids the performance degradation caused by camera calibration and modeling errors. To demonstrate the proposed scheme's performance, various simulation and experimental results are presented using a six-degree-of-freedom robotic manipulator with an eye-in-hand configuration.
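
    Once a Jacobian estimate is available (here, learned and KF-refined), the servoing update itself is standard image-based control; the minimal 2-D sketch below is generic visual-servoing math with illustrative names, not the paper's KF/ENN implementation:

```python
def servo_step(J, error, lam):
    """One proportional visual-servoing update q_dot = -lam * inv(J) @ e,
    for a 2x2 image-Jacobian estimate J and image-space error e."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    inv = [[ J[1][1] / det, -J[0][1] / det],
           [-J[1][0] / det,  J[0][0] / det]]
    return [-lam * (inv[r][0] * error[0] + inv[r][1] * error[1])
            for r in (0, 1)]

# With an identity Jacobian, the command reduces to -lam * error, driving
# the image-space error exponentially toward zero.
q_dot = servo_step([[1.0, 0.0], [0.0, 1.0]], error=[2.0, 4.0], lam=0.5)
```

    The quality of the Jacobian estimate determines convergence; that is why the scheme filters the learned mapping with a robust KF before each update.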

  19. The multi-criteria optimization for the formation of the multiple-valued logic model of a robotic agent

    NASA Astrophysics Data System (ADS)

    Bykovsky, A. Yu; Sherbakov, A. A.

    2016-08-01

    The C-valued Allen-Givone algebra is an attractive tool for modeling a robotic agent, but it requires the consensus method of minimization for the simplification of logic expressions. This procedure substitutes the maximal truth value for some undefined states of the function, thus extending the initially given truth table. This in turn creates the problem of different formal representations for the same initially given function. Multi-criteria optimization is proposed for the deliberate choice of undefined states and for model formation.

  20. What pressure is exerted on the retina by heavy tamponade agents?

    PubMed

    Wong, David; Williams, Rachel; Stappler, Theodor; Groenewald, Carl

    2005-05-01

    Histological changes in the retina during the use of heavy tamponade agents have been linked with the pressure on the retina caused by the increased specific gravity of the agent. This paper calculates the possible increases in pressure due to these agents and questions the validity of this argument. A model eye chamber was used to make measurements of the shape of F6H8 bubbles, with incrementally increasing volumes, and thus calculate the maximum possible increase in pressure under the tamponade agent. The maximum increase in pressure under an F6H8 tamponade which completely fills an eye with a diameter of 2.2 cm would be 0.52 mmHg. This increase in pressure is within normal diurnal pressure changes in the eye; therefore, it would seem unlikely that such an increase could cause the histological changes observed. With increasing volumes of a heavy tamponade agent, aqueous is excluded from a greater area of retina. This could account for the pathological changes reported.
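
    The abstract's pressure figure is consistent with a simple hydrostatic estimate, Δp = (ρ_tamponade − ρ_aqueous) · g · h; the density values below are assumed textbook figures, not taken from the paper:

```python
# Hydrostatic excess pressure under a heavy tamponade bubble that completely
# fills an eye of diameter 2.2 cm (the case considered in the abstract).
rho_f6h8 = 1330.0     # kg/m^3, approximate density of F6H8 (assumption)
rho_aqueous = 1006.0  # kg/m^3, approximate density of aqueous humour (assumption)
g = 9.81              # m/s^2, gravitational acceleration
h = 0.022             # m, eye diameter from the abstract

delta_p_pa = (rho_f6h8 - rho_aqueous) * g * h
delta_p_mmhg = delta_p_pa / 133.322  # 1 mmHg = 133.322 Pa
# delta_p_mmhg is approximately 0.52, matching the figure in the abstract
```

    An excess of roughly half a millimetre of mercury is well within normal diurnal intraocular pressure variation, which is the basis of the paper's argument.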

  1. SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.

    PubMed

    Jimenez-Romero, Cristian; Johnson, Jeffrey

    2017-01-01

    The scientific interest attracted by spiking neural networks (SNNs) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to more sophisticated and biologically accurate Hodgkin-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent, and (4) programming the appropriate interface in the robot or agent to use the neural controller. Accomplishing these tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using NetLogo, a multi-agent simulation and programming environment (educational software that simplifies the study of, and experimentation with, complex systems). The engine proposed and implemented in NetLogo for the simulation of a functional model of an SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
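
    A simplified integrate-and-fire engine of the kind described above can be sketched as a leaky I&F update; parameters are illustrative, not SpikingLab's actual defaults:

```python
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, leak=0.1):
    """One update of a leaky integrate-and-fire neuron: accumulate input
    current, leak toward the resting potential, and spike-and-reset when
    the membrane potential crosses threshold."""
    v = v + i_in - leak * (v - v_rest)
    if v >= v_thresh:
        return v_rest, True   # spike, then reset to resting potential
    return v, False

# Drive the neuron with a constant input current and count spikes.
v, spikes = 0.0, 0
for _ in range(20):
    v, fired = lif_step(v, i_in=0.15)
    spikes += fired
```

    This minimal dynamic, extended with synaptic delays and STDP weight updates, is enough to wire up small sensor-to-actuator circuits like the artificial insect in the paper.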

  2. Laser assisted robotic surgery in cornea transplantation

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-03-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems; although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.

  3. A robotic platform for laser welding of corneal tissue

    NASA Astrophysics Data System (ADS)

    Rossi, Francesca; Micheletti, Filippo; Magni, Giada; Pini, Roberto; Menabuoni, Luca; Leoni, Fabio; Magnani, Bernardo

    2017-07-01

    Robotic surgery is a reality in several surgical fields, such as gastrointestinal surgery. In ophthalmic surgery, the required high spatial precision has limited the application of robotic systems; although several systems have been designed in the last 10 years, only a few applications in retinal surgery have been tested in animal models. The combination of photonics and robotics can open new frontiers in minimally invasive surgery by improving precision, reducing tremor, amplifying scale of motion, and automating the procedure. In this manuscript we present preliminary results in developing a vision-guided robotic platform for laser-assisted anterior eye surgery. The robotic console is composed of a robotic arm equipped with an "end effector" designed to deliver laser light to the anterior corneal surface. The main intended application is laser welding of corneal tissue in laser-assisted penetrating keratoplasty and endothelial keratoplasty. The console is equipped with an integrated vision system. The experiment originates from a clear medical demand to improve the efficacy of different surgical procedures: once the prototype is optimized, other surgical areas, such as neurosurgery, urology and spinal surgery, will be included in its applications.

  4. Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers.

    PubMed

    Expert, Fabien; Ruffier, Franck

    2015-02-26

    Two bio-inspired guidance principles involving no reference frame are presented here and were implemented in a rotorcraft, which was equipped with panoramic optic flow (OF) sensors but (as in flying insects) no accelerometer. To test these two guidance principles, we built a tethered tandem rotorcraft called BeeRotor (80 grams), which was tested flying along a high-roofed tunnel. The aerial robot adjusts its pitch and hence its speed, hugs the ground and lands safely without any need for an inertial reference frame. The rotorcraft's altitude and forward speed are adjusted via two OF regulators piloting the lift and the pitch angle on the basis of the common-mode and differential rotor speeds, respectively. The robot equipped with two wide-field OF sensors was tested in order to assess the performances of the following two systems of guidance involving no inertial reference frame: (i) a system with a fixed eye orientation based on the curved artificial compound eye (CurvACE) sensor, and (ii) an active system of reorientation based on a quasi-panoramic eye which constantly realigns its gaze, keeping it parallel to the nearest surface followed. Safe automatic terrain following and landing were obtained with CurvACE under dim light to daylight conditions and the active eye-reorientation system over rugged, changing terrain, without any need for an inertial reference frame.
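
    The optic-flow-regulator idea, where holding ventral optic flow (forward speed divided by height) at a setpoint makes the craft hug the terrain, can be sketched as a proportional loop; the gains and values are illustrative, not BeeRotor's actual controller:

```python
def climb_command(v, h, of_setpoint, gain):
    """Proportional OF regulator: climb when the ventral optic flow v/h is
    above its setpoint (too close to the ground), descend when below."""
    return gain * (v / h - of_setpoint)

# At constant forward speed, regulating optic flow regulates height:
# starting at 2 m, the craft settles at v / of_setpoint = 1 m.
v, h, dt = 1.0, 2.0, 0.1
for _ in range(500):
    h += dt * climb_command(v, h, of_setpoint=1.0, gain=0.5)
```

    Because the regulated quantity is a ratio of speed to height, no inertial reference frame or accelerometer is required, which is the central claim of the paper.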

  5. Eye care for patients receiving neuromuscular blocking agents or propofol during mechanical ventilation.

    PubMed

    Lenart, S B; Garrity, J A

    2000-05-01

    The presence of a corneal reflex and the ability to maintain eye closure are instrumental in protecting the cornea. Use of neuromuscular blocking agents or propofol can result in impaired eyelid closure and loss of the corneal reflex, leading to corneal exposure. The cornea is then at risk for drying, infection, and scarring, which may lead to permanent visual loss. The objective was to determine whether applying artificial tear ointment to the eyes of paralyzed or heavily sedated patients receiving mechanical ventilation decreases the prevalence of exposure keratitis more than passive closure of the eyelid does. A prospective, randomized controlled trial was done. The sample was 50 patients in the intensive care unit receiving either neuromuscular blocking agents or propofol during mechanical ventilation. In each patient, artificial tear ointment was applied to one eye; passive closure of the eyelid was used for the other eye (control eye). Nine patients had evidence of exposure keratitis in the untreated eye, and 2 had corneal abrasions in both the treated and the control eyes. The remaining 39 patients did not have corneal abrasions in either eye. Use of the artificial tear ointment was more effective in preventing corneal exposure than was passive eyelid closure (P = .004). Eye care with a lubricating ointment on a regular, set schedule can effectively reduce the prevalence of corneal abrasions in patients who are either paralyzed or heavily sedated and thus can help prevent serious complications such as corneal ulceration, infection, and visual loss.

  6. When Humanoid Robots Become Human-Like Interaction Partners: Corepresentation of Robotic Actions

    ERIC Educational Resources Information Center

    Stenzel, Anna; Chinellato, Eris; Bou, Maria A. Tirado; del Pobil, Angel P.; Lappe, Markus; Liepelt, Roman

    2012-01-01

    In human-human interactions, corepresenting a partner's actions is crucial to successfully adjust and coordinate actions with others. Current research suggests that action corepresentation is restricted to interactions between human agents facilitating social interaction with conspecifics. In this study, we investigated whether action…

  7. A posthuman liturgy? Virtual worlds, robotics, and human flourishing.

    PubMed

    Shatzer, Jacob

    2013-01-01

    In order to inspire a vision of biotechnology that affirms human dignity and human flourishing, the author poses questions about virtual reality and the use of robotics in health care. Using the concept of 'liturgy' and an anthropology of humans as lovers, the author explores how virtual reality and robotics in health care shape human moral agents, and how such shaping could influence the way we do or do not pursue a 'posthuman' future.

  8. Robust Agent Control of an Autonomous Robot with Many Sensors and Actuators

    DTIC Science & Technology

    1993-05-01

    [Table-of-contents fragment: 3.1 Issues of Controller Design; 3.2 Robot Behavior Control Philosophy; 3.3 Overview of the…] Hannibal was designed and built by our lab as an experimental platform to explore planetary micro-rover control issues (Angle 1991). [Figure 1.1: Hannibal.] When designing the robot, careful consideration was given to mobility, sensing, and robustness issues. Much has been said concerning the advantages of…

  9. Guaranteeing Spoof-Resilient Multi-Robot Networks

    DTIC Science & Technology

    2015-05-12

    A particularly challenging attack on this assumption is the so-called "Sybil attack." In a Sybil attack, a malicious agent can generate (or spoof) a large… While cybersecurity is well studied in general multi-node networks (e.g. a wired LAN), the same is not true for multi-robot networks [14, 28], leaving them largely vulnerable… Key passing or cryptographic authentication is difficult to maintain due to the highly dynamic and distributed nature of multi-robot teams where…

  10. Three-dimensional vision enhances task performance independently of the surgical method.

    PubMed

    Wagner, O J; Hagen, M; Kurmann, A; Horgan, S; Candinas, D; Vorburger, S A

    2012-10-01

    Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance. In this study, 34 individuals with varying laparoscopic experience (18 inexperienced individuals) performed three tasks to test spatial relationships, grasping and positioning, dexterity, precision, and hand-eye and hand-hand coordination. Each task was performed in 3D using binocular vision for open performance, the Viking 3Di Vision System for laparoscopic performance, and the DaVinci robotic system. The same tasks were repeated in 2D using an eye patch for monocular vision, conventional laparoscopy, and the DaVinci robotic system. Loss of 3D vision significantly increased the perceived difficulty of a task and the time required to perform it, independently of the approach (P < 0.0001-0.02). Simple tasks took 25 % to 30 % longer to complete and more complex tasks took 75 % longer with 2D than with 3D vision. Only the difficult task was performed faster with the robot than with laparoscopy (P = 0.005). In every case, 3D robotic performance was superior to conventional laparoscopy (2D) (P < 0.001-0.015). The more complex the task, the more 3D vision accelerates task completion compared with 2D vision. The gain in task performance is independent of the surgical method.

  11. The backshell for the Mars Exploration Rover 1 (MER-1) is moved toward the rover (foreground, left). The backshell is a protective cover for the rover. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-10

    The backshell for the Mars Exploration Rover 1 (MER-1) is moved toward the rover (foreground, left). The backshell is a protective cover for the rover. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  12. KENNEDY SPACE CENTER, FLA. - In the foreground, three solid rocket boosters (SRBs) suspended in the launch tower flank the Delta II rocket (in the background) that will launch Mars Exploration Rover 2 (MER-2). NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-15

    KENNEDY SPACE CENTER, FLA. - In the foreground, three solid rocket boosters (SRBs) suspended in the launch tower flank the Delta II rocket (in the background) that will launch Mars Exploration Rover 2 (MER-2). NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

  13. KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare to lift and move the backshell that will cover the Mars Exploration Rover 1 (MER-1) and its lander. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-10

    KENNEDY SPACE CENTER, FLA. - Workers in the Payload Hazardous Servicing Facility prepare to lift and move the backshell that will cover the Mars Exploration Rover 1 (MER-1) and its lander. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  14. Robotic general surgery: current practice, evidence, and perspective.

    PubMed

    Jung, M; Morel, P; Buehler, L; Buchs, N C; Hagen, M E

    2015-04-01

    Robotic technology began to be adopted in the field of general surgery in the 1990s. Since then, the da Vinci surgical system (Intuitive Surgical Inc, Sunnyvale, CA, USA) has remained by far the most commonly used system in this domain. The da Vinci surgical system is a master-slave machine that offers three-dimensional vision, articulated instruments with seven degrees of freedom, and additional software features such as motion scaling and tremor filtration. This specific design allows hand-eye alignment with intuitive control of the minimally invasive instruments. As such, robotic surgery appears technologically superior to laparoscopy, overcoming some of the technical limitations that the conventional approach imposes on the surgeon. This article reviews the current literature and the perspective of robotic general surgery. While robotics has been applied to a wide range of general surgery procedures, its precise role in this field remains a subject of further research. Until now, only limited clinical evidence has been generated that could establish the use of robotics as the gold standard for general surgery procedures. While surgical robotics is still in its infancy, with multiple novel systems currently under development and clinical trials in progress, the opportunities for this technology appear endless, and robotics should have a lasting impact on the field of general surgery.

  15. VBOT: Motivating computational and complex systems fluencies with constructionist virtual/physical robotics

    NASA Astrophysics Data System (ADS)

    Berland, Matthew W.

    As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. 
The primary research questions are: (1) What are the relative affordances of virtual and physical constructionist robotics systems towards computational and complex systems fluencies? (2) What can middle school students learn using computational/complex systems learning environments in a collaborative setting? (3) In what ways are these environments and activities effective in teaching students computational and complex systems fluencies?
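The agent-to-aggregate idea behind these activities can be illustrated with a minimal alignment rule: each student-authored "robot-bird" steers toward the mean heading of its flockmates, and a flock-level pattern emerges from purely local rules. The sketch below is illustrative only (the class names, update gain, and initial headings are assumptions, not part of VBOT):

```python
import math

class BirdAgent:
    """One student-authored agent; its local rule nudges its heading toward neighbors."""
    def __init__(self, heading):
        self.heading = heading  # radians

    def step(self, neighbors):
        # Align with the mean neighbor heading (a simple aggregate-forming rule).
        sx = sum(math.cos(b.heading) for b in neighbors)
        sy = sum(math.sin(b.heading) for b in neighbors)
        mean = math.atan2(sy, sx)
        # Move halfway along the shortest angular difference toward the mean.
        self.heading += 0.5 * math.atan2(math.sin(mean - self.heading),
                                         math.cos(mean - self.heading))

def spread(flock):
    """0 = perfectly aligned flock, 1 = headings cancel out completely."""
    sx = sum(math.cos(b.heading) for b in flock)
    sy = sum(math.sin(b.heading) for b in flock)
    return 1 - math.hypot(sx, sy) / len(flock)

flock = [BirdAgent(h) for h in (0.0, 1.0, 2.0, -1.5)]
before = spread(flock)
for _ in range(20):
    for bird in flock:
        bird.step([b for b in flock if b is not bird])
after = spread(flock)
```

Running the loop drives the spread toward zero: a coherent aggregate (the flock) appears even though each agent only follows its own local rule.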

  16. Preliminary study of the safety and efficacy of medium-chain triglycerides for use as an intraocular tamponading agent in minipigs.

    PubMed

    Soler, Vincent J; Laurent, Camille; Sakr, Frédéric; Regnier, Alain; Tricoire, Cyrielle; Cases, Olivier; Kozyraki, Renata; Douet, Jean-Yves; Pagot-Mathis, Véronique

    2017-08-01

    To date, only silicone oils and gases have the appropriate characteristics for use in vitreo-retinal surgery as vitreous substitutes with intraocular tamponading properties. This preliminary study evaluated the safety and efficacy of medium-chain triglycerides (MCTs) for use as a tamponading agent in minipigs. In 15 minipigs, 15 right eyes underwent vitrectomies followed by injection of MCT tamponade (day 1). Two groups were defined. In Group A (ten eyes), the surgical procedure before MCT injection included induced rhegmatogenous retinal detachment (RRD), retina flattening, and retinopexy. In Group B (five eyes), MCT was injected without inducing RRD; in these eyes, MCT was removed on day 90. Pigs were sacrificed on day 45 (Group A) or 120 (Group B). Eyes were examined on days 1, 5, 15, and 45 in both groups and on days 90 and 120 in Group B. In Group B only, we performed bilateral electroretinography examinations on days 1 and 120, and histological examinations of MCT-treated and contralateral eyes were performed after sacrifice. In Group A (n = 9; one eye was non-assessable), on day 45 the retina was flat in seven eyes, and two RRDs were observed in insufficiently MCT-filled eyes. In Group B, electroretinography showed no significant differences between MCT-treated eyes and controls on days 1 or 120. Histological analyses revealed no signs of retinal toxicity. This study showed that MCT tamponade appears to be effective and safe; however, additional studies are needed before it can be commonly used as a tamponading agent in humans.

  17. An Architecture for Controlling Multiple Robots

    NASA Technical Reports Server (NTRS)

    Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance

    2004-01-01

    The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In the CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. The CAMPOUT provides basic mechanistic concepts for the representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.
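The notion of a scripted plan as a conditional sequence of behaviors can be sketched as a tiny executor. The behaviors, their outcomes, and the transition-table format below are illustrative assumptions for a toy single-agent case, not the actual CAMPOUT interfaces:

```python
# Behaviors are self-contained controllers; each returns an outcome label
# that the scripted plan uses to select the next activity.

def approach(state):
    state["distance"] -= 1
    return "arrived" if state["distance"] <= 0 else "moving"

def grasp(state):
    state["holding"] = True
    return "done"

# Scripted plan: list of (behavior, {outcome: index of next step, None = stop}).
plan = [(approach, {"moving": 0, "arrived": 1}),
        (grasp, {"done": None})]

def run(plan, state):
    """Execute the conditional sequence, recording (behavior, outcome) pairs."""
    step, trace = 0, []
    while step is not None:
        behavior, transitions = plan[step]
        outcome = behavior(state)
        trace.append((behavior.__name__, outcome))
        step = transitions[outcome]
    return trace

state = {"distance": 2, "holding": False}
trace = run(plan, state)
```

The executor loops on `approach` until the outcome changes, then conditionally advances to `grasp`, mirroring how higher-level functionality is composed from basic behaviors.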

  18. A problem of optimal control and observation for distributed homogeneous multi-agent system

    NASA Astrophysics Data System (ADS)

    Kruglikov, Sergey V.

    2017-12-01

    The paper considers the implementation of an algorithm for controlling a distributed complex of several mobile robots. The concept of a unified information space for the controlling system is applied. The presented information and mathematical models of participants and obstacles, as real agents, and of goals and scenarios, as virtual agents, form the basis of the algorithmic and software background for a computer decision support system. The controlling scheme assumes indirect management of the robotic team on the basis of an optimal control and observation problem predicting intelligent behavior in a dynamic, hostile environment. A basic content problem is the transportation of a compound cargo by a group of participants under a distributed control scheme in terrain with multiple obstacles.

  19. A robot control formalism based on an information quality concept

    NASA Technical Reports Server (NTRS)

    Ekman, A.; Torne, A.; Stromberg, D.

    1994-01-01

    A relevance measure based on Jaynes' maximum entropy principle is introduced. Information quality is the conjunction of accuracy and relevance. The formalism based on information quality is developed for one-agent applications. The robot requires a well-defined working environment in which the properties of each object are accurately specified.

  20. Goal Tracking in a Natural Language Interface: Towards Achieving Adjustable Autonomy

    DTIC Science & Technology

    1999-01-01

    communication, we believe that human/machine interfaces that share some of the characteristics of human-human communication can be friendlier and easier...natural means of communicating with a mobile robot. Although we are not claiming that communication with robotic agents must be patterned after human

  1. Soft brain-machine interfaces for assistive robotics: A novel control approach.

    PubMed

    Schiatti, Lucia; Tessadori, Jacopo; Barresi, Giacinto; Mattos, Leonardo S; Ajoudani, Arash

    2017-07-01

    Robotic systems offer the possibility of improving the quality of life of people with severe motor disabilities, enhancing the individual's degree of independence and interaction with the external environment. To this end, the operator's residual functions must be exploited for the control of the robot's movements and the underlying dynamic interaction through intuitive and effective human-robot interfaces. This work explores the potential of a novel Soft Brain-Machine Interface (BMI), suitable for the dynamic execution of remote manipulation tasks by a wide range of patients. The interface is composed of an eye-tracking system, for intuitive and reliable control of a robotic arm's trajectories, and a Brain-Computer Interface (BCI) unit, for control of the robot's Cartesian stiffness, which determines the interaction forces between the robot and the environment. The latter control is achieved by estimating in real time a unidimensional index from the user's electroencephalographic (EEG) signals, which provides the probability of a neutral or active state. This estimated state is then translated into a stiffness value for the robotic arm, allowing reliable modulation of the robot's impedance. A preliminary evaluation of this hybrid interface concept provided evidence of effective task execution under dynamic uncertainties, demonstrating the great potential of this control method in BMI applications for self-service and clinical care.
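The stiffness-modulation step described here reduces to mapping an estimated probability of an active mental state onto a stiffness range. A minimal sketch, with stiffness bounds that are illustrative rather than taken from the paper:

```python
def stiffness_from_eeg(p_active, k_min=200.0, k_max=1200.0):
    """Map the estimated probability of an active mental state onto a
    Cartesian stiffness value (N/m). The bounds are illustrative defaults,
    not values from the paper."""
    p = min(max(p_active, 0.0), 1.0)  # clamp to a valid probability
    return k_min + p * (k_max - k_min)
```

A neutral state (p near 0) yields a compliant arm, while an active state (p near 1) stiffens the arm for forceful interaction, which is the impedance-modulation behavior the interface targets.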

  2. Gaze-contingent soft tissue deformation tracking for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Stoyanov, Danail; Deligianni, Fani; Darzi, Ara; Yang, Guang-Zhong

    2005-01-01

    The introduction of surgical robots in Minimally Invasive Surgery (MIS) has allowed enhanced manual dexterity through the use of microprocessor-controlled mechanical wrists. Although fully autonomous robots are attractive, both ethical and legal barriers can prohibit their practical use in surgery. The purpose of this paper is to demonstrate that it is possible to use real-time binocular eye tracking to empower robots with human vision by using knowledge acquired in situ. By exploiting the close relationship between horizontal disparity and depth perception, which varies with viewing distance, it is possible to use ocular vergence to recover 3D motion and deformation of the soft tissue during MIS procedures. Both phantom and in vivo experiments were carried out to assess the potential frequency limit of the system and its intrinsic depth-recovery accuracy. The potential applications of the technique include motion stabilization and intra-operative planning in the presence of large tissue deformation.
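The disparity-depth relationship exploited here is standard stereo triangulation: for a rectified binocular setup, depth falls off inversely with horizontal disparity. A minimal sketch with illustrative camera parameters (the specific focal length and baseline are assumptions, not the paper's rig):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo triangulation: depth = f * b / d.
    Depth is inversely proportional to horizontal disparity, so small
    disparities correspond to distant points."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative: 500 px focal length, 6 cm interocular baseline.
near = depth_from_disparity(20, focal_px=500, baseline_m=0.06)
far = depth_from_disparity(10, focal_px=500, baseline_m=0.06)
```

The same inverse relationship is what lets measured ocular vergence serve as a depth cue: larger vergence (disparity) implies a nearer fixation point.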

  3. Learning and adaptation: neural and behavioural mechanisms behind behaviour change

    NASA Astrophysics Data System (ADS)

    Lowe, Robert; Sandamirskaya, Yulia

    2018-01-01

    This special issue presents perspectives on learning and adaptation as they apply to a number of cognitive phenomena including pupil dilation in humans and attention in robots, natural language acquisition and production in embodied agents (robots), human-robot game play and social interaction, neural-dynamic modelling of active perception and neural-dynamic modelling of infant development in the Piagetian A-not-B task. The aim of the special issue, through its contributions, is to highlight some of the critical neural-dynamic and behavioural aspects of learning as it grounds adaptive responses in robotic- and neural-dynamic systems.

  4. Research and development at ORNL/CESAR towards cooperating robotic systems for hazardous environments

    NASA Technical Reports Server (NTRS)

    Mann, R. C.; Fujimura, K.; Unseren, M. A.

    1992-01-01

    One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots, and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace.

  5. Adjustably Autonomous Multi-agent Plan Execution with an Internal Spacecraft Free-Flying Robot Prototype

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.; Nicewarner, Keith

    2006-01-01

    We present a multi-agent, model-based autonomy architecture with monitoring, planning, diagnosis, and execution elements. We discuss an internal spacecraft free-flying robot prototype controlled by an implementation of this architecture and a ground test facility used for development. In addition, we discuss a simplified environmental control and life support system for the spacecraft domain, also controlled by an implementation of this architecture. We discuss adjustable autonomy and how it applies to this architecture. We describe an interface that provides the user with situation awareness of both autonomous systems and enables the user to dynamically edit the plans prior to and during execution, as well as to control these agents at various levels of autonomy. This interface also permits the agents to query the user or to request that the user perform tasks to help achieve the commanded goals. We conclude by describing a scenario in which these two agents and a human interact to cooperatively detect, diagnose, and recover from a simulated spacecraft fault.

  6. A Framework to Describe, Analyze and Generate Interactive Motor Behaviors

    PubMed Central

    Jarrassé, Nathanaël; Charalambous, Themistoklis; Burdet, Etienne

    2012-01-01

    While motor interaction between a robot and a human, or between humans, has important implications for society as well as promising applications, little research has been devoted to its investigation. In particular, it is important to understand the different ways two agents can interact and generate suitable interactive behaviors. Towards this end, this paper introduces a framework for the description and implementation of interactive behaviors of two agents performing a joint motor task. A taxonomy of interactive behaviors is introduced, which can classify tasks and cost functions that represent the way each agent interacts. The role of an agent interacting during a motor task can be directly explained from the cost function this agent is minimizing and the task constraints. The novel framework is used to interpret and classify previous works on human-robot motor interaction. Its implementation power is demonstrated by simulating representative interactions of two humans. It also enables us to interpret and explain the role distribution and switching between roles when performing joint motor tasks. PMID:23226231

  8. Agent-based human-robot interaction of a combat bulldozer

    NASA Astrophysics Data System (ADS)

    Granot, Reuven; Feldman, Maxim

    2004-09-01

    A small-scale, supervised autonomous bulldozer operating at a remote site was developed to study agent-based human intervention. The model, built from a Lego Mindstorms kit, represents combat equipment whose job performance does not require high accuracy. It enables evaluation of system responses to different operator interventions, as well as of a small colony of semi-autonomous dozers. A supervising human may react better than a fully autonomous system to unexpected contingent events, which are a major barrier to implementing full autonomy. Automation is introduced as an improved Man-Machine Interface (MMI) by developing control agents as intelligent tools that negotiate between human requests and task-level controllers, as well as with other elements of the software environment. Current UGVs demand significant communication resources and constant human operation; they will therefore be replaced by semi-autonomous, human-supervised (telerobotic) systems. For human intervention at the lower layers of the control hierarchy, we suggest a task-oriented control agent that manages a fluent transition between the state in which the robot is operating and the one imposed by the human, disconnecting or adapting the components whose imperfections cause improper robot operation. Preliminary conclusions from the small-scale experiments are presented.

  9. [Effect of anti-inflammatory therapy on the treatment of dry eye syndrome].

    PubMed

    Mrukwa-Kominek, Ewa; Rogowska-Godela, Anna; Gierek-Ciaciura, Stanisława

    2007-01-01

    Dry eye syndrome is a common chronic disease; agents and strategies for its effective management are still lacking. The syndrome tends to be accompanied by ocular surface inflammation; therefore, the use of anti-inflammatory agents might prove beneficial. The authors present up-to-date guidelines, strategies, and efficacy data for dry eye syndrome management, including anti-inflammatory treatment. As no diagnostic tests are currently available to assess the severity of ocular surface inflammation, the right time to start an anti-inflammatory agent is difficult to determine. Patients with mild, intermittent bouts of symptoms that can be alleviated with ophthalmic lubricants do not typically require anti-inflammatory therapy. The latter should be considered in those who do not respond to lubricating drops, obtain poor results on clinical tests, and show symptoms of ocular surface irritation (e.g., conjunctival redness). Anti-inflammatory treatment of dry eye syndrome may include short-term corticosteroids, cyclosporine A emulsion, oral tetracycline therapy, oral omega-3 fatty acid supplements, and autologous serum eye drops. Anti-inflammatory treatment should be safe and effective; potential benefits should be evaluated for each individual patient. The authors review the advantages of anti-inflammatory treatment in dry eye syndrome as presented in the literature.

  10. Collaborative autonomous sensing with Bayesians in the loop

    NASA Astrophysics Data System (ADS)

    Ahmed, Nisar

    2016-10-01

    There is a strong push to develop intelligent unmanned autonomy that complements human reasoning for applications as diverse as wilderness search and rescue, military surveillance, and robotic space exploration. More than just replacing humans for 'dull, dirty and dangerous' work, autonomous agents are expected to cope with a whole host of uncertainties while working closely together with humans in new situations. The robotics revolution firmly established the primacy of Bayesian algorithms for tackling challenging perception, learning and decision-making problems. Since the next frontier of autonomy demands the ability to gather information across stretches of time and space that are beyond the reach of a single autonomous agent, the next generation of Bayesian algorithms must capitalize on opportunities to draw upon the sensing and perception abilities of humans-in/on-the-loop. This work summarizes our recent research toward harnessing 'human sensors' for information-gathering tasks. The basic idea is to allow human end users (i.e. non-experts in robotics, statistics, machine learning, etc.) to directly 'talk to' the information fusion engine and perceptual processes aboard any autonomous agent. Our approach is grounded in rigorous Bayesian modeling and fusion of flexible semantic information derived from user-friendly interfaces, such as natural language chat and locative hand-drawn sketches. This naturally enables 'plug and play' human sensing with existing probabilistic algorithms for planning and perception, and has been successfully demonstrated with human-robot teams in target localization applications.
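At its core, fusing a semantic human report into a robot's belief is a Bayes update: the report is converted into a likelihood over the state space and multiplied into the prior, cell by cell. A deliberately tiny grid sketch (the grid, likelihood values, and report are invented for illustration, not from the authors' system):

```python
# Minimal sketch of fusing a semantic 'human sensor' report into a
# discrete Bayesian grid filter over candidate target locations.

def normalize(belief):
    total = sum(belief)
    return [b / total for b in belief]

def fuse_report(belief, likelihood):
    """Bayes update: posterior is proportional to likelihood times prior."""
    return normalize([l * b for l, b in zip(likelihood, belief)])

# Uniform prior over 4 map cells; the human reports "it's near cell 2",
# which a language model of the report turns into a likelihood per cell.
prior = [0.25] * 4
likelihood_near_cell2 = [0.1, 0.2, 0.6, 0.1]
posterior = fuse_report(prior, likelihood_near_cell2)
```

Because the update is just likelihood-times-prior, the human report plugs into the same machinery as any hardware sensor, which is the 'plug and play' property the abstract highlights.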

  11. Efficacy of Several Therapeutic Agents in a Murine Model of Dry Eye Syndrome

    PubMed Central

    Kilic, Servet; Kulualp, Kadri

    2016-01-01

    In the current study, we used 56 female BALB/c mice with induced dry eye syndrome to evaluate the therapeutic effects of formal saline (FS), sodium hyaluronate (SH), diclofenac sodium (DS), olopatadine (OP), retinoic acid (RA), fluorometholone (FML), cyclosporine A (CsA), and doxycycline hyclate (DH). All subjects were kept in an evaporative ‘dry eye cabinet’ for the assessment of blink rate, tear production, tear break-up time, and impression cytology prior to (baseline) and during weeks 2, 4, and 6 of the study. The right eyes of all subjects were treated topically with 5 µL of the test agent twice daily during weeks 2 through 6. Impression cytology and tear break-up time differed between time points in all groups and differed between groups at weeks 4 and 6. Blink rate differed by time point only in the FS, FML, and DH groups. Tear production according to the phenol red cotton thread test differed by time point for all groups except RA, CsA, and DH and differed between groups only at week 6. Among the compounds tested in the present study, DS and CsA were the most effective therapeutic agents in our mouse model of dry eye syndrome; these agents likely exert their therapeutic effect through their anti-inflammatory activity. PMID:27053565

  12. Modelling cooperation of industrial robots as multi-agent systems

    NASA Astrophysics Data System (ADS)

    Hryniewicz, P.; Banas, W.; Foit, K.; Gwiazda, A.; Sekala, A.

    2017-08-01

    Nowadays, robotic cells increasingly contain more than one robot, and dual-arm robots are also available, so cooperation between two robots in a shared space is becoming more and more important. Programming a robotic cell consisting of two or more robots is currently performed separately for each robot and cell element; only the programs are synchronized, not the robot movements. In such situations industrial robots are often placed so that they have no common workspace, and the robots are operated separately. When industrial robots do share a workspace, only one robot may occupy the common space at a time, while the other must remain outside it. It is very difficult to find applications in which two robots work in the same workspace. In tests, one robot remained stationary while the second was moving and waited for permission to move from it; when the permit was sent, the moving robot stopped. Such programs are very difficult, require considerable experience from the programmer, and must first be tested separately and then run very slowly under supervision. Ideally, each operator attends to exactly one robot during testing, and taking special care is very important.

  13. What makes a robot 'social'?

    PubMed

    Jones, Raya A

    2017-08-01

    Rhetorical moves that construct humanoid robots as social agents disclose tensions at the intersection of science and technology studies (STS) and social robotics. The discourse of robotics often constructs robots that are like us (and therefore unlike dumb artefacts). In the discourse of STS, descriptions of how people assimilate robots into their activities are presented directly or indirectly against the backdrop of actor-network theory, which prompts attributing agency to mundane artefacts. In contradistinction to both social robotics and STS, it is suggested here that to view a capacity to partake in dialogical action (to have a 'voice') is necessary for regarding an artefact as authentically social. The theme is explored partly through a critical reinterpretation of an episode that Morana Alač reported and analysed towards demonstrating her bodies-in-interaction concept. This paper turns to 'body' with particular reference to Gibsonian affordances theory so as to identify the level of analysis at which dialogicality enters social interactions.

  14. Structure Assembly by a Heterogeneous Team of Robots Using State Estimation, Generalized Joints, and Mobile Parallel Manipulators

    NASA Technical Reports Server (NTRS)

    Komendera, Erik E.; Adhikari, Shaurav; Glassner, Samantha; Kishen, Ashwin; Quartaro, Amy

    2017-01-01

    Autonomous robotic assembly by mobile field robots has seen significant advances in recent decades, yet practicality remains elusive. Identified challenges include better use of state estimation and reasoning under uncertainty, spreading tasks across specialized robots, and implementing representative joining methods. This paper proposes replacing 1) self-correcting mechanical linkages with generalized joints for improved applicability, 2) serial assembly manipulators with parallel manipulators for higher precision and stability, and 3) all-in-one robots with a heterogeneous team of specialized robots for agent simplicity. This paper then describes a general assembly algorithm utilizing state estimation. Finally, these concepts are tested in the context of solar array assembly, requiring a team of robots to assemble, bond, and deploy a set of solar panel mockups to a backbone truss to an accuracy not built into the parts. This paper presents the results of these tests.

  15. Collective Machine Learning: Team Learning and Classification in Multi-Agent Systems

    ERIC Educational Resources Information Center

    Gifford, Christopher M.

    2009-01-01

    This dissertation focuses on the collaboration of multiple heterogeneous, intelligent agents (hardware or software) which collaborate to learn a task and are capable of sharing knowledge. The concept of collaborative learning in multi-agent and multi-robot systems is largely understudied, and represents an area where further research is needed to…

  16. Safe motion planning for mobile agents: A model of reactive planning for multiple mobile agents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujimura, Kikuo.

    1990-01-01

    The problem of motion planning for multiple mobile agents is studied. Each planning agent independently plans its own actions based on its own map, which contains limited information about the environment. In an environment where multiple mobile agents interact, the motions of the robots are uncertain and dynamic. A model for reactive agents is described, and simulation results are presented to show their behavior patterns. 18 refs., 2 figs.
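A minimal flavor of reactive multi-agent planning: each agent greedily steps toward its own goal using only local knowledge, and simply waits when the next cell is occupied by another agent. The corridor world, goals, and wait rule below are illustrative assumptions, not Fujimura's actual model:

```python
# Two agents on a 1-D corridor of cells 0..length-1. Each agent reacts to
# the current occupancy (its limited view of the environment) rather than
# planning a full joint trajectory in advance.

def step_agents(positions, goals, length):
    occupied = set(positions)
    new_positions = []
    for pos, goal in zip(positions, goals):
        move = (goal > pos) - (goal < pos)   # -1, 0, or +1 toward the goal
        nxt = pos + move
        if nxt in occupied or not (0 <= nxt < length):
            nxt = pos                        # reactive rule: wait if blocked
        else:
            occupied.discard(pos)
            occupied.add(nxt)
        new_positions.append(nxt)
    return new_positions

positions, goals = [0, 1], [3, 4]
for _ in range(10):
    positions = step_agents(positions, goals, length=5)
```

On the first step the trailing agent is blocked and waits; once the leading agent advances, both proceed and reach their goals, illustrating how coordination emerges from purely reactive rules.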

  17. Analysis of decentralized variable structure control for collective search by mobile robots

    NASA Astrophysics Data System (ADS)

    Goldsmith, Steven Y.; Feddema, John T.; Robinett, Rush D., III

    1998-10-01

    This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advance information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces that guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of the equilibrium point and the global stability of the alpha-beta algorithm based on Lyapunov's method.
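One plausible reading of the status-based role rule can be sketched as follows. The specific decision rule, the 1-D search space, and the intensity function are assumptions chosen for illustration; the paper analyzes the strategy family in more generality:

```python
import random

def select_roles(statuses):
    # One variant of the decision rule: agents at the current best status
    # act as betas (conservative); all others become alphas (explorers).
    best = max(statuses)
    return ["beta" if s >= best else "alpha" for s in statuses]

def step(positions, source, rng):
    # Status = source intensity at the agent's location (peaks at the source).
    statuses = [-abs(p - source) for p in positions]
    roles = select_roles(statuses)
    best_pos = positions[statuses.index(max(statuses))]
    out = []
    for pos, role in zip(positions, roles):
        if role == "alpha":
            trial = pos + rng.choice([-1, 1])   # explore a new location...
            # ...and keep the move only if it improves this agent's status.
            out.append(trial if abs(trial - source) < abs(pos - source) else pos)
        else:
            # Betas conservatively drift toward the best-status teammate.
            out.append(pos + (best_pos > pos) - (best_pos < pos))
    return out

rng = random.Random(0)
positions, source = [0, 5, 9], 6
for _ in range(50):
    positions = step(positions, source, rng)
```

Role switching happens implicitly: once an alpha reaches a higher intensity than the current best, it becomes the new beta reference point and the former best agent resumes exploring.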

  18. Brain activation in parietal area during manipulation with a surgical robot simulator.

    PubMed

    Miura, Satoshi; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G

    2015-06-01

    We present an evaluation method to quantify the embodiment caused by the physical differences between master-slave surgical robots, by measuring activation of the intraparietal sulcus in the user's brain during surgical robot manipulation. We show the change of embodiment based on the change of the optical axis-to-target view angle in the surgical simulator, which changes the manipulator's appearance in the monitor in terms of hand-eye coordination. The objective is to explore how brain activation changes with the optical axis-to-target view angle. In the experiments, we used a functional near-infrared spectroscopic topography (f-NIRS) brain-imaging device to measure the brain activity of seven subjects while they moved the hand controller to insert a curved needle into a target using the manipulator in a surgical simulator. The experiment was carried out several times with a variety of optical axis-to-target view angles. Some participants showed a significant peak (P value = 0.037, F value = 2.841) when the optical axis-to-target view angle was 75°. The positional relationship between the manipulators and the endoscope at 75° would be the closest to the human physical relationship between the hands and eyes.

  19. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment.

    PubMed

    Li, Yongcheng; Sun, Rong; Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

    We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified a software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and the artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared with previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks.

  20. Prototyping and Simulation of Robot Group Intelligence using Kohonen Networks.

    PubMed

    Wang, Zhijun; Mirdamadi, Reza; Wang, Qing

    2016-01-01

    Intelligent agents such as robots can form ad hoc networks and replace human beings in many dangerous scenarios, such as a complicated disaster relief site. This project prototypes and builds a computer simulator to simulate robot kinetics, unsupervised learning using Kohonen networks, and group intelligence when an ad hoc network is formed. Each robot is modeled as an object with a simple set of attributes and methods that define its internal states and the possible actions it may take under certain circumstances. As a result, simple, reliable, and affordable robots can be deployed to form the network. The simulator treats a group of robots as an unsupervised learning unit and tests the learning results under scenarios of different complexities. The simulation results show that a group of robots can demonstrate highly collaborative behavior on a complex terrain. This study could potentially provide a software simulation platform for testing the individual and group capabilities of robots before the robot design and manufacturing process. The results of the project therefore have the potential to reduce the cost and improve the efficiency of robot design and building.
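The unsupervised-learning component can be sketched with a minimal one-dimensional Kohonen (self-organizing map) update: the best-matching unit, and more weakly its immediate neighbors, are pulled toward each input sample. The data, unit count, and learning rates below are illustrative, not the simulator's actual parameters:

```python
import random

def train_som(data, n_units=4, epochs=300, lr=0.3, seed=1):
    """Minimal 1-D Kohonen network: for each randomly drawn sample, pull the
    best-matching unit (and, more weakly, its immediate neighbors in the
    map) toward that sample."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for _ in range(epochs):
        x = rng.choice(data)
        # Best-matching unit: the unit whose weight is closest to the input.
        bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
        for i in range(n_units):
            if abs(i - bmu) <= 1:             # neighborhood radius of 1
                h = 1.0 if i == bmu else 0.2  # neighbors learn more weakly
                weights[i] += lr * h * (x - weights[i])
    return weights

# Illustrative sensor readings from two distinct terrain types.
data = [0.05, 0.1, 0.12, 0.9, 0.95, 1.0]
weights = train_som(data)
```

After training, different units settle near the two clusters of readings, so the map can classify a new reading by its nearest unit without any labeled data.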

  2. Layered Learning in Multi-Agent Systems

    DTIC Science & Technology

    1998-12-15

    project almost from the beginning has tirelessly experimented with different robot architectures, always managing to pull things together and create... [figure: team member agent architecture, indicating player roles such as Midfielder (Left) and Goalie (Center), each with Home Coordinates, a Home Range, and a Max Range]

  3. Natural Speech Toward Humans and Intelligent Agents During a Simulated Search and Rescue Mission

    DTIC Science & Technology

    2008-12-01

    Eklundh, 2006). Research has been done on giving directions to robots, and the point of view that teammates normally attribute to them (Imai, Hiraki...). ...Bystander intervention as a resource in human-robot collaboration. Interaction Studies, 7(3), 455-477. Imai, M., Hiraki, K., Miyasato, T., Nakatsu, R.

  4. Various view with fish-eye lens of STS-103 crew on aft flight deck

    NASA Image and Video Library

    2000-01-28

    STS103-375-027 (19-27 December 1999) --- Astronaut Jean-Francois Clervoy, mission specialist representing the European Space Agency (ESA), controls Discovery's remote manipulator system (RMS) robot arm during operations with the Hubble Space Telescope (HST).

  5. Autonomous Navigation, Dynamic Path and Work Flow Planning in Multi-Agent Robotic Swarms Project

    NASA Technical Reports Server (NTRS)

    Falker, John; Zeitlin, Nancy; Leucht, Kurt; Stolleis, Karl

    2015-01-01

    Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots, called Swarmies, to be used as a ground-based research platform for in-situ resource utilization missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in an unknown environment and return those resources to a central site.
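
    The ant-inspired central-place foraging loop can be illustrated with a toy state machine: search randomly until a resource is found, then carry it back to the central site. This is a sketch under an assumed grid world, not the Swarmie flight software.

```python
import random

def forage_step(state, pos, home, found_resource):
    """One tick of a toy central-place forager: random-walk while
    searching, then step axis-by-axis back toward the home site."""
    if state == "search":
        if found_resource:
            return "return", pos  # pick up the resource, head home
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        return "search", (pos[0] + dx, pos[1] + dy)
    # returning: move one cell toward home; resume searching once there
    x, y = pos
    if (x, y) == home:
        return "search", pos
    x += (home[0] > x) - (home[0] < x)
    y += (home[1] > y) - (home[1] < y)
    return "return", (x, y)
```

    Running the same rule on every rover, with no central coordinator, is what gives the swarm its collective foraging behavior.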

  6. Reinforcement learning algorithms for robotic navigation in dynamic environments.

    PubMed

    Yen, Gary G; Hickey, Travis W

    2004-04-01

    The purpose of this study was to examine improvements to reinforcement learning (RL) algorithms in order to successfully interact within dynamic environments. The scope of the research was that of RL algorithms as applied to robotic navigation. Proposed improvements include: addition of a forgetting mechanism, use of feature-based state inputs, and hierarchical structuring of an RL agent. Simulations were performed to evaluate the individual merits and flaws of each proposal, to compare proposed methods to prior established methods, and to compare proposed methods to theoretically optimal solutions. Incorporation of a forgetting mechanism considerably improved the learning times of RL agents in a dynamic environment. However, direct implementation of a feature-based RL agent did not result in any performance enhancements, as pure feature-based navigation results in a lack of positional awareness and an inability of the agent to determine the location of the goal state. Inclusion of a hierarchical structure in an RL agent resulted in significantly improved performance, specifically when one layer of the hierarchy included a feature-based agent for obstacle avoidance and a standard RL agent for global navigation. In summary, the inclusion of a forgetting mechanism and the use of a hierarchically structured RL agent offer substantially increased performance when compared to traditional RL agents navigating in a dynamic environment.
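
    A forgetting mechanism of this kind can be sketched as a decay applied to tabular Q-values, so estimates for states the agent no longer visits fade as the environment changes. The decay rate and toy transition below are illustrative assumptions, not the paper's settings.

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard tabular Q-learning update toward the bootstrapped target."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))

def forget(Q, rate=0.01):
    """Forgetting mechanism: decay all Q-values toward zero each step,
    so stale estimates lose influence in a dynamic environment."""
    for key in Q:
        Q[key] *= (1.0 - rate)

actions = ["left", "right"]
Q = {}
q_update(Q, s=0, a="right", r=1.0, s_next=1, actions=actions)  # Q[(0,'right')] = 0.1
forget(Q, rate=0.5)                                            # decays to 0.05
```

    In the hierarchical variant described above, one such table would handle global navigation while a feature-based layer handles local obstacle avoidance.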

  7. Minimal Representation and Decision Making for Networked Autonomous Agents

    DTIC Science & Technology

    2015-08-27

    to a multi-vehicle version of the Travelling Salesman Problem (TSP). We further provided a direct formula for computing the number of robots...the sensor. As a first stab at this, the two-agent rendezvous problem is considered where one agent (the target) is equipped with no sensors and is...by the total distance traveled by all agents. For agents with limited sensing and communication capabilities, we give a formula that computes the

  8. Preliminary results of BRAVO project: brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks.

    PubMed

    Bergamasco, Massimo; Frisoli, Antonio; Fontana, Marco; Loconsole, Claudio; Leonardis, Daniele; Troncossi, Marco; Foumashi, Mohammad Mozaffari; Parenti-Castelli, Vincenzo

    2011-01-01

    This paper presents the preliminary results of the project BRAVO (Brain computer interfaces for Robotic enhanced Action in Visuo-motOr tasks). The objective of this project is to define a new approach to the development of assistive and rehabilitative robots that enable motor-impaired users to perform complex visuomotor tasks requiring a sequence of reaches, grasps, and manipulations of objects. BRAVO aims at developing new robotic interfaces and HW/SW architectures for rehabilitation and restoration of motor function in patients with upper limb sensorimotor impairment, through extensive rehabilitation therapy and active assistance in the execution of Activities of Daily Living. The final system developed within this project will include a robotic arm exoskeleton and a hand orthosis that will be integrated together to provide force assistance. The main novelty that BRAVO introduces is the control of the robotic assistive device through the active prediction of intention/action. The system will integrate information about the movement carried out by the user with a prediction of the intended action, through an interpretation of the user's current gaze (measured through eye-tracking), brain activation (measured through BCI), and force sensor measurements. © 2011 IEEE

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.; Fujimura, K.; Unseren, M.A.

    One of the frontiers in intelligent machine research is the understanding of how constructive cooperation among multiple autonomous agents can be effected. The effort at the Center for Engineering Systems Advanced Research (CESAR) at the Oak Ridge National Laboratory (ORNL) focuses on two problem areas: (1) cooperation by multiple mobile robots in dynamic, incompletely known environments; and (2) cooperating robotic manipulators. Particular emphasis is placed on experimental evaluation of research and developments using the CESAR robot system testbeds, including three mobile robots and a seven-axis, kinematically redundant mobile manipulator. This paper summarizes initial results of research addressing the decoupling of position and force control for two manipulators holding a common object, and the path planning for multiple robots in a common workspace. 15 refs., 3 figs.

  10. Agent-Based Chemical Plume Tracing Using Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zarzhitsky, Dimitri; Spears, Diana; Thayer, David; Spears, William

    2004-01-01

    This paper presents a rigorous evaluation of a novel, distributed chemical plume tracing algorithm. The algorithm is a combination of the best aspects of the two most popular predecessors for this task. Furthermore, it is based on solid, formal principles from the field of fluid mechanics. The algorithm is applied by a network of mobile sensing agents (e.g., robots or micro-air vehicles) that sense the ambient fluid velocity and chemical concentration, and calculate derivatives. The algorithm drives the robotic network to the source of the toxic plume, where measures can be taken to disable the source emitter. This work is part of a much larger effort in research and development of a physics-based approach to developing networks of mobile sensing agents for monitoring, tracking, reporting and responding to hazardous conditions.
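
    The core computation can be sketched as follows. This is a toy, single-agent illustration of driving toward a source by sampling fluid velocity and chemical concentration and taking derivatives (here, the finite-difference divergence of the mass flux ρv); it is not the authors' full distributed algorithm, and the grid and greedy step rule are assumptions.

```python
import numpy as np

def flux_divergence(rho, vx, vy, h=1.0):
    """Central-difference divergence of the chemical mass flux rho*v on a
    regular grid with spacing h; the divergence peaks near the emitter."""
    fx, fy = rho * vx, rho * vy
    div = np.zeros_like(rho)
    div[1:-1, :] += (fx[2:, :] - fx[:-2, :]) / (2.0 * h)
    div[:, 1:-1] += (fy[:, 2:] - fy[:, :-2]) / (2.0 * h)
    return div

def step_toward_source(pos, div):
    """Greedy move: inspect the 4-neighborhood and step to the cell with
    the largest flux divergence (toy single-agent rule)."""
    x, y = pos
    nbrs = [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < div.shape[0] and 0 <= y + dy < div.shape[1]]
    return max(nbrs, key=lambda p: div[p])

# toy field: uniform concentration, velocity increasing along x
X, _ = np.meshgrid(np.arange(8.0), np.arange(8.0), indexing="ij")
div = flux_divergence(np.ones((8, 8)), X, np.zeros((8, 8)))
nxt = step_toward_source((3, 3), div)
```

    In the networked version, each mobile agent would estimate these derivatives from its neighbors' samples rather than from a global grid.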

  11. Reliability of assessing postural control during seated balancing using a physical human-robot interaction.

    PubMed

    Ramadan, Ahmed; Cholewicki, Jacek; Radcliffe, Clark J; Popovich, John M; Reeves, N Peter; Choi, Jongeun

    2017-11-07

    This study evaluated the within- and between-visit reliability of a seated balance test for quantifying trunk motor control using input-output data. Thirty healthy subjects performed a seated balance test under three conditions: eyes open (EO), eyes closed (EC), and eyes closed with vibration to the lumbar muscles (VIB). Each subject performed three trials of each condition on three different visits. The seated balance test utilized a torque-controlled robotic seat, which together with a sitting subject resulted in a physical human-robot interaction (pHRI) (two degrees-of-freedom with upper and lower body rotations). Subjects balanced the pHRI by controlling trunk rotation in response to pseudorandom torque perturbations applied to the seat in the coronal plane. Performance error was expressed as the root mean square (RMSE) of deviations from the upright position in the time domain and as the mean bandpass signal energy (Emb) in the frequency domain. Intra-class correlation coefficients (ICC) quantified the between-visit reliability of both RMSE and Emb. The empirical transfer function estimates (ETFE) from the perturbation input to each of the two rotational outputs were calculated. Coefficients of multiple correlation (CMC) quantified the within- and between-visit reliability of the averaged ETFE. ICCs of RMSE and Emb for all conditions were ≥0.84. The mean within- and between-visit CMCs were all ≥0.96 for the lower body rotation and ≥0.89 for the upper body rotation. Therefore, our seated balance test consisting of pHRI to assess coronal plane trunk motor control is reliable. Copyright © 2017 Elsevier Ltd. All rights reserved.
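
    The two outcome measures can be sketched as follows; the sampling rate, toy sway signal, and band edges below are illustrative assumptions, not the study's values.

```python
import numpy as np

def rmse(signal):
    """Root mean square of deviations from the upright (zero) position."""
    return np.sqrt(np.mean(np.square(signal)))

def mean_band_energy(signal, fs, f_lo, f_hi):
    """Mean spectral energy within [f_lo, f_hi] Hz, estimated from the FFT."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return power[band].mean()

# toy sway signal: a 5 Hz oscillation sampled at an assumed 100 Hz for 2 s
t = np.arange(0.0, 2.0, 0.01)
sway = np.sin(2.0 * np.pi * 5.0 * t)
err = rmse(sway)                                 # ~0.707 for a unit sine
in_band = mean_band_energy(sway, fs=100, f_lo=4, f_hi=6)
```

    A band centered on the perturbation frequencies concentrates the response energy, which is what makes the frequency-domain measure informative alongside the time-domain RMSE.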

  12. Evolving a Neural Olfactorimotor System in Virtual and Real Olfactory Environments

    PubMed Central

    Rhodes, Paul A.; Anderson, Todd O.

    2012-01-01

    To provide a platform to enable the study of simulated olfactory circuitry in context, we have integrated a simulated neural olfactorimotor system with a virtual world which simulates both computational fluid dynamics as well as a robotic agent capable of exploring the simulated plumes. A number of the elements which we developed for this purpose have not, to our knowledge, been previously assembled into an integrated system, including: control of a simulated agent by a neural olfactorimotor system; continuous interaction between the simulated robot and the virtual plume; the inclusion of multiple distinct odorant plumes and background odor; the systematic use of artificial evolution driven by olfactorimotor performance (e.g., time to locate a plume source) to specify parameter values; the incorporation of the realities of an imperfect physical robot using a hybrid model where a physical robot encounters a simulated plume. We close by describing ongoing work toward engineering a high dimensional, reversible, low power electronic olfactory sensor which will allow olfactorimotor neural circuitry evolved in the virtual world to control an autonomous olfactory robot in the physical world. The platform described here is intended to better test theories of olfactory circuit function, as well as provide robust odor source localization in realistic environments. PMID:23112772
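
    The performance-driven artificial evolution of parameter values can be sketched with a simple (1+λ) evolution strategy; the fitness stand-in, mutation scale, and population settings here are illustrative assumptions, not the paper's method.

```python
import random

def evolve(fitness, init, sigma=0.1, pop=8, gens=30, seed=0):
    """(1+lambda) evolution-strategy sketch: repeatedly mutate the best
    parameter vector with Gaussian noise and keep any fitter child."""
    rng = random.Random(seed)
    best, best_f = list(init), fitness(init)
    for _ in range(gens):
        for _ in range(pop):
            child = [p + rng.gauss(0.0, sigma) for p in best]
            f = fitness(child)
            if f > best_f:
                best, best_f = child, f
    return best, best_f

# toy stand-in for olfactorimotor performance: parameters closer to
# `target` stand in for a shorter time to locate the plume source
target = [0.5, -0.2]
fit = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))
params, score = evolve(fit, [0.0, 0.0])
```

    In the actual platform, each fitness evaluation would be a full virtual-plume trial driven by the evolved neural olfactorimotor circuitry.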

  13. Cosine Kuramoto Based Distribution of a Convoy with Limit-Cycle Obstacle Avoidance Through the Use of Simulated Agents

    NASA Astrophysics Data System (ADS)

    Howerton, William

    This thesis presents a method for integrating complex network control algorithms with localized, agent-specific algorithms for maneuvering and obstacle avoidance. The method allows for successful implementation of both group and agent-specific behaviors. It has proven robust and works for a variety of vehicle platforms. Initially, a review and implementation of two specific algorithms is detailed. The first, a modified Kuramoto model developed by Xu [1], utilizes tools from graph theory to efficiently distribute agents. The second, developed by Kim [2], is an effective method for wheeled robots to avoid local obstacles using limit-cycle navigation. The results of implementing these methods on a test-bed of wheeled robots are presented. Control issues related to outside disturbances not anticipated in the original theory are then discussed. A novel method of using simulated agents to separate the task of distributing agents from agent-specific velocity and heading commands has been developed and implemented to address these issues. This new method can be used to combine various behaviors and is not limited to a specific control algorithm.
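
    The Kuramoto dynamics underlying the distribution algorithm can be sketched in their textbook networked form (this is the standard model, not Xu's modified variant; the coupling strength, step size, and all-to-all graph are illustrative assumptions):

```python
import numpy as np

def kuramoto_step(theta, omega, K, A, dt=0.01):
    """One Euler step of the Kuramoto model on a network:
    dtheta_i/dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i),
    where A is the adjacency matrix of the communication graph."""
    n = len(theta)
    coupling = (K / n) * np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Magnitude of the mean phasor; 1.0 means fully synchronized phases."""
    return abs(np.mean(np.exp(1j * theta)))

# ten agents, identical natural frequencies, all-to-all communication
theta = np.random.default_rng(0).uniform(0.0, 2.0 * np.pi, 10)
omega = np.zeros(10)
A = np.ones((10, 10)) - np.eye(10)
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0, A=A, dt=0.05)
```

    With identical frequencies and positive coupling the phases synchronize; distribution algorithms instead shape the coupling so that phases (and hence agent positions along a convoy) settle into a desired pattern.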

  14. Robot-Mediated Interviews - How Effective Is a Humanoid Robot as a Tool for Interviewing Young Children?

    PubMed Central

    Wood, Luke Jai; Dautenhahn, Kerstin; Rainer, Austen; Robins, Ben; Lehmann, Hagen; Syrdal, Dag Sverre

    2013-01-01

    Robots have been used in a variety of education, therapy or entertainment contexts. This paper introduces the novel application of using humanoid robots for robot-mediated interviews. An experimental study examines how children’s responses towards the humanoid robot KASPAR in an interview context differ in comparison to their interaction with a human in a similar setting. Twenty-one children aged between 7 and 9 took part in this study. Each child participated in two interviews, one with an adult and one with a humanoid robot. Measures include the behavioural coding of the children’s behaviour during the interviews and questionnaire data. The questions in these interviews focused on a special event that had recently taken place in the school. The results reveal that the children interacted with KASPAR very similarly to how they interacted with a human interviewer. The quantitative behaviour analysis reveals that the most notable differences between the interviews with KASPAR and the human were the duration of the interviews, the eye gaze directed towards the different interviewers, and the response time of the interviewers. These results are discussed in light of future work towards developing KASPAR as an ‘interviewer’ for young children in application areas where a robot may have advantages over a human interviewer, e.g. in police, social services, or healthcare applications. PMID:23533625

  15. New insights into the diagnosis and treatment of dry eye.

    PubMed

    Dogru, Murat; Tsubota, Kazuo

    2004-04-01

    Over the past decade, numerous advances have been made in relation to dry eye diagnostic markers, technologies, and treatment options. The mainstay of treatment of dry eye is the use of artificial tear solutions and punctum plugs. A goal is the development of agents that provide symptomatic treatment and, at the same time, improve ocular surface keratinization. It is the authors' opinion that the functional visual acuity tester and the new tear stability analysis system will be widely used to improve diagnosis and evaluate treatment outcomes in KCS. Advances in treatment will utilize anti-inflammatory agents, immune suppressants such as Cyclosporin A and FK-506, growth hormones, androgens, topical mucins and ocular surface stimulating drugs, like INS365. Although aqueous-deficient dry eye is most commonly not associated with Sjogren syndrome (SS), aqueous-deficient dry eye is often most severe in patients with SS; thus, this article focuses mainly on SS-associated dry eye.

  16. A Human-Robot Co-Manipulation Approach Based on Human Sensorimotor Information.

    PubMed

    Peternel, Luka; Tsagarakis, Nikos; Ajoudani, Arash

    2017-07-01

    This paper aims to improve the interaction and coordination between the human and the robot in cooperative execution of complex, powerful, and dynamic tasks. We propose a novel approach that integrates online information about the human motor function and manipulability properties into the hybrid controller of the assistive robot. Through this human-in-the-loop framework, the robot can adapt to the human motor behavior and provide the appropriate assistive response in different phases of the cooperative task. We experimentally evaluate the proposed approach in two human-robot co-manipulation tasks that require specific complementary behavior from the two agents. Results suggest that the proposed technique, which relies on a minimum degree of task-level pre-programming, can achieve an enhanced physical human-robot interaction performance and deliver appropriate level of assistance to the human operator.

  17. KENNEDY SPACE CENTER, FLA. - Workers watch as an overhead crane begins to lift the backshell with the Mars Exploration Rover 1 (MER-1) inside. The backshell will be moved and attached to the lower heat shield. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-15

    KENNEDY SPACE CENTER, FLA. - Workers watch as an overhead crane begins to lift the backshell with the Mars Exploration Rover 1 (MER-1) inside. The backshell will be moved and attached to the lower heat shield. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  18. KENNEDY SPACE CENTER, FLA. - A closeup of the cruise stage to be mated to the Mars Exploration Rover 2 (MER-2) entry vehicle. The cruise stage includes fuel tanks, thruster clusters and avionics for steering and propulsion. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-2 is scheduled to launch June 5 as MER-A aboard a Delta rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-06

    KENNEDY SPACE CENTER, FLA. - A closeup of the cruise stage to be mated to the Mars Exploration Rover 2 (MER-2) entry vehicle. The cruise stage includes fuel tanks, thruster clusters and avionics for steering and propulsion. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-2 is scheduled to launch June 5 as MER-A aboard a Delta rocket from Cape Canaveral Air Force Station.

  19. KENNEDY SPACE CENTER, FLA. - Assembly of the backshell and heat shield surrounding the Mars Exploration Rover 1 (MER-1) is complete. The resulting aeroshell will protect the rover on its journey to Mars. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-15

    KENNEDY SPACE CENTER, FLA. - Assembly of the backshell and heat shield surrounding the Mars Exploration Rover 1 (MER-1) is complete. The resulting aeroshell will protect the rover on its journey to Mars. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  20. KENNEDY SPACE CENTER, FLA. - A solid rocket booster arrives at Launch Complex 17-A, Cape Canaveral Air Force Station. It is one of nine that will be mated to the Delta rocket to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-14

    KENNEDY SPACE CENTER, FLA. - A solid rocket booster arrives at Launch Complex 17-A, Cape Canaveral Air Force Station. It is one of nine that will be mated to the Delta rocket to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

  1. KENNEDY SPACE CENTER, FLA. - Workers walk with the suspended backshell/ Mars Exploration Rover 1 (MER-1) as it travels across the floor of the Payload Hazardous Servicing Facility. The backshell will be attached to the lower heat shield. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-15

    KENNEDY SPACE CENTER, FLA. - Workers walk with the suspended backshell/ Mars Exploration Rover 1 (MER-1) as it travels across the floor of the Payload Hazardous Servicing Facility. The backshell will be attached to the lower heat shield. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  2. Swarmie User Manual: A Rover Used for Multi-agent Swarm Research

    NASA Technical Reports Server (NTRS)

    Montague, Gilbert

    2014-01-01

    The ability to create multiple functional yet cost effective robots is crucial for conducting swarming robotics research. The Center Innovation Fund (CIF) swarming robotics project is a collaboration among the KSC Granular Mechanics and Regolith Operations (GMRO) group, the University of New Mexico Biological Computation Lab, and the NASA Ames Intelligent Robotics Group (IRG) that uses rovers, dubbed "Swarmies", as test platforms for genetic search algorithms. This fall, I assisted in the development of the software modules used on the Swarmies and created this guide to provide thorough instructions on how to configure your workspace to operate a Swarmie both in simulation and out in the field.

  3. Does a robotic surgery approach offer optimal ergonomics to gynecologic surgeons?: a comprehensive ergonomics survey study in gynecologic robotic surgery.

    PubMed

    Lee, Mija Ruth; Lee, Gyusung Isaiah

    2017-09-01

    To better understand the ergonomics associated with robotic surgery including physical discomfort and symptoms, factors influencing symptom reporting, and robotic surgery systems components recommended to be improved. The anonymous survey included 20 questions regarding demographics, systems, ergonomics, and physical symptoms and was completed by experienced robotic surgeons online through American Association of Gynecologic Laparoscopists (AAGL) and Society of Robotic Surgery (SRS). There were 289 (260 gynecology, 22 gynecology-oncology, and 7 urogynecology) gynecologic surgeon respondents regularly practicing robotic surgery. Statistical data analysis was performed using the t-test, χ² test, and logistic regression. One hundred fifty-six surgeons (54.0%) reported experiencing physical symptoms or discomfort. Participants with higher robotic case volume reported significantly lower physical symptom report rates (p<0.05). Gynecologists who felt highly confident about managing ergonomic settings not only acknowledged that the adjustments were helpful for better ergonomics but also reported a lower physical symptom rate (p<0.05). In minimizing their symptoms, surgeons changed ergonomic settings (32.7%), took a break (33.3%) or simply ignored the problem (34%). Fingers and neck were the most common body parts with symptoms. Eye symptom complaints were significantly decreased with the Si robot (p<0.05). The most common robotic system components to be improved for better ergonomics were microphone/speaker, pedal design, and finger clutch. More than half of participants reported physical symptoms, which were found to be primarily associated with confidence in managing ergonomic settings and familiarity with the system depending on the volume of robotic cases. Optimal guidelines and education on managing ergonomic settings should be implemented to maximize the ergonomic benefits of robotic surgery. Copyright © 2017. Asian Society of Gynecologic Oncology, Korean Society of Gynecologic Oncology

  5. A review of medical robotics for minimally invasive soft tissue surgery.

    PubMed

    Dogangil, G; Davies, B L; Rodriguez y Baena, F

    2010-01-01

    This paper provides an overview of recent trends and developments in medical robotics for minimally invasive soft tissue surgery, with a view to highlight some of the issues posed and solutions proposed in the literature. The paper includes a thorough review of the literature, which focuses on soft tissue surgical robots developed and published in the last five years (between 2004 and 2008) in indexed journals and conference proceedings. Only surgical systems were considered; imaging and diagnostic devices were excluded from the review. The systems included in this paper are classified according to the following surgical specialties: neurosurgery; eye surgery and ear, nose, and throat (ENT); general, thoracic, and cardiac surgery; gastrointestinal and colorectal surgery; and urologic surgery. The systems are also cross-classified according to their engineering design and robotics technology, which is included in tabular form at the end of the paper. The review concludes with an overview of the field, along with some statistical considerations about the size, geographical spread, and impact of medical robotics for soft tissue surgery today.

  6. Distributed cooperating processes in a mobile robot control system

    NASA Technical Reports Server (NTRS)

    Skillman, Thomas L., Jr.

    1988-01-01

A mobile inspection robot has been proposed for the NASA Space Station. It will be a free-flying autonomous vehicle that will leave a berthing unit to accomplish a variety of inspection tasks around the Space Station, and then return to its berth to recharge, refuel, and transfer information. The Flying Eye robot will receive voice communication to change its attitude, move at a constant velocity, and move to a predefined location along a self-generated path. This mobile robot control system requires integration of traditional command and control techniques with a number of AI technologies. Speech recognition, natural language understanding, task and path planning, sensory abstraction and pattern recognition are all required for successful implementation. The interface between the traditional numeric control techniques and the symbolic processing of the AI technologies must be developed, and a distributed computing approach will be needed to meet the real-time computing requirements. To study the integration of the elements of this project, a novel mobile robot control architecture and simulation based on the blackboard architecture was developed. The control system operation and structure are discussed.

  7. Views from Within a Narrative: Evaluating Long-Term Human-Robot Interaction in a Naturalistic Environment Using Open-Ended Scenarios.

    PubMed

    Syrdal, Dag Sverre; Dautenhahn, Kerstin; Koay, Kheng Lee; Ho, Wan Ching

    2014-01-01

    This article describes the prototyping of human-robot interactions in the University of Hertfordshire (UH) Robot House. Twelve participants took part in a long-term study in which they interacted with robots in the UH Robot House once a week for a period of 10 weeks. A prototyping method using the narrative framing technique allowed participants to engage with the robots in episodic interactions that were framed using narrative to convey the impression of a continuous long-term interaction. The goal was to examine how participants responded to the scenarios and the robots as well as specific robot behaviours, such as agent migration and expressive behaviours. Evaluation of the robots and the scenarios were elicited using several measures, including the standardised System Usability Scale, an ad hoc Scenario Acceptance Scale, as well as single-item Likert scales, open-ended questionnaire items and a debriefing interview. Results suggest that participants felt that the use of this prototyping technique allowed them insight into the use of the robot, and that they accepted the use of the robot within the scenario.

  8. Honey: A Natural Remedy for Eye Diseases.

    PubMed

    Majtanova, Nora; Cernak, Martin; Majtan, Juraj

    2016-01-01

    Honey has been considered as a therapeutic agent; its successful application in the treatment of non-healing infected wounds has promoted its further clinical usage for treating various disorders including eye disorders. There is evidence that honey may be helpful in treating dry eye disease, post-operative corneal edema, and bullous keratopathy. Furthermore, it can be used as an antibacterial agent to reduce the ocular flora. This review discusses both the current knowledge of and new perspectives for honey therapy in ophthalmology. © 2016 S. Karger GmbH, Freiburg.

  9. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

Large-scale components are widespread in advanced manufacturing, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end coordinate system. Based on this method, a virtual robot model is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment is implemented to verify the proposed algorithms. Firstly, the hand-eye transformation matrix is solved. Then a car body rear is measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear is reconstructed successfully.

  10. Development and Verification of a Novel Robot-Integrated Fringe Projection 3D Scanning System for Large-Scale Metrology.

    PubMed

    Du, Hui; Chen, Xiaobo; Xi, Juntong; Yu, Chengyi; Zhao, Bao

    2017-12-12

Large-scale surfaces are prevalent in advanced manufacturing industries, and 3D profilometry of these surfaces plays a pivotal role for quality control. This paper proposes a novel and flexible large-scale 3D scanning system assembled by combining a robot, a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced. A mathematical model is established for the global data fusion. Subsequently, a robust method is introduced for the establishment of the end coordinate system. As for hand-eye calibration, the calibration ball is observed by the scanner and the laser tracker simultaneously. With this data, the hand-eye relationship is solved, and then an algorithm is built to get the transformation matrix between the end coordinate system and the world coordinate system. A validation experiment is designed to verify the proposed algorithms. Firstly, a hand-eye calibration experiment is performed and the transformation matrix is computed. Then a car body rear is measured 22 times in order to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully. To evaluate the precision of the proposed method, a metric tool is built and the results are presented.
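The hand-eye and global-fusion steps described above both reduce to estimating a rigid transformation between two point sets, e.g. calibration-ball centers seen by the scanner and by the laser tracker. A minimal sketch of that estimation using the standard Kabsch/SVD method follows; the function name and the use of NumPy are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t with dst ~ R @ src + t,
    from paired 3-D points, via the Kabsch/SVD method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```

Given R and t, a point measured in the scanner frame maps into the tracker (world) frame as R @ p + t.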

  11. Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.

    PubMed

    Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G

    2015-01-01

This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring activation of the intraparietal sulcus in the user's brain during control of a virtual manipulator. The objective is to examine changes in intraparietal sulcus activity when the user's visual or somatic feedback is passed through or intercepted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is intercepted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally with both visual and somatic feedback passed, (ii) the user moved with closed eyes with only somatic feedback passed, (iii) the user only gazed at the screen with only visual feedback passed. Across all participants, brain activity was significantly greater when controlling the virtual arm naturally (p<0.05) than when moving with closed eyes or only gazing. In conclusion, the brain can activate according to the agreement of visual and somatic sensory feedback.

  12. Do Intelligent Robots Need Emotion?

    PubMed

    Pessoa, Luiz

    2017-11-01

    What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive-emotional integration should be a key design principle. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Deactivation in the Sensorimotor Area during Observation of a Human Agent Performing Robotic Actions

    ERIC Educational Resources Information Center

    Shimada, Sotaro

    2010-01-01

    It is well established that several motor areas, called the mirror-neuron system (MNS), are activated when an individual observes other's actions. However, whether the MNS responds similarly to robotic actions compared with human actions is still controversial. The present study investigated whether and how the motor area activity is influenced by…

  14. Decentralized Planning for Autonomous Agents Cooperating in Complex Missions

    DTIC Science & Technology

    2010-09-01

"Consensus-based decentralized auctions for robust task allocation," IEEE Transactions on Robotics...Robotics, vol. 24, pp. 209-222, 2006. [44] H.-L. Choi, L. Brunet, and J. P. How, "Consensus-based decentralized auctions for robust task allocation,"...2003. 123 [31] L. Brunet, "Consensus-Based Auctions for Decentralized Task Assignment," Master's thesis, Dept.

  15. [Principles of MR-guided interventions, surgery, navigation, and robotics].

    PubMed

    Melzer, A

    2010-08-01

The application of magnetic resonance imaging (MRI) as an imaging technique in interventional and surgical procedures provides a new dimension of soft-tissue-oriented, precise procedures without exposure to ionizing radiation or to nephrotoxic, allergenic, iodine-containing contrast agents. The technical capabilities of MRI in combination with interventional devices and systems, navigation, and robotics are discussed.

  16. Learning to Predict Consequences as a Method of Knowledge Transfer in Reinforcement Learning.

    PubMed

    Chalmers, Eric; Contreras, Edgar Bermudez; Robertson, Brandon; Luczak, Artur; Gruber, Aaron

    2017-04-17

    The reinforcement learning (RL) paradigm allows agents to solve tasks through trial-and-error learning. To be capable of efficient, long-term learning, RL agents should be able to apply knowledge gained in the past to new tasks they may encounter in the future. The ability to predict actions' consequences may facilitate such knowledge transfer. We consider here domains where an RL agent has access to two kinds of information: agent-centric information with constant semantics across tasks, and environment-centric information, which is necessary to solve the task, but with semantics that differ between tasks. For example, in robot navigation, environment-centric information may include the robot's geographic location, while agent-centric information may include sensor readings of various nearby obstacles. We propose that these situations provide an opportunity for a very natural style of knowledge transfer, in which the agent learns to predict actions' environmental consequences using agent-centric information. These predictions contain important information about the affordances and dangers present in a novel environment, and can effectively transfer knowledge from agent-centric to environment-centric learning systems. Using several example problems including spatial navigation and network routing, we show that our knowledge transfer approach can allow faster and lower cost learning than existing alternatives.
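The transfer mechanism described above can be caricatured with a tabular predictor keyed only on agent-centric inputs; because those inputs keep the same semantics across tasks, the learned table can be reused unchanged in a new environment. This is a toy sketch under assumed discrete observations, not the authors' implementation:

```python
from collections import defaultdict

class ConsequencePredictor:
    """Learns the most likely consequence of (agent-centric observation,
    action) pairs from empirical counts. The table transfers across
    tasks because its inputs have constant semantics."""
    def __init__(self):
        # (obs, action) -> {consequence: count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, obs, action, consequence):
        self.counts[(obs, action)][consequence] += 1

    def predict(self, obs, action):
        table = self.counts[(obs, action)]
        if not table:
            return None          # never seen: no prediction
        return max(table, key=table.get)
```

In a new environment, predictions such as "forward while a wall is ahead leads to being blocked" inform the environment-centric learner about affordances and dangers before any task-specific reward is observed.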

  17. Animated Pedagogical Agents as Aids in Multimedia Learning: Effects on Eye-Fixations during Learning and Learning Outcomes

    ERIC Educational Resources Information Center

    Wang, Fuxing; Li, Wenjing; Mayer, Richard E.; Liu, Huashan

    2018-01-01

    The goal of the present study is to determine how to incorporate social cues such as gesturing in animated pedagogical agents (PAs) for online multimedia lessons in ways that promote student learning. In 3 experiments, college students learned about synaptic transmission from a multimedia narrated presentation while their eye movements were…

  18. 40 CFR Appendix O to Subpart G of... - Substitutes Listed in the September 27, 2006 Final Rule, Effective November 27, 2006

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... End-use Substitute Decision Conditions Further information Total flooding Gelled Halocarbon/Dry... ventilation should be in place to reduce airborne exposure to constituents of agent; —An eye wash fountain and... reduce airborne exposure to constituents of agent; —An eye wash fountain and quick drench facility should...

  19. Towards a model of temporal attention for on-line learning in a mobile robot

    NASA Astrophysics Data System (ADS)

    Marom, Yuval; Hayes, Gillian

    2001-06-01

    We present a simple attention system, capable of bottom-up signal detection adaptive to subjective internal needs. The system is used by a robotic agent, learning to perform phototaxis and obstacle avoidance by following a teacher agent around a simulated environment, and deciding when to form associations between perceived information and imitated actions. We refer to this kind of decision-making as on-line temporal attention. The main role of the attention system is perception of change; the system is regulated through feedback about cognitive effort. We show how different levels of effort affect both the ability to learn a task, and to execute it.
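As a rough illustration of on-line temporal attention as "perception of change" regulated by effort feedback, one might gate learning on a change threshold that effort feedback adapts. The class below is a guessed reading of that mechanism, not the paper's model; the threshold and gain parameters are invented for illustration:

```python
class TemporalAttention:
    """Bottom-up change detector gated by an adaptive threshold:
    higher reported cognitive effort raises the threshold, so fewer
    perceptual events trigger association-forming (learning)."""
    def __init__(self, threshold=0.5, gain=0.1):
        self.prev = None
        self.threshold = threshold
        self.gain = gain

    def attend(self, signal):
        """Return True when the perceived change warrants attention."""
        if self.prev is None:
            self.prev = signal
            return True          # first reading is always attended
        change = abs(signal - self.prev)
        self.prev = signal
        return change > self.threshold

    def feedback(self, effort):
        # Effort feedback regulates sensitivity: more effort, less attention
        self.threshold += self.gain * effort
```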

  20. Human-Centric Teaming in a Multi-Agent EVA Assembly Task

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Currie, Nancy; Ambrose, Robert O.; Culbert, Christopher

    2004-01-01

NASA's Human Space Flight program depends heavily on spacewalks performed by pairs of suited human astronauts. These Extra-Vehicular Activities (EVAs) are severely restricted in both duration and scope by consumables and available manpower. An expanded multi-agent EVA team combining the information-gathering and problem-solving skills of human astronauts with the survivability and physical capabilities of highly dexterous space robots is proposed. A 1-g test featuring two NASA/DARPA Robonaut systems working side-by-side with a suited human subject is conducted to evaluate human-robot teaming strategies in the context of a simulated EVA assembly task based on the STS-61B ACCESS flight experiment.

  1. Building entity models through observation and learning

    NASA Astrophysics Data System (ADS)

    Garcia, Richard; Kania, Robert; Fields, MaryAnne; Barnes, Laura

    2011-05-01

    To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit both long and short-term temporal data. This adaptation allows robotic systems to predict/anticipate, as well as influence, future behavior for both opponents and teammates and will afford the system the ability to adjust its own behavior in order to optimize its ability to achieve the mission goals. This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose of this work, extraction and exploitation is performed through the use of a discretized two-dimensional game. The game consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable locations of the opponent and thus optimize their guarding locations.

  2. Robotic agents for supporting community-dwelling elderly people with memory complaints: Perceived needs and preferences.

    PubMed

    Wu, Ya-Huei; Faucounau, Véronique; Boulay, Mélodie; Maestrutti, Marina; Rigaud, Anne-Sophie

    2011-03-01

Researchers in robotics have been increasingly focusing on robots as a means of supporting older people with cognitive impairment at home. The aim of this study is to explore the elderly's needs and preferences towards having an assistive robot in the home. In order to ensure the appropriateness of this technology, 30 subjects aged 60 and older with memory complaints were recruited from the Memory Clinic of the Broca Hospital. We conducted an interviewer-administered questionnaire that included questions about their needs and preferences concerning robot functions and modes of action. The subjects reported a desire to retain their capacity to manage their daily activities, to maintain good health and to stimulate their memory. Regarding robot functions, the cognitive stimulation programme earned the highest proportion of positive responses, followed by the safeguarding functions, fall detection and the automatic help call. © The Author(s) 2010.

  3. Gait development on Minitaur, a direct drive quadrupedal robot

    NASA Astrophysics Data System (ADS)

    Blackman, Daniel J.; Nicholson, John V.; Ordonez, Camilo; Miller, Bruce D.; Clark, Jonathan E.

    2016-05-01

    This paper describes the development of a dynamic, quadrupedal robot designed for rapid traversal and interaction in human environments. We explore improvements to both physical and control methods to a legged robot (Minitaur) in order to improve the speed and stability of its gaits and increase the range of obstacles that it can overcome, with an eye toward negotiating man-made terrains such as stairs. These modifications include an analysis of physical compliance, an investigation of foot and leg design, and the implementation of ground and obstacle contact sensing for inclusion in the control schemes. Structural and mechanical improvements were made to reduce undesired compliance for more consistent agreement with dynamic models, which necessitated refinement of foot design for greater durability. Contact sensing was implemented into the control scheme for identifying obstacles and deviations in surface level for negotiation of varying terrain. Overall the incorporation of these features greatly enhances the mobility of the dynamic quadrupedal robot and helps to establish a basis for overcoming obstacles.

  4. Enhancing patient freedom in rehabilitation robotics using gaze-based intention detection.

    PubMed

    Novak, Domen; Riener, Robert

    2013-06-01

    Several design strategies for rehabilitation robotics have aimed to improve patients' experiences using motivating and engaging virtual environments. This paper presents a new design strategy: enhancing patient freedom with a complex virtual environment that intelligently detects patients' intentions and supports the intended actions. A 'virtual kitchen' scenario has been developed in which many possible actions can be performed at any time, allowing patients to experiment and giving them more freedom. Remote eye tracking is used to detect the intended action and trigger appropriate support by a rehabilitation robot. This approach requires no additional equipment attached to the patient and has a calibration time of less than a minute. The system was tested on healthy subjects using the ARMin III arm rehabilitation robot. It was found to be technically feasible and usable by healthy subjects. However, the intention detection algorithm should be improved using better sensor fusion, and clinical tests with patients are needed to evaluate the system's usability and potential therapeutic benefits.

  5. A Novel Robot System Integrating Biological and Mechanical Intelligence Based on Dissociated Neural Network-Controlled Closed-Loop Environment

    PubMed Central

    Wang, Yuechao; Li, Hongyi; Zheng, Xiongfei

    2016-01-01

We propose the architecture of a novel robot system merging biological and artificial intelligence, based on a neural controller connected to an external agent. We initially built a framework that connected a dissociated neural network to a mobile robot system to implement a realistic vehicle. The mobile robot system, consisting of a camera and a two-wheeled robot, was designed to execute a target-searching task. We modified the software architecture and developed a home-made stimulation generator to build a bi-directional connection between the biological and artificial components via simple binomial coding/decoding schemes. In this paper, we utilized a specific hierarchical dissociated neural network for the first time as the neural controller. Based on our work, neural cultures were successfully employed to control an artificial agent, resulting in high performance. Surprisingly, under tetanus stimulus training, the robot performed better and better as the number of training cycles increased, because of the short-term plasticity of the neural network (a kind of reinforcement learning). Compared with previously reported work, we adopted an effective experimental protocol (i.e. increasing the number of training cycles) to ensure the occurrence of short-term plasticity, and preliminarily demonstrated that the improvement in the robot's performance could be caused independently by the plasticity development of the dissociated neural network. This new framework may provide possible solutions for the learning abilities of intelligent robots through the engineering application of the plasticity processing of neural networks, and theoretical inspiration for the next generation of neuro-prostheses based on the bi-directional exchange of information within hierarchical neural networks. PMID:27806074

  6. Modelling brain emergent behaviours through coevolution of neural agents.

    PubMed

    Maniadakis, Michail; Trahanias, Panos

    2006-06-01

Recently, many research efforts have focused on modelling partial brain areas, with the long-term goal of supporting the cognitive abilities of artificial organisms. Existing models usually suffer from heterogeneity, which makes their integration very difficult. The present work introduces a computational framework to address brain modelling tasks, emphasizing the integrative performance of substructures. Moreover, implemented models are embedded in a robotic platform to support its behavioural capabilities. We follow an agent-based approach in the design of substructures to support the autonomy of partial brain structures. Agents are formulated to allow the emergence of a desired behaviour after a certain amount of interaction with the environment. An appropriate collaborative coevolutionary algorithm, able to emphasize both the speciality of brain areas and their cooperative performance, is employed to support the design specification of agent structures. The effectiveness of the proposed approach is illustrated through the implementation of computational models for the motor cortex and hippocampus, which are successfully tested on a simulated mobile robot.

  7. Toward understanding social cues and signals in human-robot interaction: effects of robot gaze and proxemic behavior.

    PubMed

    Fiore, Stephen M; Wiltshire, Travis J; Lobato, Emilio J C; Jentsch, Florian G; Huang, Wesley H; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human-robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava(TM) mobile robotics platform in a hallway navigation scenario. Cues associated with the robot's proxemic behavior were found to significantly affect participant perceptions of the robot's social presence and emotional state while cues associated with the robot's gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot's mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals.

  8. Analysis of Decentralized Variable Structure Control for Collective Search by Mobile Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feddema, J.; Goldsmith, S.; Robinett, R.

    1998-11-04

This paper presents an analysis of a decentralized coordination strategy for organizing and controlling a team of mobile robots performing collective search. The alpha-beta coordination strategy is a family of collective search algorithms that allow teams of communicating robots to implicitly coordinate their search activities through a division of labor based on self-selected roles. In an alpha-beta team, alpha agents are motivated to improve their status by exploring new regions of the search space. Beta agents are conservative, and rely on the alpha agents to provide advanced information on favorable regions of the search space. An agent selects its current role dynamically based on its current status value relative to the current status values of the other team members. Status is determined by some function of the agent's sensor readings, and is generally a measurement of source intensity at the agent's current location. Variations on the decision rules determining alpha and beta behavior produce different versions of the algorithm that lead to different global properties. The alpha-beta strategy is based on a simple finite-state machine that implements a form of Variable Structure Control (VSC). The VSC system changes the dynamics of the collective system by abruptly switching at defined states to alternative control laws. In VSC, Lyapunov's direct method is often used to design control surfaces which guide the system to a given goal. We introduce the alpha-beta algorithm and present an analysis of the equilibrium point and the global stability of the alpha-beta algorithm based on Lyapunov's method.
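The role-selection rule can be sketched in a few lines. The abstract leaves the exact comparison unspecified, so the version below assumes, purely for illustration, that agents whose status falls below the team median become exploratory alphas (they have the most to gain from exploring) while the rest remain conservative betas:

```python
def assign_roles(status):
    """Illustrative alpha-beta role selection: `status` maps agent
    name -> sensed source intensity at its current location. Agents
    below the team median self-select the alpha (explorer) role;
    the rest take the beta (conservative) role."""
    ordered = sorted(status.values())
    median = ordered[len(ordered) // 2]
    return {name: ('alpha' if s < median else 'beta')
            for name, s in status.items()}
```

Re-running the rule each time step gives the dynamic, self-selected division of labor the abstract describes; changing the comparison or the reference statistic yields the different algorithm variants with different global properties.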

  9. How gestures affect students: A comparative experiment using class presentations conducted by an anthropomorphic agent

    NASA Astrophysics Data System (ADS)

    Shirakawa, Tomohiro; Sato, Hiroshi; Imao, Tomoya

    2017-07-01

Recently, a variety of user interfaces have been developed based on human-robot and human-agent interaction, and anthropomorphic agents are used as one type of interface. However, anthropomorphic agents have been applied mainly in the medical and cognitive sciences, and there are few studies of their application to other fields. Therefore, we used an MMD anthropomorphic agent in a virtual lecture to analyze the effect of gestures on students and to search for ways to apply anthropomorphic agents in the field of educational technology.

  10. Exhaustive search system and method using space-filling curves

    DOEpatents

    Spires, Shannon V.

    2003-10-21

    A search system and method for one agent or for multiple agents using a space-filling curve provides a way to control one or more agents to cover an area of any space of any dimensionality using an exhaustive search pattern. An example of the space-filling curve is a Hilbert curve. The search area can be a physical geography, a cyberspace search area, or an area searchable by computing resources. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace.
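A Hilbert curve visits every cell of a 2^k x 2^k grid exactly once while keeping successive cells adjacent, which is what makes it suitable as an exhaustive coverage pattern for a search agent. The sketch below converts a distance d along the curve into grid coordinates using the standard iterative bit-manipulation algorithm (a generic illustration, not the patented system's code):

```python
def d2xy(order, d):
    """Convert distance d along a Hilbert curve of the given order
    into (x, y) coordinates on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:              # rotate/flip the quadrant as needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

An agent stepping through d = 0, 1, 2, ... therefore sweeps the whole area with only unit moves between waypoints; multiple agents can split the index range [0, 4**order) among themselves for cooperative exhaustive search.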

  11. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    PubMed Central

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-01-01

Biosignals will play an important role in building communication between machines and humans. One type of biosignal widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye-movement displacement. Experiments were performed to construct a gaze-motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units using the linear relationship between EOG signals and gaze-motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift from actual pixels to target pixels. This method consists of a sequence of five geometric operations: translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at targets. This result shows that the method is reliable for building communication between humans and machines using EOGs. PMID:24919013
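The five-step correction can be expressed as a product of 3x3 homogeneous 2-D transforms applied to the raw gaze estimate. The sketch below uses illustrative parameter names and NumPy; the actual parameter values in the paper are fitted per operator, and the function names are assumptions:

```python
import numpy as np

# Each elementary step is a 3x3 homogeneous 2-D transform; the full
# correction is their product applied to the raw pixel estimate.
def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

def shear(kx, ky):
    return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

def correct(pixel_xy, t1, theta, t2, k, scale):
    """Map an actual pixel location toward the target location by
    chaining translation-1, rotation, translation-2, shear and
    dilatation (applied in that order)."""
    m = (dilatation(*scale) @ shear(*k) @ translation(*t2)
         @ rotation(theta) @ translation(*t1))
    p = m @ np.array([pixel_xy[0], pixel_xy[1], 1.0])
    return p[:2]
```

Working in homogeneous coordinates lets all five steps, including the translations, compose into a single matrix that can be fitted once per operator and then applied to every gaze sample.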

  12. Demonstration of a Semi-Autonomous Hybrid Brain-Machine Interface using Human Intracranial EEG, Eye Tracking, and Computer Vision to Control a Robotic Upper Limb Prosthetic

    PubMed Central

    McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.

    2014-01-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914

  13. Demonstration of a semi-autonomous hybrid brain-machine interface using human intracranial EEG, eye tracking, and computer vision to control a robotic upper limb prosthetic.

    PubMed

    McMullen, David P; Hotson, Guy; Katyal, Kapil D; Wester, Brock A; Fifer, Matthew S; McGee, Timothy G; Harris, Andrew; Johannes, Matthew S; Vogelstein, R Jacob; Ravitz, Alan D; Anderson, William S; Thakor, Nitish V; Crone, Nathan E

    2014-07-01

    To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 s for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs.
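
    Balanced accuracy, the detection metric reported above, averages sensitivity and specificity so that the abundant no-movement periods cannot inflate the score. A minimal sketch; the confusion-matrix counts below are hypothetical, not the study's data:

```python
# Balanced accuracy = (sensitivity + specificity) / 2. It is robust to
# class imbalance, which matters here because rest periods far outnumber
# movement onsets in a BMI recording. Counts below are hypothetical.

def balanced_accuracy(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # movements correctly detected
    specificity = tn / (tn + fp)  # rest periods correctly rejected
    return 0.5 * (sensitivity + specificity)

print(balanced_accuracy(tp=26, fn=2, tn=90, fp=10))  # ~0.914
```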

  14. "Television" Artists

    ERIC Educational Resources Information Center

    Szekely, George

    2010-01-01

    In an art class, children browse through space-age knobs, robot antennas and gyroscopic signal searchers. They extend space needle antennas before turning on an old TV. They discover the sights and sounds of televisions past, hearing the hiss, the gathering power, and seeing the blinking eye, the black-and-white light and blurry images projected…

  15. Data from: Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Methods

    Science.gov Websites

    National Agricultural Library, Ag Data Commons dataset. License: U.S. Public Domain. Funding source(s): National Science Foundation IOS-1339211.
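
    The dataset above concerns the robot-world hand-eye calibration problem, commonly written AX = ZB, where X (gripper-to-camera) and Z (robot-base-to-world) are the unknown transforms. A minimal NumPy sketch of the constraint on synthetic data; all transforms are hypothetical, and this only verifies the equation rather than implementing the dataset's iterative solvers:

```python
import numpy as np

def hom(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot(axis, angle):
    """Rotation matrix from an axis-angle pair (Rodrigues' formula)."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

# Hypothetical ground-truth unknowns: X (gripper->camera), Z (base->world).
X = hom(rot([0, 0, 1], 0.3), [0.1, 0.0, 0.05])
Z = hom(rot([0, 1, 0], -0.2), [1.0, 0.5, 0.0])

# Each measured robot pose A_i determines a camera pose B_i = Z^-1 A_i X,
# so any calibration candidate can be scored by the residual of A X - Z B.
A = hom(rot([1, 1, 0], 0.7), [0.4, -0.2, 0.3])
B = np.linalg.inv(Z) @ A @ X

residual = np.linalg.norm(A @ X - Z @ B)
print(residual)  # ~0 by construction
```

In practice A and B come from noisy measurements, and iterative methods such as those in the dataset minimize this residual over many pose pairs.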

  16. Ocular allergy and dry eye syndrome.

    PubMed

    Bielory, Leonard

    2004-10-01

    Ocular allergy is a common clinical disorder that includes dry eye syndrome in its differential diagnosis. While ocular allergy treatments have continued to evolve since the early 1990s, when the new prescription topical agents became available, there have been no major advances in the treatment of dry eye syndrome other than changes in the chemical structures of various artificial tear formulations. This review is timely and relevant due to the recent FDA approval of several new agents for the treatment of dry eye syndrome. The literature reviewed brings the practicing allergist/clinical immunologist up to date on the recent understanding that T-cell activation plays a key role in dry eye syndrome immunopathophysiology. In addition, parallel novel treatment developments are discussed, including new formulations for tear substitutes, topical cyclosporine A and purinergic receptor (P2Y2) agonists. These recent developments bode well for patients who are referred for ocular allergy, including dry eye syndrome. A new formulation for a tear substitute that generates a 'soft gel' covering the ocular surface (in situ) is ideal for early forms of dry eye syndrome, while topical cyclosporine is the first real new prescription treatment for patients with moderate to severe forms of dry eye. Another agent with the potential to revolutionize the treatment of various disorders is based on the discovery of the purinergic receptor agonists. This is relevant not only for the production of mucin and the change in tear fluid content, but may also have implications for other sinopulmonary disorders such as cystic fibrosis and chronic sinusitis.

  17. Autism and social robotics: A systematic review.

    PubMed

    Pennisi, Paola; Tonacci, Alessandro; Tartarisco, Gennaro; Billeci, Lucia; Ruta, Liliana; Gangemi, Sebastiano; Pioggia, Giovanni

    2016-02-01

    Social robotics could be a promising method for Autism Spectrum Disorders (ASD) treatment. The aim of this article is to carry out a systematic literature review of the studies on this topic that were published in the last 10 years. We tried to address the following question: can social robots be a useful tool in autism therapy? We followed the PRISMA guidelines, and the protocol was registered within the PROSPERO database (CRD42015016158). We found many positive implications of the use of social robots in therapy, for example: ASD subjects often performed better with a robot partner than with a human partner; ASD patients sometimes directed behaviors toward robots that typically developing (TD) subjects directed toward human agents; ASD subjects displayed many social behaviors toward robots; during robotic sessions, ASD subjects showed reduced repetitive and stereotyped behaviors; and social robots improved spontaneous language during therapy sessions. Robots therefore provide therapists and researchers with a means of connecting with autistic subjects more easily, but studies in this area are still insufficient. It is necessary to clarify whether sex, intelligence quotient, and age of participants affect the outcome of therapy, and whether any beneficial effects occur only during the robotic session or are still observable outside the clinical/experimental context. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  18. Muecas: A Multi-Sensor Robotic Head for Affective Human Robot Interaction and Imitation

    PubMed Central

    Cid, Felipe; Moreno, Jose; Bustos, Pablo; Núñez, Pedro

    2014-01-01

    This paper presents a multi-sensor humanoid robotic head for human robot interaction. The design of the robotic head, Muecas, is based on ongoing research on the mechanisms of perception and imitation of human expressions and emotions. These mechanisms allow direct interaction between the robot and its human companion through the different natural language modalities: speech, body language and facial expressions. The robotic head has 12 degrees of freedom, in a human-like configuration, including eyes, eyebrows, mouth and neck, and has been designed and built entirely by IADeX (Engineering, Automation and Design of Extremadura) and RoboLab. A detailed description of its kinematics is provided along with the design of the most complex controllers. Muecas can be directly controlled by FACS (Facial Action Coding System), the de facto standard for facial expression recognition and synthesis. This feature facilitates its use by third party platforms and encourages the development of imitation and of goal-based systems. Imitation systems learn from the user, while goal-based ones use planning techniques to drive the user towards a final desired state. To show the flexibility and reliability of the robotic head, the paper presents a software architecture that is able to detect, recognize, classify and generate facial expressions in real time using FACS. This system has been implemented using the robotics framework, RoboComp, which provides hardware-independent access to the sensors in the head. Finally, the paper presents experimental results showing the real-time functioning of the whole system, including recognition and imitation of human facial expressions. PMID:24787636
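
    FACS describes expressions as combinations of action units (AUs), which a robotic head can map to actuator targets. A toy sketch of that idea follows; the joint names, gains, and AU subset are hypothetical illustrations, not taken from the Muecas paper:

```python
# Hypothetical AU -> actuator mapping in the spirit of a FACS-driven head.
# Real Muecas joint names and gains are not given in the abstract.
AU_TO_JOINTS = {
    1:  {"inner_brow": +0.8},                      # AU1: inner brow raiser
    2:  {"outer_brow": +0.8},                      # AU2: outer brow raiser
    4:  {"inner_brow": -0.6, "outer_brow": -0.6},  # AU4: brow lowerer
    12: {"mouth_corner": +0.9},                    # AU12: lip corner puller
    26: {"jaw": +0.5},                             # AU26: jaw drop
}

def pose_from_aus(active_aus):
    """Sum the joint offsets of the active action units into one pose."""
    pose = {}
    for au in active_aus:
        for joint, value in AU_TO_JOINTS.get(au, {}).items():
            pose[joint] = pose.get(joint, 0.0) + value
    return pose

print(pose_from_aus([1, 2, 12]))  # a simple smiling configuration
```

Driving synthesis from AU codes like this is what lets third-party platforms reuse the same recognition output for imitation or goal-based control.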

  19. Impact of IQ, computer-gaming skills, general dexterity, and laparoscopic experience on performance with the da Vinci surgical system.

    PubMed

    Hagen, Monika E; Wagner, Oliver J; Inan, Ihsan; Morel, Philippe

    2009-09-01

    Due to improved ergonomics and dexterity, robotic surgery is promoted as easy to perform, with no special skills necessary. We tested this hypothesis by measuring IQ elements, computer-gaming skills, and general dexterity with chopsticks, and by evaluating laparoscopic experience, in correlation with performance with the da Vinci robot. Thirty-four individuals were tested for robotic dexterity, IQ elements, computer-gaming skills and general dexterity. Eighteen surgically inexperienced individuals and 16 laparoscopically trained surgeons were included. Each individual performed three different tasks with the da Vinci surgical system and their times were recorded. An IQ test (elements: logical thinking, 3D imagination and technical understanding) was completed by each participant. Computer skills were tested with a simple computer game (hand-eye coordination) and general dexterity was evaluated by the ability to use chopsticks. We found no correlation between logical thinking, 3D imagination and robotic skills. Both computer gaming and general dexterity showed a slight but non-significant association with improved performance with the da Vinci robot (p > 0.05). A significant correlation between robotic skills, technical understanding and laparoscopic experience was observed (p < 0.05). The data support the conclusion that there are no significant correlations between robotic performance and logical thinking, 3D understanding, computer-gaming skills or general dexterity. A correlation between robotic skills and technical understanding may exist. Laparoscopic experience seems to be the strongest predictor of performance with the da Vinci surgical system. Generally, it appears difficult to identify non-surgical predictors of robotic surgery performance.
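
    Correlation analyses like those reported above reduce to a Pearson coefficient between a performance measure and a candidate predictor. A sketch on invented data; the scores below are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical scores: task time with the robot (lower = better) against
# years of laparoscopic experience, the strongest predictor reported.
robot_time = np.array([182, 150, 130, 210, 115, 165, 120, 190], dtype=float)
lap_years  = np.array([  1,   4,   6,   0,   8,   3,   7,   1], dtype=float)

# Pearson r from the off-diagonal of the 2x2 correlation matrix.
r = np.corrcoef(robot_time, lap_years)[0, 1]
print(r)  # strongly negative: more experience, faster task times
```

With a sample of 34 as in the study, significance would additionally be assessed with a t-test or permutation test on r.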

  20. Escape and surveillance asymmetries in locusts exposed to a Guinea fowl-mimicking robot predator.

    PubMed

    Romano, Donato; Benelli, Giovanni; Stefanini, Cesare

    2017-10-09

    Escape and surveillance responses to predators are lateralized in several vertebrate species. However, little is known about the laterality of escape and predator surveillance in arthropods. In this study, we investigated the lateralization of escape and surveillance responses in young instars and adults of Locusta migratoria during biomimetic interactions with a robot-predator inspired by the Guinea fowl, Numida meleagris. Results showed individual-level lateralization in the jumping escape of locusts exposed to the robot-predator attack. The laterality of this response was higher in L. migratoria adults than in young instars. Furthermore, population-level lateralization of predator surveillance was found when testing both L. migratoria adults and young instars; locusts used the right compound eye to oversee the robot-predator. Right-biased individuals were more stationary than left-biased ones during surveillance of the robot-predator. Individual-level lateralization could avoid predictability during the jumping escape. Population-level lateralization may improve coordination in the swarm during specific group tasks such as predator surveillance. To the best of our knowledge, this is the first report of lateralized predator-prey interactions in insects. Our findings outline the possibility of using biomimetic robots to study predator-prey interaction, avoiding the use of real predators and thus achieving standardized experimental conditions to investigate complex and flexible behaviours.
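
    Laterality of this kind is typically quantified with an index such as (R − L)/(R + L) and tested against the no-bias expectation with an exact binomial test. A sketch with hypothetical counts; the formula and test are standard, but the numbers are not from the study:

```python
from math import comb

def laterality_index(right, left):
    """(R - L) / (R + L): +1 is fully right-biased, -1 fully left-biased."""
    return (right - left) / (right + left)

def binom_p_two_sided(k, n, p=0.5):
    """Exact two-sided binomial p-value against no bias (p = 0.5):
    sum the probabilities of all outcomes no more likely than k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(x for x in pmf if x <= pmf[k] + 1e-12)

# Hypothetical individual: 17 of 20 surveillance bouts used the right eye.
print(laterality_index(17, 3))    # 0.7
print(binom_p_two_sided(17, 20))  # well below 0.05
```

Individual-level lateralization is a significant bias in either direction per animal; population-level lateralization requires the biases to align across animals.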

  1. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot.

    PubMed

    Duan, Xingguang; Gao, Liang; Wang, Yonggui; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the high risk and high accuracy demands of cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationships between the subsystems are established using quaternions and the iterative closest point registration algorithm. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop "kinematics + optics" hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning.
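
    With correspondences fixed, each iteration of ICP reduces to rigid point-set registration, solvable in closed form. The sketch below uses the SVD (Kabsch) solution; the quaternion formulation mentioned in the abstract (Horn's method) yields the same optimum. All data are synthetic:

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~= Q, for
    paired 3xN point sets (the inner step of ICP). Kabsch/SVD form."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: recover a known rotation about z and a translation.
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([[10.0], [-2.0], [5.0]])
P = np.random.default_rng(0).normal(size=(3, 8))
Q = R_true @ P + t_true
R, t = rigid_register(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Full ICP alternates this closed-form step with nearest-neighbor matching; navigation systems then chain such registrations to link tracker, robot, and image coordinate frames.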

  2. Modelling and Experiment Based on a Navigation System for a Cranio-Maxillofacial Surgical Robot

    PubMed Central

    Duan, Xingguang; Gao, Liang; Li, Jianxi; Li, Haoyuan; Guo, Yanjun

    2018-01-01

    In view of the high risk and high accuracy demands of cranio-maxillofacial surgery, we present a novel surgical robot system that can be used in a variety of surgeries. The surgical robot system can assist surgeons in completing biopsy of skull base lesions, radiofrequency thermocoagulation of the trigeminal ganglion, and radioactive particle implantation of skull base malignant tumors. This paper focuses on modelling and experimental analyses of the robot system based on navigation technology. Firstly, the transformation relationships between the subsystems are established using quaternions and the iterative closest point registration algorithm. A hand-eye coordination model based on optical navigation is established to control the end effector of the robot moving to the target position along the planned path. A closed-loop “kinematics + optics” hybrid motion control method is presented to improve the positioning accuracy of the system. Secondly, the accuracy of the system model was tested by model experiments, and the feasibility of the closed-loop control method was verified by comparing the positioning accuracy before and after its application. Finally, skull model experiments were performed to evaluate the function of the surgical robot system. The results validate its feasibility and are consistent with the preoperative surgical planning. PMID:29599948

  3. Recovery of Proprioception in the Upper Extremity by Robotic Mirror Therapy: a Clinical Pilot Study for Proof of Concept

    PubMed Central

    2017-01-01

    A novel robotic mirror therapy system was recently developed to provide proprioceptive stimulus to the hemiplegic arm during a mirror therapy. Validation of the robotic mirror therapy system was performed to confirm its synchronicity prior to the clinical study. The mean error angle range between the intact arm and the robot was 1.97 to 4.59 degrees. A 56-year-old male who had right middle cerebral artery infarction 11 months ago received the robotic mirror therapy for ten 30-minute sessions during 2 weeks. Clinical evaluation and functional magnetic resonance imaging (fMRI) studies were performed before and after the intervention. At the follow-up evaluation, the thumb finding test score improved from 2 to 1 for eye level and from 3 to 1 for overhead level. The Albert's test score on the left side improved from 6 to 11. Improvements were sustained at 2-month follow-up. The fMRI during the passive motion revealed a considerable increase in brain activity at the lower part of the right superior parietal lobule, suggesting the possibility of proprioception enhancement. The robotic mirror therapy system may serve as a useful treatment method for patients with supratentorial stroke to facilitate recovery of proprioceptive deficit and hemineglect. PMID:28875598

  4. KENNEDY SPACE CENTER, FLA. - At right is the Delta II rocket on Launch Complex 17-A, Cape Canaveral Air Force Station, that will launch Mars Exploration Rover 2 (MER-2) on June 5. In the center are three more solid rocket boosters that will be added to the Delta, which will carry nine in all. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-15


  5. KENNEDY SPACE CENTER, FLA. - The Delta II rocket on Launch Complex 17-A, Cape Canaveral Air Force Station, is having solid rocket boosters (SRBs) installed that will help launch Mars Exploration Rover 2 (MER-2) on June 5. In the center are three more solid rocket boosters that will be added to the Delta, which will carry nine in all. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-15


  6. KENNEDY SPACE CENTER, FLA. - A third solid rocket booster (SRB) is lifted up the launch tower on Launch Complex 17-A, Cape Canaveral Air Force Station. They are three of nine SRBs that will be mated to the Delta rocket to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-14


  7. KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, workers complete raising a solid rocket booster to a vertical position. It will be lifted up the launch tower and mated to the Delta rocket to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-14


  8. KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, a solid rocket booster is raised off the transporter. When vertical, it will be lifted up the launch tower and mated to the Delta rocket (in the background) to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-14


  9. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers lower the backshell with the Mars Exploration Rover 1 (MER-1) onto the heat shield. The two components form the aeroshell that will protect the rover on its journey to Mars. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-15


  10. KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-A, Cape Canaveral Air Force Station, a solid rocket booster is moved into position to raise to vertical and lift up the launch tower. It is one of nine that will be mated to the Delta rocket to launch Mars Exploration Rover 2. NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-14


  11. KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, workers check the attachment between the backshell (above) and heat shield (below) surrounding the Mars Exploration Rover 1 (MER-1). The aeroshell will protect the rover on its journey to Mars. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-15


  12. KENNEDY SPACE CENTER, FLA. - At Launch Complex 17-A, Cape Canaveral Air Force Station, the first half of the fairing for the Mars Exploration Rover 2 (MER-2) is installed around the Mars Exploration Rover 2 (MER-2). MER-2 is one of NASA's twin Mars Exploration Rovers designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-2 is scheduled to launch no earlier than June 8 as MER-A, with two launch opportunities each day during the launch period that closes on June 19.

    NASA Image and Video Library

    2003-05-31


  13. KENNEDY SPACE CENTER, FLA. - Workers on the launch tower of Complex 17-A, Cape Canaveral Air Force Station, stand by while a solid rocket booster (SRB) is lifted to vertical. It is one of nine that will help launch Mars Exploration Rover 2 (MER-2). NASA’s twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can’t yet go. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

    NASA Image and Video Library

    2003-05-15


  14. KENNEDY SPACE CENTER, FLA. - The Mobile Service Tower is rolled back at Launch Complex 17A to reveal a Delta II rocket ready to launch the Mars Exploration Rover-A mission. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans are not yet able to go. MER-A, with the rover Spirit aboard, is scheduled to launch on June 8 at 2:06 p.m. EDT, with two launch opportunities each day during a launch period that closes on June 24.

    NASA Image and Video Library

    2003-06-08

  15. Do two and three year old children use an incremental first-NP-as-agent bias to process active transitive and passive sentences?: A permutation analysis

    PubMed Central

    Chang, Franklin; Rowland, Caroline; Ferguson, Heather; Pine, Julian

    2017-01-01

    We used eye-tracking to investigate if and when children show an incremental bias to assume that the first noun phrase in a sentence is the agent (first-NP-as-agent bias) while processing the meaning of English active and passive transitive sentences. We also investigated whether children can override this bias to successfully distinguish active from passive sentences, after processing the remainder of the sentence frame. For this second question we used eye-tracking (Study 1) and forced-choice pointing (Study 2). For both studies, we used a paradigm in which participants simultaneously saw two novel actions with reversed agent-patient relations while listening to active and passive sentences. We compared English-speaking 25-month-olds and 41-month-olds in between-subjects sentence structure conditions (Active Transitive Condition vs. Passive Condition). A permutation analysis found that both age groups showed a bias to incrementally map the first noun in a sentence onto an agent role. Regarding the second question, 25-month-olds showed some evidence of distinguishing the two structures in the eye-tracking study. However, the 25-month-olds did not distinguish active from passive sentences in the forced choice pointing task. In contrast, the 41-month-old children did reanalyse their initial first-NP-as-agent bias to the extent that they clearly distinguished between active and passive sentences both in the eye-tracking data and in the pointing task. The results are discussed in relation to the development of syntactic (re)parsing. PMID:29049390
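The permutation analysis referenced above tests group differences without distributional assumptions. As a generic illustration (not the authors' actual pipeline; the data values below are invented), a two-sample permutation test on a difference of means can be sketched as:

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    Returns the fraction of random label shufflings whose statistic is
    at least as extreme as the observed one (a Monte Carlo p-value).
    """
    rng = random.Random(seed)

    def mean(xs):
        return sum(xs) / len(xs)

    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-assign condition labels at random
        if abs(mean(pooled[:n_a]) - mean(pooled[n_a:])) >= observed:
            extreme += 1
    return extreme / n_perm

# invented example: proportion of looks to the agent in two conditions
active = [0.61, 0.58, 0.70, 0.66, 0.63]
passive = [0.41, 0.45, 0.39, 0.50, 0.44]
p = permutation_test(active, passive)  # small p: groups differ reliably
```

Because the null distribution is built from the data itself, the test stays valid for the small, non-normal samples typical of infant eye-tracking work.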

  16. Self-Organized Multi-Camera Network for a Fast and Easy Deployment of Ubiquitous Robots in Unknown Environments

    PubMed Central

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V.; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2013-01-01

    To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal. PMID:23271604

  17. Self-organized multi-camera network for a fast and easy deployment of ubiquitous robots in unknown environments.

    PubMed

    Canedo-Rodriguez, Adrián; Iglesias, Roberto; Regueiro, Carlos V; Alvarez-Santos, Victor; Pardo, Xose Manuel

    2012-12-27

    To bring cutting edge robotics from research centres to social environments, the robotics community must start providing affordable solutions: the costs must be reduced and the quality and usefulness of the robot services must be enhanced. Unfortunately, nowadays the deployment of robots and the adaptation of their services to new environments are tasks that usually require several days of expert work. With this in view, we present a multi-agent system made up of intelligent cameras and autonomous robots, which is easy and fast to deploy in different environments. The cameras will enhance the robot perceptions and allow them to react to situations that require their services. Additionally, the cameras will support the movement of the robots. This will enable our robots to navigate even when no maps are available. The deployment of our system does not require expertise and can be done in a short period of time, since neither software nor hardware tuning is needed. Every system task is automatic, distributed and based on self-organization processes. Our system is scalable, robust, and flexible to the environment. We carried out several real world experiments, which show the good performance of our proposal.

  18. Model learning for robot control: a survey.

    PubMed

    Nguyen-Tuong, Duy; Peters, Jan

    2011-11-01

    Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kinds of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
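As a minimal illustration of the kind of model learning the survey covers (a sketch only, not a method from the survey), a linear forward dynamics model s' ≈ w_s·s + w_a·a can be fitted from observed state transitions by stochastic gradient descent; the true coefficients 0.9 and 0.5 below are arbitrary:

```python
import random

def learn_forward_model(transitions, lr=0.05, epochs=2000):
    """Fit a linear forward model  s_next ~ w_s * s + w_a * a
    by stochastic gradient descent on the squared prediction error."""
    w_s, w_a = 0.0, 0.0
    for _ in range(epochs):
        for s, a, s_next in transitions:
            err = (w_s * s + w_a * a) - s_next  # prediction error
            w_s -= lr * err * s
            w_a -= lr * err * a
    return w_s, w_a

# transitions generated by dynamics unknown to the learner: s' = 0.9 s + 0.5 a
rng = random.Random(1)
transitions = [(s, a, 0.9 * s + 0.5 * a)
               for s, a in ((rng.uniform(-1, 1), rng.uniform(-1, 1))
                            for _ in range(50))]
w_s, w_a = learn_forward_model(transitions)  # recovers roughly (0.9, 0.5)
```

Real robot dynamics are nonlinear, which is why the survey emphasises nonparametric regression; the linear case only shows the data-driven pipeline in its simplest form.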

  19. Riot Control Agents

    DTIC Science & Technology

    2009-01-01

    However, a local anesthetic applied to the eye will help with eye pain and allow for further evaluation of the eye by slit lamp. Contact lenses...RCAs. Eye findings from RCA toxicity can range in severity from conjunctival erythema to ocular necrosis. Lacrimation, conjunctival erythema/edema...and ocular necrosis (Grant, 1986). Figure 12.12 illustrates and summarizes the common toxic ophthalmological signs and symptoms associated with

  20. Basic emotions and adaptation. A computational and evolutionary model.

    PubMed

    Pacella, Daniela; Ponticorvo, Michela; Gigliotta, Onofrio; Miglino, Orazio

    2017-01-01

    The core principles of the evolutionary theories of emotions declare that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on studying the emotion of fear, therefore the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives and select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activation keeps track of their actual "sensations" based on the outcome of past behavior. We train five different neural network architectures and then test the best ranked individuals comparing their performances and analyzing the unit activations in each individual's life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is essential for the robots to achieve the highest performance and survivability across all conditions.
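The training scheme described above, genetic algorithms evolving neural controllers, can be sketched in miniature. The following is a hedged toy example (a single neuron and an invented "safe vs. dangerous" stimulus task), not the iCub setup from the paper:

```python
import random

def fitness(genome, samples):
    """Fraction of stimuli classified correctly by a one-neuron controller:
    approach (output > 0) when the stimulus is safe, avoid otherwise."""
    w1, w2, bias = genome
    return sum(((w1 * x1 + w2 * x2 + bias) > 0) == safe
               for x1, x2, safe in samples) / len(samples)

def evolve(samples, pop_size=30, generations=60, sigma=0.3, seed=0):
    """Truncation-selection GA over the controller's weights."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(g, samples), reverse=True)
        elite = pop[:pop_size // 5]                  # keep the best fifth
        pop = [g[:] for g in elite]
        while len(pop) < pop_size:                   # refill with mutants
            pop.append([w + rng.gauss(0, sigma) for w in rng.choice(elite)])
    return max(pop, key=lambda g: fitness(g, samples))

# invented environment: a stimulus (x1, x2) is safe iff x1 + x2 > 1
rng = random.Random(1)
samples = [(x1, x2, x1 + x2 > 1)
           for x1, x2 in ((rng.random(), rng.random()) for _ in range(40))]
best = evolve(samples)
```

Selection acts only on task performance, so any discrimination the evolved controller exhibits emerges from the adaptive problem itself, which is the point the paper makes at much larger scale.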

  1. Humans and Autonomy: Implications of Shared Decision Making for Military Operations

    DTIC Science & Technology

    2017-01-01

    and machine learning transparency are identified as future research opportunities. 15. SUBJECT TERMS autonomy, human factors, intelligent agents...network as either the mission changes or an agent becomes disabled (DSB 2012). Fig. 2 Control structures for human agent teams. Robots without tools... learning (ML) algorithms monitor progress. However, operators have final executive authority; they are able to tweak the plan or choose an option

  2. Work- and non-work-related eye injuries in a highly industrialized area in northern Italy: comparison between two three-year periods (1994-1996 and 2005-2007).

    PubMed

    Semeraro, F; Polcini, C; Forbice, Eliana; Monfardini, A; Costagliola, C; Apostoli, P

    2013-01-01

    Ocular trauma is a major cause of monocular blindness and visual impairment in industrialized countries. The aim of this paper was to study epidemiology, causes, and clinical features of work-related and non-work-related eye injuries in a highly industrialized area of northern Italy. All patients hospitalized for eye injuries were enrolled. Two 3-year periods were studied (1994-1996 and 2005-2007). The variables analyzed included sex, age, social class of the patients, nature of the injuring agent (e.g., metal, plastic, etc.), place where the accident occurred (e.g., home, work, etc.), and time of the year (e.g., summer, winter, etc.). We enrolled 1001 men and 129 women. There were no significant differences between the two 3-year periods as regards distribution of sex, age, and location. Road-related injuries significantly decreased (p < 0.004). Comparison of injuring agents showed a decrease in metallic agents (p < 0.001) and an increase in lime agents (p < 0.001). Analysis of the type of trauma showed a decrease in blunt traumas (p < 0.001) and an increase in chemical injuries (p < 0.001) and actinic keratitis (p = 0.002). In the second 3-year period, we found a significant increase in injuries in non-Italian subjects (p < 0.001). Work-related injuries were the major cause of eye trauma. Road accident-related eye injuries dropped significantly in the second 3-year period. The adoption of higher safety standards, as well as information and educational campaigns, can significantly reduce work-related and non-work-related eye injuries.

  3. NICA: Natural Interaction with a Caring Agent

    NASA Astrophysics Data System (ADS)

    de Carolis, Berardina; Mazzotta, Irene; Novielli, Nicole

    Ambient Intelligence solutions may provide a great opportunity for elderly people to live longer at home. Assistance and care are delegated to the intelligence embedded in the environment. However, besides providing a service-oriented response to the user's needs, the assistance has to take into account the establishment of social relations. We propose the use of a robot, NICA (named after the project, Natural Interaction with a Caring Agent), acting as a caring assistant that provides a social interface to the smart home services. In this paper, we introduce the general architecture of the robot's "mind" and then focus on the need to react properly to affective and socially oriented situations.

  4. Human leader and robot follower team: correcting leader's position from follower's heading

    NASA Astrophysics Data System (ADS)

    Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald

    2010-04-01

    In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here, we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.
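The abstract does not give FDHC's equations, so the following is only a plausible sketch of the idea: low-pass filter the difference between the leader's reported heading and the follower's more accurate heading along the shared path, then subtract the filtered drift estimate. All numbers below are invented:

```python
def heading_drift_correction(leader_headings, follower_headings, alpha=0.1):
    """Sketch: exponentially smooth the leader-minus-follower heading
    difference (the slow drift) and subtract it from the leader's heading."""
    drift = 0.0
    corrected = []
    for lead_h, foll_h in zip(leader_headings, follower_headings):
        drift += alpha * ((lead_h - foll_h) - drift)  # low-pass filter
        corrected.append(lead_h - drift)
    return corrected, drift

# invented scenario: true heading constant at 1.0 rad; the leader's gyro
# drifts by 0.02 rad per step, while the follower's sensor is near-perfect
true_heading = [1.0] * 200
leader = [h + 0.02 * i for i, h in enumerate(true_heading)]
follower = true_heading
corrected, drift = heading_drift_correction(leader, follower)
```

The low-pass filter matches the paper's premise that the errors being corrected are slow-changing (gyro drift), so fast heading manoeuvres pass through while the bias is removed.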

  5. Evolution of Implicit and Explicit Communication in Mobile Robots

    NASA Astrophysics Data System (ADS)

    de Greeff, Joachim; Nolfi, Stefano

    This work investigates the conditions in which a population of embodied agents evolved for the ability to display coordinated/cooperative skills can develop an ability to communicate, whether and to what extent the evolved communication system can complexify during the course of the evolutionary process, and how the characteristics of such a communication system vary over evolutionary time. The analysis of the obtained results indicates that evolving robots develop a capacity to access/generate information which has a communicative value, an ability to produce different signals encoding useful regularities, and an ability to react appropriately to explicit and implicit signals. The analysis of the obtained results allows us to formulate detailed hypotheses on the evolution of communication concerning aspects such as: (i) how communication can emerge from a population of initially non-communicating agents, (ii) how communication systems can complexify, and (iii) how signals/meanings can originate and how they can be grounded in agents' sensory-motor states.

  6. Insect-Inspired Optical-Flow Navigation Sensors

    NASA Technical Reports Server (NTRS)

    Thakoor, Sarita; Morookian, John M.; Chahl, Javan; Soccol, Dean; Hines, Butler; Zornetzer, Steven

    2005-01-01

    Integrated circuits that exploit optical flow to sense motions of computer mice on or near surfaces ("optical mouse chips") are used as navigation sensors in a class of small flying robots now undergoing development for potential use in such applications as exploration, search, and surveillance. The basic principles of these robots were described briefly in "Insect-Inspired Flight Control for Small Flying Robots" (NPO-30545), NASA Tech Briefs, Vol. 29, No. 1 (January 2005), page 61. To recapitulate from the cited prior article: The concept of optical flow can be defined, loosely, as the use of texture in images as a source of motion cues. The flight-control and navigation systems of these robots are inspired largely by the designs and functions of the vision systems and brains of insects, which have been demonstrated to utilize optical flow (as detected by their eyes and brains) resulting from their own motions in the environment. Optical flow has been shown to be very effective as a means of avoiding obstacles and controlling speeds and altitudes in robotic navigation. Prior systems used in experiments on navigating by means of optical flow have involved the use of panoramic optics, high-resolution image sensors, and programmable image-data-processing computers.
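Optical-flow sensing of the mouse-chip variety reduces, at its core, to estimating image displacement between consecutive frames. A minimal 1-D block-matching sketch (not the actual chip algorithm) is:

```python
def estimate_shift(frame_a, frame_b, max_shift=5):
    """1-D block matching: the integer shift minimising the mean squared
    difference between two consecutive scan lines is the optical flow."""
    best_shift, best_cost = 0, float("inf")
    n = len(frame_a)
    for s in range(-max_shift, max_shift + 1):
        pairs = [(frame_a[i], frame_b[i + s])
                 for i in range(n) if 0 <= i + s < n]
        cost = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

texture = [(i * 37) % 11 for i in range(40)]  # pseudo-random 1-D texture
frame1 = texture
frame2 = texture[3:] + texture[:3]            # scene shifted by 3 pixels
flow = estimate_shift(frame1, frame2)         # -3 under this sign convention
```

As the article notes, flow only exists where there is texture; on a featureless surface every shift has the same cost and the estimate is meaningless.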

  7. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is modelling attention while using it in efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To efficiently recognise the environment under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method using estimated mixture-of-experts task and attention learning in perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space in a learning task problem, which is examined in the WebotsTM simulator, is performed. Simulation results show that a robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.
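The reinforcement learning framework invoked above can be illustrated, in much reduced form, with tabular Q-learning on a toy corridor world where each extra step (like each extra sensory observation) carries a cost. This is a generic sketch, not the paper's estimation method:

```python
import random

def q_learning(n_states=6, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a corridor: start in state 0, move left/right,
    reach the last state for +10; every step costs -1 (a sensing/time cost)."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:
                a = rng.randrange(2)                      # explore
            else:
                a = max((0, 1), key=lambda x: q[s][x])    # exploit
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 10.0 if s2 == n_states - 1 else -1.0
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(5)]
```

The per-step penalty is what drives the learned policy to be frugal, the same pressure that, in the paper, teaches the agent when to stop observing.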

  8. The thing that should not be: predictive coding and the uncanny valley in perceiving human and humanoid robot actions

    PubMed Central

    Chaminade, Thierry; Ishiguro, Hiroshi; Driver, Jon; Frith, Chris

    2012-01-01

    Using functional magnetic resonance imaging (fMRI) repetition suppression, we explored the selectivity of the human action perception system (APS), which consists of temporal, parietal and frontal areas, for the appearance and/or motion of the perceived agent. Participants watched body movements of a human (biological appearance and movement), a robot (mechanical appearance and movement) or an android (biological appearance, mechanical movement). With the exception of extrastriate body area, which showed more suppression for human like appearance, the APS was not selective for appearance or motion per se. Instead, distinctive responses were found to the mismatch between appearance and motion: whereas suppression effects for the human and robot were similar to each other, they were stronger for the android, notably in bilateral anterior intraparietal sulcus, a key node in the APS. These results could reflect increased prediction error as the brain negotiates an agent that appears human, but does not move biologically, and help explain the ‘uncanny valley’ phenomenon. PMID:21515639

  9. Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense

    DTIC Science & Technology

    2010-03-01

    ...to alert the other agents and ensure trust in the system. This research presents an algorithm that tasks robots to meet the two specific goals of...problem is defined as a constraint satisfaction problem solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Both goals of
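The core operation of NSGA-II is sorting candidate solutions into successive non-dominated (Pareto) fronts. A simple O(N²)-per-front sketch for two minimisation objectives (not Deb's fast bookkeeping version) is:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective (minimisation)
    and strictly better in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def non_dominated_sort(points):
    """Peel off successive Pareto fronts, as NSGA-II does for ranking."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# five candidate solutions scored on two objectives, both minimised
pts = [(1, 5), (2, 2), (4, 1), (3, 4), (5, 5)]
fronts = non_dominated_sort(pts)   # first front: (1,5), (2,2), (4,1)
```

In the full algorithm this ranking, plus a crowding-distance tie-breaker, drives selection, so both area-defense goals can be traded off without collapsing them into one weighted objective.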

  10. 21 CFR 349.3 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... § 200.50, to be applied to the eyelid or instilled in the eye. (b) Astringent. A locally acting pharmacologic agent which, by precipitating protein, helps to clear mucus from the outer surface of the eye. (c..., usually a water-soluble polymer, which is applied topically to the eye to protect and lubricate mucous...

  11. 21 CFR 349.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... § 200.50, to be applied to the eyelid or instilled in the eye. (b) Astringent. A locally acting pharmacologic agent which, by precipitating protein, helps to clear mucus from the outer surface of the eye. (c..., usually a water-soluble polymer, which is applied topically to the eye to protect and lubricate mucous...

  12. Surgeons display reduced mental effort and workload while performing robotically assisted surgical tasks, when compared to conventional laparoscopy.

    PubMed

    Moore, Lee J; Wilson, Mark R; McGrath, John S; Waine, Elizabeth; Masters, Rich S W; Vine, Samuel J

    2015-09-01

    Research has demonstrated the benefits of robotic surgery for the patient; however, research examining the benefits of robotic technology for the surgeon is limited. This study aimed to adopt validated measures of workload, mental effort, and gaze control to assess the benefits of robotic surgery for the surgeon. We predicted that the performance of surgical training tasks on a surgical robot would require lower investments of workload and mental effort, and would be accompanied by superior gaze control and better performance, when compared to conventional laparoscopy. Thirty-two surgeons performed two trials on a ball pick-and-drop task and a rope-threading task on both robotic and laparoscopic systems. Measures of workload (the surgery task load index), mental effort (subjective: rating scale for mental effort and objective: standard deviation of beat-to-beat intervals), gaze control (using a mobile eye movement recorder), and task performance (completion time and number of errors) were recorded. As expected, surgeons performed both tasks more quickly and accurately (with fewer errors) on the robotic system. Self-reported measures of workload and mental effort were significantly lower on the robotic system compared to the laparoscopic system. Similarly, an objective cardiovascular measure of mental effort revealed lower investment of mental effort when using the robotic platform relative to the laparoscopic platform. Gaze control distinguished the robotic from the laparoscopic systems, but not in the predicted fashion, with the robotic system associated with poorer (more novice like) gaze control. The findings highlight the benefits of robotic technology for surgical operators. 
Specifically, they suggest that tasks can be performed more proficiently, at a lower workload, and with the investment of less mental effort; this may allow surgeons greater cognitive resources for dealing with other demands such as communication, decision-making, or periods of increased complexity in the operating room.

  13. Integrating deliberative planning in a robot architecture

    NASA Technical Reports Server (NTRS)

    Elsaesser, Chris; Slack, Marc G.

    1994-01-01

    The role of planning and reactive control in an architecture for autonomous agents is discussed. The postulated architecture separates the general robot intelligence problem into three interacting pieces: (1) robot reactive skills, i.e., grasping, object tracking, etc.; (2) a sequencing capability to differentially activate the reactive skills; and (3) a deliberative planning capability to reason in depth about goals, preconditions, resources, and timing constraints. Within the sequencing module, caching techniques are used for handling routine activities. The planning system then builds on these cached solutions to routine tasks to build larger-grained primitives. This eliminates large numbers of essentially linear planning problems. The architecture will be used in the future to give robots cognitive capabilities normally associated with intelligent behavior.

  14. Information Foraging and Change Detection for Automated Science Exploration

    NASA Technical Reports Server (NTRS)

    Furlong, P. Michael; Dille, Michael

    2016-01-01

    This paper presents a new algorithm for autonomous on-line exploration in unknown environments. The objective is to free remote scientists from possibly-infeasible extensive preliminary site investigation prior to sending robotic agents. We simulate a common exploration task for an autonomous robot sampling the environment at various locations and compare performance against simpler control strategies. An extension is proposed and evaluated that further permits operation in the presence of environmental variability in which the robot encounters a change in the distribution underlying sampling targets. Experimental results indicate a strong improvement in performance across varied parameter choices for the scenario.
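Change detection of the kind described, noticing that the distribution underlying sampling targets has shifted, can be sketched with a one-sided CUSUM detector. This is a generic illustration with invented, deterministic data, not the paper's algorithm:

```python
def cusum_detect(stream, target_mean, threshold=5.0, slack=0.5):
    """One-sided CUSUM: accumulate exceedances of (x - target_mean - slack)
    and return the first index where the accumulator passes the threshold."""
    s = 0.0
    for i, x in enumerate(stream):
        s = max(0.0, s + (x - target_mean - slack))  # reset at zero
        if s > threshold:
            return i
    return None

# invented stream: samples around mean 0 for 30 steps, then the
# environment shifts so samples come from around mean 2
stream = ([-0.2 if i % 2 == 0 else 0.2 for i in range(30)]
          + [1.8 if i % 2 == 0 else 2.2 for i in range(30)])
alarm = cusum_detect(stream, target_mean=0.0)  # fires shortly after the shift
```

The slack term suppresses alarms from normal sampling noise, while the running sum accumulates evidence quickly once the underlying distribution truly changes.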

  15. Investigating the feasibility of a BCI-driven robot-based writing agent for handicapped individuals

    NASA Astrophysics Data System (ADS)

    Syan, Chanan S.; Harnarinesingh, Randy E. S.; Beharry, Rishi

    2014-07-01

    Brain-Computer Interfaces (BCIs) predominantly employ output actuators such as virtual keyboards and wheelchair controllers to enable handicapped individuals to interact and communicate with their environment. However, BCI-based assistive technologies are limited in their application. There is minimal research geared towards granting disabled individuals the ability to communicate using written words. This is a drawback because involving a human attendant in writing tasks can entail a breach of personal privacy where the task entails sensitive and private information such as banking matters. BCI-driven robot-based writing however can provide a safeguard for user privacy where it is required. This study investigated the feasibility of a BCI-driven writing agent using the 3-degree-of-freedom Phantom Omnibot. A full alphanumerical English character set was developed and validated using a teach pendant program in MATLAB. The Omnibot was subsequently interfaced to a P300-based BCI. Three subjects utilised the BCI in the online context to communicate words to the writing robot over a Local Area Network (LAN). The average online letter-wise classification accuracy was 91.43%. The writing agent legibly constructed the communicated letters with minor errors in trajectory execution. The developed system therefore provided a feasible platform for BCI-based writing.

  16. Virtual Reality for Artificial Intelligence: human-centered simulation for social science.

    PubMed

    Cipresso, Pietro; Riva, Giuseppe

    2015-01-01

    There is a long-standing tradition in Artificial Intelligence of using robots endowed with human peculiarities, from a cognitive and emotional point of view, and not only in shape. Today Artificial Intelligence is more oriented towards several forms of collective intelligence, also building robot simulators (hardware or software) to deeply understand collective behaviors in human beings and society as a whole. Modeling has also been crucial in the social sciences, to understand how complex systems can arise from simple rules. However, while engineers' simulations can be performed in the physical world using robots, for social scientists this is impossible. For decades, researchers tried to improve simulations by endowing artificial agents with simple and complex rules that emulated human behavior, also by using artificial intelligence (AI). Including human beings and their real intelligence within artificial societies is now the big challenge. We present a hybrid (human-artificial) platform where experiments can be performed in simulated artificial worlds in the following manner: 1) agents' behaviors are regulated by the behaviors shown in Virtual Reality by real human beings exposed to the specific situations to simulate, and 2) technology transfers these rules into the artificial world. This forms a closed loop of real behaviors inserted into artificial agents, which can be used to study real society.

  17. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, and various types of computer-controlled displays and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and determining traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  18. An Ethnographic Eye on Religion in Everyday Life

    ERIC Educational Resources Information Center

    Berglund, Jenny

    2014-01-01

    There are many pitfalls associated with teaching about religions. One such pitfall entails the risk of presenting religions as stereotypical monolithic systems; that is, all who belong to a particular religious tradition think and act in the same way. I like to call this sort of stereotyping the "robotic tendency" because it has a habit…

  19. Artificial consciousness, artificial emotions, and autonomous robots.

    PubMed

    Cardon, Alain

    2006-12-01

    Nowadays for robots, the notion of behavior is reduced to a simple factual concept at the level of movements. Consciousness, on the other hand, is a deeply cultural concept, regarded as the defining property of human beings, according to themselves. We propose to develop a computable transposition of consciousness concepts into artificial brains able to express emotions and consciousness facts. The production of such artificial brains allows intentional and genuinely adaptive behavior in autonomous robots. The system managing the robot's behavior is made of two parts: the first computes and generates, in a constructivist manner, a representation of the robot moving in its environment, using symbols and concepts. The second achieves the representation of the first using morphologies in a dynamic geometrical way. The robot's body is seen as the morphologic apprehension of its material substrate. The model relies strictly on the notion of massive multi-agent organizations with a morphologic control.

  20. Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars

    NASA Astrophysics Data System (ADS)

    Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed

    2016-02-01

    Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.

  1. Robot soccer anywhere: achieving persistent autonomous navigation, mapping, and object vision tracking in dynamic environments

    NASA Astrophysics Data System (ADS)

    Dragone, Mauro; O'Donoghue, Ruadhan; Leonard, John J.; O'Hare, Gregory; Duffy, Brian; Patrikalakis, Andrew; Leederkerken, Jacques

    2005-06-01

    The paper describes an ongoing effort to enable autonomous mobile robots to play soccer in unstructured, everyday environments. Unlike conventional robot soccer competitions that are usually held on purpose-built robot soccer "fields", in our work we seek to develop the capability for robots to demonstrate aspects of soccer-playing in more diverse environments, such as schools, hospitals, or shopping malls, with static obstacles (furniture) and dynamic natural obstacles (people). This problem of "Soccer Anywhere" presents numerous research challenges including: (1) Simultaneous Localization and Mapping (SLAM) in dynamic, unstructured environments, (2) software control architectures for decentralized, distributed control of mobile agents, (3) integration of vision-based object tracking with dynamic control, and (4) social interaction with human participants. In addition to the intrinsic research merit of these topics, we believe that this capability would prove useful for outreach activities, in demonstrating robotics technology to primary and secondary school students, to motivate them to pursue careers in science and engineering.

  2. Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars

    PubMed Central

    Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed

    2016-01-01

    Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning. PMID:26844862

  3. Use Of Green Porphyrins To Treat Neovasculature In The Eyes

    DOEpatents

    Levy, Julia; Miller, Joan W.; Gradoudas, Evangelos S.; Hasan, Tayyaba; Schmidt-Erfurth, Ursula

    1998-08-25

    Photodynamic therapy of conditions of the eye characterized by unwanted neovasculature, such as age-related macular degeneration, is effective using green porphyrins as photoactive agents, preferably as liposomal compositions.

  4. Memetic Engineering as a Basis for Learning in Robotic Communities

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walter F.; Rouff, Christopher; Akhavannik, Mohammad H.

    2014-01-01

    This paper represents a new contribution to the growing literature on memes. While most memetic thought has focused on its implications for humans, this paper speculates on the role that memetics can have in robotic communities. Though speculative, the concepts are based on proven advanced multi-agent technology work done at NASA Goddard Space Flight Center and Lockheed Martin. The paper is composed of the following sections: 1) an introductory section which gently leads the reader into the realm of memes; 2) a section on memetic engineering which addresses some of the central issues with robotic learning via memes; 3) a section on related work which very concisely identifies three other areas of memetic applications, i.e., news, psychology, and the study of human behaviors; 4) a section which discusses the proposed approach for realizing memetic behaviors in robots and robotic communities; 5) a section which presents an exploration scenario for a community of robots working on Mars; and 6) a final section which discusses future research that will be required to realize a comprehensive science of robotic memetics.

  5. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
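
    The patent abstract does not disclose the control law. One common way to realize distributed, cooperative optimization of a source-intensity objective is a particle-swarm-style rule in which each agent blends its own best sample with the best sample found by the team. The sketch below is a generic illustration under that assumption, not the patented method; the toy field, gains, and agent count are invented for the example.

```python
import random

# Illustrative sketch (not the patented method): a team of agents
# cooperatively climbs a scalar "source intensity" field by blending
# each agent's own best-so-far sample with the team's best-so-far.

def field(x, y):
    # Toy source at (5, 5): intensity falls off with squared distance.
    return -((x - 5.0) ** 2 + (y - 5.0) ** 2)

def search(n_agents=5, steps=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-10, 10), rng.uniform(-10, 10)] for _ in range(n_agents)]
    vel = [[0.0, 0.0] for _ in range(n_agents)]
    best = [p[:] for p in pos]                     # each agent's best sample
    gbest = max(best, key=lambda p: field(*p))[:]  # team's best sample
    for _ in range(steps):
        for i, p in enumerate(pos):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (best[i][d] - p[d])
                             + 1.5 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if field(*p) > field(*best[i]):
                best[i] = p[:]
            if field(*p) > field(*gbest):
                gbest = p[:]
    return gbest
```

The same loop applies unchanged whether the "agents" are robots sampling a chemical gradient or software agents probing a function surface, which is the generality the claims describe.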

  6. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  7. Group sessions with Paro in a nursing home: Structure, observations and interviews.

    PubMed

    Robinson, Hayley; Broadbent, Elizabeth; MacDonald, Bruce

    2016-06-01

    We recently reported that a companion robot reduced residents' loneliness in a randomised controlled trial at an aged-care facility. This report aims to provide additional, previously unpublished data about how the sessions were run, residents' interactions with the robot and staff perspectives. Observations were conducted focusing on engagement, how residents treated the robot, and whether the robot acted as a social catalyst. In addition, 16 residents and 21 staff were asked open-ended questions at the end of the study about the sessions and the robot. Observations indicated that some residents engaged on an emotional level with Paro, and Paro was treated as both an agent and an artificial object. Interviews revealed that residents enjoyed sharing, interacting with and talking about Paro. This study supports other research showing Paro has psychosocial benefits and provides a guide for those wishing to use Paro in a group setting in aged care. © 2015 AJA Inc.

  8. Human-Vehicle Interface for Semi-Autonomous Operation of Uninhabited Aero Vehicles

    NASA Technical Reports Server (NTRS)

    Jones, Henry L.; Frew, Eric W.; Woodley, Bruce R.; Rock, Stephen M.

    2001-01-01

    The robustness of autonomous robotic systems to unanticipated circumstances is typically insufficient for use in the field. The many skills of a human user often fill this gap in robotic capability. To incorporate the human into the system, a useful interaction between human and machine must exist, one that enables communication to be exchanged naturally between human and robot on a variety of levels. This report describes the current human-robot interaction for the Stanford HUMMINGBIRD autonomous helicopter. In particular, the report discusses the elements of the system that enable multiple levels of communication. An intelligent system agent manages the different inputs given to the helicopter, and an advanced user interface gives the user and helicopter a method for exchanging useful information. Using this human-robot interaction, the HUMMINGBIRD has carried out various autonomous search, tracking, and retrieval missions.

  9. An Effective Division of Labor Between Human and Robotic Agents Performing a Cooperative Assembly Task

    NASA Technical Reports Server (NTRS)

    Rehnmark, Fredrik; Bluethmann, William; Rochlis, Jennifer; Huber, Eric; Ambrose, Robert

    2003-01-01

    NASA's Human Space Flight program depends heavily on spacewalks performed by human astronauts. These so-called extra-vehicular activities (EVAs) are risky, expensive, and complex. Work is underway to develop a robotic astronaut's assistant that can help reduce human EVA time and workload by delivering human-like dexterous manipulation capabilities to any EVA worksite. An experiment is conducted to evaluate human-robot teaming strategies in the context of a simplified EVA assembly task in which Robonaut, an anthropomorphic robot developed in collaboration with the Defense Advanced Research Projects Agency (DARPA), works side-by-side with a human subject. Team performance is studied in an effort to identify the strengths and weaknesses of each teaming configuration and to recommend an appropriate division of labor. A shared control approach is developed to take advantage of the complementary strengths of the human teleoperator and the robot, even in the presence of significant time delay.

  10. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
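
    The self-tuning scheme rests on a least-squares identifier running in real time. As a greatly simplified illustration (a single unknown parameter standing in for the unknown mass properties, with an assumed linear regression model y = theta * phi), a recursive least-squares update with forgetting can be sketched as follows; the signal model and parameter values are assumptions, not the paper's formulation.

```python
# Minimal recursive least-squares (RLS) sketch for on-line parameter
# identification, e.g. an unknown "mass" theta in y = theta * phi.
# theta0/p0/lam are illustrative defaults, not values from the paper.

def rls(samples, theta0=0.0, p0=1000.0, lam=0.99):
    """samples: iterable of (phi, y) pairs with y = theta * phi + noise.
    Returns the final parameter estimate."""
    theta, p = theta0, p0
    for phi, y in samples:
        k = p * phi / (lam + phi * p * phi)   # estimator gain
        theta += k * (y - phi * theta)        # correct with prediction error
        p = (p - k * phi * p) / lam           # update error covariance
    return theta
```

In the paper's setting the regressor and parameter are vector-valued (inertia terms rather than one scalar), but the gain/correct/covariance structure of the identifier is the same.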

  11. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  12. Cooperative crossing of traffic intersections in a distributed robot system

    NASA Astrophysics Data System (ADS)

    Rausch, Alexander; Oswald, Norbert; Levi, Paul

    1995-09-01

    In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing of an intersection as a shared resource forces the robot to coordinate its actions with those of other robots e.g. by means of negotiations. We discuss the issue of cooperation on the design of a robot control architecture. Task and sensor specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level control cycles are running in parallel and provide fast reaction on events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
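
    The record does not give the negotiation protocol itself. As a toy illustration of the coordination problem, one minimal rule is for each robot approaching the shared intersection to announce an estimated arrival time and to yield to earlier announcements; the function and tie-breaking rule below are invented for the example, not taken from the paper.

```python
# Toy negotiation rule for a shared intersection: earliest announced
# arrival time wins the right of way; ties break by robot id so every
# robot computes the same order from the same announcements.

def negotiate(announcements):
    """announcements: dict robot_id -> estimated arrival time (seconds).
    Returns the agreed crossing order as a list of robot ids."""
    return [rid for rid, _ in sorted(announcements.items(),
                                     key=lambda kv: (kv[1], kv[0]))]
```

Because the rule is deterministic given the shared announcements, no central arbiter is needed, which matches the distributed-resource-sharing framing of the abstract.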

  13. Formalization, implementation, and modeling of institutional controllers for distributed robotic systems.

    PubMed

    Pereira, José N; Silva, Porfírio; Lima, Pedro U; Martinoli, Alcherio

    2014-01-01

    The work described is part of a long term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets-an extension of Petri nets that takes into account robot actions and sensing-to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.
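
    The core of an executable Petri net, as described, is a marking over places plus transitions that consume and produce tokens; the sensing extension can be approximated by a guard predicate attached to each transition. The following minimal sketch is a generic illustration, not the authors' formalism (class and method names are invented).

```python
# Minimal executable Petri net sketch: places hold token counts, a
# transition fires when every input place is marked AND its guard
# (standing in for robot sensing) evaluates true.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place name -> token count
        self.transitions = {}          # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=lambda: True):
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs) and guard()

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs, _ = self.transitions[name]
        for p in inputs:               # consume one token per input place
            self.marking[p] -= 1
        for p in outputs:              # produce one token per output place
            self.marking[p] = self.marking.get(p, 0) + 1
        return True
```

For the connectivity-maintenance example, a place such as "connected" could be reached only through a transition guarded by a wireless-in-range check, and the stochastic analysis would replace deterministic firing with timed, probabilistic firing rates.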

  14. Human-Centered Design for the Personal Satellite Assistant

    NASA Technical Reports Server (NTRS)

    Bradshaw, Jeffrey M.; Sierhuis, Maarten; Gawdiak, Yuri; Thomas, Hans; Greaves, Mark; Clancey, William J.; Swanson, Keith (Technical Monitor)

    2000-01-01

    The Personal Satellite Assistant (PSA) is a softball-sized flying robot designed to operate autonomously onboard manned spacecraft in pressurized micro-gravity environments. We describe how the Brahms multi-agent modeling and simulation environment in conjunction with a KAoS agent teamwork approach can be used to support human-centered design for the PSA.

  15. Beyond Robotic Wastelands of Time: Abandoned Pedagogical Agents and "New" Pedalled Pedagogies

    ERIC Educational Resources Information Center

    Savin-Baden, Maggi; Tombs, Gemma; Bhakta, Roy

    2015-01-01

    Chatbots, known as pedagogical agents in educational settings, have a long history of use, beginning with Alan Turing's work. Since then online chatbots have become embedded into the fabric of technology. Yet understandings of these technologies are inchoate and often untheorised. Integration of chatbots into educational settings over the past…

  16. Control of Synchronization Regimes in Networks of Mobile Interacting Agents

    NASA Astrophysics Data System (ADS)

    Perez-Diaz, Fernando; Zillmer, Ruediger; Groß, Roderich

    2017-05-01

    We investigate synchronization in a population of mobile pulse-coupled agents with a view towards implementations in swarm-robotics systems and mobile sensor networks. Previous theoretical approaches dealt with range and nearest-neighbor interactions. In the latter case, a synchronization-hindering regime for intermediate agent mobility is found. We investigate the robustness of this intermediate regime under practical scenarios. We show that synchronization in the intermediate regime can be predicted by means of a suitable metric of the phase response curve. Furthermore, we study more-realistic K-nearest-neighbor and cone-of-vision interactions, showing that it is possible to control the extent of the synchronization-hindering region by appropriately tuning the size of the neighborhood. To assess the effect of noise, we analyze the propagation of perturbations over the network and draw an analogy between the response in the hindering regime and stable chaos. Our findings reveal the conditions for the control of clock or activity synchronization of agents with intermediate mobility. In addition, the emergence of the intermediate regime is validated experimentally using a swarm of physical robots interacting with cone-of-vision interactions.
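
    For intuition, K-nearest-neighbor synchronization of the kind studied can be sketched with a Kuramoto-style phase model (continuous phases rather than the paper's pulse-coupled oscillators) in which each stationary agent is pulled toward the phases of its K nearest neighbors; the agent count, coupling gain, and time step below are illustrative assumptions.

```python
import math, random

# Toy sketch: identical phase oscillators on fixed random positions,
# each coupled to its K nearest neighbors (Kuramoto-style surrogate for
# the pulse-coupled model; all parameter values are illustrative).

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchronized."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

def simulate(n=20, k=3, coupling=0.5, dt=0.05, steps=2000, seed=0):
    rng = random.Random(seed)
    xy = [(rng.random(), rng.random()) for _ in range(n)]       # positions
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]    # random init

    def dist2(i, j):
        return (xy[i][0] - xy[j][0]) ** 2 + (xy[i][1] - xy[j][1]) ** 2

    # Each agent's K nearest neighbors, fixed for this static sketch.
    nbrs = [sorted((j for j in range(n) if j != i),
                   key=lambda j: dist2(i, j))[:k] for i in range(n)]
    for _ in range(steps):
        new = []
        for i in range(n):
            pull = sum(math.sin(phases[j] - phases[i]) for j in nbrs[i]) / k
            new.append(phases[i] + dt * coupling * pull)
        phases = new
    return order_parameter(phases)
```

The paper's intermediate-mobility effect would enter by re-sampling the positions (and thus the neighbor lists) at a given rate between steps; with static positions, as here, a connected neighborhood graph typically drives the population toward a common phase.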

  17. Ocular delivery systems for topical application of anti-infective agents.

    PubMed

    Duxfield, Linda; Sultana, Rubab; Wang, Ruokai; Englebretsen, Vanessa; Deo, Samantha; Rupenthal, Ilva D; Al-Kassas, Raida

    2016-01-01

    For the treatment of anterior eye segment infections using anti-infective agents, topical ocular application is the most convenient route of administration. However, topical delivery of anti-infective agents is associated with a number of problems and challenges owing to the unique structure of the eye and the physicochemical properties of these compounds. Topical ocular drug delivery systems can be classified into two forms: conventional and non-conventional. The efficacy of conventional ocular formulations is limited by poor corneal retention and permeation, resulting in low ocular bioavailability. Recently, attention has been focused on improving topical ocular delivery of anti-infective agents using advanced drug delivery systems. This review will focus on the challenges of efficient topical ocular delivery of anti-infective agents and will discuss the various types of delivery systems used to improve the treatment of anterior segment infections.

  18. An Innovative Multi-Agent Search-and-Rescue Path Planning Approach

    DTIC Science & Technology

    2015-03-09

    search problems from search theory and artificial intelligence/distributed robotic control, and pursuit-evasion problem perspectives may be found in... Dissanayake, “Probabilistic search for a moving target in an indoor environment”, In Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, 2006, pp. 3393-3398. [7] H. Lau, and G. Dissanayake, “Optimal search for multiple targets in a built environment”, In Proc. IEEE/RSJ Int. Conf. Intelligent

  19. Evaluating the Dynamics of Agent-Environment Interaction

    DTIC Science & Technology

    2001-05-01

    a color sensor in the gripper, a radio transmitter/receiver for communication and data gathering, and an ultrasound/radio triangulation system for... Cooperative Mobile Robot Control’, Autonomous Robots 4(4), 387-403. Vaughan, R. T., Sty, K., Sukhatme, G. S. & Mataric, M. J. (2000), Whistling in the Dark...

  20. Dual-Schemata Model

    NASA Astrophysics Data System (ADS)

    Taniguchi, Tadahiro; Sawaragi, Tetsuo

    In this paper, a new machine-learning method called the Dual-Schemata model is presented. The Dual-Schemata model is a self-organizing machine-learning method for an autonomous robot interacting with an unknown dynamical environment. It is based on Piaget's schema model, a classical psychological model explaining memory and the cognitive development of human beings. Our Dual-Schemata model is developed as a computational model of Piaget's schema model, focusing especially on the sensorimotor developmental period. This developmental process is characterized by two mutually interacting dynamics: one formed by assimilation and accommodation, the other formed by equilibration and differentiation. Through these dynamics, the schema system enables an agent to act well in the real world. The schema's differentiation process corresponds to a symbol-formation process occurring within an autonomous agent as it interacts with an unknown, dynamically changing environment. Experimental results obtained from an autonomous facial robot in which our model is embedded are presented: the robot becomes able to chase a ball moving in various ways without any rewards or teaching signals from outside. Moreover, the emergence of concepts about the target's movements within the robot is shown and discussed in terms of fuzzy logic on set-subset inclusion relationships.

  1. Controlling the autonomy of a reconnaissance robot

    NASA Astrophysics Data System (ADS)

    Dalgalarrondo, Andre; Dufourd, Delphine; Filliat, David

    2004-09-01

    In this paper, we present our research on the control of a mobile robot for indoor reconnaissance missions. Based on previous work concerning our robot control architecture HARPIC, we have developed a man-machine interface and software components that allow a human operator to control a robot at different levels of autonomy. This work aims at studying how a robot could be helpful in indoor reconnaissance and surveillance missions in hostile environments. In such missions, since a soldier faces many threats and must protect himself while looking around and holding his weapon, he cannot devote his attention to teleoperating the robot. Moreover, robots are not yet able to conduct complex missions in a fully autonomous mode. Thus, in a pragmatic way, we have built software that allows dynamic swapping between control modes (manual, safeguarded, and behavior-based) while automatically performing map building and localization of the robot. It also includes surveillance functions such as movement detection and is designed for multirobot extensions. We first describe the design of our agent-based robot control architecture and discuss the various ways to control and interact with a robot. The main modules and functionalities implementing those ideas in our architecture are detailed. More precisely, we show how we combine manual control, obstacle avoidance, wall and corridor following, and waypoint and planned travel. Some experiments on a Pioneer robot equipped with various sensors are presented. Finally, we suggest some promising directions for the development of robots and user interfaces for hostile environments and discuss our planned future improvements.

  2. Toward understanding social cues and signals in human–robot interaction: effects of robot gaze and proxemic behavior

    PubMed Central

    Fiore, Stephen M.; Wiltshire, Travis J.; Lobato, Emilio J. C.; Jentsch, Florian G.; Huang, Wesley H.; Axelrod, Benjamin

    2013-01-01

    As robots are increasingly deployed in settings requiring social interaction, research is needed to examine the social signals perceived by humans when robots display certain social cues. In this paper, we report a study designed to examine how humans interpret social cues exhibited by robots. We first provide a brief overview of perspectives from social cognition in humans and how these processes are applicable to human–robot interaction (HRI). We then discuss the need to examine the relationship between social cues and signals as a function of the degree to which a robot is perceived as a socially present agent. We describe an experiment in which social cues were manipulated on an iRobot Ava™ mobile robotics platform in a hallway navigation scenario. Cues associated with the robot’s proxemic behavior were found to significantly affect participant perceptions of the robot’s social presence and emotional state, while cues associated with the robot’s gaze behavior were not found to be significant. Further, regardless of the proxemic behavior, participants attributed more social presence and emotional states to the robot over repeated interactions than when they first interacted with it. Generally, these results indicate the importance for HRI research to consider how social cues expressed by a robot can differentially affect perceptions of the robot’s mental states and intentions. The discussion focuses on implications for the design of robotic systems and future directions for research on the relationship between social cues and signals. PMID:24348434

  3. Recovery of Proprioception in the Upper Extremity by Robotic Mirror Therapy: a Clinical Pilot Study for Proof of Concept.

    PubMed

    Nam, Hyung Seok; Koh, Sukgyu; Beom, Jaewon; Kim, Yoon Jae; Park, Jang Woo; Koh, Eun Sil; Chung, Sun Gun; Kim, Sungwan

    2017-10-01

    A novel robotic mirror therapy system was recently developed to provide proprioceptive stimulus to the hemiplegic arm during a mirror therapy. Validation of the robotic mirror therapy system was performed to confirm its synchronicity prior to the clinical study. The mean error angle range between the intact arm and the robot was 1.97 to 4.59 degrees. A 56-year-old male who had right middle cerebral artery infarction 11 months ago received the robotic mirror therapy for ten 30-minute sessions during 2 weeks. Clinical evaluation and functional magnetic resonance imaging (fMRI) studies were performed before and after the intervention. At the follow-up evaluation, the thumb finding test score improved from 2 to 1 for eye level and from 3 to 1 for overhead level. The Albert's test score on the left side improved from 6 to 11. Improvements were sustained at 2-month follow-up. The fMRI during the passive motion revealed a considerable increase in brain activity at the lower part of the right superior parietal lobule, suggesting the possibility of proprioception enhancement. The robotic mirror therapy system may serve as a useful treatment method for patients with supratentorial stroke to facilitate recovery of proprioceptive deficit and hemineglect. © 2017 The Korean Academy of Medical Sciences.

  4. Designing collective behavior in a termite-inspired robot construction team.

    PubMed

    Werfel, Justin; Petersen, Kirstin; Nagpal, Radhika

    2014-02-14

    Complex systems are characterized by many independent components whose low-level actions produce collective high-level results. Predicting high-level results given low-level rules is a key open challenge; the inverse problem, finding low-level rules that give specific outcomes, is in general still less understood. We present a multi-agent construction system inspired by mound-building termites, solving such an inverse problem. A user specifies a desired structure, and the system automatically generates low-level rules for independent climbing robots that guarantee production of that structure. Robots use only local sensing and coordinate their activity via the shared environment. We demonstrate the approach via a physical realization with three autonomous climbing robots limited to onboard sensing. This work advances the aim of engineering complex systems that achieve specific human-designed goals.

  5. Quantitative 3-D imaging topogrammetry for telemedicine applications

    NASA Technical Reports Server (NTRS)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy.
Unerring robot hands could rapidly perform machine-aided suturing with precision micro-sewing machines, splice neural connections with laser welds, micro-bore through constricted vessels, and computer combine ultrasound, microradiography, and 3-D mini-borescopes to quickly assess and trace vascular problems in situ. The spatial relationships between organs, robotic arms, and end-effector diagnostic, manipulative, and surgical instruments would be constantly monitored by the robot 'brain' using inputs from its multiple 3-D quantitative 'eyes' remote sensing, as well as by contact and proximity force measuring devices. Methods to create accurate and quantitative 3-D topograms at continuous video data rates are described.

  6. Potassium-titanyl-phosphate laser assisted robotic partial nephrectomy in a porcine model: can robotic assistance optimize the power needed for effective cutting and hemostasis?

    PubMed

    Boris, Ronald S; Eun, Daniel; Bhandari, Akshay; Lyall, Kathryn; Bhandari, Mahendra; Rogers, Craig; Alassi, Osama; Menon, Mani

    2007-01-01

    A potassium-titanyl-phosphate (KTP) laser delivered through a robotic endowrist instrument has been evaluated as an ablative and hemostatic tool in robotic-assisted laparoscopic partial nephrectomy (RALPN). Ten RALPNs were performed in five domestic female pigs. The partial nephrectomies were performed with bulldog clamping of the pedicle. A flexible glass fiber carrying the 532-nm green-light laser was used through a robotic endowrist instrument in two cases. Power settings from 4 to 10 W were tested. The laser probe was explored both as a cutting knife and for hemostasis. The pelvicalyceal system was closed with a running suture. Partial nephrectomies using the KTP laser were performed without complications. Mean operative time and warm ischemia time for laser cases were 96 and 18 min, respectively. Mean estimated blood loss was 60 ml, compared with 50 ml for non-laser cases. Complete hemostasis could be achieved with the laser alone at a power of 4 W. In our hands, the laser fiber powered up to 10 W was not effective as a quick cutting agent. Histopathologic analysis of the renal remnant revealed a cauterized surface effect, with average laser penetration depth less than 1 mm and minimal surrounding cellular injury. The new robotic endowrist instrument carrying a flexible glass fiber transmitting the 532-nm green-light laser is a useful addition to the armamentarium of the robotic urologic setup. Its control by the console surgeon enables quicker and more complete hemostasis of the cut surface in renal-sparing surgery in a porcine model. The histologically proven lased depth of less than 1 mm suggests minimal parenchymal damage in an acute setting. Laser application as a cutting agent, however, requires further investigation with interval power settings beyond the limits of this preliminary study. We estimate that effective cutting should be possible with a setting lower than traditionally recommended for solid organs.

  7. Interacting With Robots to Investigate the Bases of Social Interaction.

    PubMed

    Sciutti, Alessandra; Sandini, Giulio

    2017-12-01

    Humans show a great natural ability at interacting with each other. Such efficiency in joint actions depends on a synergy between planned collaboration and emergent coordination, a subconscious mechanism based on a tight link between action execution and perception. This link supports phenomena such as mutual adaptation, synchronization, and anticipation, which drastically cut delays in the interaction, reduce the need for complex verbal instructions, and result in the establishment of joint intentions, the backbone of social interaction. From a neurophysiological perspective, this is possible because the same neural system supporting action execution is responsible for the understanding and anticipation of the observed actions of others. Defining which human motion features allow for such emergent coordination with another agent would be crucial to establishing more natural and efficient interaction paradigms with artificial devices, ranging from assistive and rehabilitative technology to companion robots. However, investigating the behavioral and neural mechanisms supporting natural interaction poses substantial problems. In particular, the unconscious processes at the basis of emergent coordination (e.g., unintentional movements or gazing) are very difficult, if not impossible, to restrain or control in a quantitative way for a human agent. Moreover, during an interaction, participants influence each other continuously in a complex way, resulting in behaviors that go beyond experimental control. In this paper, we propose robotics technology as a potential solution to this methodological problem. Robots indeed can establish an interaction with a human partner, contingently reacting to the partner's actions without losing the controllability of the experiment or the naturalness of the interactive scenario. A robot could represent an "interactive probe" to assess the sensory and motor mechanisms underlying human-human interaction.
We discuss this proposal with examples from our research with the humanoid robot iCub, showing how an interactive humanoid robot could be a key tool to serve the investigation of the psychological and neuroscientific bases of social interaction.

  8. Stability of Nonlinear Swarms on Flat and Curved Surfaces

    DTIC Science & Technology

    Swarming is a near-universal phenomenon in nature. Many mathematical models of swarms exist, both to model natural processes and to control robotic agents. We study a swarm of agents with spring-like attraction and nonlinear self-propulsion. Swarms of this type have been studied numerically, but ... numerical experiments have shown that the system either converges to a rotating circular limit cycle with a fixed center of mass, or the agents clump ...
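
The class of model named in this record can be written down in a few lines. Below is a minimal, illustrative simulation (all parameter values are assumptions, not taken from the report) of agents with spring-like attraction to the swarm centroid and a Rayleigh-type nonlinear self-propulsion term, dv_i/dt = (1 - |v_i|^2) v_i - k (x_i - centroid):

```python
# Illustrative sketch of a swarm with spring-like centroid attraction and
# nonlinear (Rayleigh-type) self-propulsion, integrated with forward Euler.
import math

def step(xs, vs, k=1.0, dt=0.01):
    """Advance all agents one Euler step. xs, vs: lists of (x, y) tuples."""
    n = len(xs)
    cx = sum(p[0] for p in xs) / n          # swarm centroid
    cy = sum(p[1] for p in xs) / n
    new_xs, new_vs = [], []
    for (x, y), (vx, vy) in zip(xs, vs):
        s2 = vx * vx + vy * vy              # squared speed
        ax = (1.0 - s2) * vx - k * (x - cx)  # self-propulsion + spring force
        ay = (1.0 - s2) * vy - k * (y - cy)
        new_vs.append((vx + dt * ax, vy + dt * ay))
        new_xs.append((x + dt * vx, y + dt * vy))
    return new_xs, new_vs

def speeds(vs):
    return [math.hypot(vx, vy) for vx, vy in vs]
```

For a single agent the spring term vanishes (it is its own centroid) and the self-propulsion term drives the speed toward 1, the hallmark of this family of models.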

  9. Clashes in the Infosphere, General Intelligence, and Metacognition

    DTIC Science & Technology

    2012-12-13

    Humans confront the unexpected every day, deal with it, and often learn from it. AI agents, on the other hand, are ... call the Metacognitive Loop, or MCL. To do this, we have implemented MCL-based systems that enable agents to help themselves; they must establish ... robotic agents. We also implemented the Mars Rover domain and integrated it with MonCon. Finally, the work with AIML chatbots, including human subjects ...

  10. Determinants of system transparency and its influence on trust in and reliance on unmanned robotic systems

    NASA Astrophysics Data System (ADS)

    Ososky, Scott; Sanders, Tracy; Jentsch, Florian; Hancock, Peter; Chen, Jessie Y. C.

    2014-06-01

    Increasingly autonomous robotic systems are expected to play a vital role in aiding humans in complex and dangerous environments. It is unlikely, however, that such systems will be able to consistently operate with perfect reliability. Even systems that are less than 100% reliable can provide a significant benefit to humans, but this benefit will depend on a human operator's ability to understand a robot's behaviors and states. The notion of system transparency is examined as a vital aspect of robotic design for maintaining humans' trust in and reliance on increasingly automated platforms. System transparency is described as the degree to which a system's action, or the intention of an action, is apparent to human operators and/or observers. While the physical designs of robotic systems have been demonstrated to greatly influence humans' impressions of robots, determinants of transparency between humans and robots are not solely robot-centric. Our approach considers transparency as an emergent property of the human-robot system. In this paper, we present insights from our interdisciplinary efforts to improve the transparency of teams made up of humans and unmanned robots. These near-futuristic teams are those in which robot agents will autonomously collaborate with humans to achieve task goals. This paper demonstrates how factors such as human-robot communication and human mental models regarding robots impact a human's ability to recognize the actions or states of an automated system. Furthermore, we discuss the implications of system transparency on other critical HRI factors such as situation awareness, operator workload, and perceptions of trust.

  11. Anthropomorphism in Human–Robot Co-evolution

    PubMed Central

    Damiano, Luisa; Dumouchel, Paul

    2018-01-01

    Social robotics entertains a particular relationship with anthropomorphism, which it neither sees as a cognitive error, nor as a sign of immaturity. Rather, it considers that this common human tendency, which is hypothesized to have evolved because it favored cooperation among early humans, can be used today to facilitate social interactions between humans and a new type of cooperative and interactive agents – social robots. This approach leads social robotics to focus research on the engineering of robots that activate anthropomorphic projections in users. The objective is to give robots “social presence” and “social behaviors” that are sufficiently credible for human users to engage in comfortable and potentially long-lasting relations with these machines. This choice of ‘applied anthropomorphism’ as a research methodology exposes the artifacts produced by social robotics to ethical condemnation: social robots are judged to be a “cheating” technology, as they generate in users the illusion of reciprocal social and affective relations. This article takes a position in this debate, not only developing a series of arguments relevant to philosophy of mind, cognitive sciences, and robotic AI, but also asking what social robotics can teach us about anthropomorphism. On this basis, we propose a theoretical perspective that characterizes anthropomorphism as a basic mechanism of interaction, and rebuts the ethical reflections that a priori condemn “anthropomorphism-based” social robots. To address the relevant ethical issues, we promote a critical, experimentally based ethical approach to social robotics, “synthetic ethics,” which aims at allowing humans to use social robots for two main goals: self-knowledge and moral growth. PMID:29632507

  12. Nonhuman gamblers: lessons from rodents, primates, and robots

    PubMed Central

    Paglieri, Fabio; Addessi, Elsa; De Petrillo, Francesca; Laviola, Giovanni; Mirolli, Marco; Parisi, Domenico; Petrosino, Giancarlo; Ventricelli, Marialba; Zoratto, Francesca; Adriani, Walter

    2014-01-01

    The search for the neuronal and psychological underpinnings of pathological gambling in humans would benefit from investigating related phenomena outside our species as well. In this paper, we present a survey of studies in three widely different populations of agents, namely rodents, non-human primates, and robots. Each of these populations offers valuable and complementary insights on the topic, as the literature demonstrates. In addition, we highlight the deep and complex connections between relevant results across these different areas of research (i.e., cognitive and computational neuroscience, neuroethology, cognitive primatology, neuropsychiatry, evolutionary robotics), to make the case for a greater degree of methodological integration in future studies on pathological gambling. PMID:24574984

  13. Controlling free flight of a robotic fly using an onboard vision sensor inspired by insect ocelli

    PubMed Central

    Fuller, Sawyer B.; Karpelson, Michael; Censi, Andrea; Ma, Kevin Y.; Wood, Robert J.

    2014-01-01

    Scaling a flying robot down to the size of a fly or bee requires advances in manufacturing, sensing and control, and will provide insights into mechanisms used by their biological counterparts. Controlled flight at this scale has previously required external cameras to provide the feedback to regulate the continuous corrective manoeuvres necessary to keep the unstable robot from tumbling. One stabilization mechanism used by flying insects may be to sense the horizon or Sun using the ocelli, a set of three light sensors distinct from the compound eyes. Here, we present an ocelli-inspired visual sensor and use it to stabilize a fly-sized robot. We propose a feedback controller that applies torque in proportion to the angular velocity of the source of light estimated by the ocelli. We demonstrate theoretically and empirically that this is sufficient to stabilize the robot's upright orientation. This constitutes the first known use of onboard sensors at this scale. Dipteran flies use halteres to provide gyroscopic velocity feedback, but it is unknown how other insects such as honeybees stabilize flight without these sensory organs. Our results, using a vehicle of similar size and dynamics to the honeybee, suggest how the ocelli could serve this role. PMID:24942846
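
The control law described above, torque proportional and opposed to the estimated angular velocity of the light source, can be illustrated with a toy simulation. The sketch below is not the authors' implementation: the angle that would be estimated from the ocelli readings is simulated directly, and the gain, inertia, and time step are assumptions.

```python
# Hypothetical sketch of the paper's control idea: apply torque proportional
# to (and opposing) the angular velocity of the light source, estimated by
# finite-differencing successive ocelli-based angle estimates.

def ocelli_rate_controller(theta_prev, theta_now, dt, k=2.0):
    """Finite-difference the light-direction angle; return a damping torque."""
    omega_est = (theta_now - theta_prev) / dt
    return -k * omega_est

def simulate(theta0=0.5, omega0=2.0, dt=0.001, steps=5000):
    """Toy attitude model with unit inertia: theta_ddot = u, Euler-integrated."""
    theta_prev, theta, omega = theta0, theta0, omega0
    for _ in range(steps):
        u = ocelli_rate_controller(theta_prev, theta, dt)
        theta_prev = theta
        omega += u * dt
        theta += omega * dt
    return theta, omega
```

In this toy model the feedback drives the angular rate to zero (pure damping); in the paper, the same velocity feedback combined with the vehicle's own dynamics yields upright stabilization.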

  14. Challenges for Deployment of Man-Portable Robots into Hostile Environments

    DTIC Science & Technology

    2000-11-01

    Keywords: video, JAUGS, MDARS. In modern-day warfare the most likely battlefield is an urban environment, which poses many threats to today’s ... Unmanned Ground Systems (JAUGS). The hybrid architecture is termed SMART, for Small Robotic Technology. It uses the underlying MDARS MRHA message format ... and a similar approach to function-oriented operation. From JAUGS it borrows the concept of functional agents or components that are responsible for ...

  15. DCF(Registered)-A JAUS and TENA Compliant Agent-Based Framework for Test and Evaluation of Unmanned Vehicles

    DTIC Science & Technology

    2011-03-01

    Functions of the vignette editor include visualizing the state of the UAS team, creating T&E scenarios, monitoring the UAS team performance, and ... These behaviors are then executed by the robot sequentially (Figure 2). A state-machine mission editor allows mission builders to use behaviors from the ...

  16. Intelligent Adaptive Systems: Literature Research of Design Guidance for Intelligent Adaptive Automation and Interfaces

    DTIC Science & Technology

    2007-09-01

    ... behaviour based on past experience of interacting with the operator), and mobile (i.e., can move themselves from one machine to another). Edwards argues that ... Sofge, D., Bugajska, M., Adams, W., Perzanowski, D., and Schultz, A. (2003). Agent-based Multimodal Interface for Dynamically Autonomous Mobile Robots. ... -based architecture can provide a natural and scalable approach to implementing a multimodal interface to control mobile robots through dynamic ...

  17. Evolving self-assembly in autonomous homogeneous robots: experiments with two physical robots.

    PubMed

    Ampatzis, Christos; Tuci, Elio; Trianni, Vito; Christensen, Anders Lyhne; Dorigo, Marco

    2009-01-01

    This research work illustrates an approach to the design of controllers for self-assembling robots in which the self-assembly is initiated and regulated by perceptual cues that are brought forth by the physical robots through their dynamical interactions. More specifically, we present a homogeneous control system that can achieve assembly between two modules (two fully autonomous robots) of a mobile self-reconfigurable system without a priori introduced behavioral or morphological heterogeneities. The controllers are dynamic neural networks evolved in simulation that directly control all the actuators of the two robots. The neurocontrollers cause the dynamic specialization of the robots by allocating roles between them based solely on their interaction. We show that the best evolved controller proves to be successful when tested on a real hardware platform, the swarm-bot. The performance achieved is similar to that achieved by existing modular or behavior-based approaches, thanks in part to an emergent recovery mechanism that was neither explicitly rewarded by the fitness function nor observed during the evolutionary simulation. Our results suggest that direct access to the orientations or intentions of the other agents is not a necessary condition for robot coordination: Our robots coordinate without direct or explicit communication, contrary to what is assumed by most research works in collective robotics. This work also contributes to strengthening the evidence that evolutionary robotics is a design methodology that can tackle real-world tasks demanding fine sensory-motor coordination.

  18. An ocular biomechanic model for dynamic simulation of different eye movements.

    PubMed

    Iskander, J; Hossny, M; Nahavandi, S; Del Porto, L

    2018-04-11

    Simulating and analysing eye movement is useful for assessing visual system contribution to discomfort with respect to body movements, especially in virtual environments where simulation sickness might occur. It can also be used in the design of eye prostheses or humanoid robot eyes. In this paper, we present two biomechanic ocular models that are easily integrated into the available musculoskeletal models. The model was previously used to simulate eye-head coordination. The models are used to simulate and analyse eye movements. The proposed models are based on physiological and kinematic properties of the human eye. They incorporate an eye-globe, orbital suspension tissues and six muscles with their connective tissues (pulleys). Pulleys were incorporated in rectus and inferior oblique muscles. The two proposed models are the passive-pulley and the active-pulley models. Dynamic simulations of different eye movements, including fixation, saccade and smooth pursuit, are performed to validate both models. The resultant force-length curves of the models were similar to the experimental data. The simulation results show that the proposed models are suitable to generate eye movement simulations with results comparable to other musculoskeletal models. The maximum kinematic root mean square error (RMSE) is 5.68° and 4.35° for the passive and active pulley models, respectively. The analysis of the muscle forces showed realistic muscle activation with increased muscle synergy in the active pulley model. Copyright © 2018 Elsevier Ltd. All rights reserved.
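
The kinematic RMSE figures quoted above follow the usual root-mean-square definition, which can be computed as below (the function name and degree-valued inputs are illustrative assumptions, not the paper's code):

```python
# Root mean square error between a simulated and a reference trajectory
# of eye angles, both given in degrees.
import math

def kinematic_rmse(simulated_deg, reference_deg):
    assert len(simulated_deg) == len(reference_deg)
    sq = [(s - r) ** 2 for s, r in zip(simulated_deg, reference_deg)]
    return math.sqrt(sum(sq) / len(sq))
```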

  19. Intelligent Agent Architectures: Reactive Planning Testbed

    NASA Technical Reports Server (NTRS)

    Rosenschein, Stanley J.; Kahn, Philip

    1993-01-01

    An Integrated Agent Architecture (IAA) is a framework or paradigm for constructing intelligent agents. Intelligent agents are collections of sensors, computers, and effectors that interact with their environments in real time in goal-directed ways. Because of the complexity involved in designing intelligent agents, it has been found useful to approach the construction of agents with some organizing principle, theory, or paradigm that gives shape to the agent's components and structures their relationships. Given the wide variety of approaches being taken in the field, the question naturally arises: Is there a way to compare and evaluate these approaches? The purpose of the present work is to develop common benchmark tasks and evaluation metrics to which intelligent agents constructed using various architectural approaches, including complex robotic agents, can be subjected.
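
A common benchmark of the kind proposed here reduces, in its simplest form, to a harness that runs any agent through a fixed task and reports a metric. The sketch below is purely illustrative: the task, interface, and names are assumptions, not the testbed's actual API.

```python
# Illustrative benchmark harness: score any agent that exposes an
# act(observation) -> action interface on a fixed 1-D navigation task.

def run_benchmark(agent_act, goal=5, max_steps=100):
    """Return the number of steps taken to reach `goal` on a 1-D line,
    or max_steps if the agent fails. Lower is better."""
    pos = 0
    for t in range(1, max_steps + 1):
        pos += agent_act(goal - pos)   # observation: signed distance to goal
        if pos == goal:
            return t
    return max_steps

# Two trivial agents, to show the metric discriminates between policies.
greedy = lambda d: 1 if d > 0 else -1   # always moves toward the goal
lazy = lambda d: 0                      # never moves
```

The same harness and metric apply to every agent regardless of its internal architecture, which is the point of a common benchmark.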

  20. Phenylephrine eye drops in pediatric patients undergoing ophthalmic surgery: incidence, presentation, and management of complications during general anesthesia.

    PubMed

    Sbaraglia, Fabio; Mores, Nadia; Garra, Rossella; Giuratrabocchetta, Giuseppe; Lepore, Domenico; Molle, Fernando; Savino, Gustavo; Piastra, Marco; Pulitano', Silvia; Sammartino, Maria

    2014-04-01

    Phenylephrine eye drops are widely used as a mydriatic agent to reach the posterior segment of the eye. In the literature, many reports suggest systemic absorption of this agent as a source of severe adverse drug reactions. Hence, we reviewed our experience with topical phenylephrine in ophthalmic surgery. In May 2006, following US guidelines publication, a standard operating procedure was issued in our operating rooms to standardize the use of phenylephrine eye drops in our practice. Two years later, after the occurrence of a cluster of serious adverse drug reactions in infants undergoing surgery, a review of phenylephrine safety and the incidence of systemic complications was performed. We observed 451 pediatric patients, and 187 met the inclusion criteria; among them, 4 experienced hemodynamic complications due to phenylephrine eye drops. The incidence of major complications was 2.1%. Two different patterns of side effects occurred. The first was a cardiovascular derangement with severe hypertension and heart rate alterations; the other involved exclusively the pulmonary circuit, causing early edema. These clinical manifestations, their duration, and treatment responses are all explainable by the alpha1-adrenergic action of phenylephrine. This hypothesized pathogenesis is also supported by the usefulness of direct vasodilators (anesthetic agents) and by the negative outcomes that occurred in the past with the use of beta-blockers. © 2013 John Wiley & Sons Ltd.

  1. Robot Competence Development by Constructive Learning

    NASA Astrophysics Data System (ADS)

    Meng, Q.; Lee, M. H.; Hinde, C. J.

    This paper presents a constructive learning approach for developing sensor-motor mapping in autonomous systems. The system’s adaptation to environment changes is discussed, and three methods are proposed to deal with long-term and short-term changes. The proposed constructive learning allows autonomous systems to develop network topology and adjust network parameters. The approach is supported by findings from psychology and neuroscience, especially concerning infants’ cognitive development at early stages. A growing radial basis function network is introduced as a computational substrate for sensory-motor mapping learning. Experiments are conducted on a robot eye/hand coordination testbed and results show the incremental development of sensory-motor mapping and its adaptation to changes such as tool use.
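
The growing radial basis function network used as the computational substrate can be sketched, in the spirit of resource-allocating networks, as follows. This is an illustrative 1-D toy, not the paper's model: all thresholds, the width, and the learning rate are assumptions.

```python
# Sketch of a growing RBF network: a new unit is allocated when the prediction
# error is large AND the input is far from every existing centre; otherwise
# the existing output weights are adapted by a gradient (LMS) step.
import math

def _phi(x, c, sigma=0.5):
    """Gaussian radial basis function."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

class GrowingRBF:
    def __init__(self, err_thresh=0.1, dist_thresh=0.3, lr=0.2):
        self.centres, self.weights = [], []
        self.err_thresh, self.dist_thresh, self.lr = err_thresh, dist_thresh, lr

    def predict(self, x):
        return sum(w * _phi(x, c) for w, c in zip(self.weights, self.centres))

    def update(self, x, y):
        err = y - self.predict(x)
        far = all(abs(x - c) > self.dist_thresh for c in self.centres)
        if (not self.centres) or (abs(err) > self.err_thresh and far):
            self.centres.append(x)    # allocate a new unit at the novel input
            self.weights.append(err)  # its weight cancels the current error
        else:
            for i, c in enumerate(self.centres):  # LMS step on the weights
                self.weights[i] += self.lr * err * _phi(x, c)
```

Training on a stream of samples grows the topology only where the input space is both novel and poorly predicted, which is the incremental-development behaviour the abstract describes.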

  3. SpaceX CRS-10 "What's On Board" Science Briefing

    NASA Image and Video Library

    2017-02-17

    Jolyn Russell, deputy Robotics program manager at NASA’s Goddard Space Flight Center’s Satellite Servicing Projects Division in Maryland, speaks to members of social media in the Kennedy Space Center’s Press Site auditorium. The briefing focused on “Raven” research planned for the International Space Station. The Raven investigation studies a real-time robotic spacecraft navigation system that provides the eyes and intelligence to see a target and steer safely toward it. Raven will be part of experiments aboard a Dragon spacecraft scheduled for launch from Kennedy’s Launch Complex 39A on Feb. 18 atop a SpaceX Falcon 9 rocket on the company's 10th Commercial Resupply Services mission to the space station.

  4. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically assisted image mosaicing system for medical applications. The processing occurs in real time thanks to a fast initial image alignment provided by robotic position sensing. Near-field imaging, characterized by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our experiments produced visually satisfactory mosaics of a dental model, and the approach can be extended to other medical images.
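
The core idea, seeding the image alignment from robot position sensing and then refining it locally, can be sketched as below. This is a translation-only toy on tiny synthetic images (the real system measures pan/tilt as well via 5-d.o.f. sensing); function names and parameters are illustrative assumptions.

```python
# Illustrative sketch: a robot-pose estimate seeds the translational
# alignment between two frames; a small local search then refines it by
# minimising the sum of squared differences (SSD) over the overlap.

def ssd(a, b, dy, dx):
    """SSD between image `a` and image `b` shifted by (dy, dx)."""
    h, w = len(a), len(a[0])
    total = 0
    for y in range(h):
        for x in range(w):
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:   # only compare the overlap
                total += (a[y][x] - b[yy][xx]) ** 2
    return total

def refine_shift(a, b, seed, radius=1):
    """Search a (2*radius+1)^2 window around the sensor-seeded shift."""
    best, best_err = seed, float("inf")
    sy, sx = seed
    for dy in range(sy - radius, sy + radius + 1):
        for dx in range(sx - radius, sx + radius + 1):
            e = ssd(a, b, dy, dx)
            if e < best_err:
                best, best_err = (dy, dx), e
    return best
```

Because the sensor seed is already close, the refinement search stays tiny, which is what makes the real-time operation described above plausible.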

  5. The shaping of social perception by stimulus and knowledge cues to human animacy

    PubMed Central

    Ramsey, Richard; Liepelt, Roman; Prinz, Wolfgang; Hamilton, Antonia F. de C.

    2016-01-01

    Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature that demonstrates cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self–other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design. PMID:26644594

  6. The shaping of social perception by stimulus and knowledge cues to human animacy.

    PubMed

    Cross, Emily S; Ramsey, Richard; Liepelt, Roman; Prinz, Wolfgang; de C Hamilton, Antonia F

    2016-01-19

    Although robots are becoming an ever-growing presence in society, we do not hold the same expectations for robots as we do for humans, nor do we treat them the same. As such, the ability to recognize cues to human animacy is fundamental for guiding social interactions. We review literature that demonstrates cortical networks associated with person perception, action observation and mentalizing are sensitive to human animacy information. In addition, we show that most prior research has explored stimulus properties of artificial agents (humanness of appearance or motion), with less investigation into knowledge cues (whether an agent is believed to have human or artificial origins). Therefore, currently little is known about the relationship between stimulus and knowledge cues to human animacy in terms of cognitive and brain mechanisms. Using fMRI, an elaborate belief manipulation, and human and robot avatars, we found that knowledge cues to human animacy modulate engagement of person perception and mentalizing networks, while stimulus cues to human animacy had less impact on social brain networks. These findings demonstrate that self-other similarities are not only grounded in physical features but are also shaped by prior knowledge. More broadly, as artificial agents fulfil increasingly social roles, a challenge for roboticists will be to manage the impact of pre-conceived beliefs while optimizing human-like design. © 2015 The Authors.

  7. Non-orthogonal tool/flange and robot/world calibration.

    PubMed

    Ernst, Floris; Richter, Lars; Matthäus, Lars; Martens, Volker; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-12-01

    For many robot-assisted medical applications, it is necessary to accurately compute the relation between the robot's coordinate system and the coordinate system of a localisation or tracking device. Today, this is typically carried out using hand-eye calibration methods like those proposed by Tsai/Lenz or Daniilidis. We present a new method for simultaneous tool/flange and robot/world calibration by estimating a solution to the matrix equation AX = YB. It is computed using a least-squares approach. Because real robots and localisation devices are both afflicted by errors, our approach allows for non-orthogonal matrices, partially compensating for imperfect calibration of the robot or localisation device. We also introduce a new method where full robot/world and partial tool/flange calibration is possible by using localisation devices providing less than six degrees of freedom (DOFs). The methods are evaluated on simulation data and on real-world measurements from optical and magnetic tracking devices, volumetric ultrasound providing 3-DOF data, and a surface laser scanning device. We compare our methods with two classical approaches: the method by Tsai/Lenz and the method by Daniilidis. In all experiments, the new algorithms outperform the classical methods in terms of translational accuracy by up to 80% and perform similarly in terms of rotational accuracy. Additionally, the methods are shown to be stable: the number of calibration stations used has far less influence on calibration quality than for the classical methods. Our work shows that the new method can be used for estimating the relationship between the robot's and the localisation device's coordinate systems. The new method can also be used for deficient systems providing only 3-DOF data, and it can be employed in real-time scenarios because of its speed. Copyright © 2012 John Wiley & Sons, Ltd.
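
The simultaneous calibration problem AX = YB can be posed as a plain linear least-squares problem via the vectorization identity vec(AXB) = (B^T ⊗ A) vec(X), which, like the approach above, does not force the solutions to be orthogonal. The sketch below is a toy formulation for illustration, not the authors' algorithm, and assumes NumPy is available.

```python
# Toy sketch of simultaneous robot/world (Y) and tool/flange (X) calibration
# from A_i X = Y B_i, solved jointly as one linear least-squares problem.
# Uses vec(AX) = (I (x) A) vec(X) and vec(YB) = (B^T (x) I) vec(Y),
# with column-major vectorization.
import numpy as np

def solve_axyb(As, Bs):
    I4 = np.eye(4)
    rows = [np.hstack([np.kron(I4, A), -np.kron(B.T, I4)]) for A, B in zip(As, Bs)]
    M = np.vstack(rows)              # 16 equations per (A, B) pair, 32 unknowns
    # The stacked system is homogeneous; fix the global scale by constraining
    # X[3, 3] = 1 (index 15 of the column-major vectorization of X).
    col = M[:, 15].copy()
    M_red = np.delete(M, 15, axis=1)
    w, *_ = np.linalg.lstsq(M_red, -col, rcond=None)
    w = np.insert(w, 15, 1.0)
    X = w[:16].reshape(4, 4, order="F")   # column-major, matching vec()
    Y = w[16:].reshape(4, 4, order="F")
    return X, Y
```

With a handful of general motion pairs the null space of the stacked system is one-dimensional, so fixing a single entry recovers X and Y up to that normalization; no orthogonality is imposed anywhere, mirroring the tolerance for imperfect calibration described above.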

  8. Basic emotions and adaptation. A computational and evolutionary model

    PubMed Central

    2017-01-01

    The core principles of the evolutionary theories of emotions hold that affective states represent crucial drives for action selection in the environment and regulate the behavior and adaptation of natural agents in ancestrally recurrent situations. While many different studies have used autonomous artificial agents to simulate emotional responses and the way these patterns can affect decision-making, few approaches have tried to analyze the evolutionary emergence of affective behaviors directly from the specific adaptive problems posed by the ancestral environment. A model of the evolution of affective behaviors is presented using simulated artificial agents equipped with neural networks and physically inspired by the architecture of the iCub humanoid robot. We use genetic algorithms to train populations of virtual robots across generations, and investigate the spontaneous emergence of basic emotional behaviors in different experimental conditions. In particular, we focus on the emotion of fear; therefore, the environment explored by the artificial agents can contain stimuli that are safe or dangerous to pick. The simulated task is based on classical conditioning, and the agents are asked to learn a strategy to recognize whether the environment is safe or represents a threat to their lives and to select the correct action to perform in the absence of any visual cues. The simulated agents have special input units in their neural structure whose activations keep track of their actual “sensations” based on the outcome of past behavior. We train five different neural network architectures and then test the best-ranked individuals, comparing their performances and analyzing the unit activations in each individual’s life cycle.
We show that the agents, regardless of the presence of recurrent connections, spontaneously evolved the ability to cope with a potentially dangerous environment by collecting information about the environment and then switching their behavior to a genetically selected pattern in order to maximize the possible reward. We also show that an internal time-perception unit is decisive for the robots to achieve the highest performance and survivability across all conditions. PMID:29107988
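
The evolutionary setup, a genetic algorithm selecting controllers that respond correctly to safe versus dangerous stimuli, can be caricatured in a few lines. The sketch below is not the paper's experiment: the single-weight "policy", the stimulus encoding, and all GA parameters are illustrative assumptions.

```python
# Minimal genetic algorithm evolving a one-parameter policy that must
# classify stimuli as safe (+1) or dangerous (-1) from their sign.
import random

def fitness(w, stimuli):
    """Number of stimuli the policy sign(w * x) labels correctly."""
    return sum(1 for x, label in stimuli if (1 if w * x > 0 else -1) == label)

def evolve(stimuli, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-1, 1) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, stimuli), reverse=True)
        elite = pop[: pop_size // 2]                           # truncation selection
        pop = elite + [w + rng.gauss(0, 0.2) for w in elite]   # mutated offspring
    return max(pop, key=lambda w: fitness(w, stimuli))
```

The paper's genomes encode full neural network weights rather than a single scalar, but the select-and-mutate loop has the same shape.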

  9. A Remote Lab for Experiments with a Team of Mobile Robots

    PubMed Central

    Casini, Marco; Garulli, Andrea; Giannitrapani, Antonio; Vicino, Antonio

    2014-01-01

    In this paper, a remote lab for experimenting with a team of mobile robots is presented. Robots are built with the LEGO Mindstorms technology and user-defined control laws can be directly coded in the Matlab programming language and validated on the real system. The lab is versatile enough to be used for both teaching and research purposes. Students can easily go through a number of predefined mobile robotics experiences without having to worry about robot hardware or low-level programming languages. More advanced experiments can also be carried out by uploading custom controllers. The capability to have full control of the vehicles, together with the possibility to define arbitrarily complex environments through the definition of virtual obstacles, makes the proposed facility well suited to quickly test and compare different control laws in a real-world scenario. Moreover, the user can simulate the presence of different types of exteroceptive sensors on board the robots or a specific communication architecture among the agents, so that decentralized control strategies and motion coordination algorithms can be easily implemented and tested. A number of possible applications and real experiments are presented in order to illustrate the main features of the proposed mobile robotics remote lab. PMID:25192316
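
    The lab accepts user-defined control laws written in Matlab; a typical student exercise of this kind, shown here as a Python sketch (function names and gains are invented for illustration), is a proportional go-to-goal law for a differential-drive vehicle:

    ```python
    import math

    def go_to_goal(pose, goal, k_v=0.5, k_w=2.0):
        """Proportional go-to-goal law for a differential-drive robot.

        pose = (x, y, theta); returns (linear velocity, angular velocity).
        """
        x, y, theta = pose
        dx, dy = goal[0] - x, goal[1] - y
        rho = math.hypot(dx, dy)                              # distance to goal
        alpha = math.atan2(dy, dx) - theta                    # heading error
        alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
        return k_v * rho, k_w * alpha

    def simulate(pose, goal, dt=0.05, steps=400):
        # Euler integration of the unicycle model, standing in for the
        # real vehicle the lab would run the controller on.
        x, y, theta = pose
        for _ in range(steps):
            v, w = go_to_goal((x, y, theta), goal)
            x += v * math.cos(theta) * dt
            y += v * math.sin(theta) * dt
            theta += w * dt
        return x, y
    ```

    In the actual facility the same logic would be coded in Matlab and executed against the real LEGO vehicles rather than this simulated kinematic model.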

  11. Caregiver and social assistant robot for rehabilitation and coaching for the elderly.

    PubMed

    Pérez, P J; Garcia-Zapirain, B; Mendez-Zorrilla, A

    2015-01-01

    Socially assistive robotics (SAR) has been a major field of investigation during the last decade and, as it develops, the groups the technology can be applied to, and the ways in which they can be assisted, are rapidly increasing. The main objective is to design and develop a complete robotic agent that engages elderly people in physical and mental activities to maintain their healthy life habits and, as a final result, improve their quality of life. The LEGO Mindstorms NXT® robot's unique capacity for adaptability and for engaging its users is exploited to develop coaching activities and assistive rehabilitation for the elderly. Such activities aim to enhance healthy habits and provide training in physical and mental rehabilitation. The robot is attached to an iPod Touch that acts as its interface. The robot was tested by a voluntary group of residents of the La Misericordia retirement home. Scores in all categories of the evaluation questionnaire were above 4 points out of 5. Based on the tests, an easy-to-use robot is prepared to deliver basic coaching for physical activities as proposed by the client, the staff of La Misericordia, who confirmed their satisfaction regarding this aspect.

  12. Toward cognitive robotics

    NASA Astrophysics Data System (ADS)

    Laird, John E.

    2009-05-01

    Our long-term goal is to develop autonomous robotic systems that have the cognitive abilities of humans, including communication, coordination, adapting to novel situations, and learning through experience. Our approach rests on the recent integration of the Soar cognitive architecture with both virtual and physical robotic systems. Soar has been used to develop a wide variety of knowledge-rich agents for complex virtual environments, including distributed training environments and interactive computer games. For development and testing in robotic virtual environments, Soar interfaces to a variety of robotic simulators and a simple mobile robot. We have recently made significant extensions to Soar that add new memories and new non-symbolic reasoning to Soar's original symbolic processing, which should significantly improve Soar's abilities for robot control. These extensions include episodic memory, semantic memory, reinforcement learning, and mental imagery. Episodic memory and semantic memory support learning and recalling prior events and situations as well as facts about the world. Reinforcement learning enables the system to tune its procedural knowledge - knowledge about how to do things. Mental imagery supports the use of diagrammatic and visual representations that are critical for spatial reasoning. We speculate on the future of unmanned systems and the need for cognitive robotics to support dynamic instruction and taskability.
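
    The episodic-memory extension mentioned above can be illustrated with a toy cue-based store: snapshots of the agent's state are recorded over time and retrieved by best partial match. This is only a sketch in the spirit of Soar's mechanism; Soar's actual episodic memory is far richer (working-memory graphs, activation-based biases, and so on), and all names here are invented.

    ```python
    class EpisodicMemory:
        """Toy cue-based episodic store: time-stamped state snapshots,
        retrieved by the best (most-features-matched, most recent) match."""

        def __init__(self):
            self.episodes = []  # list of (time, state-dict) snapshots

        def record(self, t, state):
            self.episodes.append((t, dict(state)))

        def retrieve(self, cue):
            # Prefer episodes matching more cue features; break ties by recency.
            def score(ep):
                t, state = ep
                matches = sum(1 for k, v in cue.items() if state.get(k) == v)
                return (matches, t)
            return max(self.episodes, key=score) if self.episodes else None
    ```

    A robot could use such retrieval to answer "where did I last see the wrench?" by cueing on the object and reading the location out of the recalled episode.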

  13. Are You Talking to Me? Dialogue Systems Supporting Mixed Teams of Humans and Robots

    NASA Technical Reports Server (NTRS)

    Dowding, John; Clancey, William J.; Graham, Jeffrey

    2006-01-01

    This position paper describes an approach to building spoken dialogue systems for environments containing multiple human speakers and hearers, and multiple robotic speakers and hearers. We address the issue, for robotic hearers, of whether the speech they hear is intended for them or more likely intended for some other hearer. We describe data collected during a series of experiments involving teams of multiple humans and robots (and other software participants), and some preliminary results for distinguishing robot-directed speech from human-directed speech. The domain of these experiments is Mars-analogue planetary exploration. These Mars-analogue field studies involve two subjects in simulated planetary space suits doing geological exploration with the help of 1-2 robots, supporting software agents, a habitat communicator, and links to a remote science team. The two subjects perform a task (geological exploration) that requires them to speak with each other while also speaking with their assistants. The technique used here is a probabilistic context-free grammar language model in the speech recognizer that is trained on prior robot-directed speech. Intuitively, the recognizer will give higher confidence to an utterance if it is similar to utterances that have been directed to the robot in the past.
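
    The intuition in the last sentence can be sketched with a much simpler language model than the paper's PCFG: score each utterance under a model trained on prior robot-directed speech and one trained on human-directed speech, and compare. The add-one-smoothed bigram models and the tiny corpora below (including the robot call-sign "boudreaux") are invented for illustration, not the study's data or method.

    ```python
    import math
    from collections import Counter

    class BigramModel:
        """Add-one-smoothed bigram model; returns a length-normalized
        log-probability of an utterance under its training corpus."""

        def __init__(self, sentences):
            self.unigrams, self.bigrams = Counter(), Counter()
            self.vocab = set()
            for s in sentences:
                toks = ["<s>"] + s.lower().split()
                self.vocab.update(toks)
                self.unigrams.update(toks[:-1])   # bigram contexts
                self.bigrams.update(zip(toks, toks[1:]))

        def logprob(self, sentence):
            toks = ["<s>"] + sentence.lower().split()
            v = len(self.vocab) + 1
            return sum(
                math.log((self.bigrams[(a, b)] + 1) / (self.unigrams[a] + v))
                for a, b in zip(toks, toks[1:])) / max(len(toks) - 1, 1)

    robot_model = BigramModel([
        "boudreaux take a picture", "boudreaux follow me",
        "boudreaux stop", "take a panorama here",
    ])
    human_model = BigramModel([
        "i think this rock looks volcanic", "can you hand me the scoop",
        "let us log this sample", "this outcrop looks interesting",
    ])

    def robot_directed(utterance):
        # Higher normalized log-probability under the robot-directed model
        # is taken as evidence the robot is being addressed.
        return robot_model.logprob(utterance) > human_model.logprob(utterance)
    ```

    The real system folds this comparison into recognizer confidence rather than making a hard binary decision, but the decision rule is the same in spirit.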

  14. Socially grounded game strategy enhances bonding and perceived smartness of a humanoid robot

    NASA Astrophysics Data System (ADS)

    Barakova, E. I.; De Haas, M.; Kuijpers, W.; Irigoyen, N.; Betancourt, A.

    2018-01-01

    In search of better technological solutions for education, we adapted a principle from economic game theory, namely that giving help promotes collaboration and eventually long-term relations between a robot and a child. This principle has been shown to be effective in games between humans and between humans and computer agents. We compared the social and cognitive engagement of children playing a checkers game, combined with a social strategy, against a robot or against a computer. We found that by combining the social and game strategies, the children (average age of 8.3 years) had more empathy and social engagement with the robot, since the children did not necessarily want to win against it. This finding is promising for using social strategies to create long-term relations between robots and children and to make educational tasks more engaging. An additional outcome of the study was a significant difference in the children's perception of the difficulty of the game: the game with the robot was seen as more challenging, and the robot as a smarter opponent. This finding might be due to the higher perceived or expected intelligence of the robot, or to the higher complexity of seeing patterns in a three-dimensional world.

  15. Experiments with an EVA Assistant Robot

    NASA Technical Reports Server (NTRS)

    Burridge, Robert R.; Graham, Jeffrey; Shillcutt, Kim; Hirsh, Robert; Kortenkamp, David

    2003-01-01

    Human missions to the Moon or Mars will likely be accompanied by many useful robots that will assist in all aspects of the mission, from construction to maintenance to surface exploration. Such robots might scout terrain, carry tools, take pictures, curate samples, or provide status information during a traverse. At NASA/JSC, the EVA Robotic Assistant (ERA) project has developed a robot testbed for exploring the issues of astronaut-robot interaction. Together with JSC's Advanced Spacesuit Lab, the ERA team has been developing robot capabilities and testing them with space-suited test subjects at planetary surface analog sites. In this paper, we describe the current state of the ERA testbed and two weeks of remote field tests in Arizona in September 2002. A number of teams with a broad range of interests participated in these experiments to explore different aspects of what must be done to develop a program for robotic assistance to surface EVA. Technologies explored in the field experiments included a fuel cell, new mobility platform and manipulator, novel software and communications infrastructure for multi-agent modeling and planning, a mobile science lab, an "InfoPak" for monitoring the spacesuit, and delayed satellite communication to a remote operations team. In this paper, we will describe this latest round of field tests in detail.

  16. AIonAI: a humanitarian law of artificial intelligence and robotics.

    PubMed

    Ashrafian, Hutan

    2015-02-01

    The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However the overwhelming predominance in the study of this field has focussed on human-robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together and has not addressed the moral nature of robot-robot interactions. A new robotic law is proposed and termed AIonAI or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise inherent dignity and the inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, but would also importantly reflect on our moral code of ethics and the humanity of our civilisation.

  17. Adaptive Control Parameters for Dispersal of Multi-Agent Mobile Ad Hoc Network (MANET) Swarms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derr, Kurt; Manic, Milos

    A mobile ad hoc network is a collection of independent nodes that communicate wirelessly with one another. This paper investigates nodes that are swarm robots with communications and sensing capabilities. Each robot in the swarm may operate in a distributed and decentralized manner to achieve some goal. This paper presents a novel approach to dynamically adapting control parameters to achieve mesh configuration stability. The presented approach to robot interaction is based on spring force laws (attraction and repulsion laws) to create near-optimal mesh-like configurations. In prior work, we presented the extended virtual spring mesh (EVSM) algorithm for the dispersion of robot swarms. This paper extends the EVSM framework by providing the first known study on the effects of adaptive versus static control parameters on robot swarm stability. The EVSM algorithm provides the following novelties: 1) improved performance with adaptive control parameters and 2) accelerated convergence with high formation effectiveness. Simulation results show that 120 robots reach convergence using adaptive control parameters more than twice as fast as with static control parameters in a multiple-obstacle environment.
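
    The spring force laws underlying a virtual spring mesh can be sketched as follows. The constants, the all-to-all neighbor set, and the damping are illustrative simplifications (the real EVSM algorithm uses radio neighbors and, in this paper, adaptively tuned parameters, which are omitted here).

    ```python
    import math

    def spring_force(p, q, natural_length=1.0, stiffness=0.8):
        """Virtual spring law: neighbors closer than the natural length repel,
        farther ones attract, dispersing the swarm into a near-uniform mesh."""
        dx, dy = q[0] - p[0], q[1] - p[1]
        d = math.hypot(dx, dy) or 1e-9        # avoid division by zero
        magnitude = stiffness * (d - natural_length)  # Hooke's law
        return magnitude * dx / d, magnitude * dy / d

    def step(positions, dt=0.1, damping=0.7):
        # Each robot moves along the sum of virtual spring forces from all
        # other robots (all-to-all here, for simplicity).
        new = []
        for i, p in enumerate(positions):
            fx = fy = 0.0
            for j, q in enumerate(positions):
                if i != j:
                    sx, sy = spring_force(p, q)
                    fx, fy = fx + sx, fy + sy
            new.append((p[0] + damping * fx * dt, p[1] + damping * fy * dt))
        return new
    ```

    Adapting `stiffness`, `damping`, or the natural length on-line as the mesh evolves is the kind of control-parameter tuning whose effect on stability the paper studies.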

  18. A hardware/software environment to support R&D in intelligent machines and mobile robotic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mann, R.C.

    1990-01-01

    The Center for Engineering Systems Advanced Research (CESAR) serves as a focal point at the Oak Ridge National Laboratory (ORNL) for basic and applied research in intelligent machines. R&D at CESAR addresses issues related to autonomous systems, unstructured (i.e. incompletely known) operational environments, and multiple performing agents. Two mobile robot prototypes (HERMIES-IIB and HERMIES-III) are being used to test new developments in several robot component technologies. This paper briefly introduces the computing environment at CESAR, which includes three hypercube concurrent computers (two on-board the mobile robots), a graphics workstation, a VAX, and multiple VME-based systems (several on-board the mobile robots). The current software environment at CESAR is intended to satisfy several goals, e.g.: code portability, re-usability in different experimental scenarios, modularity, concurrent computer hardware transparent to the applications programmer, future support for multiple mobile robots, support for human-machine interface modules, and support for integration of software from other, geographically disparate laboratories with different hardware set-ups. 6 refs., 1 fig.

  19. Controlling Tensegrity Robots Through Evolution

    NASA Technical Reports Server (NTRS)

    Iscen, Atil; Agogino, Adrian; SunSpiral, Vytas; Tumer, Kagan

    2013-01-01

    Tensegrity structures (built from interconnected rods and cables) have the potential to offer a revolutionary new robotic design that is light-weight, energy-efficient, robust to failures, capable of unique modes of locomotion, impact tolerant, and compliant (reducing damage between the robot and its environment). Unfortunately, robots built from tensegrity structures are difficult to control with traditional methods due to their oscillatory nature, nonlinear coupling between components, and overall complexity. Fortunately, this formidable control challenge can be overcome through the use of evolutionary algorithms. In this paper we show that evolutionary algorithms can be used to efficiently control a ball-shaped tensegrity robot. Experimental results performed with a variety of evolutionary algorithms in a detailed soft-body physics simulator show that a centralized evolutionary algorithm performs 400 percent better than a hand-coded solution, while multi-agent evolution performs 800 percent better. In addition, evolution is able to discover diverse control solutions (both crawling and rolling) that are robust against structural failures and can be adapted to a wide range of energy and actuation constraints. These successful controllers will form the basis for building high-performance tensegrity robots in the near future.

  20. Evaluation of a completely robotized neurosurgical operating microscope.

    PubMed

    Kantelhardt, Sven R; Finke, Markus; Schweikard, Achim; Giese, Alf

    2013-01-01

    Operating microscopes are essential for most neurosurgical procedures. Modern robot-assisted controls offer new possibilities, combining the advantages of conventional and automated systems. We evaluated the prototype of a completely robotized operating microscope with an integrated optical coherence tomography module. A standard operating microscope was fitted with motors and control instruments, with the manual control mode and balance preserved. In the robot mode, the microscope was steered by a remote control that could be fixed to a surgical instrument. External encoders and accelerometers tracked microscope movements. The microscope was additionally fitted with an optical coherence tomography-scanning module. The robotized microscope was tested on model systems. It could be freely positioned, without forcing the surgeon to take the hands from the instruments or avert the eyes from the oculars. Positioning error was about 1 mm, and vibration faded in 1 second. Tracking of microscope movements, combined with an autofocus function, allowed determination of the focus position within the 3-dimensional space. This constituted a second loop of navigation independent from conventional infrared reflector-based techniques. In the robot mode, automated optical coherence tomography scanning of large surface areas was feasible. The prototype of a robotized optical coherence tomography-integrated operating microscope combines the advantages of a conventional manually controlled operating microscope with a remote-controlled positioning aid and a self-navigating microscope system that performs automated positioning tasks such as surface scans. This demonstrates that, in the future, operating microscopes may be used to acquire intraoperative spatial data, volume changes, and structural data of brain or brain tumor tissue.

  1. Treatment of Dry Eye Disease.

    PubMed

    Marshall, Leisa L; Roach, J Michael

    2016-02-01

    Review of the etiology, clinical manifestations, and treatment of dry eye disease (DED). Articles indexed in PubMed (National Library of Medicine), Iowa Drug Information Service (IDIS), and the Cochrane Reviews and Trials in the last 10 years using the key words "dry eye disease," "dry eye syndrome," "dry eye and treatment." Primary sources were used to locate additional resources. Sixty-eight publications were reviewed, and criteria supporting the primary objective were used to identify useful resources. The literature included practice guidelines, book chapters, review articles, original research articles, and product prescribing information for the etiology, clinical manifestations, diagnosis, and treatment of DED. DED is one of the most common ophthalmic disorders. Signs and symptoms of DED vary by patient, but may include ocular irritation, redness, itching, photosensitivity, visual blurring, mucous discharge, and decreased tear meniscus or break-up time. Symptoms improve with treatment, but the condition is not completely curable. Treatment includes reducing environmental causes, discontinuing medications that cause or worsen dry eye, and managing contributing ocular or systemic conditions. Most patients use nonprescription tear substitutes, and if these are not sufficient, other treatment is prescribed. These treatments include the ophthalmic anti-inflammatory agent cyclosporine, punctal occlusion, eye side shields, systemic cholinergic agents, and autologous serum tears. This article reviews the etiology, symptoms, and current therapy for DED.

  2. Mixed reality framework for collective motion patterns of swarms with delay coupling

    NASA Astrophysics Data System (ADS)

    Szwaykowska, Klementyna; Schwartz, Ira

    The formation of coherent patterns in swarms of interacting self-propelled autonomous agents is an important subject for many applications within the field of distributed robotic systems. However, there are significant logistical challenges associated with testing fully distributed systems in real-world settings. In this paper, we provide a rigorous theoretical justification for the use of mixed-reality experiments as a stepping stone to fully physical testing of distributed robotic systems. We also model and experimentally realize a mixed-reality large-scale swarm of delay-coupled agents. Our analyses, assuming agents communicating over an Erdős-Rényi network, demonstrate the existence of stable coherent patterns that can be achieved only with delay coupling and that are robust to decreasing network connectivity and heterogeneity in agent dynamics. We show how the bifurcation structure for emergence of different patterns changes with heterogeneity in agent acceleration capabilities and limited connectivity in the network as a function of coupling strength and delay. Our results are verified through simulation as well as preliminary experimental results of delay-induced pattern formation in a mixed-reality swarm. K. S. was a National Research Council postdoctoral fellow. I.B.S. was supported by the U.S. Naval Research Laboratory funding (N0001414WX00023) and the Office of Naval Research (N0001414WX20610).
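
    A common minimal model for such delay-coupled swarms, sketched below under assumed (not the paper's) parameter values, gives each agent self-propulsion toward unit speed plus a spring-like attraction to the swarm centroid evaluated one delay in the past, dv_i/dt = (1 - |v_i|^2) v_i - a (x_i - X(t - tau)). The delay line is implemented with a simple history buffer:

    ```python
    import math
    import random

    def simulate_swarm(n=10, steps=2000, dt=0.05, coupling=0.5, delay_steps=40):
        """Euler-integrate n self-propelled agents with delayed all-to-all
        coupling to the past swarm centroid. Parameters are illustrative."""
        rng = random.Random(0)
        xs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
        vs = [(rng.uniform(-0.1, 0.1), rng.uniform(-0.1, 0.1)) for _ in range(n)]
        history = []  # past centroids, acting as the delay line
        for _ in range(steps):
            cx = sum(p[0] for p in xs) / n
            cy = sum(p[1] for p in xs) / n
            history.append((cx, cy))
            dcx, dcy = history[max(len(history) - 1 - delay_steps, 0)]
            nxt_x, nxt_v = [], []
            for (x, y), (vx, vy) in zip(xs, vs):
                sp2 = vx * vx + vy * vy
                # Self-propulsion drives |v| toward 1; the delayed spring
                # term couples each agent to where the swarm was tau ago.
                ax = (1.0 - sp2) * vx - coupling * (x - dcx)
                ay = (1.0 - sp2) * vy - coupling * (y - dcy)
                nxt_x.append((x + vx * dt, y + vy * dt))
                nxt_v.append((vx + ax * dt, vy + ay * dt))
            xs, vs = nxt_x, nxt_v
        return xs, vs
    ```

    Sweeping `coupling` and `delay_steps` in such a model is how the bifurcations between translating, ring, and rotating states are typically explored; in a mixed-reality setup, some of these simulated agents would be replaced by physical robots coupled into the same loop.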

  3. Effect of sub-Tenon's and peribulbar anesthesia on intraocular pressure and ocular pulse amplitude.

    PubMed

    Pianka, P; Weintraub-Padova, H; Lazar, M; Geyer, O

    2001-08-01

    To compare the effect of peribulbar and sub-Tenon's anesthesia on intraocular pressure (IOP) and ocular pulse amplitude (OPA) in the injected eye and the fellow noninjected (control) eye. Tel Aviv Medical Center, Tel Aviv, Israel. This prospective study measured IOP and OPA at baseline and 1 and 10 minutes after administration of lidocaine anesthesia in 40 consecutive adult patients having elective cataract surgery. The IOP remained stable throughout the study with both modes of anesthesia. One minute after injection of the anesthetic agent, the OPA was significantly decreased in the injected eyes in both the sub-Tenon's (24%; P < .05) and peribulbar (25%; P < .05) groups. The decrease in the OPA in the sub-Tenon's group (14%; P < .05) was detectable after 10 minutes in the control eyes. In the peribulbar anesthesia group, the OPA in the control eyes increased significantly (9%; P < .05) 1 minute after injection of the anesthetic agent, returning to preinjection levels 10 minutes after the injection. The OPA in the eyes in which lidocaine was injected decreased significantly in both the sub-Tenon's and peribulbar groups. These findings have implications for the management of patients whose ocular circulation may be compromised.

  4. Grounding Action Words in the Sensorimotor Interaction with the World: Experiments with a Simulated iCub Humanoid Robot

    PubMed Central

    Marocco, Davide; Cangelosi, Angelo; Fischer, Kerstin; Belpaeme, Tony

    2010-01-01

    This paper presents a cognitive robotics model for the study of the embodied representation of action words. The present research shows how an iCub humanoid robot can learn the meaning of action words (i.e. words that represent dynamical events that happen in time) by physically interacting with the environment and linking the effects of its own actions with the behavior observed on the objects before and after the action. The control system of the robot is an artificial neural network trained to manipulate an object through a Back-Propagation-Through-Time algorithm. We show that in the presented model the grounding of action words relies directly on the way in which an agent interacts with the environment and manipulates it. PMID:20725503

  5. Social cognitive neuroscience and humanoid robotics.

    PubMed

    Chaminade, Thierry; Cheng, Gordon

    2009-01-01

    We believe that humanoid robots provide new tools to investigate human social cognition, the processes underlying everyday interactions between individuals. Resonance is an emerging framework for understanding social interactions that is based on the finding that the cognitive processes involved when experiencing a mental state and when perceiving another individual experiencing the same mental state overlap, at both the behavioral and neural levels. We first review important aspects of this framework. In a second part, we discuss how this framework is used to address questions pertaining to artificial agents' social competence. We focus on two types of paradigm, one derived from experimental psychology and the other using neuroimaging, that have been used to investigate humans' responses to humanoid robots. Finally, we speculate on the consequences of resonance in natural social interactions if humanoid robots are to become an integral part of our societies.

  6. Fatal necrotising enterocolitis due to mydriatic eye drops.

    PubMed

    Ozgun, Uygur; Demet, Terek; Ozge, Koroglu A; Zafer, Dokumcu; Murat, Sezak; Mehmet, Yalaz; Nilgun, Kultursay

    2014-05-01

    Retinopathy of prematurity (ROP) is a serious problem of preterm infants which may lead to impairment of vision and even to blindness if untreated. Routine eye examination is necessary for early diagnosis and treatment of ROP in preterm infants. Mydriatic eye drops (cyclopentolate, tropicamide and phenylephrine) are applied before the ophthalmic examination. These agents are rarely absorbed into the systemic circulation, but in some cases result in serious side effects such as skin rash, tachycardia, feeding intolerance, discomfort, apnea, gastric dilatation and ileus, despite different treatment models and dose-reduction strategies. We report here a preterm patient who died of severe diffuse necrotizing enterocolitis (NEC) after topical application of 0.5% cyclopentolate and 1.25% phenylephrine during ROP screening, to emphasise the serious side effects of these agents.

  7. An embodiment effect in computer-based learning with animated pedagogical agents.

    PubMed

    Mayer, Richard E; DaPra, C Scott

    2012-09-01

    How do social cues such as gesturing, facial expression, eye gaze, and human-like movement affect multimedia learning with onscreen agents? To help address this question, students were asked to twice view a 4-min narrated presentation on how solar cells work in which the screen showed an animated pedagogical agent standing to the left of 11 successive slides. Across three experiments, learners performed better on a transfer test when a human-voiced agent displayed human-like gestures, facial expression, eye gaze, and body movement than when the agent did not, yielding an embodiment effect. In Experiment 2 the embodiment effect was found when the agent spoke in a human voice but not in a machine voice. In Experiment 3, the embodiment effect was found both when students were told the onscreen agent was consistent with their choice of agent characteristics and when inconsistent. Students who viewed a highly embodied agent also rated the social attributes of the agent more positively than did students who viewed a nongesturing agent. The results are explained by social agency theory, in which social cues in a multimedia message prime a feeling of social partnership in the learner, which leads to deeper cognitive processing during learning, and results in a more meaningful learning outcome as reflected in transfer test performance.

  8. Untethered magnetic millirobot for targeted drug delivery.

    PubMed

    Iacovacci, Veronica; Lucarini, Gioia; Ricotti, Leonardo; Dario, Paolo; Dupont, Pierre E; Menciassi, Arianna

    2015-01-01

    This paper reports the design and development of a novel millimeter-sized robotic system for targeted therapy. The proposed medical robot is conceived to perform therapy in relatively small diameter body canals (spine, urinary system, ovary, etc.), and to release several kinds of therapeutics, depending on the pathology to be treated. The robot is a nearly-buoyant bi-component system consisting of a carrier, in which the therapeutic agent is embedded, and a piston. The piston, by exploiting magnetic effects, docks with the carrier and compresses a drug-loaded hydrogel, thus activating the release mechanism. External magnetic fields are exploited to propel the robot towards the target region, while intermagnetic forces are exploited to trigger drug release. After designing and fabricating the robot, the system has been tested in vitro with an anticancer drug (doxorubicin) embedded in the carrier. The efficiency of the drug release mechanism has been demonstrated by both quantifying the amount of drug released and by assessing the efficacy of this therapeutic procedure on human bladder cancer cells.

  9. Mobile robot navigation modulated by artificial emotions.

    PubMed

    Lee-Johnson, C P; Carnegie, D A

    2010-04-01

    For artificial intelligence research to progress beyond the highly specialized task-dependent implementations achievable today, researchers may need to incorporate aspects of biological behavior that have not traditionally been associated with intelligence. Affective processes such as emotions may be crucial to the generalized intelligence possessed by humans and animals. A number of robots and autonomous agents have been created that can emulate human emotions, but the majority of this research focuses on the social domain. In contrast, we have developed a hybrid reactive/deliberative architecture that incorporates artificial emotions to improve the general adaptive performance of a mobile robot for a navigation task. Emotions are active on multiple architectural levels, modulating the robot's decisions and actions to suit the context of its situation. Reactive emotions interact with the robot's control system, altering its parameters in response to appraisals from short-term sensor data. Deliberative emotions are learned associations that bias path planning in response to eliciting objects or events. Quantitative results are presented that demonstrate situations in which each artificial emotion can be beneficial to performance.
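
    The idea of a reactive emotion modulating control parameters can be illustrated with a toy appraisal loop. This is a sketch of the general mechanism only, not the paper's architecture: the fear dynamics, thresholds, and the particular parameters being modulated (speed and obstacle clearance) are all assumptions.

    ```python
    def update_fear(fear, min_obstacle_distance, collision_threshold=0.5,
                    rise=0.4, decay=0.1):
        """First-order artificial 'fear' appraisal: rises toward 1.0 when an
        obstacle is within the collision threshold, decays otherwise."""
        if min_obstacle_distance < collision_threshold:
            return fear + rise * (1.0 - fear)
        return fear * (1.0 - decay)

    def modulated_params(fear, base_speed=1.0, base_clearance=0.3):
        # Higher fear slows the robot down and widens the clearance it
        # keeps from obstacles, trading progress for safety.
        return base_speed * (1.0 - 0.6 * fear), base_clearance * (1.0 + 2.0 * fear)
    ```

    In the hybrid architecture described above, a reactive appraisal like this would adjust the low-level controller on short-term sensor data, while learned deliberative associations would separately bias the path planner.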

  10. On the Evolution of Behaviors through Embodied Imitation.

    PubMed

    Erbas, Mehmet D; Bull, Larry; Winfield, Alan F T

    2015-01-01

    This article describes research in which embodied imitation and behavioral adaptation are investigated in collective robotics. We model social learning in artificial agents with real robots. The robots are able to observe and learn each other's movement patterns using their on-board sensors only, so that imitation is embodied. We show that the variations that arise from embodiment allow certain behaviors that are better adapted to the process of imitation to emerge and evolve during multiple cycles of imitation. As these behaviors are more robust to uncertainties in the real robots' sensors and actuators, they can be learned by other members of the collective with higher fidelity. Three different types of learned-behavior memory have been experimentally tested to investigate the effect of memory capacity on the evolution of movement patterns, and results show that as the movement patterns evolve through multiple cycles of imitation, selection, and variation, the robots are able to, in a sense, agree on the structure of the behaviors that are imitated.

  11. Recognition of flow in everyday life using sensor agent robot with laser range finder

    NASA Astrophysics Data System (ADS)

    Goshima, Misa; Mita, Akira

    2011-04-01

    In the present paper, we suggest an algorithm that allows a sensor agent robot with a laser range finder to recognize the flows of residents in living spaces: detecting flows, counting the number of people present, and classifying the flows. House reform is, or will be, demanded to prolong the lifetime of the home, and adaptation to individuals is needed for our rapidly aging society. Home autonomous mobile robots will become popular in the future for assisting aged people in various situations, so we have to collect various types of information about humans and living spaces. However, intrusion into personal privacy must be avoided. Recognizing flows in everyday life is therefore essential to support house reform and aging societies in terms of adaptation to individuals. With background subtraction, extra noise removal, and k-means clustering, we obtained an average accuracy of more than 90% for the behavior of one to three persons, and also confirmed the reliability of our system regardless of the sensor's position. Our system can take advantage of autonomous mobile robots while protecting personal privacy. It hints at a generalization of flow recognition methods in living spaces.
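    The clustering step can be sketched as follows (a plain k-means on 2D scan points; the synthetic data and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def kmeans_2d(points, k, iters=50, seed=0):
    """Plain k-means on 2D points (e.g. foreground laser-scan hits)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Two tight synthetic "person" clusters of foreground scan points.
rng = np.random.default_rng(1)
a = rng.normal([1.0, 1.0], 0.05, (30, 2))
b = rng.normal([3.0, 2.0], 0.05, (30, 2))
centers, labels = kmeans_2d(np.vstack([a, b]), k=2)
```

    In the full pipeline, background subtraction would first isolate the foreground hits and a noise-removal pass would discard isolated points before clustering.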

  12. KENNEDY SPACE CENTER, FLA. - On Mars Exploration Rover 1 (MER-1), airbags are installed on the lander. The airbags will inflate to cushion the landing of the spacecraft on the surface of Mars. When it stops bouncing and rolling, the airbags will deflate and retract, the petals will open to bring the lander to an upright position, and the rover will be exposed. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-10

    KENNEDY SPACE CENTER, FLA. - On Mars Exploration Rover 1 (MER-1), airbags are installed on the lander. The airbags will inflate to cushion the landing of the spacecraft on the surface of Mars. When it stops bouncing and rolling, the airbags will deflate and retract, the petals will open to bring the lander to an upright position, and the rover will be exposed. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  13. KENNEDY SPACE CENTER, FLA. - The Mars Exploration Rover 1 (MER-1) is seen after installation of the air bags on the outside of the lander. The airbags will inflate to cushion the landing of the spacecraft on the surface of Mars. When it stops bouncing and rolling, the airbags will deflate and retract, the petals will open to bring the lander to an upright position, and the rover will be exposed. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

    NASA Image and Video Library

    2003-05-10

    KENNEDY SPACE CENTER, FLA. - The Mars Exploration Rover 1 (MER-1) is seen after installation of the air bags on the outside of the lander. The airbags will inflate to cushion the landing of the spacecraft on the surface of Mars. When it stops bouncing and rolling, the airbags will deflate and retract, the petals will open to bring the lander to an upright position, and the rover will be exposed. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  14. The effect of biologic therapy different from infliximab or adalimumab in patients with refractory uveitis due to Behçet's disease: results of a multicentre open-label study.

    PubMed

    Santos-Gómez, Montserrat; Calvo-Río, Vanesa; Blanco, Ricardo; Beltrán, Emma; Mesquida, Marina; Adán, Alfredo; Cordero-Coma, Miguel; García-Aparicio, Ángel M; Valls Pascual, Elia; Martínez-Costa, Lucía; Hernández, María Victoria; Hernandez Garfella, Marisa; González-Vela, María C; Pina, Trinitario; Palmou-Fontana, Natalia; Loricera, Javier; Hernández, José L; González-Gay, Miguel A

    2016-01-01

    To assess the efficacy of biologic therapies other than infliximab (IFX) and adalimumab (ADA) in patients with Behçet's disease uveitis (BU). Multicenter study of 124 patients with BU refractory to at least one standard immunosuppressive agent that required IFX or ADA therapy. Patients who had to be switched to another biologic agent due to inefficacy of or intolerance to IFX or ADA, or by the patient's decision, were assessed. The main outcome measures were the degree of anterior and posterior chamber inflammation and macular thickness. Seven (5.6%) of 124 cases (4 women/3 men; mean age, 43 (range, 28-67) years; 12 affected eyes) were studied. Five of them had been initially treated with ADA and 2 with IFX. The other biologic agents used were golimumab (n=4), tocilizumab (n=2), and rituximab (n=1). The ocular pattern was panuveitis (n=4) or posterior uveitis (n=3). Uveitis was bilateral in 5 patients (71.4%). At baseline, anterior chamber and vitreous inflammation were present in 6 (50%) and 7 (58.3%) of the eyes, respectively. All the patients (12 eyes) had macular thickening (OCT>250 μm) and 4 of them (7 eyes) had cystoid macular edema (OCT>300 μm). Besides reduction of anterior chamber and vitreous inflammation, we observed a reduction of OCT values, from 330.4±58.5 μm at the onset of the biologic agent to 273±50 μm at month 12 (p=0.06). Six patients achieved complete remission of uveitis. The vast majority of patients with BU refractory to standard immunosuppressive drugs are successfully controlled with ADA and/or IFX. Other biologic agents also appear to be useful.

  15. The Effect of TNF-α Blocker HL036337 and Its Best Concentration to Inhibit Dry Eye Inflammation.

    PubMed

    Choi, Wungrak; Noh, Hyemi; Yeo, Areum; Jang, Hanmil; Ahn, Hyea Kyung; Song, Yeon Jung; Lee, Hyung Keun

    2016-08-01

    Dry eye syndrome is commonly thought of as an inflammatory disease, and we have previously presented data showing the effectiveness of topical TNF-α blocker agents for the treatment of this condition. The purpose of this study was to investigate the effectiveness of the TNF-α blocking agent HL036337 compared to cyclosporine A for the treatment of dry eye induced inflammation in order to establish whether HL036337 represents a more effective method for suppressing inflammation. The efficacy of HL036337 and cyclosporine A was determined using an experimental murine dry eye model. The TNF-α blocker HL036337 is a modified form of TNF receptor I. Using dry eye induced C57BL/6 mice (n = 45), corneal erosion was measured at day 4 and 7 after topical treatment with cyclosporine A or HL036337. To determine the effective treatment dose, 0.25, 0.5, 1, 2.5, and 5 mg/mL of HL036337 were topically administered twice per day to dry eye induced murine corneas for 1 week. The optimal concentration of the TNF-α blocker HL036337 for treatment of dry eye induced corneal erosion was determined to be 1 mg/mL. Dry eye induced corneal erosion was improved after 1 week with topically applied cyclosporine A and HL036337 at 1 mg/mL. HL036337 administered topically at 1 mg/mL effectively improved corneal erosion induced by dry eye. This finding may also suggest that inhibition of TNF-α can improve dry eye syndrome.

  16. The Moon: Been there, done that?

    NASA Technical Reports Server (NTRS)

    Cohen, Barbara

    2013-01-01

    Lunar science is planetary science. Lunar samples teach us about the formation and evolution of the Moon, and the history of all the planets. The Moon is a cornerstone for all rocky planets, since it formed and evolved similarly to Earth, Mars, Mercury, Venus, and large asteroids. Lunar robotic missions provide important science and engineering objectives, and keep our eyes on the Moon.

  17. Optimal bipedal interactions with dynamic terrain: synthesis and analysis via nonlinear programming

    NASA Astrophysics Data System (ADS)

    Hubicki, Christian; Goldman, Daniel; Ames, Aaron

    In terrestrial locomotion, gait dynamics and motor control behaviors are tuned to interact efficiently and stably with the dynamics of the terrain (i.e. terradynamics). This controlled interaction must be particularly thoughtful in bipeds, as their reduced contact points render them highly susceptible to falls. While bipedalism under rigid terrain assumptions is well-studied, insights for two-legged locomotion on soft terrain, such as sand and dirt, are comparatively sparse. We seek an understanding of how biological bipeds stably and economically negotiate granular media, with an eye toward imbuing those abilities in bipedal robots. We present a trajectory optimization method for controlled systems subject to granular intrusion. By formulating a large-scale nonlinear program (NLP) with reduced-order resistive force theory (RFT) models and jamming cone dynamics, the optimized motions are informed and shaped by the dynamics of the terrain. Using a variant of direct collocation methods, we can express all optimization objectives and constraints in closed-form, resulting in rapid solving by standard NLP solvers, such as IPOPT. We employ this tool to analyze emergent features of bipedal locomotion in granular media, with an eye toward robotic implementation.
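    The direct-collocation idea can be illustrated on a toy problem (a double integrator with trapezoidal collocation, solved here with SciPy's SLSQP rather than IPOPT, and without the granular RFT or jamming-cone terms the paper adds):

```python
import numpy as np
from scipy.optimize import minimize

# Direct collocation for a double integrator x'' = u over t in [0, 1]:
# the decision vector packs positions, velocities, and controls at N knots.
N, T = 11, 1.0
h = T / (N - 1)

def unpack(z):
    return z[:N], z[N:2*N], z[2*N:]

def cost(z):
    _, _, u = unpack(z)
    return h * np.sum(u**2)          # minimize control effort

def defects(z):
    x, v, u = unpack(z)
    # Trapezoidal collocation: each interval must satisfy the dynamics.
    dx = x[1:] - x[:-1] - h * 0.5 * (v[1:] + v[:-1])
    dv = v[1:] - v[:-1] - h * 0.5 * (u[1:] + u[:-1])
    return np.concatenate([dx, dv])

cons = [
    {"type": "eq", "fun": defects},
    # Boundary conditions: start at rest at x=0, end at rest at x=1.
    {"type": "eq", "fun": lambda z: [z[0], z[N], z[N-1] - 1.0, z[2*N-1]]},
]

res = minimize(cost, np.zeros(3*N), constraints=cons, method="SLSQP")
x_opt, v_opt, u_opt = unpack(res.x)
```

    Because every objective and constraint is in closed form, the solver sees exact structure; the paper's formulation replaces the trivial dynamics here with the reduced-order terrain-reaction models.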

  18. Can Infants Use a Nonhuman Agent's Gaze Direction to Establish Word-Object Relations?

    ERIC Educational Resources Information Center

    O'Connell, Laura; Poulin-Dubois, Diane; Demke, Tamara; Guay, Amanda

    2009-01-01

    Adopting a procedure developed with human speakers, we examined infants' ability to follow a nonhuman agent's gaze direction and subsequently to use its gaze to learn new words. When a programmable robot acted as the speaker (Experiment 1), infants followed its gaze toward the word referent whether or not it coincided with their own focus of…

  19. Six-and-a-Half-Month-Old Children Positively Attribute Goals to Human Action and to Humanoid-Robot Motion

    ERIC Educational Resources Information Center

    Kamewari, K.; Kato, M.; Kanda, T.; Ishiguro, H.; Hiraki, K.

    2005-01-01

    Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation…

  20. A Multi-Agent Approach to the Simulation of Robotized Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Foit, K.; Gwiazda, A.; Banaś, W.

    2016-08-01

    Recent years of eventful industry development have brought many competing products addressed to the same market segment. Shortening the development cycle has become a necessity for any company that wants to remain competitive. With the switch to the Intelligent Manufacturing model, industry is searching for new scheduling algorithms, as the traditional ones do not meet current requirements. The agent-based approach has been considered by many researchers as an important direction in the evolution of modern manufacturing systems. Owing to the properties of multi-agent systems, this methodology is very helpful in creating models of production systems, allowing both the processing and informational parts to be depicted. The complexity of such an approach makes analysis impossible without computer assistance. Computer simulation still uses a mathematical model to recreate a real situation, but nowadays 2D or 3D virtual environments, or even virtual reality, are used for realistic illustration of the considered systems. This paper focuses on robotized manufacturing systems and presents one possible approach to their simulation. The selection of the multi-agent approach is motivated by the flexibility of this solution, which offers modularity, robustness, and autonomy.

  1. Modelling and simulation of a robotic work cell

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Gwiazda, A.; Kost, G.; Banaś, W.

    2017-08-01

    The subject of the considerations presented in this work is the design and simulation of a robotic work cell. Designing robotic cells is the process of synergistically combining components into groups, combining these groups into specific, larger work units, or dividing large work units into smaller ones. Combinations or divisions are carried out according to the needs of realizing the objectives assumed for these units. The design process is based on an integrated approach, which allows all the necessary elements of the process to be taken into consideration. Each element of the design process can act as an independent design agent pursuing its own objectives.

  2. Finding intrinsic rewards by embodied evolution and constrained reinforcement learning.

    PubMed

    Uchibe, Eiji; Doya, Kenji

    2008-12-01

    Understanding the design principle of reward functions is a substantial challenge both in artificial intelligence and neuroscience. Successful acquisition of a task usually requires not only rewards for goals, but also for intermediate states to promote effective exploration. This paper proposes a method for designing 'intrinsic' rewards of autonomous agents by combining constrained policy gradient reinforcement learning and embodied evolution. To validate the method, we use Cyber Rodent robots, in which collision avoidance, recharging from battery packs, and 'mating' by software reproduction are three major 'extrinsic' rewards. We show in hardware experiments that the robots can find appropriate 'intrinsic' rewards for the vision of battery packs and other robots to promote approach behaviors.
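    The idea of combining extrinsic rewards with additional intrinsic bonuses can be sketched on a toy bandit (an illustrative REINFORCE variant with a hypothetical count-based novelty bonus; the authors' method instead learns intrinsic rewards via constrained policy gradients and embodied evolution):

```python
import numpy as np

# REINFORCE on a two-armed bandit with an additive intrinsic bonus:
# total reward = extrinsic reward + beta * novelty bonus (1/sqrt(visits)).
rng = np.random.default_rng(0)
theta = np.zeros(2)                 # action preferences (softmax policy)
counts = np.zeros(2)                # visit counts for the novelty bonus
extrinsic = np.array([0.1, 1.0])    # arm 1 is truly better
alpha, beta = 0.1, 0.5              # learning rate, intrinsic weight

for _ in range(2000):
    p = np.exp(theta - theta.max()); p /= p.sum()   # softmax policy
    a = rng.choice(2, p=p)
    counts[a] += 1
    r = extrinsic[a] + beta / np.sqrt(counts[a])    # shaped reward
    grad = -p; grad[a] += 1.0                       # d log pi(a) / d theta
    theta += alpha * r * grad                       # policy-gradient step
```

    The intrinsic term decays as states become familiar, so it promotes early exploration without permanently distorting the extrinsic objective.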

  3. Arm-eye coordination test to objectively quantify motor performance and muscles activation in persons after stroke undergoing robot-aided rehabilitation training: a pilot study.

    PubMed

    Song, Rong; Tong, Kai-Yu; Hu, Xiaoling; Li, Le; Sun, Rui

    2013-09-01

    This study designed an arm-eye coordination test to investigate the effectiveness of robot-aided rehabilitation for persons after stroke. Six chronic poststroke subjects were recruited to attend a 20-session robot-aided rehabilitation training of the elbow joint. Before and after the training program, subjects were asked to perform voluntary movements of elbow flexion and extension by following sinusoidal trajectories at different velocities, with visual feedback on their joint positions. The elbow angle and the electromyographic signals of the biceps and triceps, as well as clinical scores, were evaluated together with these parameters: performance was objectively quantified by root mean square error (RMSE), root mean square jerk (RMSJ), range of motion (ROM), and co-contraction index (CI). After 20 sessions, RMSE and ROM improved significantly in both the affected and the unaffected side based on two-way ANOVA (P < 0.05). There was significantly lower RMSJ in the affected side at higher velocities (P < 0.05). There was a significant negative correlation between average RMSE across tracking velocities and the Fugl-Meyer shoulder-elbow score (P < 0.05), a significant negative correlation between average RMSE and average ROM (P < 0.05), and a moderate, nonsignificant negative correlation with RMSJ and CI. The characterization of velocity-dependent deficiencies, the monitoring of training-induced improvement, and the correlation between quantitative parameters and clinical scales could enable exploration of the effects of different types of treatment and the design of progress-based training methods to accelerate recovery.
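    The tracking measures can be computed as sketched below (assumed definitions: RMSE against the target trajectory, RMSJ as the RMS of finite-difference jerk, and ROM as the angular excursion; the study may define them slightly differently):

```python
import numpy as np

def tracking_metrics(target, actual, dt):
    """Hypothetical versions of the study's tracking measures."""
    rmse = np.sqrt(np.mean((actual - target) ** 2))   # tracking error
    jerk = np.diff(actual, n=3) / dt**3               # finite-difference jerk
    rmsj = np.sqrt(np.mean(jerk ** 2))                # movement smoothness
    rom = actual.max() - actual.min()                 # range of motion (deg)
    return rmse, rmsj, rom

t = np.linspace(0.0, 4.0, 401)                        # 4 s sampled at 100 Hz
target = 45.0 + 30.0 * np.sin(2 * np.pi * 0.5 * t)    # 0.5 Hz target (deg)
actual = target                                       # perfect tracking demo
rmse, rmsj, rom = tracking_metrics(target, actual, dt=0.01)
```

    With real recordings, `actual` would be the measured elbow angle, and the co-contraction index would additionally require the biceps/triceps EMG envelopes.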

  4. Opportunistic Behavior in Motivated Learning Agents.

    PubMed

    Graham, James; Starzyk, Janusz A; Jachyra, Daniel

    2015-08-01

    This paper focuses on the novel motivated learning (ML) scheme and opportunistic behavior of an intelligent agent. It extends previously developed ML to opportunistic behavior in a multitask situation. Our paper describes the virtual world implementation of autonomous opportunistic agents learning in a dynamically changing environment, creating abstract goals, and taking advantage of arising opportunities to improve their performance. An opportunistic agent achieves better results than an agent based on ML only. It does so by minimizing the average value of all need signals rather than a dominating need. This paper applies to the design of autonomous embodied systems (robots) learning in real-time how to operate in a complex environment.

  5. In-Human Robot-Assisted Retinal Vein Cannulation, A World First.

    PubMed

    Gijbels, Andy; Smits, Jonas; Schoevaerdts, Laurent; Willekens, Koen; Vander Poorten, Emmanuel B; Stalmans, Peter; Reynaerts, Dominiek

    2018-05-24

    Retinal Vein Occlusion (RVO) is a blinding disease caused by one or more occluded retinal veins. Current treatment methods only focus on symptom mitigation rather than targeting a solution for the root cause of the disorder. Retinal vein cannulation is an experimental eye surgical procedure which could potentially cure RVO. Its goal is to dissolve the occlusion by injecting an anticoagulant directly into the blocked vein. Given the scale and the fragility of retinal veins on one end and surgeons' limited positioning precision on the other, performing this procedure manually is considered to be too risky. The authors have been developing robotic devices and instruments to assist surgeons in performing this therapy in a safe and successful manner. This work reports on the clinical translation of the technology, resulting in the world-first in-human robot-assisted retinal vein cannulation. Four RVO patients have been treated with the technology in the context of a phase I clinical trial. The results show that it is technically feasible to safely inject an anticoagulant into a [Formula: see text]-thick retinal vein of an RVO patient for a period of 10 min with the aid of the presented robotic technology and instrumentation.

  6. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence

    PubMed Central

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of the mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states, by comparing brain responses during interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances toward the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents. PMID:22586381

  7. How do we think machines think? An fMRI study of alleged competition with an artificial intelligence.

    PubMed

    Chaminade, Thierry; Rosset, Delphine; Da Fonseca, David; Nazarian, Bruno; Lutcher, Ewald; Cheng, Gordon; Deruelle, Christine

    2012-01-01

    Mentalizing is defined as the inference of the mental states of fellow humans, and is a particularly important skill for social interactions. Here we assessed whether activity in brain areas involved in mentalizing is specific to the processing of mental states or can be generalized to the inference of non-mental states, by comparing brain responses during interaction with an intentional and an artificial agent. Participants were scanned using fMRI during interactive rock-paper-scissors games while believing their opponent was a fellow human (Intentional agent, Int), a humanoid robot endowed with an artificial intelligence (Artificial agent, Art), or a computer playing randomly (Random agent, Rnd). Participants' subjective reports indicated that they adopted different stances toward the three agents. The contrast of brain activity during interaction with the artificial and the random agents did not yield any cluster at the threshold used, suggesting the absence of a reproducible stance when interacting with an artificial intelligence. We probed responses to the artificial agent in regions of interest corresponding to clusters found in the contrast between the intentional and the random agents. In the precuneus, involved in working memory, the posterior intraparietal sulcus, involved in the control of attention, and the dorsolateral prefrontal cortex, involved in executive functions, brain activity for Art was larger than for Rnd but lower than for Int, supporting the intrinsically engaging nature of social interactions. A similar pattern in the left premotor cortex and anterior intraparietal sulcus, involved in motor resonance, suggested that participants simulated human, and to a lesser extent humanoid robot, actions when playing the game. Finally, mentalizing regions, the medial prefrontal cortex and right temporoparietal junction, responded to the human only, supporting the specificity of mentalizing areas for interactions with intentional agents.

  8. Graphical user interface for a robotic workstation in a surgical environment.

    PubMed

    Bielski, A; Lohmann, C P; Maier, M; Zapp, D; Nasseri, M A

    2016-08-01

    Surgery using a robotic system has proven to have significant potential but is still a highly challenging task for the surgeon. An eye surgery assistant has been developed to eliminate the problem of tremor caused by human motion, which endangers the outcome of ophthalmic surgery. In order to exploit the full potential of the robot and improve the surgeon's workflow, the ability to change control parameters live in the system, as well as the ability to connect additional ancillary systems, is necessary. Additionally, the surgeon should always be able to get an overview of the status of all systems with a quick glance. Therefore a workstation has been built. The contribution of this paper is the design and implementation of an intuitive graphical user interface for this workstation. The interface has been designed with feedback from surgeons and technical staff in order to ensure its usability in a surgical environment. Furthermore, the system was designed with the intent of supporting additional systems with minimal additional effort.

  9. Design and evaluation of Mina: a robotic orthosis for paraplegics.

    PubMed

    Neuhaus, Peter D; Noorden, Jerryll H; Craig, Travis J; Torres, Tecalote; Kirschbaum, Justin; Pratt, Jerry E

    2011-01-01

    Mobility options for persons suffering from paraplegia or paraparesis are limited mainly to wheeled devices, and there are significant health, psychological, and social consequences of being confined to a wheelchair. We present Mina, a robotic orthosis that offers a legged mobility option for these persons. Mina is an overground robotic device, worn on the back and around the legs, that provides mobility assistance for people suffering from paraplegia or paraparesis. It uses compliant actuation to power the hip and knee joints. For paralyzed users, balance is provided with the assistance of forearm crutches. This paper presents the evaluation of Mina with two paraplegics (SCI ASIA-A). We confirmed that with a few hours of training and practice, Mina is currently able to provide paraplegics walking mobility at speeds of up to 0.20 m/s. We further confirmed that using Mina is not physically taxing and requires little cognitive effort, allowing the user to converse and maintain eye contact while walking. © 2011 IEEE

  10. Perception of artificial conspecifics by bearded dragons (Pogona vitticeps).

    PubMed

    Frohnwieser, Anna; Pike, Thomas W; Murray, John C; Wilkinson, Anna

    2018-01-09

    Artificial animals are increasingly used as conspecific stimuli in animal behavior research. However, researchers often have an incomplete understanding of how the species under study perceives conspecifics, and hence which features are needed for a stimulus to be perceived appropriately. To investigate the features to which bearded dragons (Pogona vitticeps) attend, we measured their lateralized eye use when assessing a successive range of stimuli. These ranged through several stages of realism in artificial conspecifics, to see how features such as color, the presence of eyes, body shape, and motion influence behavior. We found differences in lateralized eye use depending on the sex of the observing bearded dragon and of the artificial conspecific, as well as on the artificial conspecific's behavior. This approach can therefore inform the design of robotic animals that elicit biologically meaningful responses in live animals.

  11. Architectural design and support for knowledge sharing across heterogeneous MAST systems

    NASA Astrophysics Data System (ADS)

    Arkin, Ronald C.; Garcia-Vergara, Sergio; Lee, Sung G.

    2012-06-01

    A novel approach for the sharing of knowledge between widely heterogeneous robotic agents is presented, drawing upon Gardenfors' Conceptual Spaces approach [4]. The target microrobotic platforms are impoverished in computation, power, sensing, and communications compared to more traditional robotic platforms, due to their small size. This produces novel challenges for the system to converge on an interpretation of events within the world, in this case focusing specifically on the task of recognizing the concept of a biohazard in an indoor setting.

  12. The Effects of Level of Autonomy on Human-Agent Teaming for Multi-Robot Control and Local Security Maintenance

    DTIC Science & Technology

    2013-11-01

    different types of tasks, the associated costs of task switching also increase (Squire et al., 2006). Task switching costs may be increased with higher...switching costs as well, particularly when managing robot teams of increasing size (Squire et al., 2006). 1.3 Individual Differences The effects of...Research: Ready to Deliver the Promises. Mind 2003, 2 (3), 4. Jian, J.; Bisantz, A. M.; Drury, C. G. Foundations for an Empirically Determined Scale

  13. KSC-03PD-1586

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. The backshell is in place over the Mars Exploration Rover 1 (MER-1). The backshell is a protective cover for the rover. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  14. KSC-03PD-1578

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. Workers in the Payload Hazardous Servicing Facility prepare to lift and move the backshell that will cover the Mars Exploration Rover 1 (MER-1) and its lander. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  15. KSC-03pd1835

    NASA Image and Video Library

    2003-06-06

    KENNEDY SPACE CENTER, FLA. - Dr. Jim Garvin, Mars lead scientist at NASA Headquarters, takes part in a science briefing for the media. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans are not yet able to go. MER-A is scheduled to launch on June 8 at 2:06 p.m. EDT, with two launch opportunities each day during a launch period that closes on June 24.

  16. KSC-03PD-1584

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. Workers in the Payload Hazardous Servicing Facility lower the backshell over the Mars Exploration Rover 1 (MER-1). The backshell is a protective cover for the rover. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  17. A comprehensive overview of the applications of artificial life.

    PubMed

    Kim, Kyung-Joong; Cho, Sung-Bae

    2006-01-01

We review the applications of artificial life (ALife), the creation of synthetic life on computers to study, simulate, and understand living systems. The definition and features of ALife are illustrated through application studies. ALife application fields treated include robot control, robot manufacturing, practical robots, computer graphics, natural phenomenon modeling, entertainment, games, music, economics, the Internet, information processing, industrial design, simulation software, electronics, security, data mining, and telecommunications. In order to show the status of ALife application research, this review primarily features a survey of about 180 ALife application articles rather than a selected representation of a few articles. Evolutionary computation is the most popular method for designing such applications, but recently swarm intelligence, artificial immune networks, and agent-based modeling have also produced results. Applications were initially restricted to robotics and computer graphics, but many different applications in engineering areas are now of interest.

  18. Data management for biofied building

    NASA Astrophysics Data System (ADS)

    Matsuura, Kohta; Mita, Akira

    2015-03-01

Recently, smart houses have been studied by many researchers as a way to satisfy the individual demands of residents. However, they are not yet feasible, as they are very costly and require many sensors to be embedded into houses. Therefore, we suggest the "Biofied Building". In a Biofied Building, sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of residents' lives, such as walking posture and emotion. In this paper, a prototype network system and a data model for practical application of Biofied Buildings are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered in the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.

  19. A robotic system for researching social integration in honeybees.

    PubMed

    Griparić, Karlo; Haus, Tomislav; Miklić, Damjan; Polić, Marsela; Bogdan, Stjepan

    2017-01-01

In this paper, we present a novel robotic system developed for researching collective social mechanisms in a biohybrid society of robots and honeybees. The potential for distributed coordination, as observed in nature in many different animal species, has caused an increased interest in collective behaviour research in recent years because of its applicability to a broad spectrum of technical systems requiring robust multi-agent control. One of the main problems is understanding the mechanisms driving the emergence of collective behaviour of social animals. With the aim of deepening the knowledge in this field, we have designed a multi-robot system capable of interacting with honeybees within an experimental arena. The final product, stationary autonomous robot units, designed by specifically considering the physical, sensorimotor and behavioural characteristics of the honeybees (Lat. Apis mellifera), are equipped with sensing, actuating, computation, and communication capabilities that enable the measurement of relevant environmental states, such as honeybee presence, and an adequate response to the measurements by generating heat, vibration and airflow. The coordination among robots in the developed system is established using distributed controllers. The cooperation between the two different types of collective systems is realized by means of a consensus algorithm, enabling the honeybees and the robots to achieve a common objective. Presented results, obtained within the ASSISIbf project, show successful cooperation, indicating its potential for future applications.
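The consensus step at the core of such robot-animal coordination can be illustrated with a basic discrete-time averaging protocol. This is a generic sketch, not the ASSISIbf controller; the ring topology, gain, and initial readings are illustrative assumptions.

```python
# Minimal discrete-time consensus sketch: each stationary robot repeatedly
# averages its local estimate (e.g. sensed bee density) with its
# neighbors' estimates, so all units converge to a common value.

def consensus_step(values, neighbors, eps=0.2):
    """One synchronous update: x_i += eps * sum_j (x_j - x_i)."""
    return [
        x + eps * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

def run_consensus(values, neighbors, steps=100):
    for _ in range(steps):
        values = consensus_step(values, neighbors)
    return values

# Four robots on a ring topology with different initial readings;
# on a connected undirected graph they converge to the average (1.5).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
final = run_consensus([0.0, 1.0, 2.0, 3.0], ring)
```

Because the neighbor relation is symmetric, each update preserves the mean of the estimates, so the common value the robots agree on is the average of the initial readings.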

  20. The trade-off between morphology and control in the co-optimized design of robots.

    PubMed

    Rosendo, Andre; von Atzigen, Marco; Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques.
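The Bayesian Optimization loop described above can be sketched in a few lines. The following is a hedged, self-contained toy: a hand-rolled Gaussian-process surrogate with an upper-confidence-bound acquisition over a single design parameter. The `speed` objective is a hypothetical stand-in for a real-world locomotion trial, not the paper's robot.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and variance at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

def bayes_opt(f, iters=25, kappa=2.0, seed=0):
    """BO loop: fit surrogate, maximize UCB acquisition, evaluate, repeat."""
    rng = np.random.default_rng(seed)
    grid = np.linspace(0.0, 1.0, 200)
    X = list(rng.uniform(0.0, 1.0, 3))        # small random initial design
    y = [f(x) for x in X]
    for _ in range(iters):
        mu, var = gp_posterior(np.array(X), np.array(y), grid)
        ucb = mu + kappa * np.sqrt(var)       # upper confidence bound
        x_next = grid[int(np.argmax(ucb))]
        X.append(x_next)
        y.append(f(x_next))
    best = int(np.argmax(y))
    return X[best], y[best]

# Hypothetical objective: the design parameter x could encode a
# morphology/control choice; the return value is measured walking speed.
speed = lambda x: -(x - 0.6) ** 2 + 1.0
best_x, best_y = bayes_opt(speed)
```

In the paper's MC co-optimization, both morphology and control parameters would be dimensions of the search space; in the control-only (C) variant, the morphology dimensions are frozen and only the controller is optimized.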

  1. The trade-off between morphology and control in the co-optimized design of robots

    PubMed Central

    Iida, Fumiya

    2017-01-01

    Conventionally, robot morphologies are developed through simulations and calculations, and different control methods are applied afterwards. Assuming that simulations and predictions are simplified representations of our reality, how sure can roboticists be that the chosen morphology is the most adequate for the possible control choices in the real-world? Here we study the influence of the design parameters in the creation of a robot with a Bayesian morphology-control (MC) co-optimization process. A robot autonomously creates child robots from a set of possible design parameters and uses Bayesian Optimization (BO) to infer the best locomotion behavior from real world experiments. Then, we systematically change from an MC co-optimization to a control-only (C) optimization, which better represents the traditional way that robots are developed, to explore the trade-off between these two methods. We show that although C processes can greatly improve the behavior of poor morphologies, such agents are still outperformed by MC co-optimization results with as few as 25 iterations. Our findings, on one hand, suggest that BO should be used in the design process of robots for both morphological and control parameters to reach optimal performance, and on the other hand, point to the downfall of current design methods in face of new search techniques. PMID:29023482

  2. Efficiency Improvement of Action Acquisition in Two-Link Robot Arm Using Fuzzy ART with Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Kotani, Naoki; Taniguchi, Kenji

An efficient learning method using Fuzzy ART with a Genetic Algorithm is proposed. The proposed method reduces the number of trials by reusing a policy acquired in other tasks, since reinforcement learning requires many trials before an agent acquires appropriate actions. Fuzzy ART is an incremental unsupervised learning algorithm that responds to arbitrary sequences of analog or binary input vectors. The proposed method generates a policy by crossover or mutation when an agent observes unknown states, and selection controls the category proliferation problem of Fuzzy ART. The effectiveness of the proposed method was verified in simulations of a reaching problem for a two-link robot arm, where it reduced both the number of trials and the number of states.
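The Fuzzy ART core that the method builds on (complement coding, choice function, vigilance test, fast learning) can be sketched as follows. This is a generic textbook sketch under assumed default parameters; the GA crossover/mutation step of the proposed method is omitted.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART: incremental, unsupervised category learning."""

    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.w = []                         # one weight vector per category

    def _code(self, x):
        x = np.asarray(x, dtype=float)
        return np.concatenate([x, 1.0 - x])  # complement coding

    def train(self, x):
        I = self._code(x)
        # Choice function: T_j = |I ^ w_j| / (alpha + |w_j|)
        scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                  for w in self.w]
        for j in np.argsort(scores)[::-1]:
            match = np.minimum(I, self.w[j]).sum() / I.sum()
            if match >= self.rho:           # vigilance test passed
                self.w[j] = (self.beta * np.minimum(I, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        self.w.append(I.copy())             # no category matched: create one
        return len(self.w) - 1

art = FuzzyART(rho=0.8)
a = art.train([0.1, 0.1])
b = art.train([0.12, 0.1])   # close to the first input: same category
c = art.train([0.9, 0.9])    # distant input: new category
```

In the paper's setting, the categories would discretize the robot arm's continuous state space for the reinforcement learner; here the two-dimensional inputs are purely illustrative.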

  3. Understanding the adoption dynamics of medical innovations: affordances of the da Vinci robot in the Netherlands.

    PubMed

    Abrishami, Payam; Boer, Albert; Horstman, Klasien

    2014-09-01

This study explored the rather rapid adoption of a new surgical device - the da Vinci robot - in the Netherlands despite its high costs and controversial clinical benefits. We used the concept of 'affordances' as a conceptual-analytic tool to refer to the perceived promises, symbolic meanings, and utility values of an innovation constructed in the wider social context of use. This concept helps us empirically understand robot adoption. Data from 28 in-depth interviews with diverse purposively-sampled stakeholders, and from medical literature, policy documents, Health Technology Assessment reports, congress websites and patients' weblogs/forums between April 2009 and February 2014, were systematically analysed from the perspective of affordances. We distinguished five interrelated affordances of the robot that accounted for shaping and fulfilling its rapid adoption: 'characteristics-related' affordances such as smart nomenclature and novelty, symbolising high-tech clinical excellence; 'research-related' affordances offering medical-technical scientific excellence; 'entrepreneurship-related' affordances for performing better-than-the-competition; 'policy-related' affordances indicating the robot's liberalised provision and its reduced financial risks; and 'communication-related' affordances of the robot in shaping patients' choices and the public's expectations by resonating promising discourses while pushing uncertainties into the background. These affordances make the take-up and use of the da Vinci robot sound perfectly rational and inevitable. This Dutch case study demonstrates the fruitfulness of the affordances approach for empirically capturing the contextual dynamics of technology adoption in health care: exploring in depth actors' interaction with the technology while considering the interpretative spaces created in situations of use. This approach can best elicit the real-life value of innovations: value as defined through the eyes of (potential) users.
Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Toward Shared Working Space of Human and Robotic Agents Through Dipole Flow Field for Dependable Path Planning.

    PubMed

    Trinh, Lan Anh; Ekström, Mikael; Cürüklü, Baran

    2018-01-01

Recent industrial developments in autonomous systems, or agents, which assume that humans and the agents share the same space or even work in close proximity, open up new challenges in robotics, especially in motion planning and control. In these settings, the control system should provide such agents with reliable path-following control when they work in a group or in collaboration with one or several humans in complex and dynamic environments. In such scenarios, the agents are not only moving to reach their goals, i.e., locations; they must also be aware of the movements of other entities in order to find a collision-free path. Thus, this paper proposes a dependable, i.e., safe, reliable and effective, path planning algorithm for a group of agents that share their working space with humans. Firstly, the method employs the Theta* algorithm to initialize the paths from a starting point to a goal for a set of agents. As Theta* is computationally heavy, it is rerun only when there is a significant change in the environment. To deal with the movements of the agents, a static flow field along the configured path is defined. This field is used by the agents to navigate and reach their goals even if the planned trajectories change. Secondly, a dipole field is calculated to avoid collisions of agents with other agents and with human subjects. In this approach, each agent is assumed to be the source of a magnetic dipole field whose magnetic moment is aligned with the agent's moving direction. The magnetic dipole-dipole interactions between agents generate repulsive forces that help them avoid collision. The effectiveness of the proposed approach has been evaluated with extensive simulations. The results show that the static flow field is able to drive agents to their goals while requiring few path updates, and the dipole field plays an important role in preventing collisions. The combination of these two fields results in a safe path planning algorithm, with a deterministic outcome, that navigates agents to their desired goals.
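The dipole idea can be illustrated with a simplified 2-D sketch: each agent is treated as a point dipole whose moment points along its heading, and the field one agent induces at another's position is used as a repulsive velocity correction. The constant factors, function names, and the use of the field directly as a velocity term are illustrative assumptions; the paper's actual controller is more elaborate.

```python
import numpy as np

def dipole_field(m, r):
    """Field of a point dipole with moment m at offset r from the dipole
    (physical constants dropped): B = (3(m.rhat)rhat - m) / |r|^3."""
    dist = np.linalg.norm(r)
    rhat = r / dist
    return (3.0 * np.dot(m, rhat) * rhat - m) / dist ** 3

def avoidance_velocity(pos_i, pos_j, heading_j, gain=1.0):
    """Hypothetical repulsive velocity correction for agent i, computed
    from the dipole that agent j (moment = its heading) induces at i."""
    return gain * dipole_field(heading_j, pos_i - pos_j)

# Agent j at the origin heads in +x; agent i sits directly ahead of it.
# The induced field points away from j, pushing i out of its path.
v = avoidance_velocity(np.array([1.0, 0.0]),   # position of agent i
                       np.array([0.0, 0.0]),   # position of agent j
                       np.array([1.0, 0.0]))   # heading of agent j
```

With these numbers the correction is directed along +x, i.e. away from the approaching agent, and its magnitude falls off with the cube of the separation, so distant agents barely perturb each other.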

  5. Evidence Report, Risk of Inadequate Design of Human and Automation/Robotic Integration

    NASA Technical Reports Server (NTRS)

    Zumbado, Jennifer Rochlis; Billman, Dorrit; Feary, Mike; Green, Collin

    2011-01-01

    The success of future exploration missions depends, even more than today, on effective integration of humans and technology (automation and robotics). This will not emerge by chance, but by design. Both crew and ground personnel will need to do more demanding tasks in more difficult conditions, amplifying the costs of poor design and the benefits of good design. This report has looked at the importance of good design and the risks from poor design from several perspectives: 1) If the relevant functions needed for a mission are not identified, then designs of technology and its use by humans are unlikely to be effective: critical functions will be missing and irrelevant functions will mislead or drain attention. 2) If functions are not distributed effectively among the (multiple) participating humans and automation/robotic systems, later design choices can do little to repair this: additional unnecessary coordination work may be introduced, workload may be redistributed to create problems, limited human attentional resources may be wasted, and the capabilities of both humans and technology underused. 3) If the design does not promote accurate understanding of the capabilities of the technology, the operators will not use the technology effectively: the system may be switched off in conditions where it would be effective, or used for tasks or in contexts where its effectiveness may be very limited. 4) If an ineffective interaction design is implemented and put into use, a wide range of problems can ensue. Many involve lack of transparency into the system: operators may be unable or find it very difficult to determine a) the current state and changes of state of the automation or robot, b) the current state and changes in state of the system being controlled or acted on, and c) what actions by human or by system had what effects. 
5) If the human interfaces for operation and control of robotic agents are not designed to accommodate the unique points of view and operating environments of both the human and the robotic agent, then effective human-robot coordination cannot be achieved.

  6. Enabling private and public sector organizations as agents of homeland security

    NASA Astrophysics Data System (ADS)

    Glassco, David H. J.; Glassco, Jordan C.

    2006-05-01

Homeland security and defense applications seek to reduce the risk of undesirable eventualities across physical space in real-time. With that functional requirement in mind, our work focused on the development of IP-based agent telecommunication solutions for heterogeneous sensor / robotic intelligent "Things" that could be deployed across the internet. This paper explains how multi-organization information and device sharing alliances may be formed to enable organizations to act as agents of homeland security (in addition to other uses). Topics include: (i) using location-aware, agent-based, real-time information sharing systems to integrate business systems, mobile devices, sensor and actuator based devices and embedded devices used in physical infrastructure assets, equipment and other man-made "Things"; (ii) organization-centric real-time information sharing spaces using on-demand XML schema formatted networks; (iii) object-oriented XML serialization as a methodology for heterogeneous device glue code; (iv) how complex requirements for inter / intra organization information and device ownership and sharing, security and access control, mobility and remote communication service, tailored solution life cycle management, service QoS, service and geographic scalability and the projection of remote physical presence (through sensing and robotics) and remote informational presence (knowledge of what is going on elsewhere) can be more easily supported through feature inheritance with a rapid agent system development methodology; (v) how remote object identification and tracking can be supported across large areas; (vi) how agent synergy may be leveraged with analytics to complement heterogeneous device networks.
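The object-oriented XML serialization mentioned in (iii) can be sketched with a generic dataclass-to-XML helper. `SensorReading` and its fields are hypothetical examples for illustration, not the authors' schema.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class SensorReading:
    """Hypothetical device message: one reading from one sensor."""
    device_id: str
    kind: str
    value: float

def to_xml(obj):
    """Serialize any dataclass instance to an XML string, using the class
    name as the root tag and one child element per field."""
    root = ET.Element(type(obj).__name__)
    for f in fields(obj):
        child = ET.SubElement(root, f.name)
        child.text = str(getattr(obj, f.name))
    return ET.tostring(root, encoding="unicode")

xml = to_xml(SensorReading("cam-07", "motion", 0.87))
```

Because the field names drive the tags, heterogeneous devices can share one generic serializer instead of per-device glue code, which is the point the abstract makes.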

  7. Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola

    2004-01-01

One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying and operating robotic systems that will reach planets, planetary moons, asteroids and comets, searching for water, organic building blocks and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center, ranging from basic research on algorithms to deployed mission support systems. We will start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also used as the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we will describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We will also describe how the issue of plan robustness at execution time can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption for all possible executions of a flexible plan.

  8. Frontoparietal priority maps as biomarkers for mTBI

    DTIC Science & Technology

    2016-10-01

The hypothesis being tested is that spatial attention and eye movement deficits associated with mTBI result from disruption of the gray matter and/or the white matter in cortical… Publications, conference papers, and presentations include "Visual Attention and Eye Movement Deficits in…

  9. The Employment Effects of High-Technology: A Case Study of Machine Vision. Research Report No. 86-19.

    ERIC Educational Resources Information Center

    Chen, Kan; Stafford, Frank P.

    A case study of machine vision was conducted to identify and analyze the employment effects of high technology in general. (Machine vision is the automatic acquisition and analysis of an image to obtain desired information for use in controlling an industrial activity, such as the visual sensor system that gives eyes to a robot.) Machine vision as…

  10. Survey of Collision Avoidance and Ranging Sensors for Mobile Robots.

    DTIC Science & Technology

    1988-03-01

…systems represent a potential safety problem in that the intense and often invisible beam can be an eye hazard. Furthermore, gas lasers require high… sensor, or out of range. Conventional diffuse proximity detectors based on return signal intensity display high repeatability only when target… because the low transmission intensity of this infrared wavelength results in minimal return radiation. (The extremely cold detector produces a high…

  11. Towards soft robotic devices for site-specific drug delivery.

    PubMed

    Alici, Gursel

    2015-01-01

Considerable research efforts have recently been dedicated to the establishment of various drug delivery systems (DDS): mechanical/physical, chemical, and biological/molecular. In this paper, we report on recent advances in site-specific drug delivery (site-specific, controlled, targeted and smart drug delivery are terms used interchangeably in the literature to mean transporting a drug or therapeutic agent to a desired location within the body and releasing it as desired, with negligibly small toxicity and side effects compared to classical means of drug administration such as peroral, parenteral, transmucosal, topical and inhalation routes) based on mechanical/physical systems consisting of implantable and robotic drug delivery systems. While we specifically focus on robotic or autonomous DDS, which can be reprogrammable and provide multiple doses of a drug at a required time and rate, we briefly cover implanted DDS, which are well developed relative to robotic DDS, to highlight design and performance requirements and to investigate issues associated with robotic DDS. Critical research issues associated with both classes of DDS are presented to describe the research challenges ahead in establishing soft robotic devices for clinical and biomedical applications.

  12. Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning.

    PubMed

    Fernandez-Gauna, Borja; Etxeberria-Agiriano, Ismael; Graña, Manuel

    2015-01-01

Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out concurrently by the agents. In this paper we formalize and prove the convergence of a Distributed Round-Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs that lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
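The round-robin scheduling idea can be sketched with two independent Q-learners taking turns on a shared toy task. This omits the paper's Modular State-Action Vetoes and message-passing coordination; the chain environment and all parameters are illustrative assumptions, not the hose transportation benchmark.

```python
import random

# Two agents take turns acting on a shared 1-D chain; because only one
# agent selects and executes an action per time step (round-robin), the
# environment stays stationary from each learner's point of view.
GOAL, N_STATES, ACTIONS = 5, 6, (-1, +1)

def argmax_rand(q, s, rng):
    """Greedy action with random tie-breaking."""
    best = max(q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [{(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
         for _ in range(2)]                   # one local Q-table per agent
    for _ in range(episodes):
        s, turn = 0, 0
        while s != GOAL:
            q = Q[turn]                       # only the scheduled agent acts
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else argmax_rand(q, s, rng)) # epsilon-greedy selection
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else 0.0
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s, turn = s2, 1 - turn            # round-robin: pass the turn
    return Q

Q = train()
# Agent 0's learned greedy policy should move right (+1) in every state.
policy = [argmax_rand(Q[0], s, random.Random(1)) for s in range(GOAL)]
```

Each agent bootstraps only from its own table, so the learning processes stay independent; the turn-taking is what removes the non-stationarity that plagues fully concurrent independent learners.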

  13. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    PubMed

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

In social communication, the gaze direction of other persons provides important information for perceiving and interpreting their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact; in doing so, gaze and body direction were changed as a whole, so that the other person's gaze and body directions were always congruent (both averted or both directed). Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the reaction of the zygomaticus major muscle revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source of increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze over the one with disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  15. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

Clancey, William J.; Sierhuis, Maarten; Alena, Richard L.; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail; Shum, Simon J. Buckingham; Shadbolt, Nigel

    2007-01-01

Mobile Agents (MA) is an advanced Extra-Vehicular Activity (EVA) communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. MA is voice controlled and provides information verbally to the astronauts through programs called "personal agents." The system partly automates the role of CapCom in Apollo, including monitoring and managing navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. Data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in authentic work contexts, including six years of ethnographic observation of field geology. Analog field experiments in Utah enabled us to empirically discover requirements and test alternative technologies and protocols. We report on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to serve as a relay on the wireless network.

  16. Roles for Agent Assistants in Field Science: Understanding Personal Projects and Collaboration

    NASA Technical Reports Server (NTRS)

    Clancey, William J.

    2003-01-01

    A human-centered approach to computer systems design involves reframing analysis in terms of the people interacting with each other. The primary concern is not how people can interact with computers, but how shall we design work systems (facilities, tools, roles, and procedures) to help people pursue their personal projects, as they work independently and collaboratively? Two case studies provide empirical requirements. First, an analysis of astronaut interactions with CapCom on Earth during one traverse of Apollo 17 shows what kind of information was conveyed and what might be automated today. A variety of agent and robotic technologies are proposed that deal with recurrent problems in communication and coordination during the analyzed traverse. Second, an analysis of biologists and a geologist working at Haughton Crater in the High Canadian Arctic reveals how work interactions between people involve independent personal projects, sensitively coordinated for mutual benefit. In both cases, an agent or robotic system's role would be to assist people, rather than collaborating, because today's computer systems lack the identity and purpose that consciousness provides.

  17. Safety of medium-chain triglycerides used as an intraocular tamponading agent in an experimental vitrectomy model rabbit.

    PubMed

    Auriol, Sylvain; Mahieu, Laurence; Brousset, Pierre; Malecaze, François; Mathis, Véronique

    2013-01-01

To evaluate the safety of medium-chain triglycerides used as a possible intraocular tamponading agent. A 20-gauge pars plana vitrectomy was performed in the right eye of 28 rabbits. An ophthalmologic examination was performed every week until the rabbits were killed. At days 7, 30, 60, and 90, rabbits were killed and the treated eyes were examined macroscopically and prepared for histologic examination. The principal outcome was retinal toxicity evaluated by light and electron microscopy; secondary outcomes were the presence of medium-chain triglyceride emulsification, inflammatory reactions, and the development of cataract. Histologic examination did not reveal any retinal toxicity. Two cases of moderate emulsification were observed, but in these cases emulsification was caused by the perioperative injection of the agent and did not increase during the postoperative period. We noted 13 cases of inflammatory reaction in the vitreous cavity and no case of inflammatory reaction in the anterior chamber. Two eyes developed cataract as a result of perioperative trauma to the lens with the vitreous cutter, not secondary to the presence of medium-chain triglycerides in the vitreous cavity. Medium-chain triglycerides did not induce morphologic evidence of retinal toxicity. The results suggest that medium-chain triglycerides could be a promising alternative intraocular tamponading agent for the treatment of retinal detachments.

  18. An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.

    PubMed

    Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S

    2011-01-01

    Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by social functioning characterized by impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships, and a lack of social and emotional exchanges. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of robotics as an improved diagnostic and early intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot can track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and its eyelids (1 DOF each), raise its wings (1 DOF each), and play and record sound. These attributes will allow it to be used for the diagnosis and treatment of autism. As part of this project, the robot and its electronics and control software have been developed; integration of semi-autonomous interaction, teleoperation by a remote healthcare provider, and initial trials with children in a local clinic are in progress.

  19. Motor contagion during human-human and human-robot interaction.

    PubMed

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of "mutual understanding" that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner.

  20. Critiquing the Reasons for Making Artificial Moral Agents.

    PubMed

    van Wynsberghe, Aimee; Robbins, Scott

    2018-02-19

    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: the prevention of harm, the necessity of public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the claim that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk), coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and require answers to a host of pending questions about what counts as an AMA and whether such machines are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.

  1. Motor Contagion during Human-Human and Human-Robot Interaction

    PubMed Central

    Bisio, Ambra; Sciutti, Alessandra; Nori, Francesco; Metta, Giorgio; Fadiga, Luciano; Sandini, Giulio; Pozzo, Thierry

    2014-01-01

    Motor resonance mechanisms are known to affect humans' ability to interact with others, yielding the kind of “mutual understanding” that is the basis of social interaction. However, it remains unclear how the partner's action features combine or compete to promote or prevent motor resonance during interaction. To clarify this point, the present study tested whether and how the nature of the visual stimulus and the properties of the observed actions influence the observer's motor response, motor contagion being one of the behavioral manifestations of motor resonance. Participants observed a humanoid robot and a human agent move their hands into a pre-specified final position or put an object into a container at various velocities. Their movements, in both the object- and non-object-directed conditions, were characterized by either a smooth/curvilinear or a jerky/segmented trajectory. These trajectories were covered with biological or non-biological kinematics (the latter only by the humanoid robot). After action observation, participants were requested either to reach the indicated final position or to transport a similar object into another container. Results showed that motor contagion appeared for both interactive partners except when the humanoid robot violated the biological laws of motion. These findings suggest that the observer may transiently match his/her own motor repertoire to that of the observed agent. This matching might mediate the activation of motor resonance and modulate the spontaneity and pleasantness of the interaction, whatever the nature of the communication partner. PMID:25153990

  2. Robotic laser tissue welding of sclera using chitosan films.

    PubMed

    Garcia, Pablo; Mines, Michael J; Bower, Kraig S; Hill, J; Menon, J; Tremblay, Eric; Smith, Benjamin

    2009-01-01

    To demonstrate the feasibility of scleral wound closure using a novel adhesive made of chitosan film. Five-millimeter scleral lacerations were created in enucleated pig eyes. Cast chitosan films were sized to 7x7 mm patches. Lacerations were sealed with chitosan film alone (7 eyes) or chitosan film followed by laser irradiation using a near-infrared laser (1,455 nm) at 350 mW for 6 minutes (7 eyes). Seven eyes were closed with 9-0 nylon suture for comparison. Outcome measures included watertight closure, closure time, and leak pressure. Leak pressure was measured with a pressure transducer attached to tubing that continuously monitored intraocular pressure during saline infusion. Watertight closure testing was performed immediately following closure (n = 3 per group) and after 24 hours (n = 3 per group). One eye in each group was fixed in formalin for histology. All wounds were watertight for each closure method. Mean closure time with unlasered chitosan film was 2.24 minutes (range 1.80-3.26, 7 eyes) with a mean leak pressure of 303 mm Hg (range 217-364, 3 eyes). Mean closure time with lasered chitosan was 12.47 minutes (range 11.45-14.15, 7 eyes) with a mean leak pressure of 454.7 mm Hg (range 152-721, 3 eyes). Suture closure required a mean of 4.83 minutes (range 4.03-7.30, 7 eyes) and resulted in a mean leak pressure of 570.3 mm Hg (range 460-646, 3 eyes). Both lasered and unlasered chitosan eyes remained watertight after 24 hours. Histology revealed minimal laser tissue damage in lasered eyes. In this preliminary study, chitosan film successfully closed scleral lacerations with and without the application of laser energy. While laser irradiation appears to strengthen the closure, it significantly increases the closure time. Chitosan-based adhesives hold promise as a scleral wound closure technique.

  3. Effect of Phenylephrine on the Accommodative System

    PubMed Central

    Del Águila-Carrasco, Antonio J.; Bernal-Molina, Paula; Ferrer-Blasco, Teresa; López-Gil, Norberto; Montés-Micó, Robert

    2016-01-01

    Accommodation is controlled by the action of the ciliary muscle and mediated primarily by parasympathetic input through postganglionic fibers that originate from neurons in the ciliary and pterygopalatine ganglia. During accommodation the pupil constricts to increase the depth of focus of the eye and improve retinal image quality. Researchers have traditionally faced the challenge of measuring the accommodative properties of the eye through a small pupil and thus have relied on pharmacological agents to dilate the pupil. Achieving pupil dilation (mydriasis) without affecting the accommodative ability of the eye (cycloplegia) could be useful in many clinical and research contexts. Phenylephrine hydrochloride (PHCl) is a sympathomimetic agent that is used clinically to dilate the pupil. Nevertheless, early investigations suggested some loss of functional accommodation in the human eye after PHCl instillation. Subsequent studies, based on different measurement procedures, reached contradictory conclusions, creating a controversy that has persisted almost to the present day. This manuscript reviews and summarizes the main research studies that have analyzed the effect of PHCl on the accommodative system and provides clear conclusions that could help clinicians understand the real effects of PHCl on the accommodative system of the human eye. PMID:28053778

  4. Design and development of biomimetic quadruped robot for behavior studies of rats and mice.

    PubMed

    Ishii, Hiroyuki; Masuda, Yuichi; Miyagishima, Syunsuke; Fumino, Shogo; Takanishi, Atsuo; Laschi, Cecilia; Mazzolai, Barbara; Mattoli, Virgilio; Dario, Paolo

    2009-01-01

    This paper presents the design and development of a novel biomimetic quadruped robot for behavior studies of rats and mice. Many studies have been performed using these animals for the purpose of understanding the human mind in psychology, pharmacology, and brain science. In these fields, several experiments on social interactions have been performed using rats as basic studies of mental disorders or social learning. However, some researchers note that experiments on social interactions using animals are poorly reproducible. We therefore consider that the reproducibility of these experiments can be improved by using a robotic agent that interacts with an animal subject. Thus, we developed a small quadruped robot, WR-2 (Waseda Rat No. 2), that behaves like a real rat. The proportions and DOF arrangement of WR-2 are based on those of a mature rat. The robot has four 3-DOF legs, a 2-DOF waist, and a 1-DOF neck. A microcontroller, a wireless communication module, and a battery are mounted on board. Thus, it can walk, rear on its hind limbs, and groom its body.

  5. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.
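
    The fusion claim can be illustrated with textbook decision-fusion arithmetic. The sketch below assumes independent detectors and uses hypothetical detection and false-alarm probabilities; none of the numbers or names come from the CISS sensors.

```python
# Illustrative decision-fusion arithmetic for two independent detectors.
# All probabilities below are hypothetical, not measured CISS figures.

def or_fusion(p1: float, p2: float) -> float:
    """Declare a detection if either sensor alarms (raises detection rate)."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def and_fusion(p1: float, p2: float) -> float:
    """Declare a detection only if both sensors alarm (cuts false alarms)."""
    return p1 * p2

# Hypothetical per-sensor probabilities of detection (pd) and false alarm (pfa)
pd_swir, pfa_swir = 0.90, 0.05
pd_raman, pfa_raman = 0.85, 0.05

print(or_fusion(pd_swir, pd_raman))     # OR rule: detection rises to 0.985
print(and_fusion(pfa_swir, pfa_raman))  # AND rule: false alarms drop to 0.0025
```

    Hard OR/AND voting trades the two figures against each other; a fielded fusion system would weight the modalities' complementary evidence, but the arithmetic shows why running two concurrent techniques can beat either alone.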

  6. Coordinating Learning Agents for Active Information Collection

    DTIC Science & Technology

    2011-06-30

  7. Temporal Heterogeneity and the Value of Slowness in Robotic Systems

    DTIC Science & Technology

    2015-11-01

  8. Effects of Agent Transparency on Multi-Robot Management Effectiveness

    DTIC Science & Technology

    2015-09-01

  9. Observing Shared Attention Modulates Gaze Following

    ERIC Educational Resources Information Center

    Bockler, Anne; Knoblich, Gunther; Sebanz, Natalie

    2011-01-01

    Humans' tendency to follow others' gaze is considered to be rather resistant to top-down influences. However, recent evidence indicates that gaze following depends on prior eye contact with the observed agent. Does observing two people engaging in eye contact also modulate gaze following? Participants observed two faces looking at each other or…

  10. Using expectations to monitor robotic progress and recover from problems

    NASA Astrophysics Data System (ADS)

    Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial

    2013-05-01

    How does a robot know when something goes wrong? Our research answers this question by leveraging expectations - predictions about the immediate future - and using the mismatch between the expectations and the external world to monitor the robot's progress. We use the cognitive architecture ACT-R (Adaptive Control of Thought - Rational) to learn the associations between the current state of the robot and the world, the action to be performed in the world, and the future state of the world. These associations are used to generate expectations that are then matched by the architecture with the next state of the world. A significant mismatch between these expectations and the actual state of the world indicates a problem, possibly resulting from unexpected consequences of the robot's actions, unforeseen changes in the environment, or unanticipated actions of other agents. When a problem is detected, the recovery model can suggest a number of recovery options. If the situation is unknown, that is, if the mismatch between expectations and the world is novel, the robot can use a recovery solution from a set of heuristic options. When a recovery option is successfully applied, the robot learns to associate that recovery option with the mismatch. When the same problem is encountered later, the robot can apply the learned recovery solution rather than using the heuristics or randomly exploring the space of recovery solutions. We present results from execution monitoring and recovery performed during an assessment conducted at the Combined Arms Collective Training Facility (CACTF) at Fort Indiantown Gap.
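
    The monitor-then-recover loop this abstract describes can be sketched in a few lines of Python. This is an illustrative reimplementation of the idea, not the authors' ACT-R model; every name below is hypothetical.

```python
class ExpectationMonitor:
    """Learn (state, action) -> expected next state; flag mismatches, and
    remember which recovery option resolved each kind of mismatch."""

    def __init__(self, heuristic_options):
        self.expectations = {}        # (state, action) -> predicted next state
        self.learned_recoveries = {}  # mismatch signature -> recovery option
        self.heuristic_options = heuristic_options

    def learn(self, state, action, next_state):
        self.expectations[(state, action)] = next_state

    def check(self, state, action, observed):
        """Return None when the world matched the prediction (or there is no
        prediction yet); otherwise return a mismatch signature describing
        the problem as (expected, observed)."""
        expected = self.expectations.get((state, action))
        if expected is None or expected == observed:
            return None
        return (expected, observed)

    def recover(self, mismatch, attempt):
        """Reuse a learned recovery for a known mismatch; otherwise try the
        heuristic options and associate the first one that works."""
        if mismatch in self.learned_recoveries:
            return self.learned_recoveries[mismatch]
        for option in self.heuristic_options:
            if attempt(option):  # caller reports whether the option worked
                self.learned_recoveries[mismatch] = option
                return option
        return None  # nothing worked; caller falls back to exploration

# demo: a learned expectation, a detected mismatch, and a reused recovery
m = ExpectationMonitor(["back_up", "replan"])
m.learn("at_gate", "drive", "through_gate")
problem = m.check("at_gate", "drive", "blocked")       # mismatch signature
m.recover(problem, lambda option: option == "replan")  # learns "replan"
```

    After the first successful recovery the same mismatch maps directly to the stored option, mirroring the paper's point that the robot later applies the learned solution instead of re-running the heuristics.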

  11. KSC-03PD-1601

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. Workers attach an overhead crane to the Mars Exploration Rover 1 (MER-1) inside the upper backshell. The backshell will be moved and attached to the lower heat shield. NASA's twin Mars Exploration Rovers are designed to study the history of water on Mars. These robotic geologists are equipped with a robotic arm, a drilling tool, three spectrometers, and four pairs of cameras that allow them to have a human-like, 3D view of the terrain. Each rover could travel as far as 100 meters in one day to act as Mars scientists' eyes and hands, exploring an environment where humans can't yet go. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  12. KSC-03PD-1603

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. Workers walk with the suspended backshell/ Mars Exploration Rover 1 (MER-1) as it travels across the floor of the Payload Hazardous Servicing Facility. The backshell will be attached to the lower heat shield. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  13. Considerations for human-machine interfaces in tele-operations

    NASA Technical Reports Server (NTRS)

    Newport, Curt

    1991-01-01

    Numerous factors impact the efficiency of tele-operative manipulative work. Generally, these are related to the physical environment of the tele-operator and how he interfaces with robotic control consoles. The capabilities of the operator can be influenced by considerations such as temperature, eye strain, body fatigue, and boredom created by repetitive work tasks. In addition, the successful combination of man and machine will, in part, be determined by the configuration of the visual and physical interfaces available to the teleoperator. The design and operation of system components such as full-scale and mini-master manipulator controllers, servo joysticks, and video monitors will have a direct impact on operational efficiency. As a result, the local environment and the interaction of the operator with the robotic control console have a substantial effect on mission productivity.

  14. STS-55 German Payload Specialist Schlegel manipulates ROTEX controls in SL-D2

    NASA Technical Reports Server (NTRS)

    1993-01-01

    STS-55 German Payload Specialist 2 Hans Schlegel, wearing goggles (eye glasses) and positioned in front of Spacelab Deutsche 2 (SL-D2) Rack 4 System Rack controls, operates the Robotics Technology Experiment (ROTEX) arm. ROTEX is a robotic arm that operates within an enclosed workcell in Rack 6 (partially visible in the foreground) and uses teleoperation from both an onboard station located nearby in Rack 4 and from a station on the ground. The device uses teleprogramming and artificial intelligence to investigate the design, verification, and operation of advanced autonomous systems for use in future applications. Schlegel represents the German Aerospace Research Establishment (DLR). SL-D2, a German-managed payload, is aboard Columbia, Orbiter Vehicle (OV) 102, for this science research mission.

  15. KSC-03PD-1605

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. In the Payload Hazardous Servicing Facility, workers move the heat shield (foreground) toward the upper backshell/ Mars Exploration Rover 1 (MER-1), in the background. The backshell and heat shield will be mated. MER-1 is scheduled to launch June 25 as MER-B aboard a Delta II rocket from Cape Canaveral Air Force Station.

  16. KSC-03PD-1587

    NASA Technical Reports Server (NTRS)

    2003-01-01

    KENNEDY SPACE CENTER, FLA. A solid rocket booster arrives at Launch Complex 17-A, Cape Canaveral Air Force Station. It is one of nine that will be mated to the Delta rocket to launch Mars Exploration Rover 2. MER-2 is scheduled to launch June 5 as MER-A. MER-1 (MER-B) will launch June 25.

  17. Remote secure observing for the Faulkes Telescopes

    NASA Astrophysics Data System (ADS)

    Smith, Robert J.; Steele, Iain A.; Marchant, Jonathan M.; Fraser, Stephen N.; Mucke-Herzberg, Dorothea

    2004-09-01

    Since the Faulkes Telescopes are to be used by a wide variety of audiences, both a powerful engineering-level interface and simple graphical interfaces exist, giving complete remote and robotic control of the telescope over the internet. Security is extremely important to protect the health of both humans and equipment. Data integrity must also be carefully guarded for images being delivered directly into the classroom. The adopted network architecture is described, along with the variety of security and intrusion detection software. We use a combination of SSL, proxies, IPSec, and both Linux iptables and Cisco IOS firewalls to ensure that only authenticated and safe commands are sent to the telescopes. With an eye to a possible future global network of robotic telescopes, the system implemented is capable of scaling linearly to any moderate (of order ten) number of telescopes.
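
    A minimal sketch of the "only authenticated and safe commands" gate, using an HMAC check as a stand-in for the SSL/IPSec layer and an invented whitelist of telescope operations; none of the command names come from the Faulkes system.

```python
import hashlib
import hmac

SAFE_COMMANDS = {"point", "expose", "focus", "stow"}  # hypothetical whitelist

def sign(command: str, key: bytes) -> str:
    """Compute the sender's message authentication code for a command."""
    return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

def accept(command: str, signature: str, key: bytes) -> bool:
    """Forward a command only if the sender is authenticated AND the
    operation is on the safe list; everything else is dropped."""
    if not hmac.compare_digest(sign(command, key), signature):
        return False  # failed authentication
    parts = command.split()
    return bool(parts) and parts[0] in SAFE_COMMANDS
```

    In a layered design like the one described, this check sits behind the transport security, so an unsafe command is rejected even when it arrives over an authenticated tunnel.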

  18. Autonomous intelligent cars: proof that the EPSRC Principles are future-proof

    NASA Astrophysics Data System (ADS)

    de Cock Buning, Madeleine; de Bruin, Roeland

    2017-07-01

    Principle 2 of the EPSRC's principles of robotics (AISB workshop on Principles of Robotics, 2016) proves to be future-proof when applied to the current state of the art of law and technology surrounding autonomous intelligent cars (AICs). Humans, not AICs, are responsible agents. AICs should be designed and operated, as far as is practicable, to comply with existing laws and fundamental rights and freedoms, including privacy by design. The paper shows that some legal questions arising from autonomous intelligent driving technology can be answered by the technology itself.

  19. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other eye-in-hand visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
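
    The hierarchy the abstract describes, fast routines with stringent image assumptions backed by slower, more reliable ones, can be sketched as a fallback chain. This is an illustration of the recovery pattern only, not the paper's implementation; the routine names and the toy image representation are invented.

```python
def locate_with_recovery(routines, image):
    """Try routines from fastest/most-assumptive to slowest/most-reliable.
    A routine returns the feature position, or None when its assumptions
    about the image fail, which escalates to the next-higher level."""
    for name, routine in routines:  # ordered fast -> slow
        position = routine(image)
        if position is not None:
            return name, position
    raise RuntimeError("all levels failed: abort and re-plan the grasp")

def roi_tracker(image):
    # Fast path: assumes the strut feature stayed inside a predicted
    # 20-pixel region-of-interest window from the previous frame.
    x = image["strut_x"]
    return x if abs(x - image["predicted_x"]) < 20 else None

def full_frame_search(image):
    # Slow path: search the whole image, with no motion-prediction assumption.
    return image["strut_x"]

routines = [("roi", roi_tracker), ("full", full_frame_search)]
```

    When the region-of-interest assumption is violated (the strut moved farther than predicted), the call transparently falls back to the slower whole-image routine, which is the error-recovery behavior the abstract describes.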

  20. Automating CapCom Using Mobile Agents and Robotic Assistants

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Sierhaus, Maarten; Alena, Richard L.; Berrios, Daniel; Dowding, John; Graham, Jeffrey S.; Tyree, Kim S.; Hirsh, Robert L.; Garry, W. Brent; Semple, Abigail

    2005-01-01

    We have developed and tested an advanced EVA communications and computing system to increase astronaut self-reliance and safety, reducing dependence on continuous monitoring and advising from mission control on Earth. This system, called Mobile Agents (MA), is voice controlled and provides information verbally to the astronauts through programs called personal agents. The system partly automates the role of CapCom in Apollo, including monitoring and managing EVA navigation, scheduling, equipment deployment, telemetry, health tracking, and scientific data collection. EVA data are stored automatically in a shared database in the habitat/vehicle and mirrored to a site accessible by a remote science team. The program has been developed iteratively in the context of use, including six years of ethnographic observation of field geology. Our approach is to develop automation that supports human work practices, allowing people to do what they do well and to work in the ways most familiar to them. Field experiments in Utah have enabled empirically discovering requirements and testing alternative technologies and protocols. This paper reports on the 2004 system configuration, experiments, and results, in which an EVA robotic assistant (ERA) followed geologists approximately 150 m through a winding, narrow canyon. On voice command, the ERA took photographs and panoramas and was directed to move and wait in various locations to serve as a relay on the wireless network. The MA system is applicable to many space work situations that involve creating and navigating from maps (including configuring equipment for local topology), interacting with piloted and unpiloted rovers, adapting to environmental conditions, and remote team collaboration involving people and robots.
