Sample records for active gaze control

  1. Brain stem omnipause neurons and the control of combined eye-head gaze saccades in the alert cat.

    PubMed

    Paré, M; Guitton, D

    1998-06-01

    When the head is unrestrained, rapid displacements of the visual axis-gaze shifts (eye-re-space)-are made by coordinated movements of the eyes (eye-re-head) and head (head-re-space). To address the problem of the neural control of gaze shifts, we studied and contrasted the discharges of omnipause neurons (OPNs) during a variety of combined eye-head gaze shifts and head-fixed eye saccades executed by alert cats. OPNs discharged tonically during intersaccadic intervals and at a reduced level during slow perisaccadic gaze movements sometimes accompanying saccades. Their activity ceased for the duration of the saccadic gaze shifts the animal executed, either by head-fixed eye saccades alone or by combined eye-head movements. This was true for all types of gaze shifts studied: active movements to visual targets; passive movements induced by whole-body rotation or by head rotation about stationary body; and electrically evoked movements by stimulation of the caudal part of the superior colliculus (SC), a central structure for gaze control. For combined eye-head gaze shifts, the OPN pause was therefore not correlated to the eye-in-head trajectory. For instance, in active gaze movements, the end of the pause was better correlated with the gaze end than with either the eye saccade end or the time of eye counterrotation. The hypothesis that cat OPNs participate in controlling gaze shifts is supported by these results, and also by the observation that the movements of both the eyes and the head were transiently interrupted by stimulation of OPNs during gaze shifts. However, we found that the OPN pause could be dissociated from the gaze-motor-error signal producing the gaze shift. First, OPNs resumed discharging when perturbation of head motion briefly interrupted a gaze shift before its intended amplitude was attained. Second, stimulation of caudal SC sites in head-free cat elicited large head-free gaze shifts consistent with the creation of a large gaze-motor-error signal. However, stimulation of the same sites in head-fixed cat produced small "goal-directed" eye saccades, and OPNs paused only for the duration of the latter; neither a pause nor an eye movement occurred when the same stimulation was applied with the eyes at the goal location. We conclude that OPNs can be controlled by neither a simple eye control system nor an absolute gaze control system. Our data cannot be accounted for by existing models describing the control of combined eye-head gaze shifts and therefore put new constraints on future models, which will have to incorporate all the various signals that act synergistically to control gaze shifts.

  2. Seeing direct and averted gaze activates the approach-avoidance motivational brain systems.

    PubMed

    Hietanen, Jari K; Leppänen, Jukka M; Peltola, Mikko J; Linna-Aho, Kati; Ruuhiala, Heidi J

    2008-01-01

    Gaze direction is known to be an important factor in regulating social interaction. Recent evidence suggests that direct and averted gaze can signal the sender's motivational tendencies of approach and avoidance, respectively. We aimed at determining whether seeing another person's direct vs. averted gaze has an influence on the observer's neural approach-avoidance responses. We also examined whether it would make a difference if the participants were looking at the face of a real person or a picture. Measurements of hemispheric asymmetry in the frontal electroencephalographic activity indicated that another person's direct gaze elicited a relative left-sided frontal EEG activation (indicative of a tendency to approach), whereas averted gaze activated right-sided asymmetry (indicative of avoidance). Skin conductance responses were larger to faces than to control objects and to direct relative to averted gaze, indicating that faces, in general, and faces with direct gaze, in particular, elicited more intense autonomic activation and strength of the motivational tendencies than did control stimuli. Gaze direction also influenced subjective ratings of emotional arousal and valence. However, all these effects were observed only when participants were facing a real person, not when looking at a picture of a face. This finding was suggested to be due to the motivational responses to gaze direction being activated in the context of enhanced self-awareness by the presence of another person. The present results, thus, provide direct evidence that eye contact and gaze aversion between two persons influence the neural mechanisms regulating basic motivational-emotional responses and differentially activate the motivational approach-avoidance brain systems.

  3. Contribution of the cerebellar flocculus to gaze control during active head movements

    NASA Technical Reports Server (NTRS)

    Belton, T.; McCrea, R. A.; Peterson, B. W. (Principal Investigator)

    1999-01-01

    The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye-head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than in the parametric control of movements produced by a specific part of the body.

  4. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    PubMed

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  5. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
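
    The record above describes the control law only at a high level. As a reading aid, here is a minimal sketch of a proportional-derivative (PD) gaze controller acting directly on unit-sphere viewing directions, in the spirit of the depth-free scheme the abstract mentions; the gains, time step, and simple Euler integration are illustrative assumptions, not the paper's LMI-designed controller.

```python
import numpy as np

def pd_gaze_step(gaze, target, omega, kp=9.0, kd=6.0, dt=0.01):
    """One PD update of a gaze direction on the unit sphere.

    gaze, target : unit 3-vectors (current and desired viewing directions)
    omega        : current angular velocity of the gaze axis (rad/s)
    kp, kd, dt   : illustrative gains and time step (not the paper's LMI design)
    Returns the updated gaze direction and angular velocity.
    """
    cross = np.cross(gaze, target)
    sin_a, cos_a = np.linalg.norm(cross), np.dot(gaze, target)
    angle = np.arctan2(sin_a, cos_a)                      # angular pointing error
    axis = cross / sin_a if sin_a > 1e-9 else np.zeros(3)
    alpha = kp * angle * axis - kd * omega                # PD law: drive error to zero, damp rotation
    omega = omega + alpha * dt
    theta = np.linalg.norm(omega) * dt                    # rotate gaze by omega*dt (Rodrigues' formula)
    if theta > 1e-12:
        k = omega / np.linalg.norm(omega)
        gaze = (gaze * np.cos(theta) + np.cross(k, gaze) * np.sin(theta)
                + k * np.dot(k, gaze) * (1.0 - np.cos(theta)))
    return gaze / np.linalg.norm(gaze), omega

# Toy run: steer the optical axis from +z toward a target 30 degrees away.
gaze, omega = np.array([0.0, 0.0, 1.0]), np.zeros(3)
target = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
for _ in range(300):                                      # 3 s of simulated time
    gaze, omega = pd_gaze_step(gaze, target, omega)
print("residual error (deg):",
      np.degrees(np.arccos(np.clip(np.dot(gaze, target), -1.0, 1.0))))
```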

  6. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    NASA Astrophysics Data System (ADS)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
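
    The record names a multimodal fuzzy logic algorithm but does not spell out its inputs or rule base. Below is a minimal, generic fuzzy-inference sketch that scores a fixation as intentional or unintentional from two hypothetical features (dwell time and gaze dispersion) using simple ramp membership functions; the feature choice, membership breakpoints, and rules are assumptions for illustration, not the authors' method.

```python
def ramp_up(x, a, b):
    """Membership that is 0 below a, 1 above b, and linear in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def intentional_fixation_score(duration_ms, dispersion_deg):
    """Fuzzy score in [0, 1]; higher means the fixation looks intentional.

    duration_ms    : dwell time of the current fixation
    dispersion_deg : spatial spread of gaze samples within the fixation
    All breakpoints and rules below are illustrative, not the paper's.
    """
    # Fuzzify the two inputs
    long_dwell  = ramp_up(duration_ms, 300, 800)          # "long" dwell
    short_dwell = 1.0 - ramp_up(duration_ms, 150, 400)    # "short" dwell
    tight_gaze  = 1.0 - ramp_up(dispersion_deg, 0.3, 1.0) # "stable" gaze
    loose_gaze  = ramp_up(dispersion_deg, 0.7, 2.0)       # "wandering" gaze

    # Rule evaluation (min for AND, max for OR)
    r_intentional   = min(long_dwell, tight_gaze)         # long AND stable  -> intentional
    r_unintentional = max(short_dwell, loose_gaze)        # short OR wandering -> unintentional

    # Weighted-average defuzzification onto [0, 1]
    total = r_intentional + r_unintentional
    return 0.5 if total == 0 else r_intentional / total

print(intentional_fixation_score(900, 0.2))   # long, stable fixation -> 1.0 (intentional)
print(intentional_fixation_score(120, 2.5))   # brief, drifting gaze  -> 0.0 (unintentional)
```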

  7. Electrical stimulation of rhesus monkey nucleus reticularis gigantocellularis. II. Effects on metrics and kinematics of ongoing gaze shifts to visual targets.

    PubMed

    Freedman, Edward G; Quessy, Stephan

    2004-06-01

    Saccade kinematics are altered by ongoing head movements. The hypothesis that a head movement command signal, proportional to head velocity, transiently reduces the gain of the saccadic burst generator (Freedman 2001, Biol Cybern 84:453-462) can account for this observation. Using electrical stimulation of the rhesus monkey nucleus reticularis gigantocellularis (NRG) to alter the head contribution to ongoing gaze shifts, two critical predictions of this gaze control hypothesis were tested. First, this hypothesis predicts that activation of the head command pathway will cause a transient reduction in the gain of the saccadic burst generator. This should alter saccade kinematics by initially reducing velocity without altering saccade amplitude. Second, because this hypothesis does not assume that gaze amplitude is controlled via feedback, the added head contribution (produced by NRG stimulation on the side ipsilateral to the direction of an ongoing gaze shift) should lead to hypermetric gaze shifts. At every stimulation site tested, saccade kinematics were systematically altered in a way that was consistent with transient reduction of the gain of the saccadic burst generator. In addition, gaze shifts produced during NRG stimulation were hypermetric compared with control movements. For example, when targets were briefly flashed 30 degrees from an initial fixation location, gaze shifts during NRG stimulation were on average 140% larger than control movements. These data are consistent with the predictions of the tested hypothesis, and may be problematic for gaze control models that rely on feedback control of gaze amplitude, as well as for models that do not posit an interaction between head commands and the saccade burst generator.

  8. Look over there! Unilateral gaze increases geographical memory of the 50 United States.

    PubMed

    Propper, Ruth E; Brunyé, Tad T; Christman, Stephen D; Januszewski, Ashley

    2012-02-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the contralateral hemisphere. Compared to a no-unilateral gaze control, right gaze/left hemisphere activation resulted in better recall, demonstrating left hemisphere superiority in recall of general world knowledge and offering equivocal support for the hemispheric encoding asymmetry model of memory. Unilateral gaze, regardless of direction, improved recall of spatial, but not verbal, information. Future research could investigate the conditions under which unilateral gaze increases recall. Sustained unilateral gaze can be used as a simple, inexpensive means for testing theories of hemispheric specialization of cognitive functions. Results support an overall deficit in US geographical knowledge in undergraduate college students. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Figure-ground activity in V1 and guidance of saccadic eye movements.

    PubMed

    Supèr, Hans

    2006-01-01

    Every day we shift our gaze about 150,000 times, mostly without noticing it. The direction of these gaze shifts is not random but is directed by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculo-motor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

  10. Countermanding eye-head gaze shifts in humans: marching orders are delivered to the head first.

    PubMed

    Corneil, Brian D; Elsley, James K

    2005-07-01

    The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged approximately 120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged approximately 190 ms or approximately 70-75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were approximately 25 and 90 ms, respectively, consistent with the known physiology of the final motor paths. Further, the threshold of the head movement branch was estimated to be 85% of that required to activate gaze shifts. From these results, we conclude that a commitment to a head movement is made in advance of gaze shifts and that the comparative SSRT differences result primarily from biomechanical differences inherent to eye and head motion.
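
    For readers unfamiliar with countermanding race models, here is a rough simulation sketch of the second architecture favoured above: a single GO process races a STOP process, the head branch commits at a lower threshold (85% of the gaze threshold), and each branch has a terminal ballistic interval (about 90 ms for the head, 25 ms for gaze) during which it can no longer be cancelled. Only the threshold ratio and ballistic intervals come from the abstract; the linear-growth GO process, its rate distribution, and all other values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stop_trial(ssd_ms, stop_latency_ms=120.0, gaze_threshold=1.0,
               head_frac=0.85, ballistic_gaze_ms=25.0, ballistic_head_ms=90.0,
               mean_rate=1.0 / 150.0, rate_sd=0.25):
    """One stop trial of a single-race countermanding model (all values illustrative).

    The GO activation grows linearly at a trial-specific rate; the head branch
    commits when activation reaches head_frac * gaze_threshold, the gaze branch
    at gaze_threshold. A branch escapes cancellation if the stop process takes
    effect only after that branch has entered its ballistic (uncancellable)
    interval. Returns (head_moved, gaze_moved).
    """
    rate = max(1e-6, rng.normal(mean_rate, mean_rate * rate_sd))
    t_head = head_frac * gaze_threshold / rate      # head branch crosses its threshold
    t_gaze = gaze_threshold / rate                  # gaze branch crosses its threshold
    t_stop = ssd_ms + stop_latency_ms               # stop process takes effect
    head_moved = t_stop > t_head - ballistic_head_ms
    gaze_moved = t_stop > t_gaze - ballistic_gaze_ms
    return head_moved, gaze_moved

for ssd in (50, 100, 150):
    outcomes = [stop_trial(ssd) for _ in range(10000)]
    both = np.mean([h and g for h, g in outcomes])
    head_only = np.mean([h and not g for h, g in outcomes])
    print(f"SSD {ssd:3d} ms: full gaze shifts {both:.2f}, head-only movements {head_only:.2f}")
```

    Because the head threshold is lower and its ballistic interval longer, the model can produce head-only movements on stop trials but never gaze-only movements, matching the asymmetry reported above.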

  11. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights and traffic signs, as early as possible. If an on-vehicle camera captures images of objects relevant to traffic conditions from a long distance and shows them to the driver, the driver can understand the traffic situation earlier. To image distant objects clearly, the camera's focal length must be long; however, with a long focal length an on-vehicle camera does not have a wide enough field of view to monitor traffic conditions. Therefore, to obtain the necessary images from a long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera then captured a telescopic image and displayed it to the driver. However, in that study the driver used a touch panel to indicate the shooting direction, which can interfere with driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so that driving is not disturbed. The proposed system is composed of a gaze detector and an active telephoto camera with controllable shooting direction. We adopt a non-wearable gaze detection method to avoid hindering driving; the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images in the direction of the subject's gaze.

  12. The imitation game: Effects of social cues on 'imitation' are domain-general in nature.

    PubMed

    Marsh, Lauren E; Bird, Geoffrey; Catmur, Caroline

    2016-10-01

    Imitation has been hailed as 'social glue', facilitating rapport with others. Previous studies suggest that social cues modulate imitation but the mechanism of such modulation remains underspecified. Here we examine the locus, specificity, and neural basis of the social control of imitation. Social cues (group membership and eye gaze) were manipulated during an imitation task in which imitative and spatial compatibility could be measured independently. Participants were faster to perform compatible compared to incompatible movements in both spatial and imitative domains. However, only spatial compatibility was modulated by social cues: an interaction between group membership and eye gaze revealed more spatial compatibility for ingroup members with direct gaze and outgroup members with averted gaze. The fMRI data were consistent with this finding. Regions associated with the control of imitative responding (temporoparietal junction, inferior frontal gyrus) were more active during imitatively incompatible compared to imitatively compatible trials. However, this activity was not modulated by social cues. On the contrary, an interaction between group, gaze and spatial compatibility was found in the dorsolateral prefrontal cortex in a pattern consistent with reaction times. This region may be exerting control over the motor system to modulate response inhibition. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  13. Where We Look When We Drive with or without Active Steering Wheel Control

    PubMed Central

    Mars, Franck; Navarro, Jordan

    2012-01-01

    Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through the analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination control between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving. PMID:22928043

  14. Gaze stabilization in chronic vestibular-loss and in cerebellar ataxia: interactions of feedforward and sensory feedback mechanisms.

    PubMed

    Sağlam, M; Lehnen, N

    2014-01-01

    During gaze shifts, humans can use visual, vestibular, and proprioceptive feedback, as well as feedforward mechanisms, for stabilization against active and passive head movements. The contributions of feedforward and sensory feedback control, and the role of the cerebellum, are still under debate. To quantify these contributions, we increased the head moment of inertia in three groups (ten healthy, five chronic vestibular-loss and nine cerebellar-ataxia patients) while they performed large gaze shifts to flashed targets in darkness. This induces undesired head oscillations. Consequently, both active (desired) and passive (undesired) head movements had to be compensated for to stabilize gaze. All groups compensated for active and passive head movements, vestibular-loss patients less than the other groups (P < 0.001, passive/active compensatory gains: vestibular-loss 0.23 ± 0.09/0.43 ± 0.12, healthy 0.80 ± 0.17/0.83 ± 0.15, cerebellar-ataxia 0.68 ± 0.17/0.77 ± 0.30, mean ± SD). The compensation gain ratio against passive and active movements was smaller than one in vestibular-loss patients (0.54 ± 0.10, P=0.001). Healthy and cerebellar-ataxia patients did not differ in active and passive compensation. In summary, vestibular-loss patients can better stabilize gaze against active than against passive head movements. Therefore, feedforward mechanisms substantially contribute to gaze stabilization. Proprioception alone is not sufficient (gain 0.2). Stabilization against active and passive head movements was not impaired in our cerebellar ataxia patients.

  15. The Role of Visual and Nonvisual Information in the Control of Locomotion

    ERIC Educational Resources Information Center

    Wilkie, Richard M.; Wann, John P.

    2005-01-01

    During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled…

  16. Gaze stability, dynamic balance and participation deficits in people with multiple sclerosis at fall-risk.

    PubMed

    Garg, Hina; Dibble, Leland E; Schubert, Michael C; Sibthorp, Jim; Foreman, K Bo; Gappmaier, Eduard

    2018-05-05

    Despite common complaints of dizziness, and despite demyelination of afferent or efferent pathways to and from the vestibular nuclei that may adversely affect the angular Vestibulo-Ocular Reflex (aVOR) and vestibulo-spinal function in persons with Multiple Sclerosis (PwMS), few studies have examined gaze and dynamic balance function in PwMS. The aims were to 1) determine the differences in gaze stability, dynamic balance and participation measures between PwMS and controls, and 2) examine the relationships between gaze stability, dynamic balance and participation. Nineteen ambulatory PwMS at fall-risk and 14 age-matched controls were recruited. Outcomes included (a) gaze stability [angular Vestibulo-Ocular Reflex (aVOR) gain (ratio of eye to head velocity); number of Compensatory Saccades (CS) per head rotation; CS latency; gaze position error; Coefficient of Variation (CV) of aVOR gain], (b) dynamic balance [Functional Gait Assessment, FGA; four square step test], and (c) participation [dizziness handicap inventory; activities-specific balance confidence scale]. Separate independent t-tests and Pearson's correlations were calculated. PwMS were aged 53 ± 11.7 years and reported 4.2 ± 3.3 falls/year. PwMS demonstrated significant (p < 0.05) impairments in gaze stability, dynamic balance and participation measures compared to controls. CV of aVOR gain and CS latency were significantly correlated with FGA. Deficits and correlations across a spectrum of disability measures highlight the relevance of gaze and dynamic balance assessment in PwMS. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
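
    The gaze-stability outcomes above are simple ratios derived from recorded eye and head velocities. As a reading aid, here is a minimal sketch of how aVOR gain (eye-to-head velocity ratio) and its coefficient of variation might be computed from per-impulse velocity traces; the regression-through-origin gain estimate, sign convention, and toy numbers are assumptions, not the authors' exact procedure.

```python
import numpy as np

def avor_gain(eye_vel, head_vel):
    """aVOR gain for one head rotation: ratio of eye to head velocity.

    eye_vel, head_vel : angular-velocity samples (deg/s) for one head impulse.
    The sign is flipped because a compensatory eye movement rotates opposite
    to the head; the gain is a least-squares slope through the origin.
    """
    eye_vel = np.asarray(eye_vel, dtype=float)
    head_vel = np.asarray(head_vel, dtype=float)
    return -np.dot(eye_vel, head_vel) / np.dot(head_vel, head_vel)

def gain_cv(gains):
    """Coefficient of variation of aVOR gain across rotations (SD / mean)."""
    gains = np.asarray(gains, dtype=float)
    return np.std(gains, ddof=1) / np.mean(gains)

# Toy example: three head impulses with slightly different compensation
gains = [avor_gain(e, h) for e, h in [
    ([-95, -180, -90], [100, 200, 100]),
    ([-80, -150, -75], [100, 200, 100]),
    ([-99, -195, -98], [100, 200, 100]),
]]
print("aVOR gains:", np.round(gains, 2), " CV:", round(gain_cv(gains), 3))
```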

  17. Cervico-ocular coordination during neck rotation is distorted in people with whiplash-associated disorders.

    PubMed

    Bexander, Catharina S M; Hodges, Paul W

    2012-03-01

    People with whiplash-associated disorders (WAD) not only suffer from neck/head pain, but commonly report deficits in eye movement control. Recent work has highlighted a strong relationship between eye and neck muscle activation in pain-free subjects. It is possible that WAD may disrupt the intricate coordination between eye and neck movement. Electromyographic activity (EMG) of muscles that rotate the cervical spine to the right (left sternocleidomastoid, right obliquus capitis inferior (OI), right splenius capitis (SC) and right multifidus (MF)) was recorded in nine people with chronic WAD. Cervical rotation was performed with five gaze conditions involving different gaze directions relative to cervical rotation. The relationship between eye position/movement and neck muscle activity was contrasted with previous observations from pain-free controls. Three main differences were observed in WAD. First, the superficial muscle SC was active with both directions of cervical rotation in contrast to activity only with right rotation in pain-free controls. Second, activity of OI and MF varied between directions of cervical rotation, unlike the non-direction-specific activity in controls. Third, the effect of horizontal gaze direction on neck muscle EMG was augmented compared to controls. These observations provide evidence of redistribution of activity between neck muscles during cervical rotation and increased interaction between eye and neck muscle activity in people with WAD. These changes in cervico-ocular coordination may underlie clinical symptoms reported by people with WAD that involve visual deficits and changes in function during cervical rotation such as postural control.

  18. Getting a Grip on Social Gaze: Control over Others' Gaze Helps Gaze Detection in High-Functioning Autism

    ERIC Educational Resources Information Center

    Dratsch, Thomas; Schwartz, Caroline; Yanev, Kliment; Schilbach, Leonhard; Vogeley, Kai; Bente, Gary

    2013-01-01

    We investigated the influence of control over a social stimulus on the ability to detect direct gaze in high-functioning autism (HFA). In a pilot study, 19 participants with and 19 without HFA were compared on a gaze detection and a gaze setting task. Participants with HFA were less accurate in detecting direct gaze in the detection task, but did…

  19. Effect of direct eye contact in PTSD related to interpersonal trauma: an fMRI study of activation of an innate alarm system.

    PubMed

    Steuwe, Carolin; Daniels, Judith K; Frewen, Paul A; Densmore, Maria; Pannasch, Sebastian; Beblo, Thomas; Reiss, Jeffrey; Lanius, Ruth A

    2014-01-01

    In healthy individuals, direct eye contact initially leads to activation of a fast subcortical pathway, which then modulates a cortical route eliciting social cognitive processes. The aim of this study was to gain insight into the neurobiological effects of direct eye-to-eye contact using a virtual reality paradigm in individuals with posttraumatic stress disorder (PTSD) related to prolonged childhood abuse. We examined 16 healthy comparison subjects and 16 patients with a primary diagnosis of PTSD using a virtual reality functional magnetic resonance imaging paradigm involving direct vs averted gaze (happy, sad, neutral) as developed by Schrammel et al. in 2009. Irrespective of the displayed emotion, controls exhibited an increased blood oxygenation level-dependent response during direct vs averted gaze within the dorsomedial prefrontal cortex, left temporoparietal junction and right temporal pole. Under the same conditions, individuals with PTSD showed increased activation within the superior colliculus (SC)/periaqueductal gray (PAG) and locus coeruleus. Our findings suggest that healthy controls react to the exposure of direct gaze with an activation of a cortical route that enhances evaluative 'top-down' processes underlying social interactions. In individuals with PTSD, however, direct gaze leads to sustained activation of a subcortical route of eye-contact processing, an innate alarm system involving the SC and the underlying circuits of the PAG.

  20. Effect of direct eye contact in PTSD related to interpersonal trauma: an fMRI study of activation of an innate alarm system

    PubMed Central

    Steuwe, Carolin; Daniels, Judith K.; Frewen, Paul A.; Densmore, Maria; Pannasch, Sebastian; Beblo, Thomas; Reiss, Jeffrey; Lanius, Ruth A.

    2014-01-01

    In healthy individuals, direct eye contact initially leads to activation of a fast subcortical pathway, which then modulates a cortical route eliciting social cognitive processes. The aim of this study was to gain insight into the neurobiological effects of direct eye-to-eye contact using a virtual reality paradigm in individuals with posttraumatic stress disorder (PTSD) related to prolonged childhood abuse. We examined 16 healthy comparison subjects and 16 patients with a primary diagnosis of PTSD using a virtual reality functional magnetic resonance imaging paradigm involving direct vs averted gaze (happy, sad, neutral) as developed by Schrammel et al. in 2009. Irrespective of the displayed emotion, controls exhibited an increased blood oxygenation level-dependent response during direct vs averted gaze within the dorsomedial prefrontal cortex, left temporoparietal junction and right temporal pole. Under the same conditions, individuals with PTSD showed increased activation within the superior colliculus (SC)/periaqueductal gray (PAG) and locus coeruleus. Our findings suggest that healthy controls react to the exposure of direct gaze with an activation of a cortical route that enhances evaluative ‘top–down’ processes underlying social interactions. In individuals with PTSD, however, direct gaze leads to sustained activation of a subcortical route of eye-contact processing, an innate alarm system involving the SC and the underlying circuits of the PAG. PMID:22977200

  1. Head eye co-ordination and gaze stability in subjects with persistent whiplash associated disorders.

    PubMed

    Treleaven, Julia; Jull, Gwendolen; Grip, Helena

    2011-06-01

    Symptoms of dizziness, unsteadiness and visual disturbances are frequent complaints in persons with persistent whiplash associated disorders. This study investigated eye-head co-ordination and gaze stability in subjects with persistent whiplash (n = 20) and asymptomatic controls (n = 20). Wireless motion sensors and electro-oculography were used to measure: head rotation during unconstrained head movement, head rotation during gaze stability and sequential head and eye movements. Ten control subjects participated in a repeatability study (two occasions one week apart). Between-day repeatability was acceptable (ICC > 0.6) for most measures. The whiplash group had significantly smaller maximal eye angle to the left, a smaller range of head movement during the gaze stability task, and decreased velocity of head movement in the eye-head co-ordination and gaze stability tasks compared to the control group (p < 0.01). There were significant correlations (r > 0.55) between both unrestrained neck movement and neck pain and head movement and velocity in the whiplash group. Deficits in gaze stability and eye-head co-ordination may be related to disturbed reflex activity associated with decreased head range of motion and/or neck pain. Further research is required to explore the mechanisms behind these deficits, the nature of changes over time and the tests' ability to measure change in response to rehabilitation. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  2. Gaze-based assistive technology in daily activities in children with severe physical impairments-An intervention study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Ahlsten, Gunnar; Falkmer, Torbjörn; Hemmingsson, Helena

    2017-04-01

    To establish the impact of a gaze-based assistive technology (AT) intervention on activity repertoire, autonomous use, and goal attainment in children with severe physical impairments, and to examine parents' satisfaction with the gaze-based AT and with services related to the gaze-based AT intervention. Non-experimental multiple case study with before, after, and follow-up design. Ten children with severe physical impairments without speaking ability (aged 1-15 years) participated in gaze-based AT intervention for 9-10 months, during which period the gaze-based AT was implemented in daily activities. Repertoire of computer activities increased for seven children. All children had sustained usage of gaze-based AT in daily activities at follow-up, all had attained goals, and parents' satisfaction with the AT and with services was high. The gaze-based AT intervention was effective in guiding parents and teachers to continue supporting the children to perform activities with the AT after the intervention program.

  3. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose a design for an active vision system for intelligent robot applications. The system has degrees of freedom for pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach to determining the point of fixation from potential gaze targets is suggested, using an evaluation function that represents human visual behavior in response to outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and a phase-based method operating on binarized images is presented for extracting the vergence disparity used in vergence control. A control algorithm for vergence is discussed.
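
    The record mentions a phase-based method on binarized images for extracting vergence disparity but gives no algorithm. The sketch below uses phase correlation, one standard phase-based technique, to recover the horizontal shift between binarized left and right image patches; the patch size, binarization threshold, and whole-patch formulation are assumptions rather than the authors' implementation.

```python
import numpy as np

def vergence_disparity(left, right, threshold=0.5):
    """Estimate horizontal disparity (pixels) between two equally sized
    grayscale patches (values in [0, 1]) via phase correlation on binarized images.

    A positive result means the right image is shifted to the right.
    The binarization threshold and whole-patch correlation are illustrative.
    """
    lb = (left >= threshold).astype(float)
    rb = (right >= threshold).astype(float)
    F1, F2 = np.fft.fft2(lb), np.fft.fft2(rb)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)        # keep phase only
    corr = np.fft.ifft2(cross).real                  # phase-correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dx = peak[1]
    if dx > left.shape[1] // 2:                      # wrap to signed shift
        dx -= left.shape[1]
    return -dx

# Toy check: a random binary pattern shifted 3 pixels to the right
rng = np.random.default_rng(1)
left = rng.random((64, 64))
right = np.roll(left, 3, axis=1)
print(vergence_disparity(left, right))               # -> 3
```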

  4. Visual–Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-01-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118

  5. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    PubMed

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  6. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  7. Effect of 3,4-diaminopyridine on the postural control in patients with downbeat nystagmus.

    PubMed

    Sprenger, Andreas; Zils, Elisabeth; Rambold, Holger; Sander, Thurid; Helmchen, Christoph

    2005-04-01

    Downbeat nystagmus (DBN) is a common, usually persistent ocular motor sign in vestibulocerebellar midline lesions. Postural imbalance in DBN may increase on lateral gaze when downbeat nystagmus increases. 3,4-Diaminopyridine (3,4-DAP) has been shown to suppress the slow-phase velocity component of downbeat nystagmus and its gravity-dependent component with concomitant improvement of oscillopsia. Because the pharmacological effect is thought to be caused by improvement of the vestibulocerebellar Purkinje cell activity, the effect of 3,4-DAP on the postural control of patients with downbeat nystagmus syndrome was examined. Eye movements were recorded with the video-based Eyelink II system. Postural sway and pathway were assessed by posturography in lateral gaze in the light and on eye closure. Two out of four patients showed an improvement of the area of postural sway by 57% of control (baseline) on eye closure. In contrast, downbeat nystagmus in gaze straight ahead and on lateral gaze did not benefit in these two patients, implying a specific influence of 3,4-DAP on the vestibulocerebellar control of posture. It was concluded that 3,4-DAP may particularly influence the postural performance in patients with downbeat nystagmus.

  8. Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments

    PubMed Central

    Taya, Shuichiro; Windridge, David; Osman, Magda

    2013-01-01

    Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around ‘events’ critical for the scene context (i.e. hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147

  9. Altered activity of the primary visual area during gaze processing in individuals with high-functioning autistic spectrum disorder: a magnetoencephalography study.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki

    2013-01-01

    Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.

  10. Home-Based Computer Gaming in Vestibular Rehabilitation of Gaze and Balance Impairment.

    PubMed

    Szturm, Tony; Reimer, Karen M; Hochman, Jordan

    2015-06-01

    Disease or damage of the vestibular sense organs cause a range of distressing symptoms and functional problems that could include loss of balance, gaze instability, disorientation, and dizziness. A novel computer-based rehabilitation system with therapeutic gaming application has been developed. This method allows different gaze and head movement exercises to be coupled to a wide range of inexpensive, commercial computer games. It can be used in standing, and thus graded balance demands using a sponge pad can be incorporated into the program. A case series pre- and postintervention study was conducted of nine adults diagnosed with peripheral vestibular dysfunction who received a 12-week home rehabilitation program. The feasibility and usability of the home computer-based therapeutic program were established. Study findings revealed that using head rotation to interact with computer games, when coupled to demanding balance conditions, resulted in significant improvements in standing balance, dynamic visual acuity, gaze control, and walking performance. Perception of dizziness as measured by the Dizziness Handicap Inventory also decreased significantly. These preliminary findings provide support that a low-cost home game-based exercise program is well suited to train standing balance and gaze control (with active and passive head motion).

  11. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    PubMed

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed-up attentional resources that could be applied to concurrent cognitive tasks.

  12. Postural control and head stability during natural gaze behaviour in 6- to 12-year-old children.

    PubMed

    Schärli, A M; van de Langenberg, R; Murer, K; Müller, R M

    2013-06-01

    We investigated how the influence of natural exploratory gaze behaviour on postural control develops from childhood into adulthood. In a cross-sectional design, we compared four age groups: 6-, 9-, 12-year-olds and young adults. Two experimental trials were performed: quiet stance with a fixed gaze (fixed) and quiet stance with natural exploratory gaze behaviour (exploratory). The latter was elicited by having participants watch an animated short film on a large screen in front of them. 3D head rotations in space and centre of pressure (COP) excursions on the ground plane were measured. Across conditions, both head rotation and COP displacement decreased with increasing age. Head movement was greater in the exploratory condition in all age groups. In all children, but not in adults, COP displacement was markedly greater in the exploratory condition. Bivariate correlations across groups showed highly significant positive correlations between COP displacement in the ML direction and head rotation in yaw, roll, and pitch in both conditions. The regularity of COP displacements did not show a clear developmental trend, which indicates that COP dynamics were qualitatively similar across age groups. Together, the results suggest that the contribution of head movement to eye-head saccades decreases with age and that head instability, in part resulting from such gaze-related head movements, is an important limiting factor in children's postural control. The lack of head stabilisation might particularly affect children in everyday activities in which both postural control and visual exploration are required.

  13. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.

  14. Flexible Coordination of Stationary and Mobile Conversations with Gaze: Resource Allocation among Multiple Joint Activities

    PubMed Central

    Mayor, Eric; Bangerter, Adrian

    2016-01-01

    Gaze is instrumental in coordinating face-to-face social interactions. But little is known about gaze use when social interactions co-occur with other joint activities. We investigated the case of walking while talking. We assessed how gaze gets allocated among various targets in mobile conversations, whether allocation of gaze to other targets affects conversational coordination, and whether reduced availability of gaze for conversational coordination affects conversational performance and content. In an experimental study, pairs were videotaped in four conditions of mobility (standing still, talking while walking along a straight-line itinerary, talking while walking along a complex itinerary, or walking along a complex itinerary with no conversational task). Gaze to partners was substantially reduced in mobile conversations, but gaze was still used to coordinate conversation via displays of mutual orientation, and conversational performance and content was not different between stationary and mobile conditions. Results expand the phenomena of multitasking to joint activities. PMID:27822189

  15. ASB clinical biomechanics award winner 2016: Assessment of gaze stability within 24-48 hours post-concussion.

    PubMed

    Murray, Nicholas G; D'Amico, Nathan R; Powell, Douglas; Mormile, Megan E; Grimes, Katelyn E; Munkasy, Barry A; Gore, Russell K; Reed-Jones, Rebecca J

    2017-05-01

    Approximately 90% of athletes with concussion experience a certain degree of visual system dysfunction immediately post-concussion. Of these abnormalities, gaze stability deficits are denoted as among the most common. Little research quantitatively explores these variables post-concussion. As such, the purpose of this study was to investigate and compare gaze stability between a control group of healthy non-injured athletes and a group of athletes with concussions 24-48 hours post-injury. Ten collegiate NCAA Division I athletes with concussions and ten healthy control collegiate athletes completed two trials of a sport-like antisaccade postural control task, the Wii Fit Soccer Heading Game. During play all participants were instructed to minimize gaze deviations away from a central fixed area. Athletes with concussions were assessed within 24-48 hours post-concussion while healthy control data were collected during pre-season athletic screening. Raw ocular point of gaze coordinates were tracked with a monocular eye tracking device (240 Hz) and motion capture during the postural task to determine the instantaneous gaze coordinates. These data were exported and analyzed using a custom algorithm. Independent t-tests analyzed gaze resultant distance, prosaccade errors, mean vertical velocity, and mean horizontal velocity. Athletes with concussions had significantly greater gaze resultant distance (p=0.006), prosaccade errors (p<0.001), and horizontal velocity (p=0.029) when compared to healthy controls. These data suggest that athletes with concussions had less control of gaze during play of the Wii Fit Soccer Heading Game. This could indicate a gaze stability deficit via potentially reduced cortical inhibition that is present within 24-48 hours post-concussion. Copyright © 2017 Elsevier Ltd. All rights reserved.
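
    The "custom algorithm" behind the gaze measures is not described in the record. The sketch below shows one plausible way to compute gaze resultant distance and mean horizontal/vertical gaze velocities from instantaneous point-of-gaze coordinates sampled at 240 Hz relative to the fixed central area; the coordinate units, the choice of centre, and the exact metric definitions are assumptions.

```python
import numpy as np

FS = 240.0  # eye-tracker sampling rate (Hz)

def gaze_metrics(gx, gy, cx=0.0, cy=0.0):
    """Summary gaze-stability metrics for one trial.

    gx, gy : instantaneous point-of-gaze coordinates (e.g., degrees) at 240 Hz
    cx, cy : centre of the fixed area the athlete was told to keep looking at
    Returns the mean resultant distance from the centre and the mean absolute
    horizontal/vertical gaze velocities. Definitions are illustrative.
    """
    gx, gy = np.asarray(gx, float), np.asarray(gy, float)
    resultant = np.hypot(gx - cx, gy - cy)           # sample-wise distance from centre
    vx = np.diff(gx) * FS                            # horizontal velocity (units/s)
    vy = np.diff(gy) * FS                            # vertical velocity (units/s)
    return {
        "mean_resultant_distance": resultant.mean(),
        "mean_horizontal_velocity": np.abs(vx).mean(),
        "mean_vertical_velocity": np.abs(vy).mean(),
    }

# Toy example: 1 s of gaze that drifts slowly away from the centre
t = np.arange(0, 1, 1 / FS)
print(gaze_metrics(0.5 * t + 0.05 * np.sin(12 * t), 0.2 * t))
```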

  16. Gaze-Contingent Music Reward Therapy for Social Anxiety Disorder: A Randomized Controlled Trial.

    PubMed

    Lazarov, Amit; Pine, Daniel S; Bar-Haim, Yair

    2017-07-01

    Patients with social anxiety disorder exhibit increased attentional dwelling on social threats, providing a viable target for therapeutics. This randomized controlled trial examined the efficacy of a novel gaze-contingent music reward therapy for social anxiety disorder designed to reduce attention dwelling on threats. Forty patients with social anxiety disorder were randomly assigned to eight sessions of either gaze-contingent music reward therapy, designed to divert patients' gaze toward neutral stimuli rather than threat stimuli, or to a control condition. Clinician and self-report measures of social anxiety were acquired pretreatment, posttreatment, and at 3-month follow-up. Dwell time on socially threatening faces was assessed during the training sessions and at pre- and posttreatment. Gaze-contingent music reward therapy yielded greater reductions of symptoms of social anxiety disorder than the control condition on both clinician-rated and self-reported measures. Therapeutic effects were maintained at follow-up. Gaze-contingent music reward therapy, but not the control condition, also reduced dwell time on threat, which partially mediated clinical effects. Finally, gaze-contingent music reward therapy, but not the control condition, also altered dwell time on socially threatening faces not used in training, reflecting near-transfer training generalization. This is the first randomized controlled trial to examine a gaze-contingent intervention in social anxiety disorder. The results demonstrate target engagement and clinical effects. This study sets the stage for larger randomized controlled trials and testing in other emotional disorders.

  17. Spatiotemporal commonalities of fronto-parietal activation in attentional orienting triggered by supraliminal and subliminal gaze cues: An event-related potential study.

    PubMed

    Uono, Shota; Sato, Wataru; Sawada, Reiko; Kochiyama, Takanori; Toichi, Motomi

    2018-05-04

    Eye gaze triggers attentional shifts with and without conscious awareness. It remains unclear whether the spatiotemporal patterns of electric neural activity are the same for conscious and unconscious attentional shifts. Thus, the present study recorded event-related potentials (ERPs) and evaluated the neural activation involved in attentional orienting induced by subliminal and supraliminal gaze cues. Nonpredictive gaze cues were presented in the central field of vision, and participants were asked to detect a subsequent peripheral target. The mean reaction time was shorter for congruent gaze cues than for incongruent gaze cues under both presentation conditions, indicating that both types of cues reliably trigger attentional orienting. The ERP analysis revealed that averted versus straight gaze induced greater negative deflection in the bilateral fronto-central and temporal regions between 278 and 344 ms under both supraliminal and subliminal presentation conditions. Supraliminal cues, irrespective of gaze direction, induced a greater negative amplitude than did subliminal cues at the right posterior cortices at a peak of approximately 170 ms and in the 200-300 ms window. These results suggest that similar spatial and temporal fronto-parietal activity is involved in attentional orienting triggered by both supraliminal and subliminal gaze cues, although inputs from different visual processing routes (cortical and subcortical regions) may trigger activity in the attentional network. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    PubMed Central

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises even when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457

  19. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    PubMed

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises even when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  20. Why we interact: on the functional role of the striatum in the subjective experience of social interaction.

    PubMed

    Pfeiffer, Ulrich J; Schilbach, Leonhard; Timmermans, Bert; Kuzmanovic, Bojana; Georgescu, Alexandra L; Bente, Gary; Vogeley, Kai

    2014-11-01

    There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Neural activity in the posterior superior temporal region during eye contact perception correlates with autistic traits.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Endo, Taro; Someya, Toshiyuki

    2013-08-09

    The present study investigated the relationship between neural activity associated with gaze processing and autistic traits in typically developed subjects using magnetoencephalography. Autistic traits in 24 typically developed college students with normal intelligence were assessed using the Autism Spectrum Quotient (AQ). The Minimum Current Estimates method was applied to estimate the cortical sources of magnetic responses to gaze stimuli. These stimuli consisted of apparent motion of the eyes, displaying direct or averted gaze motion. Results revealed gaze-related brain activations in the 150-250 ms time window in the right posterior superior temporal sulcus (pSTS), and in the 150-450 ms time window in medial prefrontal regions. In addition, the mean amplitude in the 150-250 ms time window in the right pSTS region was modulated by gaze direction, and its activity in response to direct gaze stimuli correlated with AQ score. pSTS activation in response to direct gaze is thought to be related to higher-order social processes. Thus, these results suggest that brain activity linking eye contact and social signals is associated with autistic traits in a typical population. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  2. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    PubMed

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination that is only found in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  3. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    PubMed

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as part of the effort to enhance natural human-robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real time and in the real environment, for a robotic head. We formulate gaze control as a multicriteria decision-making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to the power set of the criteria to reflect the user-defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
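    A minimal sketch of the fuzzy-integral idea, not the authors' implementation: the criterion names, scores, and fuzzy measure below are invented, and only three criteria are used instead of the paper's seven. A Choquet integral aggregates the partial evaluations of each candidate gaze direction with respect to a fuzzy measure defined on subsets of criteria.

```python
# Hedged sketch of a Choquet (fuzzy) integral evaluation of candidate gaze
# directions; criteria, scores, and the fuzzy measure are invented for illustration.
def choquet(scores, measure):
    """Choquet integral of per-criterion scores w.r.t. a fuzzy measure on subsets."""
    items = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)  # descending
    total, prefix = 0.0, frozenset()
    for i, (criterion, value) in enumerate(items):
        nxt = items[i + 1][1] if i + 1 < len(items) else 0.0
        prefix = prefix | {criterion}
        total += (value - nxt) * measure[prefix]
    return total

criteria = ("visual_saliency", "sound_source", "inhibition_of_return")
# Fuzzy measure: monotone set function with g(empty set) = 0 and g(all criteria) = 1.
measure = {
    frozenset(): 0.0,
    frozenset({"visual_saliency"}): 0.5,
    frozenset({"sound_source"}): 0.4,
    frozenset({"inhibition_of_return"}): 0.2,
    frozenset({"visual_saliency", "sound_source"}): 0.8,
    frozenset({"visual_saliency", "inhibition_of_return"}): 0.6,
    frozenset({"sound_source", "inhibition_of_return"}): 0.5,
    frozenset(criteria): 1.0,
}

# Partial evaluations (0..1) of two candidate gaze directions on each criterion.
candidates = {
    "toward_speaker": {"visual_saliency": 0.6, "sound_source": 0.9, "inhibition_of_return": 0.7},
    "toward_motion": {"visual_saliency": 0.8, "sound_source": 0.2, "inhibition_of_return": 0.9},
}

global_eval = {name: choquet(s, measure) for name, s in candidates.items()}
print(global_eval, "-> chosen:", max(global_eval, key=global_eval.get))
```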

  4. Type of gesture, valence, and gaze modulate the influence of gestures on observer's behaviors

    PubMed Central

    De Stefani, Elisa; Innocenti, Alessandro; Secchi, Claudio; Papa, Veronica; Gentilucci, Maurizio

    2013-01-01

    The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In three experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects of words corresponding in meaning to the gestures on the same action. In Experiment 1, the type of gesture, valence, and actor's gaze were the investigated variables. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In the control Experiment 2 we searched for effects of gaze alone, and, in Experiment 3, for effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar, but not the same as those induced by words. We proposed that the signal activated a response which made the actual action faster when the gesture's valence was negative, whereas for request signals and available gaze the response interfered with the actual action more than for symbolic signals and unavailable gaze. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them. PMID:24046742

  5. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
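    A hedged sketch of the four-way gaze-behavior classification described above (thresholds and sample values are invented, not from the study): each gaze sample on the walking surface is labeled from the speed of the gaze point relative to the body's speed.

```python
# Illustrative classification of gaze samples into the four categories named in the
# abstract; thresholds are placeholders, speeds in cm/s.
import numpy as np

def classify_gaze(gaze_speed, body_speed,
                  fix_thresh=5.0, shift_thresh=100.0, constant_tol=0.2):
    """Return one label per sample: gaze shift, fixation, constant gaze, or slow gaze."""
    labels = np.empty(gaze_speed.shape, dtype=object)
    labels[gaze_speed <= fix_thresh] = "fixation"
    labels[gaze_speed >= shift_thresh] = "gaze shift"
    near_body = np.abs(gaze_speed - body_speed) <= constant_tol * body_speed
    rest = (gaze_speed > fix_thresh) & (gaze_speed < shift_thresh)
    labels[rest & near_body] = "constant gaze"
    labels[rest & ~near_body] = "slow gaze"
    return labels

gaze_speed = np.array([2.0, 150.0, 80.0, 30.0])   # gaze-point speed along the surface
body_speed = np.full(4, 85.0)                     # walking speed of the cat
print(classify_gaze(gaze_speed, body_speed))
```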

  6. Pointing control using a moving base of support.

    PubMed

    Hondzinski, Jan M; Kwon, Taegyong

    2009-07-01

    The purposes of this study were to determine whether gaze direction provides a control signal for movement direction for a pointing task requiring a step and to gain insight into why discrepancies previously identified in the literature for endpoint accuracy with gaze directed eccentrically exist. Straight arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

  7. Elevated amygdala response to faces and gaze aversion in autism spectrum disorder.

    PubMed

    Tottenham, Nim; Hertzig, Margaret E; Gillespie-Lynch, Kristen; Gilhooly, Tara; Millner, Alexander J; Casey, B J

    2014-01-01

    Autism spectrum disorders (ASD) are often associated with impairments in judgment of facial expressions. This impairment is often accompanied by diminished eye contact and atypical amygdala responses to face stimuli. The current study used a within-subjects design to examine the effects of natural viewing and an experimental eye-gaze manipulation on amygdala responses to faces. Individuals with ASD showed less gaze toward the eye region of faces relative to a control group. Among individuals with ASD, reduced eye gaze was associated with higher threat ratings of neutral faces. Amygdala signal was elevated in the ASD group relative to controls. This elevated response was further potentiated by experimentally manipulating gaze to the eye region. Potentiation by the gaze manipulation was largest for those individuals who exhibited the least amount of naturally occurring gaze toward the eye region and was associated with their subjective threat ratings. Effects were largest for neutral faces, highlighting the importance of examining neutral faces in the pathophysiology of autism and questioning their use as control stimuli with this population. Overall, our findings provide support for the notion that gaze direction modulates affective response to faces in ASD.

  8. Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts

    PubMed Central

    Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John

    2012-01-01

    How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883
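    The following one-dimensional sketch (with invented eye and head trajectories) illustrates the dynamic position-feedback scheme the authors favor: the eye-centered motor goal for a flashed spatial target is updated continuously from instantaneous eye-in-head and head-in-space positions, so it keeps pointing at the target throughout the intervening gaze shift.

```python
# Minimal 1-D sketch of position-feedback spatial updating; all signals are assumed.
import numpy as np

dt = 0.001                                        # 1 kHz simulation
t = np.arange(0.0, 0.3, dt)
head = 20 * np.clip(t / 0.3, 0, 1)                # head-in-space ramps from 0 to 20 deg
eye = 15 * np.clip(t / 0.15, 0, 1) - 0.3 * head   # eye-in-head saccade plus counter-rotation

target_space = 35.0                               # flashed target, fixed in space (deg)
gaze = eye + head                                 # instantaneous eye-in-space position
goal_eye_centered = target_space - gaze           # continuously updated eye-centered goal

# The updated eye-centered goal always points back at the spatial target,
# which is what accurate second gaze shifts in the double-step task require.
print("final gaze position:", round(gaze[-1], 1), "deg")
print("gaze + updated goal:", np.unique(np.round(gaze + goal_eye_centered, 6)), "deg")
```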

  9. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  10. Interactions of Neonates and Infants with Prenatal Cocaine Exposure.

    ERIC Educational Resources Information Center

    Sparks, Shirley N.; Gushurst, Colette

    1995-01-01

    The effect of prenatal cocaine exposure on gaze of neonates and recovery of gaze of 2-month old infants (n=11) was studied. Compared to nonexposed controls, cocaine-exposed neonates had shorter gaze, and 2-month-old exposed infants had longer gaze. (Author/SW)

  11. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    PubMed

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

    Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye-gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor"), were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown"), were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
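    A minimal sketch of the lag-1 sequential idea behind the analysis (the event codes and sample stream below are hypothetical): count how often one coded gaze event is immediately followed by another, e.g. DGP followed by PGD, and express each transition as a conditional probability.

```python
# Hedged illustration of lag-1 sequential counting over a coded gaze-event stream.
from collections import Counter

# Hypothetical coded event stream from one visit.
events = ["DGP", "PGD", "DGT", "PGU", "DGP", "PGD", "DGT", "PGD"]

lag1 = Counter(zip(events, events[1:]))            # counts of (event, next event) pairs
total_from = Counter(e for e, _ in zip(events, events[1:]))

for (a, b), n in sorted(lag1.items()):
    print(f"{a}-{b}: {n} transitions, conditional probability {n / total_from[a]:.2f}")
```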

  12. Functional changes of the reward system underlie blunted response to social gaze in cocaine users

    PubMed Central

    Preller, Katrin H.; Herdener, Marcus; Schilbach, Leonhard; Stämpfli, Philipp; Hulka, Lea M.; Vonmoos, Matthias; Ingold, Nina; Vogeley, Kai; Tobler, Philippe N.; Seifritz, Erich; Quednow, Boris B.

    2014-01-01

    Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants’ gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. PMID:24449854

  13. Cognitive control modulates attention to food cues: Support for the control readiness model of self-control.

    PubMed

    Kleiman, Tali; Trope, Yaacov; Amodio, David M

    2016-12-01

    Self-control in one's food choices often depends on the regulation of attention toward healthy choices and away from temptations. We tested whether selective attention to food cues can be modulated by a newly developed proactive self-control mechanism, control readiness, whereby control activated in one domain can facilitate control in another domain. In two studies, we elicited the activation of control using a color-naming Stroop task and tested its effect on attention to food cues in a subsequent, unrelated task. We found that control readiness modulates both overt attention, which involves shifts in eye gaze (Study 1), and covert attention, which involves shifts in mental attention without shifts in eye gaze (Study 2). We further demonstrated that individuals for whom tempting food cues signal a self-control problem (operationalized by relatively higher BMI) were especially likely to benefit from control readiness. We discuss the theoretical contributions of the control readiness model and the implications of our findings for enhancing proactive self-control to overcome temptation in food choices. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Speech Disfluency-dependent Amygdala Activity in Adults Who Stutter: Neuroimaging of Interpersonal Communication in MRI Scanner Environment.

    PubMed

    Toyomura, Akira; Fujii, Tetsunoshin; Yokosawa, Koichi; Kuriki, Shinya

    2018-03-15

    Affective states, such as anticipatory anxiety, critically influence speech communication behavior in adults who stutter. However, there is currently little evidence regarding the involvement of the limbic system in speech disfluency during interpersonal communication. We designed this neuroimaging study and experimental procedure to sample neural activity during interpersonal communication between human participants, and to investigate the relationship between the amygdala activity and speech disfluency. Participants were required to engage in live communication with a stranger of the opposite sex in the MRI scanner environment. In the gaze condition, the stranger gazed at the participant without speaking, while in the live conversation condition, the stranger asked questions that the participant was required to answer. The stranger continued to gaze silently at the participant while the participant answered. Adults who stutter reported significantly higher discomfort than fluent controls during the experiment. Activity in the right amygdala, a key anatomical region in the limbic system involved in emotion, was significantly correlated with stuttering occurrences in adults who stutter. Right amygdala activity from pooled data of all participants also showed a significant correlation with discomfort level during the experiment. Activity in the prefrontal cortex, which forms emotion regulation neural circuitry with the amygdala, was lower in adults who stutter than in fluent controls. This is the first study to demonstrate that amygdala activity during interpersonal communication is involved in disfluent speech in adults who stutter. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Teachers' Experiences of Using Eye Gaze-Controlled Computers for Pupils with Severe Motor Impairments and without Speech

    ERIC Educational Resources Information Center

    Rytterström, Patrik; Borgestig, Maria; Hemmingsson, Helena

    2016-01-01

    The purpose of this study is to explore teachers' experiences of using eye gaze-controlled computers with pupils with severe disabilities. Technology to control a computer with eye gaze is a fast growing field and has promising implications for people with severe disabilities. This is a new assistive technology and a new learning situation for…

  16. Gaze-evoked nystagmus induced by alcohol intoxication.

    PubMed

    Romano, Fausto; Tarnutzer, Alexander A; Straumann, Dominik; Ramat, Stefano; Bertolini, Giovanni

    2017-03-15

    The cerebellum is the core structure controlling gaze stability. Chronic cerebellar diseases and acute alcohol intoxication affect cerebellar function, inducing, among others, gaze instability as gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by increased centripetal eye-drift. It is used as an important diagnostic sign for patients with cerebellar degeneration and to assess the 'driving while intoxicated' condition. We quantified the effect of alcohol on gaze-holding using an approach allowing, for the first time, the comparison of deficits induced by alcohol intoxication and cerebellar degeneration. Our results showed that alcohol intoxication induces a two-fold increase of centripetal eye-drift. We establish analysis techniques for using controlled alcohol intake as a model to support the study of cerebellar deficits. The observed similarity between the effect of alcohol and the clinical signs observed in cerebellar patients suggests a possible pathomechanism for gaze-holding deficits. Gaze-evoked nystagmus (GEN) is an ocular-motor finding commonly observed in cerebellar disease, characterized by increased centripetal eye-drift with centrifugal correcting saccades at eccentric gaze. With cerebellar degeneration being a rare and clinically heterogeneous disease, data from patients are limited. We hypothesized that a transient inhibition of cerebellar function by defined amounts of alcohol may provide a suitable model to study gaze-holding deficits in cerebellar disease. We recorded gaze-holding at varying horizontal eye positions in 15 healthy participants before and 30 min after alcohol intake required to reach 0.6‰ blood alcohol content (BAC). Changes in ocular-motor behaviour were quantified measuring eye-drift velocity as a continuous function of gaze eccentricity over a large range (±40 deg) of horizontal gaze angles and characterized using a two-parameter tangent model. The effect of alcohol on gaze stability was assessed analysing: (1) overall effects on the gaze-holding system, (2) specific effects on each eye and (3) differences between gaze angles in the temporal and nasal hemifields. For all subjects, alcohol consumption induced gaze instability, causing a two-fold increase [2.21 (0.55), median (median absolute deviation); P = 0.002] of eye-drift velocity at all eccentricities. Results were confirmed analysing each eye and hemifield independently. The alcohol-induced transient global deficit in gaze-holding matched the pattern previously described in patients with late-onset cerebellar degeneration. Controlled intake of alcohol seems a suitable disease model to study cerebellar GEN. With alcohol resulting in global cerebellar hypofunction, we hypothesize that patients matching the gaze-holding behaviour observed here suffered from diffuse deficits in the gaze-holding system as well. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
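    An illustrative fit only, not the study's pipeline: assuming a two-parameter tangent form for drift velocity as a function of eccentricity, synthetic pre- and post-alcohol data are fitted with scipy and the slope parameters compared, mirroring the reported roughly two-fold increase.

```python
# Hedged sketch: fit a two-parameter tangent model of centripetal drift velocity
# versus gaze eccentricity to synthetic pre- and post-alcohol data.
import numpy as np
from scipy.optimize import curve_fit

def tangent_model(ecc_deg, k, c):
    """Centripetal drift velocity (deg/s) modelled as k * tan(c * eccentricity)."""
    return k * np.tan(c * np.deg2rad(ecc_deg))

rng = np.random.default_rng(2)
ecc = np.linspace(-40, 40, 81)                                        # gaze angles (deg)
pre  = tangent_model(ecc, 1.0, 0.8) + rng.normal(0, 0.1, ecc.size)    # baseline drift
post = tangent_model(ecc, 2.2, 0.8) + rng.normal(0, 0.1, ecc.size)    # after alcohol intake

(k_pre, _), _ = curve_fit(tangent_model, ecc, pre, p0=(1.0, 1.0))
(k_post, _), _ = curve_fit(tangent_model, ecc, post, p0=(1.0, 1.0))
print(f"slope parameter k: pre = {k_pre:.2f}, post = {k_post:.2f}, "
      f"ratio = {k_post / k_pre:.2f}")
```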

  17. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of the game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present the analysis of the differences as a function of the number of game units. Our results indicate that performance of gaze-contingent interaction is typically higher than mouse manipulation in a visual searching task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction. © The Author(s) 2015.
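    A tiny sketch with invented numbers of the proposed effectiveness measure: compare mean time of recognition of a game unit under mouse- and gaze-contingent control and express gaze effectiveness as the ratio of the two.

```python
# Hedged illustration of the time-of-recognition effectiveness ratio (values invented).
import numpy as np

recognition_time_mouse = np.array([1.8, 2.1, 1.9, 2.4, 2.0])   # seconds per game unit
recognition_time_gaze  = np.array([1.2, 1.3, 1.1, 1.5, 1.4])

ratio = recognition_time_mouse.mean() / recognition_time_gaze.mean()
print(f"gaze-contingent control is {ratio:.2f}x faster at unit recognition")
```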

  18. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    PubMed

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

    One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine the eye gaze patterns of social anxiety disorder patients while they presented different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Types of speech did not influence social anxiety disorder patients' gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. The current study confirms that eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of the topic.

  19. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    PubMed Central

    Klostermann, André; Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2015-01-01

    For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called “color-cueing method”, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behavior. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance with largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behavior, but they also question its benefit for enhancing decision-making. However, before completely denying the method’s value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback. PMID:26648894

  20. Speaker gaze increases information coupling between infant and adult brains.

    PubMed

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 ( n = 17), infants viewed videos of an adult who was singing nursery rhymes with ( i ) direct gaze (looking forward), ( ii ) indirect gaze (head and eyes averted by 20°), or ( iii ) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 ( n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.
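    The study quantified coupling with partial directed coherence; as a simpler, hedged stand-in for the idea of a directed (Granger-causal) influence, the sketch below tests whether a synthetic "adult" signal forecasts a synthetic "infant" signal that depends on it at a lag.

```python
# Simplified illustration, not the study's partial-directed-coherence pipeline:
# test for a Granger-causal influence of an "adult" signal on an "infant" signal,
# using synthetic data with a lag-2 dependency.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 2000
adult = rng.normal(size=n)
infant = np.zeros(n)
for t in range(2, n):          # infant activity partly driven by the adult at lag 2
    infant[t] = 0.4 * infant[t - 1] + 0.5 * adult[t - 2] + rng.normal(scale=0.5)

# Column order [effect, cause]: the test asks whether column 2 forecasts column 1.
results = grangercausalitytests(np.column_stack([infant, adult]), maxlag=3)
p_value = results[2][0]["ssr_ftest"][1]        # F-test p-value at lag 2
print(f"adult -> infant Granger causality at lag 2: p = {p_value:.3g}")
```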

  1. Speaker gaze increases information coupling between infant and adult brains

    PubMed Central

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah

    2017-01-01

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers’ and listeners’ neural activity. However, it is not known whether similar neural contingencies exist within adult–infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult–infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants’ neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult–infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. PMID:29183980

  2. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    PubMed

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
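    A hedged sketch of the reported fusion idea using synthetic features (not the study's MEG pipeline): a Gaussian process classifier on "MEG" features and a logistic regression on "gaze" features are trained separately, their predicted probabilities averaged, and the three predictors compared by AUC.

```python
# Illustrative late fusion of an MEG-based and a gaze-based relevance classifier.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 300
relevant = rng.integers(0, 2, n)                          # image relevance label
meg = rng.normal(size=(n, 5)) + 0.8 * relevant[:, None]   # synthetic MEG-derived features
gaze = rng.normal(size=(n, 2)) + 0.5 * relevant[:, None]  # synthetic gaze features (e.g. dwell)

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

gp = GaussianProcessClassifier().fit(meg[idx_train], relevant[idx_train])
lr = LogisticRegression().fit(gaze[idx_train], relevant[idx_train])

p_meg = gp.predict_proba(meg[idx_test])[:, 1]
p_gaze = lr.predict_proba(gaze[idx_test])[:, 1]
p_fused = (p_meg + p_gaze) / 2                            # simple late fusion

for name, p in [("MEG only", p_meg), ("gaze only", p_gaze), ("fused", p_fused)]:
    print(f"{name:9s} AUC = {roc_auc_score(relevant[idx_test], p):.2f}")
```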

  3. Evolution of Biological Image Stabilization.

    PubMed

    Hardcastle, Ben J; Krapp, Holger G

    2016-10-24

    The use of vision to coordinate behavior requires an efficient control design that stabilizes the world on the retina or directs the gaze towards salient features in the surroundings. With a level gaze, visual processing tasks are simplified and behaviorally relevant features from the visual environment can be extracted. No matter how simple or sophisticated the eye design, mechanisms have evolved across phyla to stabilize gaze. In this review, we describe functional similarities in eyes and gaze stabilization reflexes, emphasizing their fundamental role in transforming sensory information into motor commands that support postural and locomotor control. We then focus on gaze stabilization design in flying insects and detail some of the underlying principles. Systems analysis reveals that gaze stabilization often involves several sensory modalities, including vision itself, and makes use of feedback as well as feedforward signals. Independent of phylogenetic distance, the physical interaction between an animal and its natural environment - its available senses and how it moves - appears to shape the adaptation of all aspects of gaze stabilization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Interactions between gaze-evoked blinks and gaze shifts in monkeys.

    PubMed

    Gandhi, Neeraj J

    2012-02-01

    Rapid eyelid closure, or a blink, often accompanies head-restrained and head-unrestrained gaze shifts. This study examines the interactions between such gaze-evoked blinks and gaze shifts in monkeys. Blink probability increases with gaze amplitude and at a faster rate for head-unrestrained movements. Across animals, blink likelihood is inversely correlated with the average gaze velocity of large-amplitude control movements. Gaze-evoked blinks induce robust perturbations in eye velocity. Peak and average velocities are reduced, duration is increased, but accuracy is preserved. The temporal features of the perturbation depend on factors such as the time of blink relative to gaze onset, inherent velocity kinematics of control movements, and perhaps initial eye-in-head position. Although variable across animals, the initial effect is a reduction in eye velocity, followed by a reacceleration that yields two or more peaks in its waveform. Interestingly, head velocity is not attenuated; instead, it peaks slightly later and with a larger magnitude. Gaze latency is slightly reduced on trials with gaze-evoked blinks, although the effect was more variable during head-unrestrained movements; no reduction in head latency is observed. Preliminary data also demonstrate a similar perturbation of gaze-evoked blinks during vertical saccades. The results are compared with previously reported effects of reflexive blinks (evoked by air-puff delivered to one eye or supraorbital nerve stimulation) and discussed in terms of effects of blinks on saccadic suppression, neural correlates of the altered eye velocity signals, and implications on the hypothesis that the attenuation in eye velocity is produced by a head movement command.

  5. Contribution of the frontal eye field to gaze shifts in the head-unrestrained rhesus monkey: neuronal activity.

    PubMed

    Knight, T A

    2012-12-06

    The frontal eye field (FEF) has a strong influence on saccadic eye movements with the head restrained. With the head unrestrained, eye saccades combine with head movements to produce large gaze shifts, and microstimulation of the FEF evokes both eye and head movements. To test whether the dorsomedial FEF provides commands for the entire gaze shift or its separate eye and head components, we recorded extracellular single-unit activity in monkeys trained to make large head-unrestrained gaze shifts. We recorded 80 units active during gaze shifts, and closely examined 26 of these that discharged a burst of action potentials that preceded horizontal gaze movements. These units were movement or visuomovement related and most exhibited open movement fields with respect to amplitude. To reveal the relations of burst parameters to gaze, eye, and/or head movement metrics, we used behavioral dissociations of gaze, eye, and head movements and linear regression analyses. The burst number of spikes (NOS) was strongly correlated with movement amplitude and burst temporal parameters were strongly correlated with movement temporal metrics for eight gaze-related burst neurons and five saccade-related burst neurons. For the remaining 13 neurons, the NOS was strongly correlated with the head movement amplitude, but burst temporal parameters were most strongly correlated with eye movement temporal metrics (head-eye-related burst neurons, HEBNs). These results suggest that FEF units do not encode a command for the unified gaze shift only; instead, different units may carry signals related to the overall gaze shift or its eye and/or head components. Moreover, the HEBNs exhibit bursts whose magnitude and timing may encode a head displacement signal and a signal that influences the timing of the eye saccade, thereby serving as a mechanism for coordinating the eye and head movements of a gaze shift. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
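    An illustrative regression sketch with synthetic spike counts, in the spirit of the dissociation analysis described above: relate a unit's number of spikes per burst (NOS) to gaze, eye, and head amplitudes and see which metric it tracks best.

```python
# Hedged sketch: which movement amplitude best predicts a unit's burst size?
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_trials = 120
eye_amp = rng.uniform(5, 35, n_trials)                   # eye-in-head amplitude (deg)
head_amp = rng.uniform(0, 60, n_trials)                  # head amplitude (deg)
gaze_amp = eye_amp + head_amp                            # gaze = eye + head

# Hypothetical "head-eye-related" unit: NOS scales with head amplitude.
nos = 0.5 * head_amp + rng.normal(0, 2, n_trials)

for name, x in [("gaze", gaze_amp), ("eye", eye_amp), ("head", head_amp)]:
    r = stats.linregress(x, nos).rvalue
    print(f"NOS vs {name} amplitude: r = {r:.2f}")
```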

  6. Gazing at me: the importance of social meaning in understanding direct-gaze cues

    PubMed Central

    Hamilton, Antonia F. de C.

    2016-01-01

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. PMID:26644598

  7. Electrocortical Reflections of Face and Gaze Processing in Children with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Kemner, C.; Schuller, A-M.; Van Engeland, H.

    2006-01-01

    Background: Children with pervasive developmental disorder (PDD) show behavioral abnormalities in gaze and face processing, but recent studies have indicated that normal activation of face-specific brain areas in response to faces is possible in this group. It is not clear whether the brain activity related to gaze processing is also normal in…

  8. Gaze-based assistive technology used in daily life by children with severe physical impairments - parents' experiences.

    PubMed

    Borgestig, Maria; Rytterström, Patrik; Hemmingsson, Helena

    2017-07-01

    To describe and explore parents' experiences when their children with severe physical impairments receive gaze-based assistive technology (gaze-based AT) for use in daily life. Semi-structured interviews were conducted twice, with one year in between, with parents of eight children with cerebral palsy that used gaze-based AT in their daily activities. To understand the parents' experiences, hermeneutical interpretations were used during data analysis. The findings demonstrate that for parents, children's gaze-based AT usage meant that children demonstrated agency, provided them with opportunities to show personality and competencies, and gave children possibilities to develop. Overall, children's gaze-based AT use gives parents hope for a better future for their children with severe physical impairments; a future in which the children can develop and gain influence in life. Gaze-based AT provides children with new opportunities to perform activities and take initiatives to communicate, giving parents hope about the children's future.

  9. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.
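    A minimal sketch (synthetic gaze points, invented spreads) of one way attentional synchrony can be quantified: per-frame dispersion of viewers' gaze points around the group centroid, with lower dispersion indicating tighter clustering.

```python
# Hedged illustration of gaze dispersion as a proxy for attentional synchrony.
import numpy as np

def gaze_dispersion(points_xy):
    """RMS distance of each viewer's gaze point from the group centroid (pixels)."""
    centroid = points_xy.mean(axis=0)
    return np.sqrt(np.mean(np.sum((points_xy - centroid) ** 2, axis=1)))

rng = np.random.default_rng(6)
free_view = rng.normal([640, 360], 40, size=(20, 2))    # 20 viewers clustered on motion
spot_loc  = rng.normal([640, 360], 160, size=(20, 2))   # task spreads gaze across the scene

print("free viewing dispersion :", round(gaze_dispersion(free_view), 1))
print("spot-the-location       :", round(gaze_dispersion(spot_loc), 1))
```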

  10. FAR and NEAR Target Dynamic Visual Acuity: A Functional Assessment of Canal and Otolith Performance

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Landsness, Eric C.; Black, F. Owen; Bloomberg, Jacob J.

    2004-01-01

    Upon their return to earth, astronauts experience the effects of vestibular adaptation to microgravity. The postflight changes in vestibular information processing can affect postural and locomotor stability and may lead to oscillopsia during activities of daily living. However, it is likely that time spent in microgravity affects canal and otolith function differently. As a result, the isolated rotational stimuli used in traditional tests of canal function may fail to identify vestibular deficits after spaceflight. Also, the functional consequences of deficits that are identified often remain unknown. In a gaze control task, the relative contributions of the canal and otolith organs are modulated with viewing distance. The ability to stabilize gaze on visual targets placed at different distances from the head during a perturbation may therefore provide independent insight into the function of these systems. Our goal was to develop a functional measure of gaze control that can also offer independent information about the function of the canal and otolith organs.
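    A back-of-the-envelope sketch (illustrative numbers) of why viewing distance reweights otolith versus canal contributions: the eye rotation needed to cancel a lateral head translation grows as the target gets closer, whereas the rotation needed to cancel a head rotation is distance-independent.

```python
# Geometric illustration: compensation demand for translation vs rotation of the head.
import numpy as np

translation_cm = 1.0                       # lateral head translation
head_rotation_deg = 1.0                    # head rotation about the vertical axis

for distance_cm in (25.0, 50.0, 500.0):    # NEAR vs FAR targets
    need_translational = np.degrees(np.arctan(translation_cm / distance_cm))
    need_rotational = head_rotation_deg    # ideal rotational VOR gain of ~1
    print(f"target at {distance_cm:5.0f} cm: "
          f"translation demands {need_translational:.2f} deg, "
          f"rotation demands {need_rotational:.2f} deg")
```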

  11. Visual and somatic sensory feedback of brain activity for intuitive surgical robot manipulation.

    PubMed

    Miura, Satoshi; Matsumoto, Yuya; Kobayashi, Yo; Kawamura, Kazuya; Nakashima, Yasutaka; Fujie, Masakatsu G

    2015-01-01

    This paper presents a method to evaluate the hand-eye coordination of a master-slave surgical robot by measuring activation of the intraparietal sulcus in the user's brain while controlling a virtual manipulator. The objective is to examine changes in intraparietal sulcus activity when the user's visual or somatic feedback is passed through or interrupted. The hypothesis is that the intraparietal sulcus activates significantly when both visual and somatic feedback are passed, but deactivates when either is interrupted. The brain activity of three subjects was measured by functional near-infrared spectroscopic-topography brain imaging while they used a hand controller to move a virtual arm in a surgical simulator. The experiment was performed several times under three conditions: (i) the user controlled the virtual arm naturally, with both visual and somatic feedback passed; (ii) the user moved with closed eyes, with only somatic feedback passed; (iii) the user only gazed at the screen, with only visual feedback passed. Brain activity was significantly greater when controlling the virtual arm naturally (p < 0.05) than when moving with closed eyes or only gazing, across all participants. In conclusion, brain activation reflects the agreement between visual and somatic sensory feedback.

  12. Beliefs about human agency influence the neural processing of gaze during joint attention.

    PubMed

    Caruana, Nathan; de Lissa, Peter; McArthur, Genevieve

    2017-04-01

    The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition, n = 19); others believed it was controlled by a computer program ("agent" condition, n = 19). In each trial, participants initiated joint attention in order to direct the character's gaze toward a target. In 50% of trials, the character gazed toward the target (congruent response), and in 50% of trials the character gazed to a different location (incongruent response). In the avatar condition, the character's incongruent gaze responses generated significantly larger P350 peaks at centro-parietal sites than congruent gaze responses. In the agent condition, the P350 effect was strikingly absent. Left occipitotemporal N170 responses were significantly smaller in the agent condition compared to the avatar condition for both congruent and incongruent gaze shifts. These data suggest that beliefs about human agency may recruit mechanisms that discriminate the social outcome of a gaze shift after approximately 350 ms, and that these mechanisms may modulate the early perceptual processing of gaze. These findings also suggest that the ecologically valid measurement of social cognition may depend upon paradigms that simulate genuine social interactions.

  13. Eye-Hand Coordination during Visuomotor Adaptation with Different Rotation Angles

    PubMed Central

    Rentsch, Sebastian; Rand, Miya K.

    2014-01-01

    This study examined adaptive changes of eye-hand coordination during a visuomotor rotation task. Young adults made aiming movements to targets on a horizontal plane, while looking at the rotated feedback (cursor) of hand movements on a monitor. To vary the task difficulty, three rotation angles (30°, 75°, and 150°) were tested in three groups. All groups shortened hand movement time and trajectory length with practice. However, control strategies used were different among groups. The 30° group used proportionately more implicit adjustments of hand movements than other groups. The 75° group used more on-line feedback control, whereas the 150° group used explicit strategic adjustments. Regarding eye-hand coordination, timing of gaze shift to the target was gradually changed with practice from the late to early phase of hand movements in all groups, indicating an emerging gaze-anchoring behavior. Gaze locations prior to the gaze anchoring were also modified with practice from the cursor vicinity to an area between the starting position and the target. Reflecting various task difficulties, these changes occurred fastest in the 30° group, followed by the 75° group. The 150° group persisted in gazing at the cursor vicinity. These results suggest that the function of gaze control during visuomotor adaptation changes from a reactive control for exploring the relation between cursor and hand movements to a predictive control for guiding the hand to the task goal. That gaze-anchoring behavior emerged in all groups despite various control strategies indicates a generality of this adaptive pattern for eye-hand coordination in goal-directed actions. PMID:25333942

  14. Gaze perception in social anxiety and social anxiety disorder

    PubMed Central

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S.

    2013-01-01

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze direction and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed. PMID:24379776

  15. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  16. Eye contact with neutral and smiling faces: effects on autonomic responses and frontal EEG asymmetry

    PubMed Central

    Pönkänen, Laura M.; Hietanen, Jari K.

    2012-01-01

    In our previous studies we have shown that seeing another person “live” with a direct vs. averted gaze results in enhanced skin conductance responses (SCRs) indicating autonomic arousal and in greater relative left-sided frontal activity in the electroencephalography (asymmetry in the alpha-band power), associated with approach motivation. In our studies, however, the stimulus persons had a neutral expression. In real-life social interaction, eye contact is often associated with a smile, which is another signal of the sender's approach-related motivation. A smile could, therefore, enhance the affective-motivational responses to eye contact. In the present study, we investigated whether the facial expression (neutral vs. social smile) would modulate autonomic arousal and frontal EEG alpha-band asymmetry to seeing a direct vs. an averted gaze in faces presented “live” through a liquid crystal (LC) shutter. The results showed that the SCRs were greater for the direct than the averted gaze and that the effect of gaze direction was more pronounced for a smiling than a neutral face. However, in this study, gaze direction and facial expression did not affect the frontal EEG asymmetry, although, for gaze direction, we found a marginally significant correlation between the degree of an overall bias for asymmetric frontal activity and the degree to which direct gaze elicited stronger left-sided frontal activity than did averted gaze. PMID:22586387

  17. Social attention in children with epilepsy.

    PubMed

    Lunn, Judith; Donovan, Tim; Litchfield, Damien; Lewis, Charlie; Davies, Robert; Crawford, Trevor

    2017-04-01

    Children with epilepsy may be vulnerable to impaired social attention given the increased risk of neurobehavioural comorbidities. Social attentional orienting and the potential modulatory role of attentional control on the perceptual processing of gaze and emotion cues have not been examined in childhood onset epilepsies. Social attention mechanisms were investigated in patients with epilepsy (n=25) aged 8-18 years, and performance was compared to that of healthy controls (n=30). Dynamic gaze and emotion facial stimuli were integrated into an antisaccade eye-tracking paradigm. The time to orient attention and execute a horizontal saccade toward (prosaccade) or away from (antisaccade) a peripheral target measured processing speed of social signals under conditions of low or high attentional control. Patients with epilepsy had impaired processing speed compared to healthy controls under conditions of high attentional control only when gaze and emotions were combined meaningfully to signal motivational intent of approach (happy or angry with a direct gaze) or avoidance (fearful or sad with an averted gaze). Group differences were larger in older adolescent patients. Analyses of the discrete gaze-emotion combinations found independent effects of epilepsy-related, cognitive, and behavioural problems. A delayed disengagement from fearful gaze was also found under low attentional control; this was linked to epilepsy developmental factors and was similarly observed in patients with higher reported anxiety problems. Overall, findings indicate increased perceptual processing of developmentally relevant social motivations under increased cognitive control, and the possibility of a persistent fear-related attentional bias. This was not limited to patients with chronic epilepsy, lower IQ, or reported behavioural problems, and has implications for social and emotional development in individuals with childhood onset epilepsies beyond remission. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Gazing at me: the importance of social meaning in understanding direct-gaze cues.

    PubMed

    de C Hamilton, Antonia F

    2016-01-19

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. © 2015 The Author(s).

  19. Kinematics and eye-head coordination of gaze shifts evoked from different sites in the superior colliculus of the cat.

    PubMed

    Guillaume, Alain; Pélisson, Denis

    2006-12-15

    Shifting gaze requires precise coordination of eye and head movements. It is clear that the superior colliculus (SC) is involved with saccadic gaze shifts. Here we investigate its role in controlling both eye and head movements during gaze shifts. Gaze shifts of the same amplitude can be evoked from different SC sites by controlled electrical microstimulation. To describe how the SC coordinates the eye and the head, we compare the characteristics of these amplitude-matched gaze shifts evoked from different SC sites. We show that matched amplitude gaze shifts elicited from progressively more caudal sites are progressively slower and associated with a greater head contribution. Stimulation at more caudal SC sites decreased the peak velocity of the eye but not of the head, suggesting that the lower peak gaze velocity for the caudal sites is due to the increased contribution of the slower-moving head. Eye-head coordination across the SC motor map is also indicated by the relative latencies of the eye and head movements. For some amplitudes of gaze shift, rostral stimulation evoked eye movement before head movement, whereas this reversed with caudal stimulation, which caused the head to move before the eyes. These results show that gaze shifts of similar amplitude evoked from different SC sites are produced with different kinematics and coordination of eye and head movements. In other words, gaze shifts evoked from different SC sites follow different amplitude-velocity curves, with different eye-head contributions. These findings shed light on mechanisms used by the central nervous system to translate a high-level motor representation (a desired gaze displacement on the SC map) into motor commands appropriate for the involved body segments (the eye and the head).

  20. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes that allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following that relied on naturalistic settings have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of iris and sclera and the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    PubMed

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  2. Intact unconscious processing of eye contact in schizophrenia.

    PubMed

    Seymour, Kiley; Rhodes, Gillian; Stein, Timo; Langdon, Robyn

    2016-03-01

    The perception of eye gaze is crucial for social interaction, providing essential information about another person's goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance, patients have shown an increased bias to misjudge averted gaze as being directed toward them. In this study we probed early unconscious mechanisms of gaze processing in schizophrenia using a technique known as continuous flash suppression. Previous research using this technique to render faces with direct and averted gaze initially invisible reveals that direct eye contact gains privileged access to conscious awareness in healthy adults. We found that patients, as with healthy control subjects, showed the same effect: faces with direct eye gaze became visible significantly faster than faces with averted gaze. This suggests that early unconscious processing of eye gaze is intact in schizophrenia and implies that any misjudgments of gaze direction must manifest at a later conscious stage of gaze processing where deficits and/or biases in attributing mental states to gaze and/or beliefs about being watched may play a role.

  3. Object tracking with stereo vision

    NASA Technical Reports Server (NTRS)

    Huber, Eric

    1994-01-01

    A real-time active stereo vision system incorporating gaze control and task directed vision is described. Emphasis is placed on object tracking and object size and shape determination. Techniques include motion-centroid tracking, depth tracking, and contour tracking.
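
    Of the techniques listed, motion-centroid tracking is the simplest to illustrate: difference two frames and steer gaze toward the centroid of the changed pixels. A toy sketch (not the original system's implementation, whose details are not given here):

    import numpy as np

    def motion_centroid(prev_frame, curr_frame, threshold=25):
        """Return the (row, col) centroid of pixels that changed by more than
        `threshold` grey levels between two frames, or None if nothing moved.
        A toy version of motion-centroid tracking; the gaze controller would
        then drive the cameras toward this point.
        """
        diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if len(xs) == 0:
            return None
        return float(ys.mean()), float(xs.mean())

    # Toy example: a bright blob moves two columns to the right
    prev = np.zeros((8, 8), dtype=np.uint8); prev[3:5, 2:4] = 200
    curr = np.zeros((8, 8), dtype=np.uint8); curr[3:5, 4:6] = 200
    print(motion_centroid(prev, curr))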

  4. Does gaze cueing produce automatic response activation: a lateralized readiness potential (LRP) study.

    PubMed

    Vainio, L; Heimola, M; Heino, H; Iljin, I; Laamanen, P; Seesjärvi, E; Paavilainen, P

    2014-05-01

    Previous research has shown that gaze cues facilitate responses to an upcoming target if the target location is compatible with the direction of the cue. Similar cueing effects have also been observed with central arrow cues. Both of these cueing effects have been attributed to a reflexive orienting of attention triggered by the cue. In addition, orienting of attention has been proposed to result in a partial response activation of the corresponding hand that, in turn, can be observed in the lateralized readiness potential (LRP), an electrophysiological indicator of automatic hand-motor response preparation. For instance, a central arrow cue has been observed to produce automatic hand-motor activation as indicated by the LRPs. The present study investigated whether gaze cues could also produce similar activation patterns in LRP. Although the standard gaze cueing effect was observed in the behavioural data, the LRP data did not reveal any consistent automatic hand-motor activation. The study suggests that motor processes associated with gaze cueing effect may operate exclusively at the level of oculomotor programming. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts.

    PubMed

    Wilson, Mark; McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-10-01

    Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed.

  6. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts

    PubMed Central

    McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-01-01

    Background Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. Methods A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. Results The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. Conclusion The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed. PMID:20333405

  7. Coordination of eye and head components of movements evoked by stimulation of the paramedian pontine reticular formation.

    PubMed

    Gandhi, Neeraj J; Barton, Ellen J; Sparks, David L

    2008-07-01

    Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually-guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually-guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF stimulation-evoked gaze shifts differently.

  8. A review of adaptive change in musculoskeletal impedance during space flight and associated implications for postflight head movement control

    NASA Technical Reports Server (NTRS)

    McDonald, P. V.; Bloomberg, J. J.; Layne, C. S.

    1997-01-01

    We present a review of converging sources of evidence which suggest that the differences between loading histories experienced in 1-g and weightlessness are sufficient to stimulate adaptation in mechanical impedance of the musculoskeletal system. As a consequence of this adaptive change we argue that we should observe changes in the ability to attenuate force transmission through the musculoskeletal system both during and after space flight. By focusing attention on the relation between human sensorimotor activity and support surfaces, the importance of controlling mechanical energy flow through the musculoskeletal system is demonstrated. The implications of such control are discussed in light of visual-vestibular function in the specific context of head and gaze control during postflight locomotion. Evidence from locomotory biomechanics, visual-vestibular function, ergonomic evaluations of human vibration, and specific investigations of locomotion and head and gaze control after space flight, is considered.

  9. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
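
    The core tradeoff described above, detection rate versus the intrinsic cost of making eye movements, can be illustrated with a deliberately simplified model: events are visible for a short window, each check costs something, and the observer picks the checking interval that maximizes expected value. All parameter values below are assumed for illustration; the published model additionally includes timing uncertainty and Bayesian learning:

    import numpy as np

    T = 10.0      # trial length (s)        -- assumed value
    w = 0.8       # event visibility window -- assumed value
    reward = 1.0  # value of one detection  -- assumed value
    cost = 0.02   # cost per saccade/check  -- assumed value

    intervals = np.linspace(0.1, 5.0, 200)      # candidate checking intervals
    p_detect = np.minimum(w / intervals, 1.0)   # chance a check lands in the window
    utility = reward * p_detect - cost * (T / intervals)

    best = intervals[np.argmax(utility)]
    print(f"best checking interval ~ {best:.2f} s")

    In this toy version the optimum settles near the event's visibility window: checking more often wastes saccades, checking less often misses events.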

  10. Hierarchical control of two-dimensional gaze saccades

    PubMed Central

    Optican, Lance M.; Blohm, Gunnar; Lefèvre, Philippe

    2014-01-01

    Coordinating the movements of different body parts is a challenging process for the central nervous system because of several problems. Four of the main difficulties are: first, moving one part can move others; second, the parts can have different dynamics; third, some parts can have different motor goals; and fourth, some parts may be perturbed by outside forces. Here, we propose a novel approach for the control of linked systems with feedback loops for each part. The proximal parts have separate goals, but critically the most distal part has only the common goal. We apply this new control policy to eye-head coordination in two dimensions, specifically head-unrestrained gaze saccades. Paradoxically, the hierarchical structure has controllers for the gaze and the head, but not for the eye (the most distal part). Our simulations demonstrate that the proposed control structure reproduces much of the published empirical data about gaze movements, e.g., it compensates for perturbations, accurately reaches goals for gaze and head from arbitrary initial positions, simulates the nine relationships of the head-unrestrained main sequence, and reproduces observations from lesion and single-unit recording experiments. We conclude by showing how our model can be easily extended to control structures with more linked segments, such as the control of coordinated eye on head on trunk movements. PMID:24062206
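
    The abstract's central architectural claim, feedback controllers for gaze and head but none for the eye, can be sketched in one dimension: the gaze loop issues a velocity command, the head loop follows its own goal, and the eye simply receives the residual. Gains, dynamics, and goals below are illustrative assumptions, not the published model's parameters:

    # 1-D toy of the hierarchical scheme described in the abstract: separate
    # feedback loops drive gaze and head; the eye receives only the residual
    # (gaze velocity command minus actual head velocity).
    dt, steps = 0.001, 400
    goal_gaze, goal_head = 40.0, 30.0        # degrees (assumed goals)
    k_gaze, k_head = 60.0, 12.0              # feedback gains (assumed)

    eye = head = 0.0
    for _ in range(steps):
        gaze = eye + head                    # gaze = eye-in-head + head-in-space
        gaze_vel_cmd = k_gaze * (goal_gaze - gaze)
        head_vel = k_head * (goal_head - head)
        eye_vel = gaze_vel_cmd - head_vel    # the eye gets the residual command
        eye += eye_vel * dt
        head += head_vel * dt

    print(f"gaze {eye + head:.1f} deg, head {head:.1f} deg, eye {eye:.1f} deg")

    Because the gaze loop closes around the sum of eye and head positions, gaze reaches its goal even though the eye itself has no dedicated controller, which is the compensation property the abstract emphasizes.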

  11. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia.

    PubMed

    Oh, Jooyoung; Chun, Ji-Won; Lee, Jung Suk; Kim, Jae-Jin

    2014-04-16

    Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movement and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Patients exhibited a significantly lower correct rate for theme identification and lesser fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia.

  12. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
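
    The record does not spell out the gaze vector method, but a generic way to obtain a 3-D point of regard from binocular gaze is to triangulate the two eyes' gaze rays and take the midpoint of their closest approach. A hedged sketch of that idea:

    import numpy as np

    def point_of_regard(o1, d1, o2, d2):
        """Midpoint of the shortest segment between two gaze rays.
        o1, o2: 3-D origins of the left/right eye gaze rays;
        d1, d2: their direction vectors.  This generic two-ray triangulation
        is one way to realise a 'gaze vector' estimate of the 3-D point of
        regard; the paper's exact formulation is not given in the record.
        """
        o1, d1, o2, d2 = map(lambda v: np.asarray(v, float), (o1, d1, o2, d2))
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # Solve [d1 -d2] [t1 t2]^T ~= o2 - o1 in the least-squares sense
        A = np.stack([d1, -d2], axis=1)
        t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
        return (o1 + t1 * d1 + o2 + t2 * d2) / 2.0

    # Two eyes 6 cm apart, both aimed at a point 40 cm straight ahead
    left, right = [-0.03, 0, 0], [0.03, 0, 0]
    target = np.array([0.0, 0.0, 0.40])
    print(point_of_regard(left, target - left, right, target - right))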

  13. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  14. Social decisions affect neural activity to perceived dynamic gaze

    PubMed Central

    Latinus, Marianne; Love, Scott A.; Rossi, Alejandra; Parada, Francisco J.; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin

    2015-01-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces, e.g., the N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances the saliency of gaze motions toward the observer, even those that did not result in gaze contact. These data and those of previous studies suggest two modes of processing visual information: a ‘default mode’ that may focus on spatial information, and a ‘socially aware mode’ that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified. PMID:25925272

  15. The reticular formation.

    PubMed

    Horn, Anja K E

    2006-01-01

    The reticular formation of the brainstem contains functional cell groups that are important for the control of eye, head, or lid movements. The mesencephalic reticular formation is primarily involved in the control of vertical gaze, the paramedian pontine reticular formation in horizontal gaze, and the medullary reticular formation in head movements and gaze holding. In this chapter, the locations, connections, and histochemical properties of the functional cell groups are reviewed and correlated with specific subdivisions of the reticular formation.

  16. Responding to Other People's Direct Gaze: Alterations in Gaze Behavior in Infants at Risk for Autism Occur on Very Short Timescales

    ERIC Educational Resources Information Center

    Nyström, Pär; Bölte, Sven; Falck-Ytter, Terje; Achermann, Sheila; Andersson Konke, Linn; Brocki, Karin; Cauvet, Elodie; Gredebäck, Gustaf; Lundin Kleberg, Johan; Nilsson Jobs, Elisabeth; Thorup, Emilia; Zander, Eric

    2017-01-01

    Atypical gaze processing has been reported in children with autism spectrum disorders (ASD). Here we explored how infants at risk for ASD respond behaviorally to others' direct gaze. We assessed 10-month-olds with a sibling with ASD (high risk group; n = 61) and a control group (n = 18) during interaction with an adult. Eye-tracking revealed less…

  17. Control over the processing of the opponent's gaze direction in basketball experts.

    PubMed

    Weigelt, Matthias; Güldenpenning, Iris; Steggemann-Weinrich, Yvonne; Alhaj Ahmad Alaboud, Mustafa; Kunde, Wilfried

    2017-06-01

    Basketball players' responses to an opposing player's pass direction are typically delayed when the opposing player gazes in a direction other than the pass direction. Here, we studied the role of basketball expertise in this so-called head-fake effect in three groups of participants (basketball experts, soccer players, and non-athletes). The specific focus was on the dependency of the head-fake effect on previous fake experience as an index of control over the processing of task-irrelevant gaze information. Whereas (overall) the head-fake effect was of similar size in all expertise groups, preceding fake experience removed the head-fake effect in basketball players, but not in non-experts. Accordingly, basketball expertise allows for higher levels of control over the processing of task-irrelevant gaze information.

  18. Training for eye contact modulates gaze following in dogs.

    PubMed

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  19. What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues

    PubMed Central

    Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann J.

    2014-01-01

    For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes. PMID:24722348

  20. Culture, gaze and the neural processing of fear expressions

    PubMed Central

    Franklin, Robert G.; Rule, Nicholas O.; Freeman, Jonathan B.; Kveraga, Kestutis; Hadjikhani, Nouchine; Yoshikawa, Sakiko; Ambady, Nalini

    2010-01-01

    The direction of others’ eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direct- versus averted-fear expressions, finding clear evidence for a culturally determined role of gaze in the processing of fear. Greater neural responsivity was apparent to averted- versus direct-gaze fear in several regions related to face and emotion processing, including bilateral amygdalae, when posed on same-culture faces, whereas greater response to direct- versus averted-gaze fear was apparent in these same regions when posed on other-culture faces. We also found preliminary evidence for intercultural variation including differential responses across participants to Japanese versus US Caucasian stimuli, and to a lesser degree differences in how Japanese and US Caucasian participants responded to these stimuli. These findings reveal a meaningful role of culture in the processing of eye gaze and emotion, and highlight their interactive influences in neural processing. PMID:20019073

  1. Deficits in eye gaze during negative social interactions in patients with schizophrenia.

    PubMed

    Choi, Soo-Hee; Ku, Jeonghun; Han, Kiwan; Kim, Eosu; Kim, Sun I; Park, Junyoung; Kim, Jae-Jin

    2010-11-01

    Impaired social functioning has been reported in patients with schizophrenia. This study aimed to examine characteristics of interpersonal behaviors in patients with schizophrenia during various social interactions using a virtual reality system. Twenty-six patients and 26 controls engaged in virtual conversation tasks, including 3 positive and 3 negative emotion-laden conversations. Eye gaze and other behavioral parameters were recorded during the listening and answering phases. The amount of eye gaze was smaller in the patients than in the controls. A significant interaction effect of group status and emotional type was found for the listening phase. The amount of eye gaze in the patients inversely correlated with self-rated scores of assertiveness for the listening phase. These results suggest that the patients displayed inadequate augmentation of eye gaze during negative emotional situations. These deficits should be considered in the treatment and social skills training for patients with schizophrenia.

  2. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    PubMed

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  3. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.
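
    A gaze-contingent controller of the kind described above can be reduced, for illustration, to a policy that decides where the robot looks given whether the human is currently looking at its face. The response probability and the two-state policy below are assumptions, not the study's actual manipulation schedule:

    import random

    def robot_gaze_policy(human_looking_at_robot_face, p_respond=0.8):
        """Toy gaze-contingent policy: when the human fixates the robot's face,
        the robot looks back with probability p_respond, otherwise it keeps
        looking at the shared task object.  Both the 0.8 value and the
        two-state policy are illustrative assumptions.
        """
        if human_looking_at_robot_face and random.random() < p_respond:
            return "look_at_human_face"
        return "look_at_task_object"

    # One simulated minute at 10 Hz; the human looks at the robot 30% of the time
    random.seed(0)
    frames = [robot_gaze_policy(random.random() < 0.3) for _ in range(600)]
    print(frames.count("look_at_human_face"), "face-directed frames out of", len(frames))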

  4. Dissociation of eye and head components of gaze shifts by stimulation of the omnipause neuron region.

    PubMed

    Gandhi, Neeraj J; Sparks, David L

    2007-07-01

    Natural movements often include actions integrated across multiple effectors. Coordinated eye-head movements are driven by a command to shift the line of sight by a desired displacement vector. Yet because extraocular and neck motoneurons are separate entities, the gaze shift command must be separated into independent signals for eye and head movement control. We report that this separation occurs, at least partially, at or before the level of pontine omnipause neurons (OPNs). Stimulation of the OPNs prior to and during gaze shifts temporally decoupled the eye and head components by inhibiting gaze and eye saccades. In contrast, head movements were consistently initiated before gaze onset, and ongoing head movements continued along their trajectories, albeit with some characteristic modulations. After stimulation offset, a gaze shift composed of an eye saccade, and a reaccelerated head movement was produced to preserve gaze accuracy. We conclude that signals subject to OPN inhibition produce the eye-movement component of a coordinated eye-head gaze shift and are not the only signals involved in the generation of the head component of the gaze shift.

  5. Is social attention impaired in schizophrenia? Gaze, but not pointing gestures, is associated with spatial attention deficits.

    PubMed

    Dalmaso, Mario; Galfano, Giovanni; Tarqui, Luana; Forti, Bruno; Castelli, Luigi

    2013-09-01

    The nature of possible impairments in orienting attention to social signals in schizophrenia is controversial. The present research was aimed at addressing this issue further by comparing gaze and arrow cues. Unlike previous studies, we also included pointing gestures as social cues, with the goal of addressing whether any eventual impairment in the attentional response was specific to gaze signals or reflected a more general deficit in dealing with social stimuli. Patients with schizophrenia or schizoaffective disorder and matched controls performed a spatial-cuing paradigm in which task-irrelevant centrally displayed gaze, pointing finger, and arrow cues oriented rightward or leftward, preceded a lateralized target requiring a simple detection response. Healthy controls responded faster to spatially congruent targets than to spatially incongruent targets, irrespective of cue type. In contrast, schizophrenic patients responded faster to spatially congruent targets than to spatially incongruent targets only for arrow and pointing finger cues. No cuing effect emerged for gaze cues. The results support the notion that gaze cuing is impaired in schizophrenia, and suggest that this deficit may not extend to all social cues.

  6. Saccadic movement deficiencies in adults with ADHD tendencies.

    PubMed

    Lee, Yun-Jeong; Lee, Sangil; Chang, Munseon; Kwak, Ho-Wan

    2015-12-01

    The goal of the present study was to explore deficits in gaze detection and emotional value judgment during a saccadic eye movement task in adults with attention deficit/hyperactivity disorder (ADHD) tendencies. Thirty-two participants, consisting of 16 adults with ADHD tendencies and 16 controls, were recruited from a pool of 243 university students. Among the many problems in adults with ADHD, our research focused on deficits in the processing of nonverbal cues, such as gaze direction and the emotional value of others' faces. In Experiment 1, a cue display containing a face with emotional value and gaze direction was followed by a target display containing two faces located on the left and right side of the display. The participant's task was to make an anti-saccade opposite to the gaze direction if the cue face was not emotionally neutral. The ADHD-tendency group made more errors overall than controls in making anti-saccades. Based on the hypothesis that the exposure duration of the cue display in Experiment 1 may have been too long, we presented the cue and target display simultaneously to prevent participants from preparing saccades in advance. Participants in Experiment 2 were asked to make either a pro-saccade or an anti-saccade depending on the emotional value of the central cue face. Interestingly, significant group differences were observed for errors of omission and commission. In addition, a significant three-way interaction among group, cue emotion, and target gaze direction suggests that the emotional recognition and gaze control systems might somehow be interconnected. The results also show that adults with ADHD tendencies are more easily distracted by a task-irrelevant gaze direction. Taken together, these results suggest that tasks requiring both response inhibition (anti-saccade) and gaze-emotion recognition might be useful in developing a diagnostic test for discriminating adults with ADHD tendencies from healthy adults.

  7. The role of emotion in learning trustworthiness from eye-gaze: Evidence from facial electromyography

    PubMed Central

    Manssuer, Luis R.; Pawling, Ralph; Hayes, Amy E.; Tipper, Steven P.

    2016-01-01

    Gaze direction can be used to rapidly and reflexively lead or mislead others’ attention as to the location of important stimuli. When perception of gaze direction is congruent with the location of a target, responses are faster compared to when incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it’s unclear how gaze-cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze-cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether participants showed learning of trust in the expected direction or did not. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting. PMID:27153239

  8. Biasing moral decisions by exploiting the dynamics of eye gaze.

    PubMed

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
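
    The gaze-contingent manipulation described above amounts to monitoring dwell time on each option and prompting the choice once the pre-assigned target has been fixated long enough. A minimal sketch; the thresholds are illustrative assumptions rather than the published parameters:

    def should_prompt(dwell_on_target_ms, dwell_on_other_ms,
                      target_min_ms=750, other_min_ms=250):
        """Gaze-contingent trigger in the spirit of the paradigm described above:
        prompt the choice once the (randomly pre-assigned) target option has
        accumulated enough dwell time.  The specific thresholds here are
        illustrative assumptions, not the published parameters.
        """
        return (dwell_on_target_ms >= target_min_ms
                and dwell_on_other_ms >= other_min_ms)

    # Streaming use: update dwell times from the eye tracker at each sample
    dwell = {"target": 0.0, "other": 0.0}
    for looked_at, sample_ms in [("other", 300), ("target", 400), ("target", 400)]:
        dwell[looked_at] += sample_ms
        if should_prompt(dwell["target"], dwell["other"]):
            print("terminate deliberation and prompt choice now")
            break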

  9. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
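
    The behaviour described above amounts to a simple visual-servoing loop: estimate where the user's gaze point falls on the feedback monitor and pan/tilt the camera until that point sits at the screen centre. The Python sketch below illustrates one such proportional controller under stated assumptions; read_gaze and move_camera are hypothetical stand-ins for the eye-tracker and robot interfaces, and the gain and dead-zone values are tuning guesses, none of which come from the paper.

      import random
      import time

      # Stand-ins for the real eye-tracker and robot interfaces (hypothetical; not from the paper).
      def read_gaze():
          """Return the current gaze point in normalized monitor coordinates (0..1, 0..1)."""
          return random.random(), random.random()

      def move_camera(pan, tilt):
          """Command small pan/tilt velocities to the camera-holding arm."""
          print(f"pan={pan:+.3f}  tilt={tilt:+.3f}")

      GAIN = 0.5        # proportional gain (screen offset -> camera velocity); tuning assumption
      DEAD_ZONE = 0.05  # ignore small offsets so the camera does not chase fixational jitter

      def center_on_gaze():
          gx, gy = read_gaze()
          ex, ey = gx - 0.5, gy - 0.5              # offset of the gaze point from the monitor centre
          if abs(ex) < DEAD_ZONE and abs(ey) < DEAD_ZONE:
              move_camera(0.0, 0.0)                # gaze already near centre: hold position
          else:
              move_camera(-GAIN * ex, -GAIN * ey)  # move so the gazed region drifts toward the centre

      if __name__ == "__main__":
          for _ in range(3):                       # a few iterations of the ~20 Hz control loop
              center_on_gaze()
              time.sleep(0.05)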

  10. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    PubMed Central

    2014-01-01

    Background Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movements and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and lower fixation and saccade counts than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  11. Attention to gaze and emotion in schizophrenia.

    PubMed

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  12. Sustained neural activity to gaze and emotion perception in dynamic social scenes

    PubMed Central

    Ulloa, José Luis; Puce, Aina; Hugueville, Laurent; George, Nathalie

    2014-01-01

    To understand social interactions, we must decode dynamic social cues from seen faces. Here, we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms, left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others. PMID:23202662

  13. The Stationary-Gaze Task Should Not Be Systematically Used as the Control Task in Studies of Postural Control.

    PubMed

    Bonnet, Cédrick T; Szaffarczyk, Sébastien

    2017-01-01

    In studies of postural control, a control task is often used to interpret significant effects obtained with experimental manipulations. This task should be the easiest task and therefore engage the lowest behavioral variability and cognitive workload. Since 1983, the stationary-gaze task has been considered the most relevant control task. Instead, the authors expected that freely looking at small targets (white paper or images; visual angle: 12°) could be an easier task. To verify this assumption, 16 young individuals performed stationary-gaze, white-panel, and free-viewing 12° tasks in steady and relaxed stances. The stationary-gaze task led to significantly higher cognitive workload (mean score on the National Aeronautics and Space Administration Task Load Index questionnaire), higher interindividual body (head, neck, and lower back) linear variability, and higher interindividual body angular variability (though not systematically) than both other tasks. There was more cognitive workload in steady than in relaxed stance. The authors also tested whether a free-viewing 24° task could lead to greater angular displacement, and hence greater body sway, than the other tasks in relaxed stance. Unexpectedly, the participants mostly moved their eyes and not their body in this task. In the discussion, the authors explain why the stationary-gaze task may not be an ideal control task and how to choose this neutral task.

  14. Attention and Gaze Control in Picture Naming, Word Reading, and Word Categorizing

    ERIC Educational Resources Information Center

    Roelofs, Ardi

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to…

  15. Human cortical activity evoked by contextual processing in attentional orienting.

    PubMed

    Zhao, Shuo; Li, Chunlin; Uono, Shota; Yoshimura, Sayaka; Toichi, Motomi

    2017-06-07

    The ability to assess another person's direction of attention is paramount in social communication, and many studies have reported a similar pattern of attentional orienting by gaze and arrow cues. Neuroimaging research has also demonstrated no qualitative differences in attention to gaze and arrow cues. However, these studies were conducted under simple experimental conditions. Researchers have highlighted the importance of contextual processing (i.e., the semantic congruence between cue and target) in attentional orienting, showing that attentional orienting by social gaze or arrow cues can be modulated by contextual processing. Here, we used functional magnetic resonance imaging to examine the neural activity underlying attentional orienting by gaze and arrow cues during contextual processing. The results demonstrated that contextual processing modulated the neural activity associated with attentional orienting under invalid conditions (when the cue and target were incongruent versus congruent) in the ventral frontoparietal network, although we did not identify any differences between gaze and arrow cues in the neural substrates of attentional orienting during contextual processing. These results provide a neurocognitive basis for the behavioural finding that attentional orienting is modulated by contextual processing.

  16. Eyes on the Mind: Investigating the Influence of Gaze Dynamics on the Perception of Others in Real-Time Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Schilbach, Leonhard; Jording, Mathis; Timmermans, Bert; Bente, Gary; Vogeley, Kai

    2012-01-01

    Social gaze provides a window into the interests and intentions of others and allows us to actively point out our own. It enables us to engage in triadic interactions involving human actors and physical objects and to build an indispensable basis for coordinated action and collaborative efforts. The object-related aspect of gaze in combination with the fact that any motor act of looking encompasses both input and output of the minds involved makes this non-verbal cue system particularly interesting for research in embodied social cognition. Social gaze comprises several core components, such as gaze-following or gaze aversion. Gaze-following can result in situations of either “joint attention” or “shared attention.” The former describes situations in which the gaze-follower is aware of sharing a joint visual focus with the gazer. The latter refers to a situation in which gazer and gaze-follower focus on the same object and both are aware of their reciprocal awareness of this joint focus. Here, a novel interactive eye-tracking paradigm suited for studying triadic interactions was used to explore two aspects of social gaze. Experiments 1a and 1b assessed how the latency of another person’s gaze reactions (i.e., gaze-following or gaze aversion) affected participants’ sense of agency, which was measured by their experience of relatedness of these reactions. Results demonstrate that both timing and congruency of a gaze reaction as well as the other’s action options influence the sense of agency. Experiment 2 explored differences in gaze dynamics when participants were asked to establish either joint or shared attention. Findings indicate that establishing shared attention takes longer and requires a larger number of gaze shifts as compared to joint attention, which more closely seems to resemble simple visual detection. Taken together, novel insights into the sense of agency and the awareness of others in gaze-based interaction are provided. PMID:23227017

  17. Active head rotations and eye-head coordination

    NASA Technical Reports Server (NTRS)

    Zangemeister, W. H.; Stark, L.

    1981-01-01

    It is pointed out that head movements play an important role in gaze. The interaction between eye and head movements involves both their shared role in directing gaze and the compensatory vestibular ocular reflex. The dynamics of head trajectories are discussed, taking into account the use of parameterization to obtain the peak velocity, peak accelerations, the times of these extrema, and the duration of the movement. Attention is given to the main sequence, neck muscle EMG and details of the head-movement trajectory, types of head model accelerations, the latency of eye and head movement in coordinated gaze, gaze latency as a function of various factors, and coordinated gaze types. Clinical examples of gaze-plane analysis are considered along with the instantaneous change of compensatory eye movement (CEM) gain, and aspects of variability.

  18. Modification of Eccentric Gaze-Holding

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Paloski, W. H.; Somers, J. T.; Leigh, R. J.; Wood, S. J.; Kornilova, L.

    2006-01-01

    Clear vision and accurate localization of objects in the environment are prerequisites for reliable performance of motor tasks. Space flight confronts the crewmember with a stimulus rearrangement that requires adaptation to function effectively with the new requirements of altered spatial orientation and motor coordination. Adaptation and motor learning driven by the effects of cerebellar disorders may share some of the same demands that face our astronauts. One measure of spatial localization shared by the astronauts and those suffering from cerebellar disorders that is easily quantified, and for which a neurobiological substrate has been identified, is the control of the angle of gaze (the "line of sight"). The disturbances of gaze control that have been documented to occur in astronauts and cosmonauts, both in-flight and postflight, can be directly related to changes in the extrinsic gravitational environment and intrinsic proprioceptive mechanisms, thus lending themselves to description by simple non-linear statistical models. Because of the necessity of developing robust normal response populations and normative populations against which abnormal responses can be evaluated, the basic models can be formulated using normal, non-astronaut test subjects and subsequently extended using centrifugation techniques to alter the gravitational and proprioceptive environment of these subjects. Further tests and extensions of the models can be made by studying abnormalities of gaze control in patients with cerebellar disease. A series of investigations was conducted in which a total of 62 subjects were tested to: (1) define eccentric gaze-holding parameters in a normative population, and (2) explore the effects of linear acceleration on gaze-holding parameters. For these studies gaze-holding was evaluated with the subjects seated upright (the normative values), rolled 45 degrees to both the left and right, or pitched back 30 and 90 degrees. In a separate study the further effects of acceleration on gaze stability were examined during centrifugation (+2 Gx and +2 Gz) using a total of 23 subjects. In all of our investigations eccentric gaze-holding was established by having the subjects acquire an eccentric target (±30 degrees horizontal, ±15 degrees vertical) that was flashed for 750 msec in an otherwise dark room. Subjects were instructed to hold gaze on the remembered position of the flashed target for 20 sec. Immediately following the 20 sec period, subjects were cued to return to the remembered center position and to hold gaze there for an additional 20 sec. Following this 20 sec period the center target was briefly flashed and the subject made any corrective eye movement back to the true center position. Conventionally, the ability to hold eccentric gaze is estimated by fitting the natural log of centripetal eye drifts by linear regression and calculating the time constant (τc) of these slow phases of "gaze-evoked nystagmus". However, because our normative subjects sometimes showed essentially no drift (τc = ∞), statistical estimation and inference on the effect of target direction was performed on values of the decay constant θ = 1/τc, which we found was well modeled by a gamma distribution. Subjects showed substantial variance of their eye drifts, which were centrifugal in approximately 20% of cases, and in more than 40% of cases for downward gaze. Using the estimated gamma distributions, we concluded that rightward and leftward gaze-holding were not significantly different, but that upward gaze-holding was significantly worse than downward (p < 0.05). We also concluded that vertical gaze-holding was significantly worse than horizontal (p < 0.05). In the case of left and right roll, we found that both produced a similar improvement in horizontal gaze-holding (p < 0.05), but did not have a significant effect on vertical gaze-holding. For pitch tilts, both tilt angles significantly decreased gaze-holding ability in all directions (p < 0.05). Finally, we found that hyper-g centrifugation significantly decreased gaze-holding ability in the vertical plane. The main findings of this study are as follows: (1) vertical gaze-holding is less stable than horizontal, (2) gaze-holding to upward targets is less stable than to downward targets, (3) tilt affects gaze-holding, and (4) hyper-g affects gaze-holding. The difference between horizontal and vertical gaze-holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements, and to differences in orbital mechanics. The differences between upward and downward gaze-holding may be ascribed to an inherent vertical imbalance in the vestibular system. Because whole-body tilt and hyper-g affect gaze-holding, it is implied that the otolith organs have direct connections to the neural integrator, and further studies of astronaut gaze-holding are warranted. Our statistical method for representing the range of normal eccentric gaze stability can be readily applied to normals who may be exposed to environments that may modify the central integrator and require monitoring, and to evaluate patients with gaze-evoked nystagmus by comparison with the normative criteria established above.
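
    The drift analysis described in the abstract can be made concrete as follows. This is a minimal Python sketch, assuming each 20 sec epoch yields eye-position samples measured relative to the straight-ahead position and that slow-phase drift decays exponentially (corrective saccades are ignored for simplicity); variable names and the simulated data are illustrative only, not taken from the report.

      import numpy as np
      from scipy import stats

      def decay_constant(eye_pos_deg, fs_hz):
          """Estimate theta = 1/tau_c from one gaze-holding epoch.

          eye_pos_deg: eye position relative to the centre (primary) position, in degrees.
          Slow-phase drift is assumed to follow x(t) = x0 * exp(-theta * t), so a linear
          regression of ln(x) on t gives -theta as the slope.
          """
          t = np.arange(len(eye_pos_deg)) / fs_hz
          x = np.clip(np.abs(eye_pos_deg), 1e-6, None)   # avoid log(0) when there is essentially no drift
          slope, *_ = stats.linregress(t, np.log(x))
          return max(-slope, 0.0)                        # theta = 0 corresponds to tau_c = infinity (no drift)

      def fit_gamma(thetas):
          """Fit a gamma distribution to a sample of decay constants (location fixed at zero)."""
          shape, _, scale = stats.gamma.fit(thetas, floc=0.0)
          return shape, scale

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          fs, ecc = 60.0, 30.0                           # 60 Hz sampling, 30 deg eccentric target
          t = np.arange(0.0, 20.0, 1.0 / fs)
          # Simulated epochs with gamma-distributed drift rates, just to exercise the functions.
          thetas = [decay_constant(ecc * np.exp(-th * t), fs)
                    for th in rng.gamma(2.0, 0.02, size=10)]
          print(fit_gamma(np.array(thetas)))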

  19. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    PubMed Central

    XU, TIAN (LINGER); ZHANG, HUI; YU, CHEN

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  20. I want to help you, but I am not sure why: gaze-cuing induces altruistic giving.

    PubMed

    Rogers, Robert D; Bayliss, Andrew P; Szepietowska, Anna; Dale, Laura; Reeder, Lydia; Pizzamiglio, Gloria; Czarna, Karolina; Wakeley, Judi; Cowen, Phillip J; Tipper, Steven P

    2014-04-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues.

  1. Eye-gaze control of the computer interface: Discrimination of zoom intent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-10-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
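
    As a rough illustration of the discrimination step described above, the Python sketch below uses scikit-learn's linear discriminant analysis in place of the report's multiple discriminant analysis and evaluates it with a confusion matrix. The feature columns and the randomly generated data are stand-ins, not the study's variables.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.metrics import confusion_matrix
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)

      # One row per gaze cluster preceding a zoom decision; columns are assumed features
      # (e.g., mean fixation duration, cluster dispersion, pupil change, fixation count).
      n_per_class = 100
      means = np.array([[0.0, 0.0, 0.0, 0.0],    # class 0: zoom-in
                        [1.0, 0.5, 0.0, 0.5],    # class 1: zoom-out
                        [0.5, 1.0, 0.5, 0.0]])   # class 2: no-zoom
      X = np.vstack([rng.normal(loc=m, size=(n_per_class, 4)) for m in means])
      y = np.repeat([0, 1, 2], n_per_class)

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

      mda = LinearDiscriminantAnalysis()          # LDA stands in for the multiple discriminant analysis
      mda.fit(X_train, y_train)

      # Rows = true class, columns = predicted class (zoom-in, zoom-out, no-zoom).
      print(confusion_matrix(y_test, mda.predict(X_test)))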

  2. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    PubMed Central

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces–as opposed to high-pass or low-pass filtered faces–were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  3. Abnormal social reward processing in autism as indexed by pupillary responses to happy faces

    PubMed Central

    2012-01-01

    Background Individuals with Autism Spectrum Disorders (ASD) typically show impaired eye contact during social interactions. From a young age, they look less at faces than typically developing (TD) children and tend to avoid direct gaze. However, the reason for this behavior remains controversial; ASD children might avoid eye contact because they perceive the eyes as aversive or because they do not find social engagement through mutual gaze rewarding. Methods We monitored pupillary diameter as a measure of autonomic response in children with ASD (n = 20, mean age = 12.4) and TD controls (n = 18, mean age = 13.7) while they looked at faces displaying different emotions. Each face displayed happy, fearful, angry or neutral emotions with the gaze either directed to or averted from the subjects. Results Overall, children with ASD and TD controls showed similar pupillary responses; however, they differed significantly in their sensitivity to gaze direction for happy faces. Specifically, pupillary diameter increased among TD children when viewing happy faces with direct gaze as compared to those with averted gaze, whereas children with ASD did not show such sensitivity to gaze direction. We found no group differences in fixation that could explain the differential pupillary responses. There was no effect of gaze direction on pupil diameter for negative affect or neutral faces among either the TD or ASD group. Conclusions We interpret the increased pupillary diameter to happy faces with direct gaze in TD children to reflect the intrinsic reward value of a smiling face looking directly at an individual. The lack of this effect in children with ASD is consistent with the hypothesis that individuals with ASD may have reduced sensitivity to the reward value of social stimuli. PMID:22958650

  4. Variation in gaze-following between two Asian colobine monkeys.

    PubMed

    Chen, Tao; Gao, Jie; Tan, Jingzhi; Tao, Ruoting; Su, Yanjie

    2017-10-01

    Gaze-following is a basic cognitive ability found in numerous primate and nonprimate species. However, little is known about this ability and its variation in colobine monkeys. We compared gaze-following of two Asian colobines-François' langurs (Trachypithecus francoisi) and golden snub-nosed monkeys (Rhinopithecus roxellana). Although both species live in small polygynous family units, units of the latter form multilevel societies with up to hundreds of individuals. François' langurs (N = 15) were less sensitive to the gaze of a human experimenter than were golden snub-nosed monkeys (N = 12). We then tested the two species using two classic inhibitory control tasks-the cylinder test and the A-not-B test. We found no difference between species in inhibitory control, which called into question the nonsocial explanation for François' langur's weaker sensitivity to human gaze. These findings are consistent with the social intelligence hypothesis, which predicted that golden snub-nosed monkeys would outperform François' langurs in gaze-following because of the greater size and complexity of their social groups. Furthermore, our results underscore the need for more comparative studies of cognition in colobines, which should provide valuable opportunities to test hypotheses of cognitive evolution.

  5. Provider interaction with the electronic health record: the effects on patient-centered communication in medical encounters.

    PubMed

    Street, Richard L; Liu, Lin; Farber, Neil J; Chen, Yunan; Calvitti, Alan; Zuest, Danielle; Gabuzda, Mark T; Bell, Kristin; Gray, Barbara; Rick, Steven; Ashfaq, Shazia; Agha, Zia

    2014-09-01

    The computer with the electronic health record (EHR) is an additional 'interactant' in the medical consultation, as clinicians must simultaneously or in alternation engage patient and computer to provide medical care. Few studies have examined how clinicians' EHR workflow (e.g., gaze, keyboard activity, and silence) influences the quality of their communication, the patient's involvement in the encounter, and conversational control of the visit. Twenty-three primary care providers (PCPs) from USA Veterans Administration (VA) primary care clinics participated in the study. Up to 6 patients per PCP were recruited. The proportion of time PCPs spent gazing at the computer was captured in real time via video-recording. Mouse click/scrolling activity was captured through Morae, a usability software that logs mouse clicks and scrolling activity. Conversational silence was coded as the proportion of time in the visit when PCP and patient were not talking. After the visit, patients completed patient satisfaction measures. Trained coders independently viewed videos of the interactions and rated the degree to which PCPs were patient-centered (informative, supportive, partnering) and patients were involved in the consultation. Conversational control was measured as the proportion of time the PCP held the floor compared to the patient. The final sample included 125 consultations. PCPs who spent more time in the consultation gazing at the computer and whose visits had more conversational silence were rated lower in patient-centeredness. PCPs controlled more of the talk time in the visits that also had longer periods of mutual silence. PCPs were rated as having less effective communication when they spent more time looking at the computer and when there were more periods of silence in the consultation. Because PCPs increasingly are using the EHR in their consultations, more research is needed to determine effective ways that they can verbally engage patients while simultaneously managing data in the EHR. EHR activity consumes an increasing proportion of clinicians' time during consultations. To ensure effective communication with their patients, clinicians may benefit from using communication strategies that maintain the flow of conversation when working with the computer, as well as from learning EHR management skills that prevent extended periods of gaze at the computer and long periods of silence. Next-generation EHR design must address better usability and clinical workflow integration, including facilitating patient-clinician communication. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Provider interaction with the electronic health record: The effects on patient-centered communication in medical encounters

    PubMed Central

    Street, Richard L.; Liu, Lin; Farber, Neil J.; Chen, Yunan; Calvitti, Alan; Zuest, Danielle; Gabuzda, Mark T.; Bell, Kristin; Gray, Barbara; Rick, Steven; Ashfaq, Shazia; Agha, Zia

    2015-01-01

    Objective The computer with the electronic health record (EHR) is an additional ‘interactant’ in the medical consultation, as clinicians must simultaneously or in alternation engage patient and computer to provide medical care. Few studies have examined how clinicians' EHR workflow (e.g., gaze, keyboard activity, and silence) influences the quality of their communication, the patient's involvement in the encounter, and conversational control of the visit. Methods Twenty-three primary care providers (PCPs) from USA Veterans Administration (VA) primary care clinics participated in the study. Up to 6 patients per PCP were recruited. The proportion of time PCPs spent gazing at the computer was captured in real time via video-recording. Mouse click/scrolling activity was captured through Morae, a usability software that logs mouse clicks and scrolling activity. Conversational silence was coded as the proportion of time in the visit when PCP and patient were not talking. After the visit, patients completed patient satisfaction measures. Trained coders independently viewed videos of the interactions and rated the degree to which PCPs were patient-centered (informative, supportive, partnering) and patients were involved in the consultation. Conversational control was measured as the proportion of time the PCP held the floor compared to the patient. Results The final sample included 125 consultations. PCPs who spent more time in the consultation gazing at the computer and whose visits had more conversational silence were rated lower in patient-centeredness. PCPs controlled more of the talk time in the visits that also had longer periods of mutual silence. Conclusions PCPs were rated as having less effective communication when they spent more time looking at the computer and when there were more periods of silence in the consultation. Because PCPs increasingly are using the EHR in their consultations, more research is needed to determine effective ways that they can verbally engage patients while simultaneously managing data in the EHR. Practice implications EHR activity consumes an increasing proportion of clinicians' time during consultations. To ensure effective communication with their patients, clinicians may benefit from using communication strategies that maintain the flow of conversation when working with the computer, as well as from learning EHR management skills that prevent extended periods of gaze at the computer and long periods of silence. Next-generation EHR design must address better usability and clinical workflow integration, including facilitating patient-clinician communication. PMID:24882086

  7. Gaze Control in Complex Scene Perception

    DTIC Science & Technology

    2004-01-01

    [Abstract not available; only fragmentary reference entries survived extraction, including a citation in Psychonomic Bulletin & Review, 8, 761-768, and Henderson, J. M., Falk, R. J., Minut, S., Dyer, F. C., & Mahadevan, S. (2001) on gaze control.]

  8. Comparisons of Neuronal and Excitatory Network Properties between the Rat Brainstem Nuclei that Participate in Vertical and Horizontal Gaze Holding

    PubMed Central

    Sugimura, Taketoshi; Yanagawa, Yuchio

    2017-01-01

    Gaze holding is primarily controlled by neural structures including the prepositus hypoglossi nucleus (PHN) for horizontal gaze and the interstitial nucleus of Cajal (INC) for vertical and torsional gaze. In contrast to the accumulating findings on the PHN, there is no report regarding the membrane properties of INC neurons or the local networks in the INC. In this study, to verify whether the neural structure of the INC is similar to that of the PHN, we investigated the neuronal and network properties of the INC using whole-cell recordings in rat brainstem slices. Three types of afterhyperpolarization (AHP) profiles and five firing patterns observed in PHN neurons were also observed in INC neurons. However, the overall distributions based on the AHP profile and the firing patterns of INC neurons were different from those of PHN neurons. The application of burst stimulation to a site near a recorded INC neuron induced an increase in the frequency of spontaneous EPSCs. The duration of the increased EPSC frequency of INC neurons was not significantly different from that of PHN neurons. The percentage reduction in this duration induced by a Ca2+-permeable AMPA (CP-AMPA) receptor antagonist was significantly smaller in the INC than in the PHN. These findings suggest that local excitatory networks that activate sustained EPSC responses also exist in the INC, but their activation mechanisms, including the contribution of CP-AMPA receptors, differ between the INC and the PHN. PMID:28966973

  9. Is improved lane keeping during cognitive load caused by increased physical arousal or gaze concentration toward the road center?

    PubMed

    Li, Penghui; Markkula, Gustav; Li, Yibing; Merat, Natasha

    2018-08-01

    Driver distraction is one of the main causes of motor-vehicle accidents. However, the impact on traffic safety of tasks that impose cognitive (non-visual) distraction remains debated. One particularly intriguing finding is that cognitive load seems to improve lane keeping performance, most often quantified as reduced standard deviation of lateral position (SDLP). The main competing hypotheses, supported by current empirical evidence, suggest that cognitive load improves lane keeping via either increased physical arousal, or higher gaze concentration toward the road center, but views are mixed regarding whether, and how, these possible mediators influence lane keeping performance. Hence, a simulator study was conducted, with participants driving on a straight city road section whilst completing a cognitive task at different levels of difficulty. In line with previous studies, cognitive load led to increased physical arousal, higher gaze concentration toward the road center, and higher levels of micro-steering activity, accompanied by improved lane keeping performance. More importantly, during the high cognitive load task, both physical arousal and gaze concentration changed earlier in time than micro-steering activity, which in turn changed earlier than lane keeping performance. In addition, our results did not show a significant correlation between gaze concentration and physical arousal on the level of individual task recordings. Based on these findings, various multilevel models for micro-steering activity and lane keeping performance were fitted and compared, and the results suggest that all of the mechanisms proposed by existing hypotheses could be simultaneously involved. In other words, it is suggested that cognitive load leads to: (i) an increase in arousal, causing increased micro-steering activity, which in turn improves lane keeping performance, and (ii) an increase in gaze concentration, causing lane keeping improvement through both (a) further increased micro-steering activity and (b) a tendency to steer toward the gaze target. Copyright © 2018 Elsevier Ltd. All rights reserved.
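
    A model comparison of the kind described above can be sketched with a linear mixed-effects model in which task recordings are nested within participants. The Python example below (statsmodels) uses invented column names and simulated data purely to show the structure of such a model; it is not the study's actual specification.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      n = 200
      df = pd.DataFrame({
          "participant": rng.integers(0, 20, n),   # grouping factor: repeated recordings per driver
          "arousal": rng.normal(size=n),           # physical arousal (e.g., skin conductance level)
          "gaze_conc": rng.normal(size=n),         # gaze concentration toward the road centre
          "microsteer": rng.normal(size=n),        # micro-steering activity
      })
      # Simulated lane-keeping outcome (SDLP) that improves with micro-steering and gaze concentration.
      df["sdlp"] = 1.0 - 0.2 * df["microsteer"] - 0.1 * df["gaze_conc"] + rng.normal(scale=0.3, size=n)

      # Mixed-effects model of SDLP with a random intercept per participant.
      model = smf.mixedlm("sdlp ~ arousal + gaze_conc + microsteer", df, groups=df["participant"]).fit()
      print(model.summary())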

  10. Effects of Optical Pitch on Oculomotor Control and the Perception of Target Elevation

    NASA Technical Reports Server (NTRS)

    Cohen, Malcom M.; Ebenholtz, Sheldon M.; Linder, Barry J.

    1995-01-01

    In two experiments, we used an ISCAN infrared video system to examine the influence of a pitched visual array on gaze elevation and on judgments of visually perceived eye level. In Experiment 1, subjects attempted to direct their gaze to a relaxed or to a horizontal orientation while they were seated in a room whose walls were pitched at various angles with respect to gravity. Gaze elevation was biased in the direction in which the room was pitched. In Experiment 2, subjects looked into a small box that was pitched at various angles while they attempted simply to direct their gaze alone, or to direct their gaze and place a visual target at their apparent horizon. Both gaze elevation and target settings varied systematically with the pitch orientation of the box. Our results suggest that under these conditions, an optostatic response, of which the subject is unaware, is responsible for the changes in both gaze elevation and judgments of target elevation.

  11. Symptoms elicited in persons with vestibular dysfunction while performing gaze movements in optic flow environments

    PubMed Central

    Whitney, Susan L.; Sparto, Patrick J.; Cook, James R.; Redfern, Mark S.; Furman, Joseph M.

    2016-01-01

    Introduction People with vestibular disorders often experience space and motion discomfort when exposed to moving or highly textured visual scenes. The purpose of this study was to measure the type and severity of symptoms in people with vestibular dysfunction during coordinated head and eye movements in optic flow environments. Methods Seven subjects with vestibular disorders and 25 controls viewed four different full-field optic flow environments on six different visits. The optic flow environments consisted of textures with various contrasts and spatial frequencies. Subjects performed 8 gaze movement tasks, including eye saccades, gaze saccades, and gaze stabilization tasks. Subjects reported symptoms using Subjective Units of Discomfort (SUD) and the Simulator Sickness Questionnaire (SSQ). Self-reported dizziness handicap and space and motion discomfort were also measured. Results/Conclusion Subjects with vestibular disorders had greater discomfort and experienced greater oculomotor and disorientation symptoms. The magnitude of the symptoms increased during each visit, but did not depend on the optic flow condition. Subjects who reported greater dizziness handicap and space and motion discomfort had greater severity of symptoms during the experiment. Symptoms of fatigue, difficulty focusing, and dizziness during the experiment were evident. Compared with controls, subjects with vestibular disorders had less head movement during the gaze saccade tasks. Overall, performance of gaze pursuit and gaze stabilization tasks in moving visual environments elicited greater symptoms in subjects with vestibular disorders compared with healthy subjects. PMID:23549055

  12. Gaze holding deficits discriminate early from late onset cerebellar degeneration.

    PubMed

    Tarnutzer, Alexander A; Weber, K P; Schuknecht, B; Straumann, D; Marti, S; Bertolini, G

    2015-08-01

    The vestibulo-cerebellum calibrates the output of the inherently leaky brainstem neural velocity-to-position integrator to provide stable gaze holding. In healthy humans small-amplitude centrifugal nystagmus is present at extreme gaze-angles, with a non-linear relationship between eye-drift velocity and eye eccentricity. In cerebellar degeneration this calibration is impaired, resulting in pathological gaze-evoked nystagmus (GEN). For cerebellar dysfunction, increased eye drift may be present at any gaze angle (reflecting pure scaling of eye drift found in controls) or restricted to far-lateral gaze (reflecting changes in shape of the non-linear relationship), and the resulting eye-drift patterns could be related to specific disorders. We recorded horizontal eye positions in 21 patients with cerebellar neurodegeneration (gaze-angle = ±40°) and clinically confirmed GEN. Eye-drift velocity, linearity and symmetry of drift were determined. MR-images were assessed for cerebellar atrophy. In our patients, the relation between eye-drift velocity and gaze eccentricity was non-linear, yielding (compared to controls) significant GEN at gaze-eccentricities ≥20°. Pure scaling was most frequently observed (n = 10/18), followed by pure shape-changing (n = 4/18) and a mixed pattern (n = 4/18). Pure shape-changing patients were significantly (p = 0.001) younger at disease-onset compared to pure scaling patients. Atrophy centered around the superior/dorsal vermis, flocculus/paraflocculus and dentate nucleus and did not correlate with the specific drift behaviors observed. Eye drift in cerebellar degeneration varies in magnitude; however, it retains its non-linear properties. With different drift patterns being linked to age at disease-onset, we propose that the gaze-holding pattern (scaling vs. shape-changing) may discriminate early- from late-onset cerebellar degeneration. Whether this allows a distinction among specific cerebellar disorders remains to be determined.
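
    The distinction between "pure scaling" and "shape changing" drift patterns can be illustrated by fitting a two-parameter curve to drift velocity as a function of gaze eccentricity: a larger gain with a control-like curvature corresponds to scaling, whereas altered curvature corresponds to a change in shape. The Python sketch below assumes an exponential functional form and arbitrary decision thresholds; the study's actual model and criteria are not given in the record.

      import numpy as np
      from scipy.optimize import curve_fit

      def drift_velocity(ecc_deg, gain, shape_deg):
          """Assumed nonlinear relation between eye-drift velocity and gaze eccentricity."""
          return gain * (np.exp(ecc_deg / shape_deg) - 1.0)

      def fit_drift(ecc_deg, vel_deg_s):
          (gain, shape_deg), _ = curve_fit(drift_velocity, ecc_deg, vel_deg_s,
                                           p0=(0.1, 20.0), maxfev=10000)
          return gain, shape_deg

      def classify(gain, shape_deg, ctrl_gain, ctrl_shape_deg, tol=1.5):
          """Illustrative classification: scaling = larger gain, shape change = tighter curvature."""
          scaled = gain > tol * ctrl_gain
          reshaped = shape_deg < ctrl_shape_deg / tol
          if scaled and reshaped:
              return "mixed"
          return "pure scaling" if scaled else ("pure shape change" if reshaped else "control-like")

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          ecc = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
          vel = drift_velocity(ecc, gain=0.3, shape_deg=20.0) + rng.normal(scale=0.1, size=ecc.size)
          print(classify(*fit_drift(ecc, vel), ctrl_gain=0.1, ctrl_shape_deg=20.0))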

  13. Perceived Gaze Direction Modulates Neural Processing of Prosocial Decision Making

    PubMed Central

    Sun, Delin; Shao, Robin; Wang, Zhaoxin; Lee, Tatia M. C.

    2018-01-01

    Gaze direction is a common social cue implying potential interpersonal interaction. However, little is known about the neural processing of social decision making influenced by perceived gaze direction. Here, we employed functional magnetic resonance imaging (fMRI) to investigate 27 females while they engaged in an economic exchange game task during which photos of direct or averted eye gaze were shown. We found that, when averted but not direct gaze was presented, prosocial vs. selfish choices were associated with stronger activations in the right superior temporal gyrus (STG) as well as larger functional couplings between right STG and the posterior cingulate cortex (PCC). Moreover, stronger activations in the right STG were associated with quicker prosocial choices accompanied by averted gaze. The findings suggest that, when the cue implying social contact is absent, the processing involved in understanding others’ intentions and the relationship between self and others is more engaged for making prosocial than selfish decisions. These findings could advance our understanding of the roles of subtle cues in influencing prosocial decision making, as well as shedding light on deficient social cue processing and functioning among individuals with autism spectrum disorder (ASD). PMID:29487516

  14. An eye on reactor and computer control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.; Knee, B.

    1992-01-01

    At ORNL, computer software has been developed to enable an improved eye-gaze measurement technology. Such an innovation could be the basis for advanced eye-gaze systems with applications in reactor control, software development, cognitive engineering, evaluation of displays, prediction of mental workload, and military target recognition.

  15. Balance, mobility and gaze stability deficits remain following surgical removal of vestibular schwannoma (acoustic neuroma): an observational study.

    PubMed

    Choy, Nancy Low; Johnson, Natalie; Treleaven, Julia; Jull, Gwendolen; Panizza, Benedict; Brown-Rothwell, David

    2006-01-01

    Are there residual deficits in balance, mobility, and gaze stability after surgical removal of vestibular schwannoma? Observational study. Twelve people with a mean age of 52 years who had undergone surgical removal of vestibular schwannoma at least three months previously and had not undergone vestibular rehabilitation. Twelve age- and gender-matched healthy people who acted as controls. Handicap due to dizziness, balance, mobility, and gaze stability was measured. Handicap due to dizziness was moderate for the clinical group. They swayed significantly more than the controls in comfortable stance: firm surface eyes open and visual conflict (p < 0.05); foam surface eyes closed (p < 0.05) and visual conflict (p < 0.05); and feet together: firm surface, eyes closed (p < 0.05), foam surface, eyes open (p < 0.05) and eyes closed (p < 0.01). They displayed a higher rate of failure for timed stance and gaze stability (p < 0.05) than the controls. Step Test (p < 0.01), Tandem Walk Test (p < 0.05) and Dynamic Gait Index (p < 0.01) scores were also significantly reduced compared with controls. There was a significant correlation between handicap due to dizziness and the inability to maintain balance in single limb and tandem stance (r = 0.68, p = 0.02) and the ability to maintain gaze stability during passive head movement (r = 0.78; p = 0.02). A prospective study is required to evaluate vestibular rehabilitation to ameliorate dizziness and to improve balance, mobility, and gaze stability for this clinical group.

  16. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    PubMed

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    PubMed

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". Objectives To identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Methods Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and a constrained movement condition. Eye movements were recorded during the observation task for later analysis. Results For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing the dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. Conclusions This study suggests that neural motor circuits are involved, at multiple levels, in observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.

  18. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    NASA Astrophysics Data System (ADS)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.
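
    One plausible reading of the cueing logic described above is a small gaze-contingent rule: on the approach to a cued intersection, light the LED array on the cued side unless the driver is already looking that way. The Python sketch below uses invented interfaces and thresholds; the simulator's actual trigger logic is not described in the record.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class Intersection:
          distance_m: float              # distance from the vehicle to the intersection
          cue_side: Optional[str]        # "left", "right", or None for uncued control intersections

      TRIGGER_DISTANCE_M = 50.0          # start cueing on the approach (assumed threshold)

      def led_command(intersection, gaze_x_norm):
          """Return which LED array to light ("left"/"right"/None).

          gaze_x_norm: horizontal gaze position on the windshield, 0 (far left) to 1 (far right).
          The warning is suppressed while the driver is already looking toward the cued side.
          """
          if intersection.cue_side is None or intersection.distance_m > TRIGGER_DISTANCE_M:
              return None
          already_looking = (gaze_x_norm < 0.35 if intersection.cue_side == "left"
                             else gaze_x_norm > 0.65)
          return None if already_looking else intersection.cue_side

      if __name__ == "__main__":
          print(led_command(Intersection(40.0, "left"), gaze_x_norm=0.6))   # -> left
          print(led_command(Intersection(40.0, "left"), gaze_x_norm=0.2))   # -> None (already looking left)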

  19. Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.

    PubMed

    Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin

    2014-06-01

    Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Integrated Locomotor Function Tests for Countermeasure Evaluation

    NASA Technical Reports Server (NTRS)

    Bloomberg, J. J.; Mulavara, A. P.; Peters, B. T.; Cohen, H. S.; Landsness, E. C.; Black, F. O.

    2005-01-01

    Following spaceflight, crewmembers experience locomotor dysfunction due to inflight adaptive alterations in sensorimotor function. Countermeasures designed to mitigate these postflight gait alterations need to be assessed with a new generation of tests that evaluate the interaction of various sensorimotor sub-systems central to locomotor control. The goal of the present study was to develop new functional tests of locomotor control that could be used to test the efficacy of countermeasures. These tests were designed to simultaneously examine the function of multiple sensorimotor systems underlying the control of locomotion and be operationally relevant to the astronaut population. Traditionally, gaze stabilization has been studied almost exclusively in seated subjects performing target acquisition tasks requiring only the involvement of coordinated eye-head movements. However, activities like walking involve full-body movement and require coordination between lower limbs and the eye-head-trunk complex to achieve stabilized gaze during locomotion. Therefore the first goal of this study was to determine how the multiple, interdependent, full-body sensorimotor gaze stabilization subsystems are functionally coordinated during locomotion. In an earlier study we investigated how alteration in gaze tasking changes full-body locomotor control strategies. Subjects walked on a treadmill and either focused on a central point target or read numeral characters. We measured: temporal parameters of gait, full body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. In comparison to the point target fixation condition, the results of the number reading task showed that compensatory head pitch movements increased, peak head acceleration was reduced and knee flexion at heel-strike was increased. In a more recent study we investigated the adaptive remodeling of the full-body gaze control systems following exposure to visual-vestibular conflict. Subjects walked on a treadmill before and after a 30-minute exposure to 0.5X visual minification, during which self-generated sinusoidal vertical head rotations were performed while seated. Following exposure to visual-vestibular conflict, subjects showed a restriction in compensatory head movements, increased knee and ankle flexion after heel-strike and a decrease in the rate of body loading during the rapid weight transfer phase after the heel strike event. Taken together, results from both studies provide evidence that the full body contributes to gaze stabilization during locomotion, and that different functional elements are responsive to changes in visual task constraints and are subject to adaptive alterations following exposure to visual-vestibular conflict. This information provides the basis for the design of a new generation of integrative tests that incorporate the evaluation of multiple neural control systems relevant to astronaut operational performance.

  1. Stimulus exposure and gaze bias: a further test of the gaze cascade model.

    PubMed

    Glaholt, Mackenzie G; Reingold, Eyal M

    2009-04-01

    We tested predictions derived from the gaze cascade model of preference decision making (Shimojo, Simion, Shimojo, & Scheier, 2003; Simion & Shimojo, 2006, 2007). In each trial, participants' eye movements were monitored while they performed an eight-alternative decision task in which four of the items in the array were preexposed prior to the trial. Replicating previous findings, we found a gaze bias toward the chosen item prior to the response. However, contrary to the prediction of the gaze cascade model, preexposure of stimuli decreased, rather than increased, the magnitude of the gaze bias in preference decisions. Furthermore, unlike the prediction of the model, preexposure did not affect the likelihood of an item being chosen, and the pattern of looking behavior in preference decisions and on a non-preference control task was remarkably similar. Implications of the present findings for multistage models of decision making are discussed.
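    As background on how this kind of gaze bias is typically quantified, the minimal Python sketch below computes the proportion of gaze samples falling on the eventually chosen item in fixed time bins counted back from the response. It is an editor's illustration only; the bin width, sampling rate, and variable names are assumptions, not taken from the study.

        import numpy as np

        def gaze_bias_curve(gaze_on_chosen, sample_rate_hz=60, bin_ms=100, window_ms=1500):
            """Proportion of gaze samples on the eventually chosen item, per time bin,
            aligned to the response (t = 0 is the choice). The trial is assumed to be
            at least window_ms long. Returns one proportion per bin, oldest bin first."""
            gaze_on_chosen = np.asarray(gaze_on_chosen)
            samples_per_bin = int(round(sample_rate_hz * bin_ms / 1000.0))
            n_bins = int(window_ms / bin_ms)
            # keep only the last window_ms of the trial, i.e. the samples preceding the response
            tail = gaze_on_chosen[-n_bins * samples_per_bin:]
            return tail.reshape(-1, samples_per_bin).mean(axis=1)

        # Hypothetical single trial: gaze drifts toward the chosen item near the response.
        rng = np.random.default_rng(0)
        trial = rng.random(60 * 3) < np.linspace(0.25, 0.9, 60 * 3)   # 3 s at 60 Hz
        print(gaze_bias_curve(trial))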

  2. I Want to Help You, But I Am Not Sure Why: Gaze-Cuing Induces Altruistic Giving

    PubMed Central

    2013-01-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues. PMID:23937180

  3. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  4. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze cue stimuli.
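    The global, local, and mid-band cue stimuli described above are band-limited versions of the same face images. As a rough illustration of how such stimuli can be produced, the sketch below applies an annular spatial-frequency mask in the Fourier domain; the cutoff values are illustrative assumptions and not the ones used in the study.

        import numpy as np

        def spatial_frequency_filter(image, low_cpi=None, high_cpi=None):
            """Keep only spatial frequencies between low_cpi and high_cpi (cycles per image)
            using a hard annular mask in the Fourier domain. Pass low_cpi=None for a
            low-pass filter, high_cpi=None for a high-pass filter."""
            rows, cols = image.shape
            fy = np.fft.fftfreq(rows) * rows            # cycles per image, vertical
            fx = np.fft.fftfreq(cols) * cols            # cycles per image, horizontal
            radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
            mask = np.ones_like(radius, dtype=bool)
            if low_cpi is not None:
                mask &= radius >= low_cpi
            if high_cpi is not None:
                mask &= radius <= high_cpi
            return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

        # Illustrative cutoffs only (not the study's): "global" below ~8 cpi, "local" above ~32 cpi.
        face = np.random.rand(256, 256)                 # stand-in for a face image
        global_version = spatial_frequency_filter(face, high_cpi=8)
        local_version = spatial_frequency_filter(face, low_cpi=32)
        mid_band_version = spatial_frequency_filter(face, low_cpi=8, high_cpi=32)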

  5. Participation Through Gaze Controlled Computer for Children with Severe Multiple Disabilities.

    PubMed

    Holmqvist, Eva; Derbring, Sandra; Wallin, Sofia

    2017-01-01

    This paper presents work on developing methodology material for the use of gaze-controlled computers. The target group is families and professionals around children with severe multiple disabilities. The material includes software grids for children at various levels, aimed at communication, leisure and learning, and will be available for download.

  6. Effect of Acceleration Frequency on Spatial Orientation Mechanisms

    DTIC Science & Technology

    2010-09-30

    by aircraft, ground vehicle, and ship motion. Method. With controlled laboratory off-vertical axis rotation (OVAR), gaze reflexes respond to low...finding that vestibular gaze reflexes become altered at the same frequency where OVAR becomes most sickening will have important implications for...the collected data, a revised crossover rate of 0.42 Hz was extrapolated as the most probable spin frequency for inducing gaze reflex changes with the

  7. Beliefs about the Minds of Others Influence How We Process Sensory Information

    PubMed Central

    Prosser, Aaron; Müller, Hermann J.

    2014-01-01

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing. PMID:24714419

  8. Eye Movements Affect Postural Control in Young and Older Females

    PubMed Central

    Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412

  9. Eye Movements Affect Postural Control in Young and Older Females.

    PubMed

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.
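    Postural sway recorded with a force platform is commonly summarized with centre-of-pressure (COP) measures such as path length, RMS displacement, and mean velocity. The sketch below computes these generic measures; it is an illustration under assumed units and sampling rate, not the authors' analysis.

        import numpy as np

        def sway_metrics(cop_xy, sample_rate_hz=100):
            """Basic centre-of-pressure (COP) summary measures from a force platform.
            cop_xy : (n_samples, 2) array of anterior-posterior and medio-lateral COP
            coordinates in centimetres. Returns path length (cm), RMS displacement (cm)
            and mean velocity (cm/s)."""
            cop = np.asarray(cop_xy, dtype=float)
            cop = cop - cop.mean(axis=0)                        # remove mean position
            step = np.diff(cop, axis=0)
            path_length = np.sum(np.linalg.norm(step, axis=1))
            rms = np.sqrt(np.mean(np.sum(cop ** 2, axis=1)))
            duration_s = (len(cop) - 1) / sample_rate_hz
            return path_length, rms, path_length / duration_s

        # Hypothetical 30 s quiet-stance recording at 100 Hz (random-walk stand-in for COP data).
        cop = np.cumsum(np.random.randn(3000, 2) * 0.01, axis=0)
        print(sway_metrics(cop))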

  10. GazeParser: an open-source and multiplatform library for low-cost eye tracking and analysis.

    PubMed

    Sogo, Hiroyuki

    2013-09-01

    Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported on performance tests of GazeParser. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results showed that the GazeParser demonstrates adequate performance for use in psychological experiments.
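    The performance tests above compare saccade latency and amplitude detected from recorded gaze samples. As a generic illustration of that kind of analysis (it does not use or reproduce GazeParser's actual API; the velocity threshold and sampling rate are assumptions), a simple velocity-threshold saccade detector can be written as follows.

        import numpy as np

        def detect_saccades(x_deg, y_deg, sample_rate_hz=60, velocity_thresh_deg_s=30.0):
            """Detect saccades with a simple velocity threshold.
            x_deg, y_deg : gaze position in degrees of visual angle, one value per sample.
            Returns a list of (onset_index, offset_index, amplitude_deg) tuples."""
            vx = np.gradient(x_deg) * sample_rate_hz
            vy = np.gradient(y_deg) * sample_rate_hz
            fast = np.hypot(vx, vy) > velocity_thresh_deg_s
            saccades, onset = [], None
            for i, f in enumerate(fast):
                if f and onset is None:
                    onset = i
                elif not f and onset is not None:
                    amp = np.hypot(x_deg[i] - x_deg[onset], y_deg[i] - y_deg[onset])
                    saccades.append((onset, i, amp))
                    onset = None
            return saccades

        # Hypothetical 1 s recording at 60 Hz containing one 10-degree rightward saccade.
        t = np.arange(60)
        x = np.where(t < 30, 0.0, 10.0) + np.random.randn(60) * 0.05
        y = np.random.randn(60) * 0.05
        print(detect_saccades(x, y))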

  11. Automatic attentional orienting to other people's gaze in schizophrenia.

    PubMed

    Langdon, Robyn; Seymour, Kiley; Williams, Tracey; Ward, Philip B

    2017-08-01

    Explicit tests of social cognition have revealed pervasive deficits in schizophrenia. Less is known of automatic social cognition in schizophrenia. We used a spatial orienting task to investigate automatic shifts of attention cued by another person's eye gaze in 29 patients and 28 controls. Central photographic images of a face with eyes shifted left or right, or looking straight ahead, preceded targets that appeared left or right of the cue. To examine automatic effects, cue direction was non-predictive of target location. Cue-target intervals were 100, 300, and 800 ms. In non-social control trials, arrows replaced eye-gaze cues. Both groups showed automatic attentional orienting indexed by faster reaction times (RTs) when arrows were congruent with target location across all cue-target intervals. Similar congruency effects were seen for eye-shift cues at 300 and 800 ms intervals, but patients showed significantly larger congruency effects at 800 ms, which were driven by delayed responses to incongruent target locations. At short 100-ms cue-target intervals, neither group showed faster RTs for congruent than for incongruent eye-shift cues, but patients were significantly slower to detect targets after direct-gaze cues. These findings conflict with previous studies using schematic line drawings of eye-shifts that have found automatic attentional orienting to be reduced in schizophrenia. Instead, our data indicate that patients display abnormalities in responding to gaze direction at various stages of gaze processing-reflected by a stronger preferential capture of attention by another person's direct eye contact at initial stages of gaze processing and difficulties disengaging from a gazed-at location once shared attention is established.
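    For readers unfamiliar with how such congruency effects are computed, the sketch below derives the mean reaction-time difference between incongruent and congruent trials at each cue-target interval from trial-level data. The data and numbers are hypothetical, chosen only to mimic the pattern of a larger effect at the 800 ms interval.

        import numpy as np

        def congruency_effect(rt_ms, congruent, soa_ms):
            """Mean RT difference (incongruent minus congruent) per cue-target interval.
            rt_ms     : reaction times in milliseconds, one per trial.
            congruent : boolean array, True when cue direction matched target location.
            soa_ms    : cue-target interval for each trial (e.g. 100, 300, 800)."""
            rt_ms, congruent, soa_ms = map(np.asarray, (rt_ms, congruent, soa_ms))
            effects = {}
            for soa in np.unique(soa_ms):
                sel = soa_ms == soa
                effects[int(soa)] = (rt_ms[sel & ~congruent].mean()
                                     - rt_ms[sel & congruent].mean())
            return effects

        # Hypothetical trial set with a larger congruency effect at the 800 ms interval.
        rng = np.random.default_rng(1)
        soa = np.repeat([100, 300, 800], 40)
        cong = np.tile([True, False], 60)
        rt = (350 + rng.normal(0, 20, 120)
              + np.where(~cong & (soa == 800), 35, 0)
              + np.where(~cong & (soa == 300), 15, 0))
        print(congruency_effect(rt, cong, soa))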

  12. Aberrant face and gaze habituation in fragile x syndrome.

    PubMed

    Bruno, Jennifer Lynn; Garrett, Amy S; Quintin, Eve-Marie; Mazaika, Paul K; Reiss, Allan L

    2014-10-01

    The authors sought to investigate neural system habituation to face and eye gaze in fragile X syndrome, a disorder characterized by eye-gaze aversion, among other social and cognitive deficits. Participants (ages 15-25 years) were 30 individuals with fragile X syndrome (females, N=14) and a comparison group of 25 individuals without fragile X syndrome (females, N=12) matched for general cognitive ability and autism symptoms. Functional MRI (fMRI) was used to assess brain activation during a gaze habituation task. Participants viewed repeated presentations of four unique faces with either direct or averted eye gaze and judged the direction of eye gaze. Four participants (males, N=4/4; fragile X syndrome, N=3) were excluded because of excessive head motion during fMRI scanning. Behavioral performance did not differ between the groups. Less neural habituation (and significant sensitization) in the fragile X syndrome group was found in the cingulate gyrus, fusiform gyrus, and frontal cortex in response to all faces (direct and averted gaze). Left fusiform habituation in female participants was directly correlated with higher, more typical levels of the fragile X mental retardation protein and inversely correlated with autism symptoms. There was no evidence for differential habituation to direct gaze compared with averted gaze within or between groups. Impaired habituation and accentuated sensitization in response to face/eye gaze was distributed across multiple levels of neural processing. These results could help inform interventions, such as desensitization therapy, which may help patients with fragile X syndrome modulate anxiety and arousal associated with eye gaze, thereby improving social functioning.

  13. Manifold decoding for neural representations of face viewpoint and gaze direction using magnetoencephalographic data.

    PubMed

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2018-05-01

    The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
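    The manifold representation referred to above amounts to embedding high-dimensional spatiotemporal brain responses in a low-dimensional space in which the feature of interest (for example, face viewpoint) varies smoothly. The sketch below illustrates the general idea with scikit-learn's Isomap on simulated trial data; it is not the authors' model, and the dimensionality, neighbourhood size, and simulated signal are assumptions.

        import numpy as np
        from sklearn.manifold import Isomap

        # Hypothetical data: one feature vector per trial (e.g. sensor-by-time responses
        # flattened to 1-D), with trials spanning a range of face viewpoints.
        rng = np.random.default_rng(2)
        n_trials, n_features = 200, 500
        viewpoint = rng.uniform(-60, 60, n_trials)        # degrees, the hidden variable
        signal = np.outer(np.sin(np.deg2rad(viewpoint)), rng.standard_normal(n_features))
        trials = signal + 0.3 * rng.standard_normal((n_trials, n_features))

        # Embed trials into a 2-D manifold; nearby points should share similar viewpoints.
        embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(trials)
        print(embedding.shape)                            # (200, 2)

        # Rough check of whether the first manifold axis orders trials by viewpoint.
        corr = np.corrcoef(embedding[:, 0], np.sin(np.deg2rad(viewpoint)))[0, 1]
        print(f"correlation with viewpoint along first manifold axis: {corr:.2f}")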

  14. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with the modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria, based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, a fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
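    To make the aggregation step concrete, the sketch below applies a Choquet-type fuzzy integral to partial evaluation values of candidate gaze directions over the four criteria named above. The fuzzy measure shown is a simple additive one chosen for illustration; the paper's actual measure, partial evaluation functions, and parameter values are not reproduced here.

        # Criteria used to evaluate each candidate gaze direction (named as in the abstract above).
        CRITERIA = ("map_confidence", "waypoint", "self_localization", "obstacle")

        # Illustrative fuzzy measure: additive weights summed over a subset of criteria.
        # A real fuzzy-integral architecture would typically use a non-additive measure
        # so that interactions between criteria can be expressed.
        WEIGHTS = {"map_confidence": 0.2, "waypoint": 0.3,
                   "self_localization": 0.2, "obstacle": 0.3}

        def fuzzy_measure(subset):
            return sum(WEIGHTS[c] for c in subset)

        def choquet_integral(partial_scores):
            """Global evaluation of one candidate gaze direction via the Choquet integral."""
            ordered = sorted(CRITERIA, key=lambda c: partial_scores[c])   # ascending scores
            total, previous = 0.0, 0.0
            for i, c in enumerate(ordered):
                h = partial_scores[c]
                total += (h - previous) * fuzzy_measure(ordered[i:])      # coalition scoring >= h
                previous = h
            return total

        # Hypothetical partial evaluations in [0, 1] for three candidate gaze directions.
        candidates = {
            "toward_waypoint": {"map_confidence": 0.4, "waypoint": 0.9,
                                "self_localization": 0.5, "obstacle": 0.2},
            "toward_obstacle": {"map_confidence": 0.6, "waypoint": 0.3,
                                "self_localization": 0.4, "obstacle": 0.9},
            "toward_landmark": {"map_confidence": 0.9, "waypoint": 0.2,
                                "self_localization": 0.8, "obstacle": 0.3},
        }
        best = max(candidates, key=lambda name: choquet_integral(candidates[name]))
        print(best)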

  15. Constraining eye movement in individuals with Parkinson's disease during walking turns.

    PubMed

    Ambati, V N Pradeep; Saucedo, Fabricio; Murray, Nicholas G; Powell, Douglas W; Reed-Jones, Rebecca J

    2016-10-01

    Walking and turning is a movement that places individuals with Parkinson's disease (PD) at increased risk for fall-related injury. However, turning is an essential movement in activities of daily living, making up to 45% of the total steps taken in a given day. Hypotheses regarding how turning is controlled suggest an essential role of anticipatory eye movements to provide feedforward information for body coordination. However, little research has investigated control of turning in individuals with PD with specific consideration for eye movements. The purpose of this study was to examine eye movement behavior and body segment coordination in individuals with PD during walking turns. Three experimental groups, a group of individuals with PD, a group of healthy young adults (YAC), and a group of healthy older adults (OAC), performed walking and turning tasks under two visual conditions: free gaze and fixed gaze. Whole-body motion capture and eye tracking characterized body segment coordination and eye movement behavior during walking trials. Statistical analysis revealed significant main effects of group (PD, YAC, and OAC) and visual condition (free and fixed gaze) on timing of segment rotation and horizontal eye movement. Within-group comparisons revealed that the timing of eye and head movement was significantly different between the free and fixed gaze conditions for YAC (p < 0.001) and OAC (p < 0.05), but not for the PD group (p > 0.05). In addition, while intersegment timings (reflecting segment coordination) were significantly different for YAC and OAC during free gaze (p < 0.05), they were not significantly different in PD. These results suggest that individuals with PD do not make anticipatory eye and head movements ahead of turning, and that this may result in altered segment coordination during turning. As such, eye movements may be an important addition to training programs for those with PD, possibly promoting better coordination during turning and potentially reducing the risk of falls.

  16. Sustained attention to the owner is enhanced in dogs trained for animal assisted interventions.

    PubMed

    Mongillo, Paolo; Pitteri, Elisa; Marinelli, Lieta

    2017-07-01

    Adaptation in human societies requires dogs to pay attention to socially relevant human beings, in contexts that may greatly vary in social complexity. In turn, such selective attention may depend on the dog's training and involvement in specific activities. Therefore, we recruited untrained pet dogs (N=32), dogs trained for agility (N=32) and for animal assisted interventions (N=32) to investigate differences in attention to the owner in relation to the dogs' training/working experience. Average gaze length and frequency of gaze shifting towards the owner were measured in a 'baseline attention test', where dogs were exposed to the owner walking in and out of the experimental room and in a 'selective attention test', where the owner's movements were mirrored by an unfamiliar figurant. In baseline, gazes to the owner by assistance dogs were longer than gazes by untrained dogs, which were longer than gazes by agility dogs. The latter shifted gaze to the owner more frequently than assistance and untrained dogs. In the selective attention test, assistance dogs showed longer and less frequent gazes towards the owner than untrained dogs, with intermediate values for agility dogs. Correlations were found for gaze length between the baseline and selective attention test for untrained and assistance dogs, but not for agility dogs. Therefore, dogs trained for Animal Assisted Interventions express enhanced sustained attention to their owners, and the lack of similar effects in agility dogs suggests that involvement in specific activities is associated with large differences in the patterns of attention paid by dogs to their handler/owner. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    PubMed Central

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    A brain-machine interface (BMI) can be used to control a robotic arm to assist paralyzed people in performing activities of daily living. However, controlling the process of grasping and lifting objects with the robotic arm remains a complex task for BMI users, and it is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG) signal-based BMI and eye tracking for intuitive and effective control of the robotic arm. Experiments involving object manipulation tasks while avoiding an obstacle in the workspace were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) were verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects was reduced significantly with AR feedback, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to trials with normal visual inspection only. The results reveal that hybrid Gaze-BMI users can benefit from the information provided by the AR interface, improving efficiency and reducing the cognitive load during the grasping and lifting processes. PMID:29163123

  18. The Expressive Gaze Model: Using Gaze to Express Emotion

    DTIC Science & Technology

    2010-07-01

    World of Warcraft or Oblivion, have thousands of computer-controlled nonplayer characters with which users can interact. Producing hand-generated...increasing to the right and the vertical increasing upward. In both cases, 0 degrees is straight ahead. Although the mechanical limits of human eye...to gaze from a target directly in front of her to one 60 degrees to her right, while performing these behaviors in a manner that expressed the de

  19. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  20. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    PubMed Central

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  1. Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation

    PubMed Central

    Wirth, Sylvia; Baraduc, Pierre; Planté, Aurélie; Pinède, Serge; Duhamel, Jean-René

    2017-01-01

    To elucidate how gaze informs the construction of mental space during wayfinding in visual species like primates, we jointly examined navigation behavior, visual exploration, and hippocampal activity as macaque monkeys searched a virtual reality maze for a reward. Cells sensitive to place also responded to one or more variables like head direction, point of gaze, or task context. Many cells fired at the sight (and in anticipation) of a single landmark in a viewpoint- or task-dependent manner, simultaneously encoding the animal’s logical situation within a set of actions leading to the goal. Overall, hippocampal activity was best fit by a fine-grained state space comprising current position, view, and action contexts. Our findings indicate that counterparts of rodent place cells in primates embody multidimensional, task-situated knowledge pertaining to the target of gaze, therein supporting self-awareness in the construction of space. PMID:28241007

  2. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    PubMed

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
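    The cross-correlational analysis mentioned above examines how one participant's gaze signal leads or lags the other's speech signal. The sketch below computes a lagged correlation between two binary time series; the sampling rate, lag window, and toy signals are assumptions intended only to show the shape of the analysis.

        import numpy as np

        def lagged_cross_correlation(gaze, speech, max_lag):
            """Normalized cross-correlation between two time series sampled at the same rate
            (e.g. 1 = gazing at partner / speaking, 0 otherwise). Positive lags mean the gaze
            signal leads the speech signal. Returns (lags, correlations)."""
            g = (gaze - gaze.mean()) / gaze.std()
            s = (speech - speech.mean()) / speech.std()
            lags = np.arange(-max_lag, max_lag + 1)
            corr = np.array([np.corrcoef(g[max(0, -l): len(g) - max(0, l)],
                                         s[max(0, l): len(s) - max(0, -l)])[0, 1]
                             for l in lags])
            return lags, corr

        # Hypothetical 60 s of data at 10 Hz: gaze toward the partner tends to precede speech.
        rng = np.random.default_rng(3)
        speech = (np.sin(np.linspace(0, 12 * np.pi, 600)) > 0).astype(float)
        gaze = np.roll(speech, -5) * (rng.random(600) > 0.2)    # gaze leads speech by ~0.5 s
        lags, corr = lagged_cross_correlation(gaze, speech, max_lag=20)
        print(lags[np.argmax(corr)])     # lag (in samples) with the strongest coupling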

  3. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    PubMed

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneously visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over VC and AU approaches. Questionnaires' results indicate that the HVA approach was the less demanding gaze-independent interface. Interestingly, the P300 grand average for HVA approach coincides with an almost perfect sum of P300 evoked separately by VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with state-of-the-art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Full-Body Gaze Control Mechanisms Elicited During Locomotion: Effects Of VOR Adaptation

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Houser, J.; Peters, B.; Miller, C.; Richards, J.; Marshburn, A.; Brady, R.; Cohen, H.; Bloomberg, J. J.

    2004-01-01

    Control of locomotion requires precise interaction between several sensorimotor subsystems. During locomotion the performer must satisfy two performance criteria: to maintain stable forward translation and to stabilize gaze (McDonald et al., 1997). Precise coordination demands integration of multiple sensorimotor subsystems for fulfilling both criteria. In order to test the general hypothesis that the whole body can serve as an integrated gaze stabilization system, we have previously investigated how the multiple, interdependent full-body sensorimotor subsystems respond to changes in gaze stabilization task constraints during locomotion (Mulavara and Bloomberg, 2003). The results suggest that the full body contributes to gaze stabilization during locomotion, and that its different functional elements respond to changes in visual task constraints. The goal of this study was to determine how the multiple, interdependent, full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated after the vestibulo-ocular reflex (VOR) gain has been altered. We investigated the potential of adaptive remodeling of the full-body gaze control system following exposure to visual-vestibular conflict known to adaptively reduce the VOR. Subjects (n=14) walked (6.4 km/h) on the treadmill before and after they were exposed to 0.5X minifying lenses worn for 30 minutes during self-generated sinusoidal vertical head rotations performed while seated. In this study we measured: temporal parameters of gait, full body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. Results indicate that, following exposure to the 0.5X minifying lenses, there was a significant increase in the duration of stance and stride times, alteration in the amplitude of head movement with respect to space and a significant increase in the amount of knee flexion during the initial stance phase of the gait cycle. This study provides further evidence that the full body contributes to gaze stabilization during locomotion, and that different functional elements are responsive to changes in visual task constraints and are subject to adaptive alteration following exposure to visual-vestibular conflict.

  5. Oxytocin Promotes Facial Emotion Recognition and Amygdala Reactivity in Adults with Asperger Syndrome

    PubMed Central

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-01-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301

  6. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    PubMed

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  7. The mesencephalic reticular formation as a conduit for primate collicular gaze control: tectal inputs to neurons targeting the spinal cord and medulla.

    PubMed

    Perkins, Eddie; Warren, Susan; May, Paul J

    2009-08-01

    The superior colliculus (SC), which directs orienting movements of both the eyes and head, is reciprocally connected to the mesencephalic reticular formation (MRF), suggesting the latter is involved in gaze control. The MRF has been provisionally subdivided to include a rostral portion, which subserves vertical gaze, and a caudal portion, which subserves horizontal gaze. Both regions contain cells projecting downstream that may provide a conduit for tectal signals targeting the gaze control centers which direct head movements. We determined the distribution of cells targeting the cervical spinal cord and rostral medullary reticular formation (MdRF), and investigated whether these MRF neurons receive input from the SC by the use of dual tracer techniques in Macaca fascicularis monkeys. Either biotinylated dextran amine or Phaseolus vulgaris leucoagglutinin was injected into the SC. Wheat germ agglutinin conjugated horseradish peroxidase was placed into the ipsilateral cervical spinal cord or medial MdRF to retrogradely label MRF neurons. A small number of medially located cells in the rostral and caudal MRF were labeled following spinal cord injections, and greater numbers were labeled in the same region following MdRF injections. In both cases, anterogradely labeled tectoreticular terminals were observed in close association with retrogradely labeled neurons. These close associations between tectoreticular terminals and neurons with descending projections suggest the presence of a trans-MRF pathway that provides a conduit for tectal control over head orienting movements. The medial location of these reticulospinal and reticuloreticular neurons suggests this MRF region may be specialized for head movement control. (c) 2009 Wiley-Liss, Inc.

  8. Eye Movement in Response to Single and Multiple Targets

    DTIC Science & Technology

    1985-02-01

    pursuit control system. METHOD The SVFB technique was described in detail elsewhere (Zeevi et al., 1979). Displaying, to the subject, the point of gaze, in... The subject was presented with his point of gaze using the unconditioned SVFB signal (gain = 1, eccentric bias = 0). The SVFB signal was locked on the...superimposing the SVFB on the target, is gazing away from it and thus achieves eccentric fixation (Zeevi et al., 1979). As the subject moves from one

  9. The response of guide dogs and pet dogs (Canis familiaris) to cues of human referential communication (pointing and gaze).

    PubMed

    Ittyerah, Miriam; Gaunet, Florence

    2009-03-01

    The study raises the question of whether guide dogs and pet dogs are expected to differ in response to cues of referential communication given by their owners, especially since guide dogs grow up among sighted humans and, while living with their blind owners, still have interactions with several sighted people. Guide dogs and pet dogs were required to respond to point, point-and-gaze, gaze, and control cues of referential communication given by their owners. Results indicate that the two groups of dogs do not differ from each other, revealing that the visual status of the owner is not a factor in the use of cues of referential communication. Both groups of dogs have higher frequencies of performance and faster latencies for the point and the point-and-gaze cues as compared to the gaze cue only. However, responses to control cues are below chance performance for the guide dogs, whereas the pet dogs perform at chance. The below-chance performance of the guide dogs may be explained by a tendency among them to go and stand by the owner. The study indicates that both groups of dogs respond similarly in normal daily dyadic interaction with their owners, and that human gaze, being less well comprehended, may be a less salient cue for dogs than the pointing gesture.

  10. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the highest-accuracy biometrics. However, its accuracy relies on controlled, high-quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
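    The final matching step described above (comparing observed boundary features against a model-generated look-up table and taking the closest entry) can be illustrated with a nearest-neighbour search. The features and table contents in the sketch below are toy assumptions and do not reproduce the authors' eye model.

        import numpy as np

        def estimate_gaze_from_lut(observed_features, lut_features, lut_angles_deg):
            """Nearest-neighbour gaze estimation against a model-generated look-up table.
            observed_features : 1-D array of boundary features from the captured image
                                (e.g. iris ellipse axis ratio and orientation).
            lut_features      : (n_entries, n_features) table built by rendering an eye
                                model over a range of known gaze angles.
            lut_angles_deg    : (n_entries,) gaze angle associated with each table row."""
            distances = np.linalg.norm(lut_features - observed_features, axis=1)
            return lut_angles_deg[np.argmin(distances)]

        # Toy look-up table: iris axis ratio shrinks roughly with the cosine of the gaze angle.
        angles = np.linspace(0, 50, 101)                        # degrees off-axis
        lut = np.column_stack([np.cos(np.deg2rad(angles)),      # minor/major axis ratio
                               np.deg2rad(angles)])             # ellipse orientation proxy
        observed = np.array([np.cos(np.deg2rad(27.0)), np.deg2rad(27.0)])
        print(estimate_gaze_from_lut(observed, lut, angles))    # ~27 degrees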

  11. Neurocognitive mechanisms behind emotional attention: Inverse effects of anodal tDCS over the left and right DLPFC on gaze disengagement from emotional faces.

    PubMed

    Sanchez-Lopez, Alvaro; Vanderhasselt, Marie-Anne; Allaert, Jens; Baeken, Chris; De Raedt, Rudi

    2018-06-01

    Attention to relevant emotional information in the environment is an important process related to vulnerability and resilience for mood and anxiety disorders. In the present study, the effects of left and right dorsolateral prefrontal cortex (DLPFC) stimulation on attentional mechanisms of emotional processing were tested and contrasted. A sample of 54 healthy participants received 20 min of active and sham anodal transcranial direct current stimulation (tDCS) of either the left (n = 27) or the right DLPFC (n = 27) on two separate days. The anode electrode was placed over the left or the right DLPFC, the cathode over the corresponding contralateral supraorbital area. After each neurostimulation session, participants completed an eye-tracking task assessing direct processes of attentional engagement towards and attentional disengagement away from emotional faces (happy, disgusted, and sad expressions). Compared to sham, active tDCS over the left DLPFC led to faster gaze disengagement, whereas active tDCS over the right DLPFC led to slower gaze disengagement from emotional faces. Between-group comparisons showed that such inverse change patterns were significantly different and generalized for all types of emotion. Our findings support a lateralized role of left and right DLPFC activity in enhancing/worsening the top-down regulation of emotional attention processing. These results support the rationale of new therapies for affective disorders aimed at increasing the activation of the left over the right DLPFC in combination with attentional control training, and identify specific target attention mechanisms to be trained.

  12. The impact of visual gaze direction on auditory object tracking.

    PubMed

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.

  13. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gaze in front of large public displays. The aim of our work is to estimate at which area of a display one or more people are looking at a time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead-mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.
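    The last stage of such a prototype, assigning an estimated gaze point to one of four display quarters, is straightforward once earlier stages provide a gaze point in display coordinates. The sketch below shows that mapping; the coordinate convention and display size are assumptions.

        def gaze_quarter(gaze_x, gaze_y, display_width, display_height):
            """Map a gaze point in display coordinates (origin top-left, pixels)
            to one of four display quarters, or None if gaze is off the display."""
            if not (0 <= gaze_x < display_width and 0 <= gaze_y < display_height):
                return None
            right = gaze_x >= display_width / 2
            bottom = gaze_y >= display_height / 2
            return {(False, False): "top-left", (True, False): "top-right",
                    (False, True): "bottom-left", (True, True): "bottom-right"}[(right, bottom)]

        # Example: a 1920x1080 display with gaze estimated at pixel (1500, 300).
        print(gaze_quarter(1500, 300, 1920, 1080))   # "top-right"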

  14. Effects of galvanic skin response feedback on user experience in gaze-controlled gaming: A pilot study.

    PubMed

    Larradet, Fanny; Barresi, Giacinto; Mattos, Leonardo S

    2017-07-01

    Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload - using gaze-based controls for an extensive period of time can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to its aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in the perceived session time duration, with the latter being shorter for subjects in BF than for the ones in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of user's engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.
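    One simple way to turn voluntary relaxation into a discrete command, in the spirit of the BF condition described above, is to fire a trigger when smoothed skin conductance stays below a fraction of a running baseline for a sustained period. The sketch below implements such a rule; all thresholds, window lengths, and signal values are assumptions rather than the study's parameters.

        import numpy as np

        def relaxation_triggers(gsr_microsiemens, sample_rate_hz=10,
                                baseline_s=30, drop_fraction=0.95, hold_s=2):
            """Indices at which a relaxation-based command would fire: skin conductance
            must stay below drop_fraction of the running baseline for hold_s seconds."""
            gsr = np.asarray(gsr_microsiemens, dtype=float)
            win = int(baseline_s * sample_rate_hz)
            hold = int(hold_s * sample_rate_hz)
            triggers, below = [], 0
            for i in range(win, len(gsr)):
                baseline = gsr[i - win:i].mean()
                below = below + 1 if gsr[i] < drop_fraction * baseline else 0
                if below == hold:
                    triggers.append(i)
                    below = 0                    # reset after each command
            return triggers

        # Hypothetical 3-minute session at 10 Hz with a gradual relaxation after 90 s.
        t = np.arange(1800) / 10.0
        gsr = 8.0 + 0.1 * np.random.randn(1800) - 0.5 * (t > 90) * np.minimum((t - 90) / 30, 1)
        print(relaxation_triggers(gsr))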

  15. Neural Mechanisms Underlying Conscious and Unconscious Gaze-Triggered Attentional Orienting in Autism Spectrum Disorder

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshimura, Sayaka; Toichi, Motomi

    2017-01-01

    Impaired joint attention represents the core clinical feature of autism spectrum disorder (ASD). Behavioral studies have suggested that gaze-triggered attentional orienting is intact in response to supraliminally presented eyes but impaired in response to subliminally presented eyes in individuals with ASD. However, the neural mechanisms underlying conscious and unconscious gaze-triggered attentional orienting remain unclear. We investigated this issue in ASD and typically developing (TD) individuals using event-related functional magnetic resonance imaging. The participants viewed cue stimuli of averted or straight eye gaze direction presented either supraliminally or subliminally and then localized a target. Reaction times were shorter when eye-gaze cues were directionally valid compared with when they were neutral under the supraliminal condition in both groups; the same pattern was found in the TD group but not the ASD group under the subliminal condition. The temporo–parieto–frontal regions showed stronger activation in response to averted eyes than to straight eyes in both groups under the supraliminal condition. The left amygdala was more activated while viewing averted vs. straight eyes in the TD group than in the ASD group under the subliminal condition. These findings provide an explanation for the neural mechanisms underlying the impairment in unconscious but not conscious gaze-triggered attentional orienting in individuals with ASD and suggest possible neurological and behavioral interventions to facilitate their joint attention behaviors. PMID:28701942

  16. Gaze pursuit responses in nucleus reticularis tegmenti pontis of head-unrestrained macaques.

    PubMed

    Suzuki, David A; Betelak, Kathleen F; Yee, Robert D

    2009-01-01

    Eye-head gaze pursuit-related activity was recorded in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. The head was unrestrained in the horizontal plane, and macaques were trained to pursue a moving target either with their head, with the eyes stationary in the orbits, or with their eyes, with their head voluntarily held stationary in space. Head-pursuit-related modulations in rNRTP activity were observed, with some cells exhibiting increases in firing rate with increases in head-pursuit frequency. For many units, this head-pursuit response appeared to saturate at higher frequencies (>0.6 Hz). The response phase relative to peak head-pursuit velocity formed a continuum, containing cells that could encode head-pursuit velocity and those encoding head-pursuit acceleration. The latter cells did not exhibit head position-related activity. Sensitivities were calculated with respect to peak head-pursuit velocity and averaged 1.8 spikes/s/deg/s. Of the cells that were tested for both head- and eye-pursuit-related activity, 86% exhibited responses to both head- and eye-pursuit and therefore carried a putative gaze-pursuit signal. For these gaze-pursuit units, the ratio of head to eye response sensitivities averaged approximately 1.4. Pursuit eccentricity seemed to affect head-pursuit response amplitude even in the absence of a head position response per se. The results indicated that rNRTP is a strong candidate for the source of an active head-pursuit signal that projects to the cerebellum, specifically to the target-velocity and gaze-velocity Purkinje cells that have been observed in vermal lobules VI and VII.

  17. Mirror Neurons of Ventral Premotor Cortex Are Modulated by Social Cues Provided by Others' Gaze.

    PubMed

    Coudé, Gino; Festante, Fabrizia; Cilia, Adriana; Loiacono, Veronica; Bimbi, Marco; Fogassi, Leonardo; Ferrari, Pier Francesco

    2016-03-16

    Mirror neurons (MNs) in the inferior parietal lobule and ventral premotor cortex (PMv) can code the intentions of other individuals using contextual cues. Gaze direction is an important social cue that can be used for understanding the meaning of actions made by other individuals. Here we addressed the issue of whether PMv MNs are influenced by the gaze direction of another individual. We recorded single-unit activity in macaque PMv while the monkey was observing an experimenter performing a grasping action and orienting his gaze either toward (congruent gaze condition) or away (incongruent gaze condition) from a target object. The results showed that one-half of the recorded MNs were modulated by the gaze direction of the human agent. These gaze-modulated neurons were evenly distributed between those preferring a gaze direction congruent with the direction where the grasping action was performed and the others that preferred an incongruent gaze. Whereas the presence of congruent responses is in line with the usual coupling of hand and gaze in both executed and observed actions, the incongruent responses can be explained by the long exposure of the monkeys to this condition. Our results reveal that the representation of observed actions in PMv is influenced by contextual information not only extracted from physical cues, but also from cues endowed with biological or social value. In this study, we present the first evidence showing that social cues modulate MNs in the monkey ventral premotor cortex. These data suggest that there is an integrated representation of other's hand actions and gaze direction at the single neuron level in the ventral premotor cortex, and support the hypothesis of a functional role of MNs in decoding actions and understanding motor intentions. Copyright © 2016 the authors 0270-6474/16/363145-12$15.00/0.

  18. Interaction between gaze and visual and proprioceptive position judgements.

    PubMed

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  19. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    PubMed

    Khoramshahi, Mahdi; Shukla, Ashwini; Raffard, Stéphane; Bardy, Benoît G; Billard, Aude

    2016-01-01

    The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user point of view). 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like/realistic, but also easier to interact with. This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables the humanoid robotic avatar to appear more human-like, and thus increases the user's sense of affiliation.

  20. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions

    PubMed Central

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest. PMID:26309216
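
    As a rough illustration of the kind of cross-correlational analysis described above, the sketch below correlates two binary time series (speaking on/off and direct gaze on/off) over a range of lags to find where they are most strongly related. It is not the authors' analysis code; the 30 Hz sampling rate, signal names, and toy data are assumptions.

```python
# Minimal sketch (not the authors' code): cross-correlating a speaker's
# speech on/off signal with their direct-gaze on/off signal to see at
# which lag the two signals are most strongly related. Signal names and
# the 30 Hz sampling rate are illustrative assumptions.
import numpy as np

def lagged_correlation(speech, gaze, max_lag):
    """Pearson correlation between two binary time series at each lag.

    Positive lags mean the gaze signal is shifted later than speech.
    """
    speech = np.asarray(speech, dtype=float)
    gaze = np.asarray(gaze, dtype=float)
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            s, g = speech[-lag:], gaze[:lag]
        elif lag > 0:
            s, g = speech[:-lag], gaze[lag:]
        else:
            s, g = speech, gaze
        out[lag] = np.corrcoef(s, g)[0, 1]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs = 30                                    # samples per second (assumed)
    speech = (rng.random(600) > 0.5).astype(int)
    # Toy gaze signal that tends to switch on ~10 samples after speech does.
    gaze = np.roll(speech, 10)
    corr = lagged_correlation(speech, gaze, max_lag=2 * fs)
    best = max(corr, key=corr.get)
    print(f"peak correlation at lag {best} samples ({best / fs:.2f} s)")
```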

  1. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    PubMed

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech & Language Therapists.

  2. Visual perception during mirror gazing at one's own face in schizophrenia.

    PubMed

    Caputo, Giovanni B; Ferrucci, Roberta; Bortolomasi, Marco; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2012-09-01

    In normal observers, gazing at one's own face in the mirror for some minutes at a low illumination level triggers the perception of strange faces, a new perceptual illusion that has been named 'strange-face in the mirror'. Subjects see distortions of their own faces, but often they see monsters, archetypical faces, faces of dead relatives, and of animals. We designed this study primarily to compare strange-face apparitions in response to mirror gazing in patients with schizophrenia and healthy controls. The study included 16 patients with schizophrenia and 21 healthy controls. We administered a 7-minute mirror gazing test (MGT). Before the mirror gazing session, all subjects underwent assessment with the Cardiff Anomalous Perception Scale (CAPS). When the 7-minute MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face perceptions. Apparitions of strange faces in the mirror were significantly more intense in schizophrenic patients than in controls. All the following variables were higher in patients than in healthy controls: frequency (p<.005) and cumulative duration of apparitions (p<.009), number and types of strange faces (p<.002), self-evaluation scores on Likert-type scales of apparition strength (p<.03) and of reality of apparitions (p<.001). In schizophrenic patients, these Likert-type scales showed correlations (p<.05) with CAPS total scores. These results suggest that the increase of strange-face apparitions in schizophrenia can be produced by ego dysfunction, by body dysmorphic disorder and by misattribution of self-agency. The MGT may help in completing the standard assessment of patients with schizophrenia, independently of hallucinatory psychopathology. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. Altered attentional and perceptual processes as indexed by N170 during gaze perception in schizophrenia: Relationship with perceived threat and paranoid delusions.

    PubMed

    Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J

    2015-08-01

    Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (c) 2015 APA, all rights reserved).

  4. Gaze shifts during dual-tasking stair descent.

    PubMed

    Miyasike-daSilva, Veronica; McIlroy, William E

    2016-11-01

    To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require minimum adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to require less from executive function, whereas visual attention appears a requirement to detect the last transition via gaze shifts or peripheral vision.

  5. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    PubMed

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less to others' face and gaze, which are crucial epistemic cues that contribute to correct mental states inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution significantly increased with the head looking percentage. When the head looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was not significant anymore. Patients' deficit on this visual ToM paradigm is thus entirely explained by a decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.
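
    The covariate analysis described above can be illustrated with a small ANCOVA-style sketch: test the group effect on attribution performance with and without the head-looking percentage as a covariate. The data below are simulated and all column names and effect sizes are assumptions; this is not the study's dataset or code.

```python
# Minimal sketch (simulated data, not the study's dataset): testing whether
# a group difference in ToM accuracy survives once gaze toward the agent's
# head is entered as a covariate, in the spirit of the analysis described
# above. Column names and effect sizes are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 29  # per group, as in the record

def accuracy(head_pct):
    # In this toy model, ToM accuracy depends on head-looking alone.
    return 0.4 + 0.008 * head_pct + rng.normal(0, 0.05, len(head_pct))

# Patients look less at the agent's head than controls.
head_pat = rng.normal(30, 8, n)      # % of time spent on the agent's head
head_ctl = rng.normal(50, 8, n)

df = pd.DataFrame({
    "group": ["patient"] * n + ["control"] * n,
    "head_pct": np.concatenate([head_pat, head_ctl]),
    "tom_acc": np.concatenate([accuracy(head_pat), accuracy(head_ctl)]),
})

# Group effect without the covariate...
print(smf.ols("tom_acc ~ group", data=df).fit().pvalues["group[T.patient]"])
# ...and with head-looking percentage as a covariate (ANCOVA-style model).
print(smf.ols("tom_acc ~ group + head_pct", data=df).fit().pvalues["group[T.patient]"])
```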

  6. Gaze Behavior in a Natural Environment with a Task-Relevant Distractor: How the Presence of a Goalkeeper Distracts the Penalty Taker

    PubMed Central

    Kurz, Johannes; Hegele, Mathias; Munzert, Jörn

    2018-01-01

    Gaze behavior in natural scenes has been shown to be influenced not only by top–down factors such as task demands and action goals but also by bottom–up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on where to direct the gaze involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by attentional demands of the task with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by salience of the stimulus setting or by attentional demands depends on the phase of the actual task. PMID:29434560

  7. Decline of vertical gaze and convergence with aging.

    PubMed

    Oguro, Hiroaki; Okada, Kazunori; Suyama, Nobuo; Yamashita, Kazuya; Yamaguchi, Shuhei; Kobayashi, Shotai

    2004-01-01

    Disturbance of vertical eye movement and ocular convergence is often observed in elderly people, but little is known about its frequency. The purpose of this study was to investigate age-associated changes in vertical eye movement and convergence in healthy elderly people, using a digital video camera system. We analyzed vertical eye movements and convergence in 113 neurologically normal elderly subjects (mean age 70 years) in comparison with 20 healthy young controls (mean age 32 years). The range of vertical eye movement was analyzed quantitatively and convergence was analyzed qualitatively. In the elderly subjects, the angle of vertical gaze decreased with advancing age and it was significantly smaller than that of the younger subjects. The mean angle of upward gaze was significantly smaller than that of downward gaze for both young and elderly subjects. Upward gaze impairment became apparent in subjects in their 70s, and downward gaze impairment in subjects in their 60s. Disturbance in convergence also increased with advancing age, and was found in 40.7% of the elderly subjects. These findings indicate that the mechanisms of age-related change are different for upward and downward vertical gaze. Digital video camera monitoring was useful for assessing and monitoring eye movements. Copyright 2004 S. Karger AG, Basel

  8. Analysis of Spatial Disorientation Mishaps in the US Navy

    DTIC Science & Technology

    2003-02-01

    optokinetic after-nystagmus (OKAN) and vestibular nystagmus. In: Baker R, Berthoz A, eds. Control of gaze by brain stem neurons, Amsterdam: Elsevier...of explaining by modeling. In: Baker R, Berthoz A, eds. Control of gaze by brain stem neurons, developments in neuroscience, Vol. 1. Amsterdam...Elsevier/North-Holland Biomedical Press, 49-58. Raphan T, Matsuo V, Cohen B. (1977) A velocity storage mechanism responsible for optokinetic nystagmus (OKN

  9. Eye-gaze determination of user intent at the computer interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-12-31

    Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location looking at controlled stimuli, at 30 Hz, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments, based upon these variables. Future work will enhance the accuracy and precision of the modeling technique, and will empirically test users in controlled experiments.
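
    The clustering step described above can be sketched as follows: within one temporal data frame, connect the gaze samples with a minimum spanning tree, cut edges longer than a user-defined threshold, and summarize the resulting clusters. This is an interpretation of the procedure, not the original software; the threshold value and all names are assumptions.

```python
# Minimal sketch (an interpretation of the procedure described above, not
# the original code): within one temporal data frame, connect gaze samples
# with a minimum spanning tree, cut edges longer than a user-defined
# threshold, and summarize the resulting clusters. The threshold value and
# all names are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import connected_components, minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def cluster_gaze_frame(points, cut_distance):
    """Cluster 2D gaze samples by cutting long MST edges.

    points       : (n, 2) array of gaze x/y samples in one data frame
    cut_distance : edges longer than this (pixels) are removed
    returns      : (labels, per-cluster stats)
    """
    dist = squareform(pdist(points))            # full pairwise distances
    mst = minimum_spanning_tree(dist).toarray()
    mst[mst > cut_distance] = 0                  # cut long edges
    adjacency = (mst + mst.T) > 0
    n_clusters, labels = connected_components(adjacency, directed=False)
    stats = []
    for k in range(n_clusters):
        members = points[labels == k]
        stats.append({"size": len(members),
                      "centroid": members.mean(axis=0)})
    return labels, stats

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame = np.vstack([rng.normal((200, 300), 10, (15, 2)),   # one fixation cluster
                       rng.normal((600, 350), 10, (15, 2))])  # another
    labels, stats = cluster_gaze_frame(frame, cut_distance=50.0)
    for s in stats:
        print(s["size"], np.round(s["centroid"], 1))
```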

  10. Gaze Pursuit Responses in Nucleus Reticularis Tegmenti Pontis of Head-Unrestrained Macaques

    PubMed Central

    Suzuki, David A.; Betelak, Kathleen F.; Yee, Robert D.

    2009-01-01

    Eye-head gaze pursuit–related activity was recorded in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. The head was unrestrained in the horizontal plane, and macaques were trained to pursue a moving target either with their head, with the eyes stationary in the orbits, or with their eyes, with their head voluntarily held stationary in space. Head-pursuit–related modulations in rNRTP activity were observed with some cells exhibiting increases in firing rate with increases in head-pursuit frequency. For many units, this head-pursuit response appeared to saturate at higher frequencies (>0.6 Hz). The response phase re:peak head-pursuit velocity formed a continuum, containing cells that could encode head-pursuit velocity and those encoding head-pursuit acceleration. The latter cells did not exhibit head position–related activity. Sensitivities were calculated with respect to peak head-pursuit velocity and averaged 1.8 spikes/s/deg/s. Of the cells that were tested for both head- and eye-pursuit–related activity, 86% exhibited responses to both head- and eye-pursuit and therefore carried a putative gaze-pursuit signal. For these gaze-pursuit units, the ratio of head to eye response sensitivities averaged ∼1.4. Pursuit eccentricity seemed to affect head-pursuit response amplitude even in the absence of a head position response per se. The results indicated that rNRTP is a strong candidate for the source of an active head-pursuit signal that projects to the cerebellum, specifically to the target-velocity and gaze-velocity Purkinje cells that have been observed in vermal lobules VI and VII. PMID:18987125

  11. Toddler learning from video: Effect of matched pedagogical cues.

    PubMed

    Lauricella, Alexis R; Barr, Rachel; Calvert, Sandra L

    2016-11-01

    Toddlers learn about their social world by following visual and verbal cues from adults, but they have difficulty transferring what they see in one context to another (e.g., from a screen to real life). Therefore, it is important to understand how the use of matched pedagogical cues, specifically adult eye gaze and language, influences toddlers' imitation from live and digital presentations. Fifteen- and 18-month-old toddlers (N=123) were randomly assigned to one of four experimental conditions or a baseline control condition. The four experimental conditions differed as a function of the interactive cues (audience gaze with interactive language or object gaze with non-interactive language) and presentation type (live or video). Results indicate that toddlers successfully imitated the task when eye gaze was directed at the object or at the audience, and did so equally well whether the task was demonstrated live or via video. All four experimental conditions performed significantly better than the baseline control, indicating learned behavior. Additionally, results demonstrate that girls attended more to the demonstrations and outperformed the boys on the imitation task. In sum, this study demonstrates that young toddlers can learn from video when the models use matched eye gaze and verbal cues, providing additional evidence for ways in which the transfer deficit effect can be ameliorated. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Reading as Active Sensing: A Computational Model of Gaze Planning in Word Recognition

    PubMed Central

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting. PMID:20577589

  13. Reading as active sensing: a computational model of gaze planning in word recognition.

    PubMed

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.
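
    The active sensing strategy described in the two records above can be illustrated with a toy sketch: given the characters fixated so far, keep only the lexicon entries compatible with them and fixate next the position where the remaining candidates disagree most (highest letter entropy). This is not the authors' model; the lexicon, the entropy criterion, and all names are assumptions.

```python
# Minimal sketch (an illustration of the active-sensing idea, not the
# authors' model): keep the lexicon entries compatible with the characters
# fixated so far and fixate next the position where those candidates
# disagree the most (highest letter entropy). Lexicon and word are toy
# assumptions.
from collections import Counter
from math import log2

LEXICON = ["cane", "casa", "caso", "cosa", "mano", "mare"]  # toy Italian words

def compatible(word, observed):
    return all(word[i] == ch for i, ch in observed.items())

def position_entropy(candidates, pos):
    counts = Counter(w[pos] for w in candidates)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def recognize(true_word, lexicon):
    observed = {}                       # position -> character seen so far
    candidates = list(lexicon)
    while len(candidates) > 1:
        unseen = [i for i in range(len(true_word)) if i not in observed]
        # Fixate the most informative position according to the candidates.
        target = max(unseen, key=lambda i: position_entropy(candidates, i))
        observed[target] = true_word[target]          # a "fixation" reveals it
        candidates = [w for w in candidates if compatible(w, observed)]
    return candidates[0], sorted(observed)

if __name__ == "__main__":
    word, fixated = recognize("caso", LEXICON)
    print(word, "recognized after fixating positions", fixated)
```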

  14. Eye, head, and body coordination during large gaze shifts in rhesus monkeys: movement kinematics and the influence of posture.

    PubMed

    McCluskey, Meaghan K; Cullen, Kathleen E

    2007-04-01

    Coordinated movements of the eye, head, and body are used to redirect the axis of gaze between objects of interest. However, previous studies of eye-head gaze shifts in head-unrestrained primates generally assumed the contribution of body movement to be negligible. Here we characterized eye-head-body coordination during horizontal gaze shifts made by trained rhesus monkeys to visual targets while they sat upright in a standard primate chair and assumed a more natural sitting posture in a custom-designed chair. In both postures, gaze shifts were characterized by the sequential onset of eye, head, and body movements, which could be described by predictable relationships. Body motion made a small but significant contribution to gaze shifts that were 40 degrees or more in amplitude. Furthermore, as gaze shift amplitude increased (40-120 degrees), body contribution and velocity increased systematically. In contrast, peak eye and head velocities plateaued at velocities of approximately 250-300 degrees/s, and the rotation of the eye-in-orbit and head-on-body remained well within the physical limits of ocular and neck motility during large gaze shifts, saturating at approximately 35 and 60 degrees, respectively. Gaze shifts initiated with the eye more contralateral in the orbit were accompanied by smaller body as well as head movements, and amplitudes and velocities were greater when monkeys were seated in the more natural body posture. Taken together, our findings show that body movement makes a predictable contribution to gaze shifts that is systematically influenced by factors such as orbital position and posture. We conclude that body movements are part of a coordinated series of motor events that are used to voluntarily reorient gaze and that these movements can be significant even in a typical laboratory setting. Our results emphasize the need for caution in the interpretation of data from neurophysiological studies of the control of saccadic eye movements and/or eye-head gaze shifts because single neurons can code motor commands to move the body as well as the head and eyes.

  15. Nonverbal Expression in Autism of Asperger Type.

    ERIC Educational Resources Information Center

    Tantam, Digby; And Others

    1993-01-01

    Two experiments evaluated the social interactions of 15 Asperger-type autistic subjects with either normal or schizoid control subjects. Asperger subjects tended to avoid gazing at the interviewer when the interviewer was talking. Results suggest that a lifelong absence of gaze response to social clues including speech may explain some features of…

  16. Perception of direct vs. averted gaze in portrait paintings: An fMRI and eye-tracking study.

    PubMed

    Kesner, Ladislav; Grygarová, Dominika; Fajnerová, Iveta; Lukavský, Jiří; Nekovářová, Tereza; Tintěra, Jaroslav; Zaytseva, Yuliya; Horáček, Jiří

    2018-06-15

    In this study, we use separate eye-tracking measurements and functional magnetic resonance imaging to investigate the neuronal and behavioral response to painted portraits with direct versus averted gaze. We further explored modulatory effects of several painting characteristics (premodern vs modern period, influence of style and pictorial context). In the fMRI experiment, we show that the direct versus averted gaze elicited increased activation in lingual and inferior occipital and the fusiform face area, as well as in several areas involved in attentional and social cognitive processes, especially the theory of mind: angular gyrus/temporo-parietal junction, inferior frontal gyrus and dorsolateral prefrontal cortex. The additional eye-tracking experiment showed that participants spent more time viewing the portrait's eyes and mouth when the portrait's gaze was directed towards the observer. These results suggest that static and, in some cases, highly stylized depictions of human beings in artistic portraits elicit brain activation commensurate with the experience of being observed by a watchful intelligent being. They thus involve observers in implicit inferences of the painted subject's mental states and emotions. We further confirm the substantial influence of representational medium on brain activity. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. A Non-Verbal Turing Test: Differentiating Mind from Machine in Gaze-Based Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons’ gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character’s gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete. PMID:22096599

  18. Techniques used for the analysis of oculometer eye-scanning data obtained from an air traffic control display

    NASA Technical Reports Server (NTRS)

    Crawford, Daniel J.; Burdette, Daniel W.; Capron, William R.

    1993-01-01

    The methodology and techniques used to collect and analyze look-point position data from a real-time ATC display-format comparison experiment are documented. That study compared the delivery precision and controller workload of three final approach spacing aid display formats. Using an oculometer, controller lookpoint position data were collected, associated with gaze objects (e.g., moving aircraft) on the ATC display, and analyzed to determine eye-scan behavior. The equipment involved and algorithms for saving, synchronizing with the ATC simulation output, and filtering the data are described. Target (gaze object) and cross-check scanning identification algorithms are also presented. Data tables are provided of total dwell times, average dwell times, and cross-check scans. Flow charts, block diagrams, file record descriptors, and source code are included. The techniques and data presented are intended to benefit researchers in other studies that incorporate non-stationary gaze objects and oculometer equipment.
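
    A minimal sketch of the dwell-time computation described above: each time-stamped lookpoint sample is associated with the nearest moving gaze object within a capture radius, and dwell time is accumulated per object. This is not the NASA software; the sampling rate, radius, and data layout are assumptions.

```python
# Minimal sketch (not the NASA software described above): associating
# time-stamped lookpoint samples with moving gaze objects by nearest
# position within a capture radius, then accumulating total dwell time per
# object. Sampling rate, radius, and data layout are assumptions.
import numpy as np

def dwell_times(gaze_xy, object_tracks, sample_dt, radius):
    """gaze_xy       : (n, 2) lookpoint position per sample
    object_tracks : dict name -> (n, 2) object position per sample
    sample_dt     : seconds per sample
    radius        : max distance (pixels) for a sample to count as a dwell
    returns       : dict name -> total dwell seconds
    """
    names = list(object_tracks)
    totals = {name: 0.0 for name in names}
    for t, gaze in enumerate(gaze_xy):
        dists = {n: np.linalg.norm(gaze - object_tracks[n][t]) for n in names}
        nearest = min(dists, key=dists.get)
        if dists[nearest] <= radius:
            totals[nearest] += sample_dt
    return totals

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 300                                     # 10 s at an assumed 30 Hz
    aircraft_a = np.column_stack([np.linspace(100, 400, n), np.full(n, 200)])
    aircraft_b = np.column_stack([np.linspace(500, 300, n), np.full(n, 450)])
    gaze = aircraft_a + rng.normal(0, 5, (n, 2))    # mostly following aircraft A
    print(dwell_times(gaze, {"A": aircraft_a, "B": aircraft_b},
                      sample_dt=1 / 30, radius=40.0))
```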

  19. Creepy White Gaze: Rethinking the Diorama as a Pedagogical Activity

    ERIC Educational Resources Information Center

    Sterzuk, Andrea; Mulholland, Valerie

    2011-01-01

    Drawing on gaze and postcolonial theory, this article provides a theoretical discussion of a problematic photograph published in a provincial teachers' newsletter. The photo consists of a White settler child and two White settler educators gathered around his heritage fair entry diorama entitled "Great Plains Indians." This article…

  20. The Disturbance of Gaze in Progressive Supranuclear Palsy: Implications for Pathogenesis

    PubMed Central

    Chen, Athena L.; Riley, David E.; King, Susan A.; Joshi, Anand C.; Serra, Alessandro; Liao, Ke; Cohen, Mark L.; Otero-Millan, Jorge; Martinez-Conde, Susana; Strupp, Michael; Leigh, R. John

    2010-01-01

    Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of 50 patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies. PMID:21188269

  1. Watching Eyes effects: When others meet the self.

    PubMed

    Conty, Laurence; George, Nathalie; Hietanen, Jari K

    2016-10-01

    The perception of direct gaze-that is, of another individual's gaze directed at the observer-is known to influence a wide range of cognitive processes and behaviors. We present a new theoretical proposal to provide a unified account of these effects. We argue that direct gaze first captures the beholder's attention and then triggers self-referential processing, i.e., a heightened processing of stimuli in relation with the self. Self-referential processing modulates incoming information processing and leads to the Watching Eyes effects, which we classify into four main categories: the enhancement of self-awareness, memory effects, the activation of pro-social behavior, and positive appraisals of others. We advance that the belief to be the object of another's attention is embedded in direct gaze perception and gives direct gaze its self-referential power. Finally, we stress that the Watching Eyes effects reflect a positive impact on human cognition; therefore, they may have a therapeutic potential, which future research should delineate. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: first, we construct a new control device for the illuminator, which includes four illuminators that are positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image that is captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape that is defined by the three SR positions and the gaze position is calculated. Experimental results showed that the average gaze detection error with 20 persons was about 0.70° and the processing time is 63.72 ms per each frame. PMID:24473283
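
    The geometric step described above, estimating the unseen specular reflection from the parallelogram formed by the other three, amounts to "adjacent + adjacent - opposite". The sketch below only illustrates that idea and is not the paper's implementation; the corner naming is an assumption.

```python
# Minimal sketch (an illustration of the geometric idea described above,
# not the paper's implementation): the four corneal specular reflections
# form an approximate parallelogram, so a missing corner can be estimated
# from the other three as "adjacent + adjacent - opposite". Corner naming
# is an assumption.
import numpy as np

# Corner order: top-left, top-right, bottom-right, bottom-left.
OPPOSITE = {"TL": "BR", "TR": "BL", "BR": "TL", "BL": "TR"}

def estimate_missing_sr(srs, missing):
    """srs     : dict of the three detected reflections, e.g. {"TL": (x, y), ...}
    missing : name of the corner whose illuminator was switched off
    """
    opposite = OPPOSITE[missing]
    adjacent = [np.asarray(srs[k], float) for k in srs if k != opposite]
    return adjacent[0] + adjacent[1] - np.asarray(srs[opposite], float)

if __name__ == "__main__":
    # Toy reflection coordinates (pixels); a perfect parallelogram would put
    # the missing BL corner at (102, 161).
    detected = {"TL": (100.0, 120.0), "TR": (140.0, 118.0), "BR": (142.0, 159.0)}
    print(estimate_missing_sr(detected, missing="BL"))
```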

  3. Quality control of 3D Geological Models using an Attention Model based on Gaze

    NASA Astrophysics Data System (ADS)

    Busschers, Freek S.; van Maanen, Peter-Paul; Brouwer, Anne-Marie

    2014-05-01

    The Geological Survey of the Netherlands (GSN) produces 3D stochastic geological models of the upper 50 meters of the Dutch subsurface. The voxel models are regarded essential in answering subsurface questions on, for example, aggregate resources, groundwater flow, land subsidence studies and the planning of large-scale infrastructural works such as tunnels. GeoTOP is the most recent and detailed generation of 3D voxel models. This model describes 3D lithological variability up to a depth of 50 m using voxels of 100*100*0.5m. Due to the expected increase in data-flow, model output and user demands, the development of (semi-)automated quality control systems is getting more important in the near future. Besides numerical control systems, capturing model errors as seen from the expert geologist viewpoint is of increasing interest. We envision the use of eye gaze to support and speed up detection of errors in the geological voxel models. As a first step in this direction we explore gaze behavior of 12 geological experts from the GSN during quality control of part of the GeoTOP 3D geological model using an eye-tracker. Gaze is used as input of an attention model that results in 'attended areas' for each individual examined image of the GeoTOP model and each individual expert. We compared these attended areas to errors as marked by the experts using a mouse. Results show that: 1) attended areas as determined from experts' gaze data largely match with GeoTOP errors as indicated by the experts using a mouse, and 2) a substantial part of the match can be reached using only gaze data from the first few seconds of the time geologists spend to search for errors. These results open up the possibility of faster GeoTOP model control using gaze if geologists accept a small decrease of error detection accuracy. Attention data may also be used to make independent comparisons between different geologists varying in focus and expertise. This would facilitate a more effective use of experts in specific different projects or areas. Part of such a procedure could be to confront geological experts with their own results, allowing possible training steps in order to improve their geological expertise and eventually improve the GeoTop model. Besides the directions as indicated above, future research should focus on concrete implementation of facilitating and optimizing error detection in present and future 3D voxel models that are commonly characterized by very large amounts of data.
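
    One plausible way to derive 'attended areas' from gaze and compare them with expert-marked errors, in the spirit of the workflow described above, is to blur fixation locations into a density map, threshold it, and measure overlap with the marked-error mask. The sketch below is not the GSN's attention model; all parameter values, shapes, and names are assumptions.

```python
# Minimal sketch (one plausible way to derive "attended areas" from gaze
# and compare them with expert-marked errors; not the GSN's actual
# attention model): blur fixation locations into a density map, threshold
# it, and measure overlap with a marked-error mask. All parameter values
# are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def attended_area(fixations, shape, sigma=15, keep_fraction=0.2):
    """Return a boolean mask of the top `keep_fraction` of gaze density."""
    density = np.zeros(shape)
    for x, y in fixations:
        xi = int(np.clip(x, 0, shape[1] - 1))
        yi = int(np.clip(y, 0, shape[0] - 1))
        density[yi, xi] += 1
    density = gaussian_filter(density, sigma)
    threshold = np.quantile(density, 1 - keep_fraction)
    return density >= threshold

if __name__ == "__main__":
    shape = (200, 300)                       # toy image of a model cross-section
    rng = np.random.default_rng(4)
    fixations = rng.normal((150, 100), (20, 15), (80, 2))    # (x, y) samples
    gaze_mask = attended_area(fixations, shape)

    error_mask = np.zeros(shape, bool)       # expert's mouse-marked error region
    error_mask[80:120, 120:180] = True

    overlap = (gaze_mask & error_mask).sum() / error_mask.sum()
    print(f"{overlap:.0%} of the marked error falls inside the attended area")
```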

  4. Attentional effects on gaze preference for salient loci in traffic scenes.

    PubMed

    Sakai, Hiroyuki; Shin, Duk; Kohama, Takeshi; Uchiyama, Yuji

    2012-01-01

    Alerting drivers for self-regulation of attention might decrease crash risks attributable to absent-minded driving. However, no reliable method exists for monitoring driver attention. Therefore, we examined attentional effects on gaze preference for salient loci (GPS) in traffic scenes. In an active viewing (AV) condition requiring endogenous attention for traffic scene comprehension, participants identified appropriate speeds for driving in presented traffic scene images. In a passive viewing (PV) condition requiring no endogenous attention, participants passively viewed traffic scene images. GPS was quantified by the mean saliency value averaged across fixation locations. Results show that GPS was less during AV than during PV. Additionally, gaze dwell time on signboards was shorter for AV than for PV. These results suggest that, in the absence of endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. Therefore, increased GPS can indicate absent-minded driving. The present study demonstrated that, without endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. This result suggests that increased gaze preference for salient loci indicates absent-minded driving, which is otherwise difficult to detect.
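
    The GPS metric described above, the mean saliency value averaged across fixation locations, can be sketched directly: sample a saliency map at each fixation and average. The saliency map below is a toy array and the function name is an assumption; in practice the map would come from a saliency model.

```python
# Minimal sketch (illustrating the metric described above, not the study's
# pipeline): gaze preference for salient loci as the mean value of a
# saliency map sampled at the fixation locations. The saliency map here is
# a toy array; in practice it would come from a saliency model.
import numpy as np

def gaze_preference_for_salience(saliency_map, fixations):
    """saliency_map : 2D array, normalized 0..1
    fixations    : iterable of (x, y) pixel coordinates
    """
    h, w = saliency_map.shape
    values = []
    for x, y in fixations:
        xi = int(np.clip(x, 0, w - 1))
        yi = int(np.clip(y, 0, h - 1))
        values.append(saliency_map[yi, xi])
    return float(np.mean(values))

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    saliency = rng.random((480, 640))          # toy saliency map
    fixations_active = rng.uniform((0, 0), (640, 480), (50, 2))
    print("mean saliency at fixations:",
          round(gaze_preference_for_salience(saliency, fixations_active), 3))
```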

  5. Stationary gaze entropy predicts lane departure events in sleep-deprived drivers.

    PubMed

    Shiferaw, Brook A; Downey, Luke A; Westlake, Justine; Stevens, Bronwyn; Rajaratnam, Shantha M W; Berlowitz, David J; Swann, Phillip; Howard, Mark E

    2018-02-02

    Performance decrement associated with sleep deprivation is a leading contributor to traffic accidents and fatalities. While current research has focused on eye blink parameters as physiological indicators of driver drowsiness, little is understood of how gaze behaviour alters as a result of sleep deprivation. In particular, the effect of sleep deprivation on gaze entropy has not been previously examined. In this randomised, repeated measures study, 9 (4 male, 5 female) healthy participants completed two driving sessions in a fully instrumented vehicle (1 after a night of sleep deprivation and 1 after normal sleep) on a closed track, during which eye movement activity and lane departure events were recorded. Following sleep deprivation, the rate of fixations reduced while blink rate and duration as well as saccade amplitude increased. In addition, stationary and transition entropy of gaze also increased following sleep deprivation as well as with amount of time driven. An increase in stationary gaze entropy in particular was associated with higher odds of a lane departure event occurrence. These results highlight how fatigue induced by sleep deprivation and time-on-task effects can impair drivers' visual awareness through disruption of gaze distribution and scanning patterns.
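
    The two entropy measures named above can be sketched with standard formulas: stationary gaze entropy as the Shannon entropy of the distribution of fixations over spatial bins, and transition entropy as the conditional entropy of the next bin given the current one. This is not the study's code; the grid size, screen dimensions, and toy data are assumptions.

```python
# Minimal sketch (standard formulas for the two measures named above, not
# the study's code): stationary gaze entropy is the Shannon entropy of the
# distribution of fixations over spatial bins; gaze transition entropy is
# the conditional entropy of the next bin given the current one. The grid
# size is an assumption.
import numpy as np

def bin_fixations(fixations, screen, grid=(8, 8)):
    xs = np.clip((fixations[:, 0] / screen[0] * grid[0]).astype(int), 0, grid[0] - 1)
    ys = np.clip((fixations[:, 1] / screen[1] * grid[1]).astype(int), 0, grid[1] - 1)
    return ys * grid[0] + xs                    # one state index per fixation

def stationary_entropy(states, n_states):
    p = np.bincount(states, minlength=n_states) / len(states)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def transition_entropy(states, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    p_i = counts.sum(axis=1) / counts.sum()
    h = 0.0
    for i in range(n_states):
        row = counts[i]
        if row.sum() == 0:
            continue
        p_j_given_i = row[row > 0] / row.sum()
        h += p_i[i] * -(p_j_given_i * np.log2(p_j_given_i)).sum()
    return float(h)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    fixations = rng.uniform((0, 0), (1920, 1080), (500, 2))   # toy fixation scan
    states = bin_fixations(fixations, screen=(1920, 1080))
    n = 64
    print("stationary entropy:", round(stationary_entropy(states, n), 2))
    print("transition entropy:", round(transition_entropy(states, n), 2))
```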

  6. Gaze characteristics of elite and near-elite athletes in ice hockey defensive tactics.

    PubMed

    Martell, Stephen G; Vickers, Joan N

    2004-04-01

    Traditional visual search experiments, where the researcher pre-selects video-based scenes for the participant to respond to, shows that elite players make more efficient decisions than non-elites, but disagree on how they temporally regulate their gaze. Using the vision-in-action [J.N. Vickers, J. Exp. Psychol.: Human Percept. Perform. 22 (1996) 342] approach, we tested whether the significant gaze that differentiates elite and non-elite athletes occurred either: early in the task and was of more rapid duration [A.M. Williams et al., Res. Quart. Exer. Sport 65 (1994) 127; A.M. Williams and K. Davids, Res. Quart. Exer. Sport 69 (1998) 111], or late in the task and was of longer duration [W. Helsen, J.M. Pauwels, A cognitive approach to visual search in sport, in: D. Brogan, K. Carr (Eds.), Visual Search, vol. II, Taylor and Francis, London, 1992], or whether a more complex gaze control strategy was used that consisted of both early and rapid fixations followed by a late fixation of long duration prior to the final execution. We tested this using a live defensive zone task in ice hockey. Results indicated that athletes temporally regulated their gaze using two different gaze control strategies. First, fixation/tracking (F/T) gaze early in the trial were significantly shorter than the final F/T and confirmed that the elite group fixated the tactical locations more rapidly than the non-elite on successful plays. And secondly, the final F/T prior to critical movement initiation (i.e. F/T-1) was significantly longer for both groups, averaging 30% of the final part of the phase and occurred as the athletes isolated a single object or location to end the play. The results imply that expertise in defensive tactics is defined by a cascade of F/T, which began with the athletes fixating or tracking specific locations for short durations at the beginning of the play, and concluded with a final gaze of long duration to a relatively stable target at the end. The results are discussed within the context of gaze research in open and closed skills, as well as theoretical models of long-term memory and decision making in sport.

  7. Semantic Preview Benefit in Eye Movements during Reading: A Parafoveal Fast-Priming Study

    ERIC Educational Resources Information Center

    Hohenstein, Sven; Laubrock, Jochen; Kliegl, Reinhold

    2010-01-01

    Eye movements in reading are sensitive to foveal and parafoveal word features. Whereas the influence of orthographic or phonological parafoveal information on gaze control is undisputed, there has been no reliable evidence for early parafoveal extraction of semantic information in alphabetic script. Using a novel combination of the gaze-contingent…

  8. Joint Attention without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects through Eye-Hand Coordination

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151

  9. Identifying cognitive distraction using steering wheel reversal rates.

    PubMed

    Kountouriotis, Georgios K; Spyridakos, Panagiotis; Carsten, Oliver M J; Merat, Natasha

    2016-11-01

    The influence of driver distraction on driving performance is not yet well understood, but it can have detrimental effects on road safety. In this study, we examined the effects of visual and non-visual distractions during driving, using a high-fidelity driving simulator. The visual task was presented either at an offset angle on an in-vehicle screen, or on the back of a moving lead vehicle. Similar to results from previous studies in this area, non-visual (cognitive) distraction resulted in improved lane keeping performance and increased gaze concentration towards the centre of the road, compared to baseline driving, and further examination of the steering control metrics indicated an increase in steering wheel reversal rates, steering wheel acceleration, and steering entropy. We show, for the first time, that when the visual task is presented centrally, drivers' lane deviation reduces (similar to non-visual distraction), whilst measures of steering control, overall, indicated more steering activity, compared to baseline. When using a visual task that required the diversion of gaze to an in-vehicle display, but without a manual element, lane keeping performance was similar to baseline driving. Steering wheel reversal rates were found to adequately tease apart the effects of non-visual distraction (increase of 0.5° reversals) and visual distraction with offset gaze direction (increase of 2.5° reversals). These findings are discussed in terms of steering control during different types of in-vehicle distraction, and the possible role of manual interference by distracting secondary tasks. Copyright © 2016 Elsevier Ltd. All rights reserved.
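
    Steering wheel reversal rate, as used above, can be sketched with a simple gap-based count: a reversal is registered whenever the wheel changes rotation direction by more than a gap threshold, expressed per minute of driving. This is one common formulation, not necessarily the study's exact algorithm; the 0.5° and 2.5° gaps follow the abstract, while the signal and sampling rate below are toy assumptions.

```python
# Minimal sketch (one common way to compute the measure discussed above,
# not the study's exact algorithm): count steering wheel reversals as
# changes of rotation direction whose amplitude exceeds a gap threshold,
# and express them per minute of driving. Threshold values follow the
# 0.5 deg / 2.5 deg distinction mentioned in the abstract.
import numpy as np

def reversal_rate(angle_deg, fs, gap_deg):
    """Reversals per minute of driving (simple gap-based counting)."""
    reversals = 0
    direction = +1                  # assumed initial direction (rightward)
    extreme = angle_deg[0]          # most extreme angle reached in that direction
    for a in angle_deg[1:]:
        if direction == +1:
            if a > extreme:
                extreme = a
            elif a < extreme - gap_deg:    # wheel came back by more than the gap
                reversals += 1
                direction, extreme = -1, a
        else:
            if a < extreme:
                extreme = a
            elif a > extreme + gap_deg:
                reversals += 1
                direction, extreme = +1, a
    minutes = len(angle_deg) / fs / 60.0
    return reversals / minutes

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    t = np.arange(0, 120, 1 / 60)                        # 2 min at 60 Hz
    angle = 3 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.2, len(t))
    print("0.5 deg reversals/min:", round(reversal_rate(angle, 60, 0.5), 1))
    print("2.5 deg reversals/min:", round(reversal_rate(angle, 60, 2.5), 1))
```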

  10. The attracting power of the gaze of politicians is modulated by the personality and ideological attitude of their voters: a functional magnetic resonance imaging study.

    PubMed

    Cazzato, Valentina; Liuzza, Marco Tullio; Caprara, Gian Vittorio; Macaluso, Emiliano; Aglioti, Salvatore Maria

    2015-10-01

    Observing someone rapidly moving their eyes induces reflexive shifts of overt and covert attention in the onlooker. Previous studies have shown that this process can be modulated by the onlooker's personality, as well as by the social features of the person depicted in the cued face. Here, we investigated whether an individual's preference for social dominance orientation, in-group perceived similarity (PS), and political affiliation of the cued-face modulated neural activity within specific nodes of the social attention network. During functional magnetic resonance imaging, participants were requested to perform a gaze-following task to investigate whether the directional gaze of various Italian political personages might influence the oculomotor behaviour of in-group or out-group voters. After scanning, we acquired measures of PS in personality traits with each political personage and preference for social dominance orientation. Behavioural data showed that higher gaze interference for in-group than out-group political personages was predicted by a higher preference for social hierarchy. Higher blood oxygenation level-dependent activity in incongruent vs. congruent conditions was found in areas associated with orienting to socially salient events and monitoring response conflict, namely the left frontal eye field, right supramarginal gyrus, mid-cingulate cortex and left anterior insula. Interestingly, higher ratings of PS with the in-group and less preference for social hierarchy predicted increased activity in the left frontal eye field during distracting gaze movements of in-group as compared with out-group political personages. Our results suggest that neural activity in the social orienting circuit is modulated by higher-order social dimensions, such as in-group PS and individual differences in ideological attitudes. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  11. Virtual social interactions in social anxiety--the impact of sex, gaze, and interpersonal distance.

    PubMed

    Wieser, Matthias J; Pauli, Paul; Grosseibl, Miriam; Molzow, Ina; Mühlberger, Andreas

    2010-10-01

    In social interactions, interpersonal distance between interaction partners plays an important role in determining the status of the relationship. Interpersonal distance is an important nonverbal behavior, and is used to regulate personal space in a complex interplay with other nonverbal behaviors such as eye gaze. In social anxiety, studies regarding the impact of interpersonal distance on within-situation avoidance behavior are so far rare. Thus, the present study aimed to scrutinize the relationship between gaze direction, sex, interpersonal distance, and social anxiety in social interactions. Social interactions were modeled in a virtual-reality (VR) environment, where 20 low and 19 high socially anxious women were confronted with approaching male and female characters, who stopped in front of the participant, either some distance away or close to them, and displayed either a direct or an averted gaze. Gaze and head movements, as well as heart rate, were measured as indices of avoidance behavior and fear reactions. High socially anxious participants showed a complex pattern of avoidance behavior: when the avatar was standing farther away, high socially anxious women avoided gaze contact with male avatars showing a direct gaze. Furthermore, they showed avoidance behavior (backward head movements) in response to male avatars showing a direct gaze, regardless of the interpersonal distance. Overall, the current study demonstrated that VR social interactions can be a useful tool for investigating avoidance behavior of socially anxious individuals in highly controlled situations. This might also be the first step in using VR social interactions in clinical protocols for the therapy of social anxiety disorder.

  12. Eye gazing direction inspection based on image processing technique

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Song, Yong

    2005-02-01

    According to research results in neural biology, human eyes obtain high resolution only at the center of the field of view. In the research of a Virtual Reality helmet, we aim to detect the gazing direction of human eyes in real time and feed it back to the control system to improve the resolution of the graph at the center of the field of view. Given current display instruments, this method can accommodate both the field of view of the virtual scene and its resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of human eyes rapidly and exactly is the basis of realizing the design scheme of this novel VR helmet. In this paper, the conventional method of gazing direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, this paper proposes a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with gazing direction. With the aid of these changes, by analyzing the images of the eyes captured by the cameras, the gazing direction of human eyes can be determined. Experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm carries out the detection of gazing direction directly on normal eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
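
    The approach described infers gaze from pupil position within the eye image rather than from a Purkinje reflection. Below is a minimal sketch of one common way to locate the pupil centre, assuming a cropped grayscale eye image and an OpenCV threshold-and-centroid pipeline; the threshold value and image path are placeholders, and this is not the authors' algorithm.

    ```python
    import cv2

    def pupil_centre(eye_gray, dark_thresh=40):
        """Estimate the pupil centre of a cropped grayscale eye image.

        The pupil is assumed to be the largest dark blob; its centroid is
        taken as the pupil centre, which can then be related to gaze direction.
        """
        blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
        # dark pixels (pupil) become foreground after inverted thresholding
        _, mask = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)
        # OpenCV 4.x return signature: (contours, hierarchy)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        pupil = max(contours, key=cv2.contourArea)
        m = cv2.moments(pupil)
        if m["m00"] == 0:
            return None
        return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) in pixels

    if __name__ == "__main__":
        img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
        if img is not None:
            print(pupil_centre(img))
    ```

    A calibration step mapping pupil centre offsets to screen coordinates would still be needed to turn this centroid into a gaze direction.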

  13. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

    PubMed Central

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems), conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to select a specific region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof-of-concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading. PMID:28380004
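
    The gaze-contingent artificial scotoma described above is, in essence, a mask redrawn at the current gaze position on every display frame. The following is a minimal sketch of that masking step, assuming gaze coordinates in pixels and a known pixels-per-degree conversion; the names and values are placeholders, not the authors' implementation.

    ```python
    import numpy as np

    def apply_scotoma(frame, gaze_xy, diameter_deg=10.0, px_per_deg=35.0):
        """Overlay a circular gray mask (simulated scotoma) centred on gaze.

        `frame` is an H x W x 3 uint8 image; `gaze_xy` is the current gaze
        position in pixel coordinates from the eye tracker.
        """
        h, w = frame.shape[:2]
        radius_px = 0.5 * diameter_deg * px_per_deg
        ys, xs = np.ogrid[:h, :w]
        gx, gy = gaze_xy
        inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius_px ** 2
        masked = frame.copy()
        masked[inside] = 128  # uniform gray disc hides foveal content
        return masked
    ```

    In a real gaze-contingent display, the critical engineering constraint is end-to-end latency: the mask must be updated fast enough that the simulated scotoma stays locked to the eye during saccades.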

  14. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    PubMed

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information to perceive and interpret their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact. In such studies, however, gaze and body direction were changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruent signals increase the demands on attentional resources in the observer. For the facial muscle response, only the reaction of muscle zygomaticus major revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source for increased emotional responding and attentional allocation, the present results indicate that direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Horizontal gaze nystagmus: a review of vision science and application issues.

    PubMed

    Rubenzer, Steven J; Stevenson, Scott B

    2010-03-01

    The Horizontal Gaze Nystagmus (HGN) test is one component of the Standardized Field Sobriety Test battery. This article reviews the literature on smooth pursuit eye movement and gaze nystagmus with a focus on normative responses, the influence of alcohol on these behaviors, and stimulus conditions similar to those used in the HGN sobriety test. Factors such as age, stimulus and background conditions, medical conditions, prescription medications, and psychiatric disorders were found to affect the smooth pursuit phase of HGN. Much less literature is available for gaze nystagmus, but onset of nystagmus may occur in some sober subjects at 45 degrees or less. We conclude that HGN is limited by large variability in the underlying normative behavior, by methods and testing environments that are often poorly controlled, and by a lack of rigorous validation in laboratory settings.

  16. Look at my poster! Active gaze, preference and memory during a poster session.

    PubMed

    Foulsham, Tom; Kingstone, Alan

    2011-01-01

    In science, as in advertising, people often present information on a poster, yet little is known about attention during a poster session. A mobile eye-tracker was used to record participants' gaze during a mock poster session featuring a range of academic psychology posters. Participants spent the most time looking at introductions and conclusions. Larger posters were looked at for longer, as were posters rated more interesting (but not necessarily more aesthetically pleasing). Interestingly, gaze did not correlate with memory for poster details or liking, suggesting that attracting someone towards your poster may not be enough.

  17. Surgeons display reduced mental effort and workload while performing robotically assisted surgical tasks, when compared to conventional laparoscopy.

    PubMed

    Moore, Lee J; Wilson, Mark R; McGrath, John S; Waine, Elizabeth; Masters, Rich S W; Vine, Samuel J

    2015-09-01

    Research has demonstrated the benefits of robotic surgery for the patient; however, research examining the benefits of robotic technology for the surgeon is limited. This study aimed to adopt validated measures of workload, mental effort, and gaze control to assess the benefits of robotic surgery for the surgeon. We predicted that the performance of surgical training tasks on a surgical robot would require lower investments of workload and mental effort, and would be accompanied by superior gaze control and better performance, when compared to conventional laparoscopy. Thirty-two surgeons performed two trials on a ball pick-and-drop task and a rope-threading task on both robotic and laparoscopic systems. Measures of workload (the surgery task load index), mental effort (subjective: rating scale for mental effort; objective: standard deviation of beat-to-beat intervals), gaze control (using a mobile eye movement recorder), and task performance (completion time and number of errors) were recorded. As expected, surgeons performed both tasks more quickly and accurately (with fewer errors) on the robotic system. Self-reported measures of workload and mental effort were significantly lower on the robotic system compared to the laparoscopic system. Similarly, an objective cardiovascular measure of mental effort revealed lower investment of mental effort when using the robotic platform relative to the laparoscopic platform. Gaze control distinguished the robotic from the laparoscopic systems, but not in the predicted fashion, with the robotic system associated with poorer (more novice-like) gaze control. The findings highlight the benefits of robotic technology for surgical operators. Specifically, they suggest that tasks can be performed more proficiently, at a lower workload, and with the investment of less mental effort; this may allow surgeons greater cognitive resources for dealing with other demands such as communication, decision-making, or periods of increased complexity in the operating room.

  18. Gaze‐evoked nystagmus induced by alcohol intoxication

    PubMed Central

    Tarnutzer, Alexander A.; Straumann, Dominik; Ramat, Stefano; Bertolini, Giovanni

    2017-01-01

    Key points The cerebellum is the core structure controlling gaze stability. Chronic cerebellar diseases and acute alcohol intoxication affect cerebellar function, inducing, among others, gaze instability as gaze‐evoked nystagmus. Gaze‐evoked nystagmus is characterized by increased centripetal eye‐drift. It is used as an important diagnostic sign for patients with cerebellar degeneration and to assess the ‘driving while intoxicated’ condition. We quantified the effect of alcohol on gaze‐holding using an approach allowing, for the first time, the comparison of deficits induced by alcohol intoxication and cerebellar degeneration. Our results showed that alcohol intoxication induces a two‐fold increase of centripetal eye‐drift. We establish analysis techniques for using controlled alcohol intake as a model to support the study of cerebellar deficits. The observed similarity between the effect of alcohol and the clinical signs observed in cerebellar patients suggests a possible pathomechanism for gaze‐holding deficits. Abstract Gaze‐evoked nystagmus (GEN) is an ocular‐motor finding commonly observed in cerebellar disease, characterized by increased centripetal eye‐drift with centrifugal correcting saccades at eccentric gaze. With cerebellar degeneration being a rare and clinically heterogeneous disease, data from patients are limited. We hypothesized that a transient inhibition of cerebellar function by defined amounts of alcohol may provide a suitable model to study gaze‐holding deficits in cerebellar disease. We recorded gaze‐holding at varying horizontal eye positions in 15 healthy participants before and 30 min after alcohol intake required to reach 0.6‰ blood alcohol content (BAC). Changes in ocular‐motor behaviour were quantified measuring eye‐drift velocity as a continuous function of gaze eccentricity over a large range (±40 deg) of horizontal gaze angles and characterized using a two‐parameter tangent model. The effect of alcohol on gaze stability was assessed analysing: (1) overall effects on the gaze‐holding system, (2) specific effects on each eye and (3) differences between gaze angles in the temporal and nasal hemifields. For all subjects, alcohol consumption induced gaze instability, causing a two‐fold increase [2.21 (0.55), median (median absolute deviation); P = 0.002] of eye‐drift velocity at all eccentricities. Results were confirmed analysing each eye and hemifield independently. The alcohol‐induced transient global deficit in gaze‐holding matched the pattern previously described in patients with late‐onset cerebellar degeneration. Controlled intake of alcohol seems a suitable disease model to study cerebellar GEN. With alcohol resulting in global cerebellar hypofunction, we hypothesize that patients matching the gaze‐holding behaviour observed here suffered from diffuse deficits in the gaze‐holding system as well. PMID:27981586
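
    The abstract characterizes gaze-holding by fitting eye-drift velocity as a function of gaze eccentricity with a two-parameter tangent model; the exact parameterization is not given here. The sketch below simply fits a generic two-parameter tangent curve with SciPy to synthetic data, as one plausible reading of that description, and should not be taken as the authors' model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def drift_model(gaze_deg, k, theta0):
        """Assumed two-parameter tangent model: drift velocity grows
        non-linearly with gaze eccentricity (parameterization is illustrative)."""
        return k * np.tan(np.deg2rad(gaze_deg) / theta0)

    # gaze eccentricities (deg) and synthetic slow-phase drift velocities (deg/s)
    gaze = np.linspace(-40, 40, 17)
    rng = np.random.default_rng(0)
    drift = drift_model(gaze, 0.8, 1.4) + rng.normal(0, 0.2, gaze.size)

    params, _ = curve_fit(drift_model, gaze, drift, p0=(1.0, 1.0))
    print("fitted k, theta0:", params)
    ```

    A two-fold increase in drift after alcohol, as reported above, would show up here as a roughly doubled fitted gain across the tested eccentricity range.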

  19. Stabilization of gaze during circular locomotion in darkness. II. Contribution of velocity storage to compensatory eye and head nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. Yaw eye in head (Eh) and head on body velocities (Hb) were measured in two monkeys that ran around the perimeter of a circular platform in darkness. The platform was stationary or could be counterrotated to reduce body velocity in space (Bs) while increasing gait velocity on the platform (Bp). The animals were also rotated while seated in a primate chair at eccentric locations to provide linear and angular accelerations similar to those experienced while running. 2. Both animals had head and eye nystagmus while running in darkness during which slow phase gaze velocity on the body (Gb) partially compensated for body velocity in space (Bs). The eyes, driven by the vestibuloocular reflex (VOR), supplied high-frequency characteristics, bringing Gb up to compensatory levels at the beginning and end of the slow phases. The head provided substantial gaze compensation during the slow phases, probably through the vestibulocollic reflex (VCR). Synchronous eye and head quick phases moved gaze in the direction of running. Head movements occurred consistently only when animals were running. This indicates that active body and limb motion may be essential for inducing the head-eye gaze synergy. 3. Gaze compensation was good when running in both directions in one animal and in one direction in the other animal. The animals had long VOR time constants in these directions. The VOR time constant was short to one side in one animal, and it had poor gaze compensation in this direction. Postlocomotory nystagmus was weaker after running in directions with a long VOR time constant than when the animals were passively rotated in darkness. We infer that velocity storage in the vestibular system had been activated to produce continuous Eh and Hb during running and to counteract postrotatory afterresponses. 4. Continuous compensatory gaze nystagmus was not produced by passive eccentric rotation with the head stabilized or free. This indicates that an aspect of active locomotion, most likely somatosensory feedback, was responsible for activating velocity storage. 5. Nystagmus was compared when an animal ran in darkness and in light. The beat frequency of eye and head nystagmus was lower, and the quick phases were larger in darkness. The duration of head and eye quick phases covaried. Eye quick phases were larger when animals ran in darkness than when they were passively rotated. The maximum velocity and duration of eye quick phases were the same in both conditions. 6. The platform was counterrotated under one monkey in darkness while it ran in the direction of its long vestibular time constant. (ABSTRACT TRUNCATED AT 400 WORDS).

  20. Autonomic Arousal to Direct Gaze Correlates with Social Impairments among Children with ASD

    ERIC Educational Resources Information Center

    Kaartinen, Miia; Puura, Kaija; Makela, Tiina; Rannisto, Mervi; Lemponen, Riina; Helminen, Mika; Salmelin, Raili; Himanen, Sari-Leena; Hietanen, Jari K.

    2012-01-01

    The present study investigated whether autonomic arousal to direct gaze is related to social impairments among children with autism spectrum disorder (ASD). Arousal was measured through skin conductance responses (SCR) while the participants (15 children with ASD and 16 control children) viewed a live face of another person. Impairments in social…

  1. Atypical Processing of Gaze Cues and Faces Explains Comorbidity between Autism Spectrum Disorder (ASD) and Attention Deficit/Hyperactivity Disorder (ADHD)

    ERIC Educational Resources Information Center

    Groom, Madeleine J.; Kochhar, Puja; Hamilton, Antonia; Liddle, Elizabeth B.; Simeou, Marina; Hollis, Chris

    2017-01-01

    This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured effects…

  2. Gaze behaviour during space perception and spatial decision making.

    PubMed

    Wiener, Jan M; Hölscher, Christoph; Büchner, Simon; Konieczny, Lars

    2012-11-01

    A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features and that gaze bias effects are a general phenomenon of visual decision making.

  3. Incidence and anatomy of gaze-evoked nystagmus in patients with cerebellar lesions.

    PubMed

    Baier, Bernhard; Dieterich, Marianne

    2011-01-25

    Disorders of gaze-holding--organized by a neural network located in the brainstem or the cerebellum--may lead to nystagmus. Based on previous animal studies it was concluded that one key player of the cerebellar part of this gaze-holding neural network is the flocculus. Up to now, in humans there are no systematic studies in patients with cerebellar lesions examining one of the most common forms of nystagmus: gaze-evoked nystagmus (GEN). The aim of our present study was to clarify which cerebellar structures are involved in the generation of GEN. Twenty-one patients with acute unilateral cerebellar stroke were analyzed by means of modern MRI-based voxel-wise lesion-behavior mapping. Our data indicate that cerebellar structures such as the vermal pyramid, the uvula, and the tonsil, but also parts of the biventer lobule and the inferior semilunar lobule, were affected in horizontal GEN. It seems that these structures are part of a gaze-holding neural integrator control system. Furthermore, GEN might present a diagnostic sign pointing toward ipsilesionally located lesions of midline and lower cerebellar structures.

  4. Functional Coordination of Full-Body Gaze Control Mechanisms Elicited During Locomotion

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob J.; Mulavara, Ajitkumar P.; Cohen, Helen S.

    2003-01-01

    Control of locomotion requires precise interaction between several sensorimotor subsystems. Exposure to the microgravity environment of spaceflight leads to postflight adaptive alterations in these multiple subsystems, leading to postural and gait disturbances. Countermeasures designed to mitigate these postflight gait alterations will need to be assessed with a new generation of functional tests that evaluate the interaction of various elements central to locomotor control. The goal of this study is to determine how the multiple, interdependent, full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated. To explore this question, two experiments were performed. In the first study (Study 1) we investigated how alteration in gaze tasking changes full-body locomotor control strategies. Subjects (n=9) performed two discrete gaze stabilization tasks while walking at 6.4 km/hr on a motorized treadmill: 1) focusing on a central point target; 2) reading numeral characters; both presented 2 m in front at eye level. The second study (Study 2) investigated the potential of adaptive remodeling of the full-body gaze control systems following exposure to visual-vestibular conflict. Subjects (n=14) walked (6.4 km/h) on the treadmill before and after they were exposed to 0.5X minifying lenses worn for 30 minutes during self-generated sinusoidal vertical head rotations performed while seated. In both studies we measured: temporal parameters of gait, full-body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. Results from Study 1 showed that while reading numeral characters as compared to the central point target: 1) compensatory head pitch movements were on average 22% greater; 2) the peak acceleration measured at the head was significantly reduced by an average of 13% in four of the six subjects; and 3) the knee joint total movement was on average 11% greater during the period from the heel strike event to the peak knee flexion event in the stance phase of the gait cycle. Results from Study 2 indicate that, following exposure to visual-vestibular conflict, changes in full-body strategies were observed consistent with the requirement to aid gaze stabilization during locomotion.

  5. Look over There! Unilateral Gaze Increases Geographical Memory of the 50 United States

    ERIC Educational Resources Information Center

    Propper, Ruth E.; Brunye, Tad T.; Christman, Stephen D.; Januszewskia, Ashley

    2012-01-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the…

  6. Talking heads or talking eyes? Effects of head orientation and sudden onset gaze cues on attention capture.

    PubMed

    van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne

    2018-01-01

    The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shift from averted to direct gaze) relative to static direct gaze (always in direct gaze), sudden averted gaze (abruptly shift from direct to averted gaze), and static averted gaze (always in averted gaze). Because changes in gaze orientation in previous study co-occurred with changes in head orientation, it was not clear if the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden onset motion and eye gaze cues were manipulated within each trial. The results indicate a sudden direct gaze advantage that resulted from the additive role of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct gaze effect, suggesting that eye gaze cues, not face orientation cues, are critical for the sudden direct gaze effect.

  7. Context Modulates Congruency Effects in Selective Attention to Social Cues.

    PubMed

    Ravagli, Andrea; Marini, Francesco; Marino, Barbara F M; Ricciardelli, Paola

    2018-01-01

    Head and gaze directions are used during social interactions as essential cues to infer where someone attends. When head and gaze are oriented toward opposite directions, we need to extract socially meaningful information despite stimulus conflict. Recently, a cognitive and neural mechanism for filtering out conflicting stimuli has been identified while performing non-social attention tasks. This mechanism is engaged proactively when conflict is anticipated in a high proportion of trials and reactively when conflict occurs infrequently. Here, we investigated whether a similar mechanism is at play for limiting distraction from conflicting social cues during gaze or head direction discrimination tasks in contexts with different probabilities of conflict. Results showed that, for the gaze direction task only (Experiment 1), inverse efficiency (IE) scores for distractor-absent trials (i.e., faces with averted gaze and centrally oriented head) were larger (indicating worse performance) when these trials were intermixed with congruent/incongruent distractor-present trials (i.e., faces with averted gaze and tilted head in the same/opposite direction) relative to when the same distractor-absent trials were shown in isolation. Moreover, on distractor-present trials, IE scores for congruent (vs. incongruent) head-gaze pairs in blocks with rare conflict were larger than in blocks with frequent conflict, suggesting that adaptation to conflict was more efficient than adaptation to infrequent events. However, when the task required discrimination of head orientation while ignoring gaze direction, performance was not impacted by either block-level or current-trial congruency (Experiment 2), unless the cognitive load of the task was increased by adding a concurrent task (Experiment 3). Overall, our study demonstrates that, during attention to social cues, proactive cognitive control mechanisms are modulated by the expectation of conflicting stimulus information at both the block- and trial-sequence level, and by the type of task and cognitive load. This helps to clarify the inherent differences in the distracting potential of head and gaze cues during speeded social attention tasks.

  8. MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions

    PubMed Central

    Dumas, Thibaud; Dubal, Stéphanie; Attal, Yohan; Chupin, Marie; Jouvent, Roland; Morel, Shasha; George, Nathalie

    2013-01-01

    Background The amygdala is a key brain region for face perception. While the role of the amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Methodology/Principal Findings Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Conclusion Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception. PMID:24040190

  9. Hard to “tune in”: neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder

    PubMed Central

    Tanabe, Hiroki C.; Kosaka, Hirotaka; Saito, Daisuke N.; Koike, Takahiko; Hayashi, Masamichi J.; Izuma, Keise; Komeda, Hidetsugu; Ishitobi, Makoto; Omori, Masao; Munesue, Toshio; Okazawa, Hidehiko; Wada, Yuji; Sadato, Norihiro

    2012-01-01

    Persons with autism spectrum disorders (ASD) are known to have difficulty in eye contact (EC). This may make face-to-face communication with them difficult for their partners. To elucidate the neural substrates of live inter-subject interaction of ASD patients and normal subjects, we conducted hyper-scanning functional MRI with 21 subjects with autistic spectrum disorder (ASD) paired with typically-developed (normal) subjects, and with 19 pairs of normal subjects as a control. Baseline EC was maintained while subjects performed a real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs. Performance was impaired both in ASD subjects and in their normal partners. The left occipital pole (OP) activation by gaze processing was reduced in ASD subjects, suggesting that deterioration of eye-cue detection in ASD is related to impairment of early visual processing of gaze. On the other hand, their normal partners showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. Inter-brain coherence in the right IFG that was observed in the Normal–Normal pairs (Saito et al., 2010) during EC diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and right superior temporal sulcus (STS) in normal subjects paired with ASD subjects was reduced compared with that in Normal–Normal pairs. This functional connectivity was positively correlated with performance of the normal partners on eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye-cue detection by the normal partner. PMID:23060772

  10. A model-based theory on the origin of downbeat nystagmus.

    PubMed

    Marti, Sarah; Straumann, Dominik; Büttner, Ulrich; Glasauer, Stefan

    2008-07-01

    The pathomechanism of downbeat nystagmus (DBN), an ocular motor sign typical for vestibulo-cerebellar lesions, remains unclear. Previous hypotheses conjectured various deficits such as an imbalance of central vertical vestibular or smooth pursuit pathways to be causative for the generation of spontaneous upward drift. However, none of the previous theories explains the full range of ocular motor deficits associated with DBN, i.e., impaired vertical smooth pursuit (SP), gaze evoked nystagmus, and gravity dependence of the upward drift. We propose a new hypothesis, which explains the ocular motor signs of DBN by damage of the inhibitory vertical gaze-velocity sensitive Purkinje cells (PCs) in the cerebellar flocculus (FL). These PCs show spontaneous activity and a physiological asymmetry in that most of them exhibit downward on-directions. Accordingly, a loss of vertical floccular PCs will lead to disinhibition of their brainstem target neurons and, consequently, to spontaneous upward drift, i.e., DBN. Since the FL is involved in generation and control of SP and gaze holding, a single lesion, e.g., damage to vertical floccular PCs, may also explain the associated ocular motor deficits. To test our hypothesis, we developed a computational model of vertical eye movements based on known ocular motor anatomy and physiology, which illustrates how cortical, cerebellar, and brainstem regions interact to generate the range of vertical eye movements seen in healthy subjects. Model simulation of the effect of extensive loss of floccular PCs resulted in ocular motor features typically associated with cerebellar DBN: (1) spontaneous upward drift due to decreased spontaneous PC activity, (2) gaze evoked nystagmus corresponding to failure of the cerebellar loop supporting neural integrator function, (3) asymmetric vertical SP deficit due to low gain and asymmetric attenuation of PC firing, and (4) gravity-dependence of DBN caused by an interaction of otolith-ocular pathways with impaired neural integrator function.
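
    The gaze-holding failure invoked above (point 2) is often illustrated with a leaky neural integrator: when the integrator's time constant drops, eccentric eye positions drift back toward centre, and an additional constant bias produces a spontaneous vertical drift. The following toy Euler simulation is only a schematic illustration of that idea, with assumed parameter values, and is not the authors' published model.

    ```python
    import numpy as np

    def simulate_drift(eye0_deg, t_const_s, bias_deg_s=0.0, dt=0.001, dur_s=2.0):
        """Simulate vertical eye position with a leaky neural integrator
        (time constant `t_const_s`) plus a constant drift bias."""
        n = int(dur_s / dt)
        eye = np.empty(n)
        eye[0] = eye0_deg
        for i in range(1, n):
            d_eye = -eye[i - 1] / t_const_s + bias_deg_s
            eye[i] = eye[i - 1] + dt * d_eye
        return eye

    # healthy integrator: long time constant, negligible drift from 20 deg up-gaze
    healthy = simulate_drift(20.0, t_const_s=25.0)
    # impaired (toy): leaky integrator plus an upward bias -> downbeat-like drift
    impaired = simulate_drift(20.0, t_const_s=3.0, bias_deg_s=2.0)
    print(healthy[-1], impaired[-1])
    ```

    In this toy picture the bias term stands in for the disinhibited upward drive, while the shortened time constant stands in for the failing cerebellar loop that normally supports the integrator.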

  11. Keeping an Eye on Noisy Movements: On Different Approaches to Perceptual-Motor Skill Research and Training.

    PubMed

    Dicks, Matt; Button, Chris; Davids, Keith; Chow, Jia Yi; van der Kamp, John

    2017-04-01

    Contemporary theorizing on the complementary nature of perception and action in expert performance has led to different emphases in the study of movement coordination and gaze behavior. On the one hand, coordination research has examined the role of variability in movement control, evidencing that variability facilitates individualized adaptations during both learning and performance. On the other hand, and at odds with this principle, the majority of gaze behavior studies have tended to average data over participants and trials, proposing the importance of universal 'optimal' gaze patterns in a given task, for all performers, irrespective of stage of learning. In this article, we discuss new lines of inquiry with the aim of reconciling these two distinct approaches. We consider the role of inter- and intra-individual variability in gaze behaviors and suggest directions for future research.

  12. Impaired reflexive orienting to social cues in attention deficit hyperactivity disorder.

    PubMed

    Marotta, Andrea; Casagrande, Maria; Rosa, Caterina; Maccari, Lisa; Berloco, Bianca; Pasini, Augusto

    2014-08-01

    The present study investigated whether another person's social attention, specifically the direction of their eye gaze, and non-social directional cues triggered reflexive orienting in individuals with Attention Deficit Hyperactivity Disorder (ADHD) and age-matched controls. A choice reaction time and a detection tasks were used in which eye gaze, arrow and peripheral cues correctly (congruent) or incorrectly (incongruent) signalled target location. Independently of the type of the task, differences between groups were specific to the cue condition. Typically developing individuals shifted attention to the location cued by both social and non-social cues, whereas ADHD group showed evidence of reflexive orienting only to locations previously cued by non-social stimuli (arrow and peripheral cues) but failed to show such orienting effect in response to social eye gaze cues. The absence of reflexive orienting effect for eye gaze cues observed in the participants with ADHD may reflect an attentional impairment in responding to socially relevant information.

  13. The Role of Gaze and Road Edge Information during High-Speed Locomotion

    ERIC Educational Resources Information Center

    Kountouriotis, Georgios K.; Floyd, Rosalind C.; Gardner, Peter H.; Merat, Natasha; Wilkie, Richard M.

    2012-01-01

    Robust control of skilled actions requires the flexible combination of multiple sources of information. Here we examined the role of gaze during high-speed locomotor steering and in particular the role of feedback from the visible road edges. Participants were required to maintain one of three lateral positions on the road when one or both edges…

  14. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  15. Fear of Negative Evaluation Influences Eye Gaze in Adolescents with Autism Spectrum Disorder: A Pilot Study

    ERIC Educational Resources Information Center

    White, Susan W.; Maddox, Brenna B.; Panneton, Robin K.

    2015-01-01

    Social anxiety is common among adolescents with Autism Spectrum Disorder (ASD). In this modest-sized pilot study, we examined the relationship between social worries and gaze patterns to static social stimuli in adolescents with ASD (n = 15) and gender-matched adolescents without ASD (control; n = 18). Among cognitively unimpaired adolescents with…

  16. Controlling Attention to Gaze and Arrows in Childhood: An fMRI Study of Typical Development and Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Vaidya, Chandan J.; Foss-Feig, Jennifer; Shook, Devon; Kaplan, Lauren; Kenworthy, Lauren; Gaillard, William D.

    2011-01-01

    Functional magnetic resonance imaging was used to examine functional anatomy of attention to social (eye gaze) and nonsocial (arrow) communicative stimuli in late childhood and in a disorder defined by atypical processing of social stimuli, Autism Spectrum Disorders (ASD). Children responded to a target word ("LEFT"/"RIGHT") in the context of a…

  17. Feed-forward and feedback projections of midbrain reticular formation neurons in the cat

    PubMed Central

    Perkins, Eddie; May, Paul J.; Warren, Susan

    2014-01-01

    Gaze changes involving the eyes and head are orchestrated by brainstem gaze centers found within the superior colliculus (SC), paramedian pontine reticular formation (PPRF), and medullary reticular formation (MdRF). The mesencephalic reticular formation (MRF) also plays a role in gaze. It receives a major input from the ipsilateral SC and contains cells that fire in relation to gaze changes. Moreover, it provides a feedback projection to the SC and feed-forward projections to the PPRF and MdRF. We sought to determine whether these MRF feedback and feed-forward projections originate from the same or different neuronal populations by utilizing paired fluorescent retrograde tracers in cats. Specifically, we tested: 1. whether MRF neurons that control eye movements form a single population by injecting the SC and PPRF with different tracers, and 2. whether MRF neurons that control head movements form a single population by injecting the SC and MdRF with different tracers. In neither case were double labeled neurons observed, indicating that feedback and feed-forward projections originate from separate MRF populations. In both cases, the labeled reticulotectal and reticuloreticular neurons were distributed bilaterally in the MRF. However, neurons projecting to the MdRF were generally constrained to the medial half of the MRF, while those projecting to the PPRF, like MRF reticulotectal neurons, were spread throughout the mediolateral axis. Thus, the medial MRF may be specialized for control of head movements, with control of eye movements being more widespread in this structure. PMID:24454280

  18. Feed-forward and feedback projections of midbrain reticular formation neurons in the cat.

    PubMed

    Perkins, Eddie; May, Paul J; Warren, Susan

    2014-01-10

    Gaze changes involving the eyes and head are orchestrated by brainstem gaze centers found within the superior colliculus (SC), paramedian pontine reticular formation (PPRF), and medullary reticular formation (MdRF). The mesencephalic reticular formation (MRF) also plays a role in gaze. It receives a major input from the ipsilateral SC and contains cells that fire in relation to gaze changes. Moreover, it provides a feedback projection to the SC and feed-forward projections to the PPRF and MdRF. We sought to determine whether these MRF feedback and feed-forward projections originate from the same or different neuronal populations by utilizing paired fluorescent retrograde tracers in cats. Specifically, we tested: 1. whether MRF neurons that control eye movements form a single population by injecting the SC and PPRF with different tracers, and 2. whether MRF neurons that control head movements form a single population by injecting the SC and MdRF with different tracers. In neither case were double labeled neurons observed, indicating that feedback and feed-forward projections originate from separate MRF populations. In both cases, the labeled reticulotectal and reticuloreticular neurons were distributed bilaterally in the MRF. However, neurons projecting to the MdRF were generally constrained to the medial half of the MRF, while those projecting to the PPRF, like MRF reticulotectal neurons, were spread throughout the mediolateral axis. Thus, the medial MRF may be specialized for control of head movements, with control of eye movements being more widespread in this structure.

  19. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  20. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  1. Magnetic Resonance Imaging of Optic Nerve Traction During Adduction in Primary Open-Angle Glaucoma With Normal Intraocular Pressure

    PubMed Central

    Demer, Joseph L.; Clark, Robert A.; Suh, Soh Youn; Giaconi, JoAnn A.; Nouri-Mahdavi, Kouros; Law, Simon K.; Bonelli, Laura; Coleman, Anne L.; Caprioli, Joseph

    2017-01-01

    Purpose We used magnetic resonance imaging (MRI) to ascertain effects of optic nerve (ON) traction in adduction, a phenomenon proposed as neuropathic in primary open-angle glaucoma (POAG). Methods Seventeen patients with POAG and maximal IOP ≤ 20 mm Hg, and 31 controls underwent MRI in central gaze and 20° to 30° abduction and adduction. Optic nerve and sheath area centroids permitted computation of midorbital lengths versus minimum paths. Results Average mean deviation (±SEM) was −8.2 ± 1.2 dB in the 15 patients with POAG having interpretable perimetry. In central gaze, ON path length in POAG was significantly more redundant (104.5 ± 0.4% of geometric minimum) than in controls (102.9 ± 0.4%, P = 2.96 × 10−4). In both groups the ON became significantly straighter in adduction (28.6 ± 0.8° in POAG, 26.8 ± 1.1° in controls) than central gaze and abduction. In adduction, the ON in POAG straightened to 102.0% ± 0.2% of minimum path length versus 104.5% ± 0.4% in central gaze (P = 5.7 × 10−7), compared with controls who straightened to 101.6% ± 0.1% from 102.9% ± 0.3% in central gaze (P = 8.7 × 10−6); and globes retracted 0.73 ± 0.09 mm in POAG, but only 0.07 ± 0.08 mm in controls (P = 8.8 × 10−7). Both effects were confirmed in age-matched controls, and remained significant after correction for significant effects of age and axial globe length (P = 0.005). Conclusions Although tethering and elongation of ON and sheath are normal in adduction, adduction is associated with abnormally great globe retraction in POAG without elevated IOP. Traction in adduction may cause mechanical overloading of the ON head and peripapillary sclera, thus contributing to or resulting from the optic neuropathy of glaucoma independent of IOP. PMID:28829843
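
    The path-length redundancy figures quoted above (e.g., 104.5% of the geometric minimum) follow from comparing the arc length of the nerve's centroid path with the straight-line distance between its endpoints. A minimal sketch of that computation, assuming the ON area centroids are available as ordered 3-D points (the array name is a placeholder, not the authors' pipeline):

    ```python
    import numpy as np

    def path_redundancy_percent(centroids_mm):
        """Return ON path length as a percentage of the straight-line
        (geometric minimum) distance between its end points.

        `centroids_mm` is an (N, 3) array of ordered ON area centroids in mm.
        """
        pts = np.asarray(centroids_mm, dtype=float)
        arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
        chord = np.linalg.norm(pts[-1] - pts[0])
        return 100.0 * arc / chord

    # example: a gently curved path is slightly longer than its chord
    example = np.column_stack([np.linspace(0, 25, 26),
                               0.8 * np.sin(np.linspace(0, np.pi, 26)),
                               np.zeros(26)])
    print(round(path_redundancy_percent(example), 1))  # slightly above 100
    ```

    A value near 100% therefore means the nerve is pulled straight, which is the adduction effect the study quantifies.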

  2. Real-time evaluation of a noninvasive neuroprosthetic interface for control of reach.

    PubMed

    Corbett, Elaine A; Körding, Konrad P; Perreault, Eric J

    2013-07-01

    Injuries of the cervical spinal cord can interrupt the neural pathways controlling the muscles of the arm, resulting in complete or partial paralysis. For individuals unable to reach due to high-level injuries, neuroprostheses can restore some of the lost function. Natural, multidimensional control of neuroprosthetic devices for reaching remains a challenge. Electromyograms (EMGs) from muscles that remain under voluntary control can be used to communicate intended reach trajectories, but when the number of available muscles is limited control can be difficult and unintuitive. We combined shoulder EMGs with target estimates obtained from gaze. Natural gaze data were integrated with EMG during closed-loop robotic control of the arm, using a probabilistic mixture model. We tested the approach with two different sets of EMGs, as might be available to subjects with C4- and C5-level spinal cord injuries. Incorporating gaze greatly improved control of reaching, particularly when there were few EMG signals. We found that subjects naturally adapted their eye-movement precision as we varied the set of available EMGs, attaining accurate performance in both tested conditions. The system performs a near-optimal combination of both physiological signals, making control more intuitive and allowing a natural trajectory that reduces the burden on the user.
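
    The study combines a gaze-derived target estimate with EMG-decoded reach commands through a probabilistic mixture model. The sketch below is not that model; it only illustrates the simpler idea of precision-weighted fusion of two noisy estimates of the reach target, which captures why adding gaze helps most when EMG information is sparse. All names and noise values are assumptions.

    ```python
    import numpy as np

    def fuse_estimates(emg_xy, emg_var, gaze_xy, gaze_var):
        """Combine two independent Gaussian estimates of a 2-D reach target
        by inverse-variance (precision) weighting."""
        emg_xy, gaze_xy = np.asarray(emg_xy, float), np.asarray(gaze_xy, float)
        w_emg, w_gaze = 1.0 / emg_var, 1.0 / gaze_var
        fused = (w_emg * emg_xy + w_gaze * gaze_xy) / (w_emg + w_gaze)
        fused_var = 1.0 / (w_emg + w_gaze)
        return fused, fused_var

    # few EMG channels -> noisy EMG estimate; gaze pulls the fused target toward fixation
    print(fuse_estimates(emg_xy=[0.30, 0.10], emg_var=0.05,
                         gaze_xy=[0.25, 0.20], gaze_var=0.01))
    ```

    The fewer the available EMG channels, the larger the effective EMG variance, so the fused estimate leans more heavily on gaze, mirroring the abstract's finding that gaze helped most in the low-EMG (C4-like) condition.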

  3. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    PubMed

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures are either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.
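
    With 60 gaze samples per second, latency to first fixation on an area of interest (AOI) and total dwell time reduce to counting samples that fall inside the AOI. Below is a minimal sketch under that assumption (rectangular AOI, no fixation-event filtering), with illustrative names only; it is not the study's analysis code.

    ```python
    import numpy as np

    FS_HZ = 60.0  # sampling rate reported in the study

    def aoi_metrics(gaze_xy, aoi_rect):
        """Return (latency_to_first_sample_s, dwell_time_s) for a rectangular AOI.

        `gaze_xy` is an (N, 2) array of point-of-gaze samples at 60 Hz;
        `aoi_rect` is (x_min, y_min, x_max, y_max) in the same coordinates.
        """
        g = np.asarray(gaze_xy, float)
        x0, y0, x1, y1 = aoi_rect
        inside = (g[:, 0] >= x0) & (g[:, 0] <= x1) & (g[:, 1] >= y0) & (g[:, 1] <= y1)
        if not inside.any():
            return None, 0.0
        latency_s = np.argmax(inside) / FS_HZ   # index of the first in-AOI sample
        dwell_s = inside.sum() / FS_HZ          # total time spent in the AOI
        return latency_s, dwell_s
    ```

    In practice such metrics are usually computed per AOI (e.g., each human figure in the photograph) and then compared across participant groups and stimulus conditions.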

  4. Learning visuomotor transformations for gaze-control and grasping.

    PubMed

    Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf

    2005-08-01

    For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
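
    The arm controller described above learns a density over joint sensor and motor data and then "completes" the motor part given a new sensory input. The sketch below shows that general pattern-completion idea with a Gaussian mixture fitted by scikit-learn and per-component conditional means (Gaussian mixture regression); it is a generic sketch on synthetic data, not the authors' specific density model.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal
    from sklearn.mixture import GaussianMixture

    def complete_motor(gmm, x, dim_x):
        """Given a GaussianMixture fitted on z = [x, y], return E[y | x]."""
        means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
        resp, cond_means = [], []
        for k in range(gmm.n_components):
            mu_x, mu_y = means[k, :dim_x], means[k, dim_x:]
            s_xx = covs[k, :dim_x, :dim_x]
            s_yx = covs[k, dim_x:, :dim_x]
            resp.append(weights[k] * multivariate_normal.pdf(x, mu_x, s_xx))
            cond_means.append(mu_y + s_yx @ np.linalg.solve(s_xx, x - mu_x))
        resp = np.array(resp) / np.sum(resp)
        return np.sum(resp[:, None] * np.array(cond_means), axis=0)

    # synthetic sensorimotor data: 2-D target position x -> 3-D "arm posture" y
    rng = np.random.default_rng(0)
    x_data = rng.uniform(-1, 1, size=(2000, 2))
    y_data = np.column_stack([np.arctan2(x_data[:, 1], x_data[:, 0]),
                              np.hypot(x_data[:, 0], x_data[:, 1]),
                              0.3 * x_data[:, 0]]) + 0.02 * rng.standard_normal((2000, 3))

    gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
    gmm.fit(np.hstack([x_data, y_data]))
    print(complete_motor(gmm, np.array([0.4, 0.3]), dim_x=2))
    ```

    Completing the density in this way also makes the redundancy issue visible: if several arm postures fit one target, the mixture keeps them as separate components rather than averaging them into an unusable posture.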

  5. Training Basic Visual Attention Leads to Changes in Responsiveness to Social-Communicative Cues in 9-Month-Olds

    ERIC Educational Resources Information Center

    Forssman, Linda; Wass, Sam V.

    2018-01-01

    This study investigated transfer effects of gaze-interactive attention training to more complex social and cognitive skills in infancy. Seventy 9-month-olds were assigned to a training group (n = 35) or an active control group (n = 35). Before, after, and at 6-week follow-up both groups completed an assessment battery assessing transfer to…

  6. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex

    PubMed Central

    Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.

    2015-01-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which being sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688

  7. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  8. Infant Response to the Still-Face Situation at 3 and 6 Months.

    ERIC Educational Resources Information Center

    Toda, Sueko; Fogel, Alan

    1993-01-01

    Observed the behavior of 37 infants in response to their mothers' normal and still face. Infants reduced their smiling and increased their gazing away from the mother during the still-face condition compared to the normal face condition. Compared to three-month-olds, six-month-olds were more likely to use hand activities while gazing away from the mother. (MM)

  9. Implicit prosody mining based on the human eye image capture technology

    NASA Astrophysics Data System (ADS)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    Eye-tracker technology has become one of the main methods for analyzing recognition issues in human-computer interaction, and human eye image capture is the key problem in eye tracking. Building on this, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of the eyes during reading and are used to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. Eye movement during reading is a comprehensive, multi-factor interactive process involving gaze, twitching, and backsight. Therefore, how to extract the appropriate information from images of the eyes must be considered, and the gaze regularity of the eyes must be obtained as a reference for modeling. Based on an analysis of the three current kinds of eye movement control model and the characteristics of Implicit Prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It is shown that, under the same level of text familiarity, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented, replacing previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm and to synthesize voices with personalized rhythm. This research will enrich the forms of human-computer interaction and will have practical significance and application prospects for disability-assisted speech interaction. Experiments show that Implicit Prosody mining based on human eye image capture technology gives the synthesized speech more flexible expression.

  10. Multimodal decoding and congruent sensory information enhance reaching performance in subjects with cervical spinal cord injury.

    PubMed

    Corbett, Elaine A; Sachs, Nicholas A; Körding, Konrad P; Perreault, Eric J

    2014-01-01

    Cervical spinal cord injury (SCI) paralyzes muscles of the hand and arm, making it difficult to perform activities of daily living. Restoring the ability to reach can dramatically improve quality of life for people with cervical SCI. Any reaching system requires a user interface to decode parameters of an intended reach, such as trajectory and target. A challenge in developing such decoders is that often few physiological signals related to the intended reach remain under voluntary control, especially in patients with high cervical injuries. Furthermore, the decoding problem changes when the user is controlling the motion of their limb, as opposed to an external device. The purpose of this study was to investigate the benefits of combining disparate signal sources to control reach in people with a range of impairments, and to consider the effect of two feedback approaches. Subjects with cervical SCI performed robot-assisted reaching, controlling trajectories with either shoulder electromyograms (EMGs) or EMGs combined with gaze. We then evaluated how reaching performance was influenced by task-related sensory feedback, testing the EMG-only decoder in two conditions. The first involved moving the arm with the robot, providing congruent sensory feedback through their remaining sense of proprioception. In the second, the subjects moved the robot without the arm attached, as in applications that control external devices. We found that the multimodal-decoding algorithm worked well for all subjects, enabling them to perform straight, accurate reaches. The inclusion of gaze information, used to estimate target location, was especially important for the most impaired subjects. In the absence of gaze information, congruent sensory feedback improved performance. These results highlight the importance of proprioceptive feedback, and suggest that multi-modal decoders are likely to be most beneficial for highly impaired subjects and in tasks where such feedback is unavailable.

  11. A kinematic model for 3-D head-free gaze-shifts

    PubMed Central

    Daemi, Mehdi; Crawford, J. Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
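
    One ingredient named in this abstract, the Listing's law constraint on eye-in-head orientation, has a compact geometric form: the rotation taking the primary gaze direction to a desired direction uses an axis confined to Listing's plane (perpendicular to the primary direction), which leaves ocular torsion at zero. The sketch below is a hedged illustration of that constraint only, not the model's implementation; the vectors, frames, and function names are assumptions.

```python
# Hedged sketch: an eye-in-head orientation obeying Listing's law, as a
# quaternion (w, x, y, z) rotating the primary gaze direction onto a target
# direction about an axis lying in Listing's plane. Illustrative only.
import numpy as np

def listing_quaternion(primary, target):
    """Minimal rotation taking unit vector `primary` to unit vector `target`;
    its axis is perpendicular to `primary`, i.e. lies in Listing's plane,
    so the resulting eye orientation has zero torsion."""
    p = np.asarray(primary, float) / np.linalg.norm(primary)
    t = np.asarray(target, float) / np.linalg.norm(target)
    axis = np.cross(p, t)
    n = np.linalg.norm(axis)
    if n < 1e-12:                       # target along primary: identity rotation
        return np.array([1.0, 0.0, 0.0, 0.0])
    angle = np.arccos(np.clip(p @ t, -1.0, 1.0))
    return np.concatenate([[np.cos(angle / 2.0)],
                           np.sin(angle / 2.0) * axis / n])

# Example: gaze shifted 30 degrees rightward from straight ahead (+z).
primary = np.array([0.0, 0.0, 1.0])
target = np.array([np.sin(np.radians(30)), 0.0, np.cos(np.radians(30))])
print(listing_quaternion(primary, target))
```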

  12. Popularizing Space Education in Indian Context

    NASA Astrophysics Data System (ADS)

    Yalagi, Amrut

    Indians have many mythological stories about constellations and stars. Hindu months are based on the Moon and the 27 stars of the zodiac, which are important to many Indians in rituals and religious functions. Prompting people to identify their birth star genuinely engages them, and conveying the importance of star gazing for their day-to-day life encourages interest and active participation in space activities. Space activities should be driven by the public: their requirements, their dreams, and their imaginations. Their active participation gives valuable inputs to space scientists. Hence, there is a need to involve the general public through appropriate motivation, by organising sky-gazing sessions, exhibitions, workshops, etc. In this connection, even if an organisation is able to attract only a small percentage of qualified engineers/scientists and enthusiastic students, it would result in the creation of a sizable pool of talent in the space sciences, which may well determine the future of mankind on this planet. Some simple motivational activities have led people to take an interest in space. We have been using the following methodologies to popularize space science: 1] conducting theory sessions on the basics of star gazing and conveying the importance of sky gazing for day-to-day life; 2] organising seminars, workshops, lectures, and other academic/popular science activities with special reference to space science; 3] projects, including a] CubeSat missions, b] an automatic weather station facility, c] model making, and d] creating and simulating space models and rover-making competitions. Fifty years of exploration have left a tremendous impact on many societies working towards space education and exploration.

  13. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
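
    The abstract names the quantities entering the gaze angle calculation (the two eye corners, the midpupil, the corner-to-corner distance, and the eye-sphere radius) without giving the formula. The snippet below is a rough geometric reconstruction under stated assumptions, not the paper's method: the eye-sphere centre is approximated by the corner midpoint, pixel distances are scaled by an assumed anthropometric corner-to-corner width, and the horizontal gaze angle follows from the midpupil's offset on a sphere of assumed radius.

```python
# Rough geometric sketch of a corner/midpupil gaze-angle estimate.
# Assumed constants and the centre approximation are illustrative only.
import math

# Assumed anthropometric constants (illustrative values, in millimetres).
CORNER_TO_CORNER_MM = 30.0   # palpebral fissure width
EYE_RADIUS_MM = 12.0         # radius of the eye sphere

def gaze_angle_deg(corner_left, corner_right, midpupil):
    """Estimate the horizontal gaze angle (degrees) from 2-D image points
    (x, y) of the two eye corners and the midpupil."""
    # Scale from pixels to millimetres using the known corner distance.
    dx = corner_right[0] - corner_left[0]
    dy = corner_right[1] - corner_left[1]
    mm_per_px = CORNER_TO_CORNER_MM / math.hypot(dx, dy)

    # Eye-sphere centre approximated by the corner midpoint.
    centre_x = 0.5 * (corner_left[0] + corner_right[0])
    offset_mm = (midpupil[0] - centre_x) * mm_per_px

    # Offset of the pupil on a sphere of known radius gives the gaze angle.
    ratio = max(-1.0, min(1.0, offset_mm / EYE_RADIUS_MM))
    return math.degrees(math.asin(ratio))

# Example: pupil shifted 3 px right of centre in a 40 px-wide eye image.
print(gaze_angle_deg((100, 50), (140, 50), (123, 50)))
```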

  14. Subliminal gaze cues increase preference levels for items in the gaze direction.

    PubMed

    Mitsuda, Takashi; Masaki, Syuta

    2017-08-29

    Another individual's gaze automatically shifts an observer's attention to a location. This reflexive response occurs even when the gaze is presented subliminally over a short period. Another's gaze also increases the preference level for items in the gaze direction; however, it was previously unclear if this effect occurs when the gaze is presented subliminally. This study showed that the preference levels for nonsense figures looked at by a subliminal gaze were significantly greater than those for items that were subliminally looked away from (Task 1). Targets that were looked at by a subliminal gaze were detected faster (Task 2); however, the participants were unable to detect the gaze direction (Task 3). These results indicate that another individual's gaze automatically increases the preference levels for items in the gaze direction without conscious awareness.

  15. New perspectives in gaze sensitivity research.

    PubMed

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  16. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    PubMed

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target ("matching" goal-directed gaze shift), congruent or incongruent with the instructed direction, (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift), or (c) an empty spatial location (no goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions, especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset, indicating automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  17. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in activities of daily living. The GUI was designed to include a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker plus soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
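
    The information transfer rate quoted in such interface studies is conventionally computed with the Wolpaw formula for an N-choice interface; the sketch below shows that standard calculation with illustrative numbers, and is not taken from the paper itself.

```python
# Standard Wolpaw information transfer rate for an N-choice interface.
# The accuracy and selection-rate values in the example are illustrative.
import math

def wolpaw_itr_bits_per_min(n_commands, accuracy, selections_per_min):
    """Bits per minute for an N-choice interface with selection accuracy P."""
    n, p = n_commands, accuracy
    bits_per_selection = math.log2(n)
    if 0.0 < p < 1.0:
        bits_per_selection += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits_per_selection * selections_per_min

# Example with the interface's nine commands (eight directions plus stop).
print(wolpaw_itr_bits_per_min(n_commands=9, accuracy=0.95, selections_per_min=14))
```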

  18. Perceiving crowd attention: Gaze following in human crowds with conflicting cues.

    PubMed

    Sun, Zhongqiang; Yu, Wenjun; Zhou, Jifan; Shen, Mowei

    2017-05-01

    People automatically redirect their visual attention by following others' gaze orientation, a phenomenon called "gaze following." This is an evolutionarily generated socio-cognitive process that provides people with information about their environments. Often, however, people in crowds can have rather different gaze orientations. This study investigated how gaze following occurs in situations with many conflicting gazes. In two experiments, we modified the gaze cueing paradigm to use a crowd rather than a single individual. Specifically, participants were presented with a group of human avatars with differing gaze orientations, and the target appeared randomly on the left or right side of a display. We found that (a) when a marked difference existed in the number of avatars with divergent gaze orientations, participants automatically followed the majority's gaze orientation, and (b) the strongest gaze cue effect occurred when all gazes shared the same orientation, with the response superiority of the majority's oriented location monotonically diminishing with the number of gazes with divergent orientations. These findings suggested that the majority rule plays a role in gaze following behavior when individuals are confronted with conflicting multigaze scenes, and that an increasing subgroup size appears to enlarge the strength of the gaze cueing effect.

  19. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2014-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their roles differ with eccentricity.

  20. Face exploration dynamics differentiate men and women.

    PubMed

    Coutrot, Antoine; Binetti, Nicola; Harrison, Charlotte; Mareschal, Isabelle; Johnston, Alan

    2016-11-01

    The human face is central to our everyday social interactions. Recent studies have shown that while gazing at faces, each one of us has a particular eye-scanning pattern, highly stable across time. Although variables such as culture or personality have been shown to modulate gaze behavior, we still don't know what shapes these idiosyncrasies. Moreover, most previous observations rely on static analyses of small-sized eye-position data sets averaged across time. Here, we probe the temporal dynamics of gaze to explore what information can be extracted about the observers and what is being observed. Controlling for any stimuli effect, we demonstrate that among many individual characteristics, the gender of both the participant (gazer) and the person being observed (actor) are the factors that most influence gaze patterns during face exploration. We record and exploit the largest set of eye-tracking data (405 participants, 58 nationalities) from participants watching videos of another person. Using novel data-mining techniques, we show that female gazers follow a much more exploratory scanning strategy than males. Moreover, female gazers watching female actresses look more at the eye on the left side. These results have strong implications in every field using gaze-based models from computer vision to clinical psychology.

  1. Human Guidance Behavior Decomposition and Modeling

    NASA Astrophysics Data System (ADS)

    Feit, Andrew James

    Trained humans are capable of high performance, adaptable, and robust first-person dynamic motion guidance behavior. This behavior is exhibited in a wide variety of activities such as driving, piloting aircraft, skiing, biking, and many others. Human performance in such activities far exceeds the current capability of autonomous systems in terms of adaptability to new tasks, real-time motion planning, robustness, and trading safety for performance. The present work investigates the structure of human dynamic motion guidance that enables these performance qualities. This work uses a first-person experimental framework that presents a driving task to the subject, measuring control inputs, vehicle motion, and operator visual gaze movement. The resulting data is decomposed into subspace segment clusters that form primitive elements of action-perception interactive behavior. Subspace clusters are defined by both agent-environment system dynamic constraints and operator control strategies. A key contribution of this work is to define transitions between subspace cluster segments, or subgoals, as points where the set of active constraints, either system or operator defined, changes. This definition provides necessary conditions to determine transition points for a given task-environment scenario that allow a solution trajectory to be planned from known behavior elements. In addition, human gaze behavior during this task contains predictive behavior elements, indicating that the identified control modes are internally modeled. Based on these ideas, a generative, autonomous guidance framework is introduced that efficiently generates optimal dynamic motion behavior in new tasks. The new subgoal planning algorithm is shown to generate solutions to certain tasks more quickly than existing approaches currently used in robotics.
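
    The subgoal definition given above, transition points where the set of active constraints (system- or operator-defined) changes, is simple to state operationally. The snippet below is a toy illustration of that definition with made-up constraint labels, not the work's actual segmentation pipeline.

```python
# Toy illustration of the subgoal definition: transitions occur where the
# set of active constraints changes. Constraint names are made up.
def subgoal_transitions(active_constraints):
    """active_constraints: list of frozensets, one per time sample.
    Returns the indices where the set of active constraints changes."""
    return [i for i in range(1, len(active_constraints))
            if active_constraints[i] != active_constraints[i - 1]]

# Example trajectory: free motion, then a curvature limit binds, then the
# operator's track-boundary strategy takes over.
timeline = [frozenset(), frozenset(), frozenset({"curvature_limit"}),
            frozenset({"curvature_limit"}), frozenset({"track_boundary"})]
print(subgoal_transitions(timeline))   # -> [2, 4]
```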

  2. Homonymous Visual Field Loss and Its Impact on Visual Exploration: A Supermarket Study.

    PubMed

    Kasneci, Enkelejda; Sippel, Katrin; Heister, Martin; Aehling, Katrin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena

    2014-10-01

    Homonymous visual field defects (HVFDs) may critically interfere with quality of life. The aim of this study was to assess the impact of HVFDs on a supermarket search task and to investigate the influence of visual search on task performance. Ten patients with HVFDs (four with a right-sided [HR] and six with a left-sided defect [HL]), and 10 healthy-sighted, sex-, and age-matched control subjects were asked to collect 20 products placed on two supermarket shelves as quickly as possible. Task performance was rated as "passed" or "failed" with regard to the time per correctly collected item (T_C-failed = 4.84 seconds, based on the performance of healthy subjects). Eye movements were analyzed regarding the horizontal gaze activity, glance frequency, and glance proportion for different VF areas. Seven of 10 HVFD patients (three HR, four HL) passed the supermarket search task. Patients who passed needed significantly less time per correctly collected item and looked more frequently toward the VFD area than patients who failed. HL patients who passed the test showed a higher percentage of glances beyond the 60° VF (P < 0.05). A considerable number of HVFD patients performed successfully and could compensate for the HVFD by shifting the gaze toward the peripheral VF and the VFD area. These findings provide new insights on gaze adaptations in patients with HVFDs during activities of daily living and will enhance the design and development of realistic examination tools for use in the clinical setting to improve daily functioning. (http://www.clinicaltrials.gov, NCT01372319, NCT01372332).

  3. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their roles differ with eccentricity. PMID:28344501

  4. Guiding the mind's eye: improving communication and vision by external control of the scanpath

    NASA Astrophysics Data System (ADS)

    Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas

    2006-02-01

    Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.

  5. Evidence for a link between changes to gaze behaviour and risk of falling in older adults during adaptive locomotion.

    PubMed

    Chapman, G J; Hollands, M A

    2006-11-01

    There is increasing evidence that gaze stabilization with respect to footfall targets plays a crucial role in the control of visually guided stepping and that there are significant changes to gaze behaviour as we age. However, past research has not measured if age-related changes in gaze behaviour are associated with changes to stepping performance. This paper aims to identify differences in gaze behaviour between young (n=8) adults, older adults determined to be at a low-risk of falling (low-risk, n=4) and older adults prone to falling (high-risk, n=4) performing an adaptive locomotor task and attempts to relate observed differences in gaze behaviour to decline in stepping performance. Participants walked at a self-selected pace along a 9m pathway stepping into two footfall target locations en route. Gaze behaviour and lower limb kinematics were recorded using an ASL 500 gaze tracker interfaced with a Vicon motion analysis system. Results showed that older adults looked significantly sooner to targets, and fixated the targets for longer, than younger adults. There were also significant differences in these measures between high and low-risk older adults. On average, high-risk older adults looked away from targets significantly sooner and demonstrated less accurate and more variable foot placements than younger adults and low-risk older adults. These findings suggest that, as we age, we need more time to plan precise stepping movements and clearly demonstrate that there are differences between low-risk and high-risk older adults in both where and when they look at future stepping targets and the precision with which they subsequently step. We propose that high-risk older adults may prioritize the planning of future actions over the accurate execution of ongoing movements and that adoption of this strategy may contribute to an increased likelihood of falls. Copyright 2005 Elsevier B.V.

  6. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    PubMed

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

    Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure, which is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues and this effect is facilitated by increased sclera exposure - but increased sclera exposure per se does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  7. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    PubMed Central

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  8. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    PubMed

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising compromises of methodological solutions have been proposed, in particular the integration of robust mobile eye-trackers in motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analyses of larger amounts of gaze data and, further, to increase the explanatory power of the derived results.

  9. Gaze-contingent control for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Darzi, Ara; Yang, Guang Zhong

    2006-09-01

    Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.

  10. The EyeHarp: A Gaze-Controlled Digital Musical Instrument

    PubMed Central

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as the control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part, and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similarly to traditional music instruments, the proposed digital musical instrument has a steep learning curve and allows expressive performances to be produced, from both the performer and the audience perspective. PMID:27445885

  11. Measurement of ocular aberrations in downward gaze using a modified clinical aberrometer

    PubMed Central

    Ghosh, Atanu; Collins, Michael J; Read, Scott A; Davis, Brett A; Iskander, D. Robert

    2011-01-01

    Changes in corneal optics have been measured after downward gaze. However, ocular aberrations during downward gaze have not been previously measured. A commercial Shack-Hartmann aberrometer (COAS-HD) was modified by adding a relay lens system and a rotatable beam splitter to allow on-axis aberration measurements in primary gaze and downward gaze with binocular fixation. Measurements with the modified aberrometer (COAS-HD relay system) in primary and downward gaze were validated against a conventional aberrometer. In human eyes, there were significant changes (p<0.05) in defocus C(2,0), primary astigmatism C(2,2) and vertical coma C(3,−1) in downward gaze (25 degrees) compared to primary gaze, indicating the potential influence of biomechanical forces on the optics of the eye in downward gaze. To demonstrate a further clinical application of this modified aberrometer, we measured ocular aberrations when wearing a progressive addition lens (PAL) in primary gaze (0 degree), 15 degrees downward gaze and 25 degrees downward gaze. PMID:21412451
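
    The terms C(2,0), C(2,2), and C(3,−1) reported here are coefficients of standard Zernike polynomial modes (defocus, primary astigmatism, and vertical coma). The sketch below evaluates those three modes at a pupil location using the commonly cited ANSI/OSA normalization; the coefficient values in the example are illustrative and are not taken from the study.

```python
# Sketch: the three Zernike modes named in the abstract, evaluated at a pupil
# location (rho in [0, 1], theta in radians), ANSI/OSA-normalized.
import math

def zernike_defocus(rho, theta):        # Z(2, 0); theta unused (rotationally symmetric)
    return math.sqrt(3.0) * (2.0 * rho**2 - 1.0)

def zernike_astigmatism(rho, theta):    # Z(2, 2)
    return math.sqrt(6.0) * rho**2 * math.cos(2.0 * theta)

def zernike_vertical_coma(rho, theta):  # Z(3, -1)
    return math.sqrt(8.0) * (3.0 * rho**3 - 2.0 * rho) * math.sin(theta)

def wavefront(rho, theta, c20, c22, c3m1):
    """Wavefront contribution (same units as the coefficients, e.g. microns)
    of the three modes at one pupil point."""
    return (c20 * zernike_defocus(rho, theta)
            + c22 * zernike_astigmatism(rho, theta)
            + c3m1 * zernike_vertical_coma(rho, theta))

# Example: hypothetical coefficients at the pupil margin, 90-degree meridian.
print(wavefront(1.0, math.pi / 2, c20=0.10, c22=-0.05, c3m1=0.08))
```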

  12. A Support System for Mouse Operations Using Eye-Gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy. However, it can only classify vertical eye-gaze into 3 directions (up, middle and down). In this paper, we propose a new method for vertical eye-gaze detection. This method utilizes limbus tracking, so our new eye-gaze input system can detect the two-dimensional coordinates of the user's gazing point. Using this method, we developed a new support system for mouse operation that can move the mouse cursor to the user's gazing point.

  13. Automatic and strategic measures as predictors of mirror gazing among individuals with body dysmorphic disorder symptoms.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A

    2009-08-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n = 32) or low (n = 31) BDD symptoms. Specifically, we examined the extent that (1) explicit interpretations tied to appearance, as well as (2) automatic associations and (3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, whereas strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures.

  14. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

    To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.

  15. Automatic and Strategic Measures as Predictors of Mirror Gazing Among Individuals with Body Dysmorphic Disorder Symptoms

    PubMed Central

    Clerkin, Elise M.; Teachman, Bethany A.

    2011-01-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n=32) or low (n=31) BDD symptoms. Specifically, we examined the extent that 1) explicit interpretations tied to appearance, as well as 2) automatic associations and 3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, while strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures. PMID:19684496

  16. Visual perception during mirror-gazing at one's own face in patients with depression.

    PubMed

    Caputo, Giovanni B; Bortolomasi, Marco; Ferrucci, Roberta; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2014-01-01

    In normal observers, gazing at one's own face in the mirror for a few minutes, at a low illumination level, produces the apparition of strange faces. Observers see distortions of their own faces, but they often see hallucinations like monsters, archetypical faces, faces of relatives and of the deceased, and animals. In this research, patients with depression were compared to healthy controls with respect to strange-face apparitions. The experiment was a 7-minute mirror-gazing test (MGT) under low illumination. When the MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face apparitions. Apparitions of strange faces in the mirror were very reduced in depression patients compared to healthy controls. Compared to healthy controls, depression patients showed a shorter duration of apparitions, a smaller number of strange faces, lower self-evaluation ratings of apparition strength, and lower self-evaluation ratings of provoked emotion. These decreases in depression may be produced by deficits of facial expression and facial recognition of emotions, which are involved in the relationship between the patient (or the patient's ego) and his face image (or the patient's bodily self) that is reflected in the mirror.

  17. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts.

    PubMed

    Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas

    2015-12-01

    We previously reported that visuomotor activity in the superior colliculus (SC), a key midbrain structure for the generation of rapid eye movements, preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. The effect of Ramadan fasting on spatial attention through emotional stimuli

    PubMed Central

    Molavi, Maziyar; Yunus, Jasmy; Utama, Nugraha P

    2016-01-01

    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on the processing of emotion through gazed facial expressions, a realistic multisource of social information, was investigated for the first time. A dynamic cue-target task was administered with behavioral and event-related potential measurements in 40 participants to reveal temporal and spatial brain activity before, during, and after the fasting period. Fasting had several significant effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting, and reaction times during the fasting period decreased. Self-ratings of arousal deficit as well as mood increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for happy facial expression stimuli. A significant effect of gazed expression, and of its interaction with the emotional stimuli, was indicated by the amplitude of N1. Furthermore, the findings confirmed the validity effect, i.e. congruency between gaze and target position, as indicated by an increase in P3 amplitude over the centroparietal area and by slower reaction times in the behavioral data when gaze and target position were incongruent (invalid) compared with the valid condition. The results show that attention to facial expression stimuli, a kind of communicative social signal, was affected by fasting, and that fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential analyses indicated that the neural dynamics of facial emotion are processed faster than those of gaze, as participants tended to react faster and preferred to rely on the type of facial emotion rather than on gaze direction while doing the task. For happy facial expression stimuli, right-hemisphere activation was greater than left-hemisphere activation, consistent with the emotional lateralization account rather than the valence account of emotional processing. PMID:27307772

  19. The role of uncertainty and reward on eye movements in a virtual driving task

    PubMed Central

    Sullivan, Brian T.; Johnson, Leif; Rothkopf, Constantin A.; Ballard, Dana; Hayhoe, Mary

    2012-01-01

    Eye movements during natural tasks are well coordinated with ongoing task demands and many variables could influence gaze strategies. Sprague and Ballard (2003) proposed a gaze-scheduling model that uses a utility-weighted uncertainty metric to prioritize fixations on task-relevant objects and predicted that human gaze should be influenced by both reward structure and task-relevant uncertainties. To test this conjecture, we tracked the eye movements of participants in a simulated driving task where uncertainty and implicit reward (via task priority) were varied. Participants were instructed to simultaneously perform a Follow Task where they followed a lead car at a specific distance and a Speed Task where they drove at an exact speed. We varied implicit reward by instructing the participants to emphasize one task over the other and varied uncertainty in the Speed Task with the presence or absence of uniform noise added to the car's velocity. Subjects' gaze data were classified for the image content near fixation and segmented into looks. Gaze measures, including look proportion, duration and interlook interval, showed that drivers more closely monitor the speedometer if it had a high level of uncertainty, but only if it was also associated with high task priority or implicit reward. The interaction observed appears to be an example of a simple mechanism whereby the reduction of visual uncertainty is gated by behavioral relevance. This lends qualitative support for the primary variables controlling gaze allocation proposed in the Sprague and Ballard model. PMID:23262151
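
    The Sprague and Ballard idea referenced above, allocating gaze by a utility-weighted uncertainty metric, can be caricatured in a few lines: each task's state uncertainty grows while unattended, a fixation collapses it, and gaze goes to the task with the largest reward-weighted uncertainty. The snippet below is only such a caricature under those assumptions, not the published model or the present study's analysis; all names and numbers are made up.

```python
# Toy caricature of utility-weighted uncertainty gaze scheduling.
# Illustrative only; parameters and names are made up.
class TaskModule:
    def __init__(self, name, reward_weight, noise_growth):
        self.name = name
        self.reward_weight = reward_weight   # task priority / implicit reward
        self.noise_growth = noise_growth     # uncertainty growth per step
        self.variance = 0.0                  # current state-estimate uncertainty

    def propagate(self):
        """Uncertainty accumulates while the task is not fixated."""
        self.variance += self.noise_growth

    def observe(self):
        """A fixation on this task's object collapses its uncertainty."""
        self.variance = 0.0

def schedule_gaze(tasks, steps=20):
    looks = []
    for _ in range(steps):
        for t in tasks:
            t.propagate()
        # Fixate the task whose reward-weighted uncertainty is largest.
        target = max(tasks, key=lambda t: t.reward_weight * t.variance)
        target.observe()
        looks.append(target.name)
    return looks

# Example: a high-priority, noisy speed task competes with a follow task.
tasks = [TaskModule("speedometer", reward_weight=2.0, noise_growth=0.5),
         TaskModule("lead_car", reward_weight=1.0, noise_growth=0.3)]
print(schedule_gaze(tasks))
```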

  20. Astro STARS Camp

    NASA Image and Video Library

    2011-06-28

    Tom Nicolaides, an aerospace technologist in the Engineering & Test Directorate at Stennis Space Center, looks on as 2011 Astro STARS participants take turns gazing at the sun through a special telescope. The sun-gazing activity was part of the Astro STARS (Spaceflight, Technology, Astronomy & Robotics at Stennis) camp for 13-to-15-year-olds June 27 - July 1. The weeklong science and technology camp is held each year onsite at the rocket engine test facility.

  1. Anxiety dissociates the adaptive functions of sensory and motor response enhancements to social threats

    PubMed Central

    El Zein, Marwa; Wyart, Valentin; Grèzes, Julie

    2015-01-01

    Efficient detection and reaction to negative signals in the environment is essential for survival. In social situations, these signals are often ambiguous and can imply different levels of threat for the observer, thereby making their recognition susceptible to contextual cues – such as gaze direction when judging facial displays of emotion. However, the mechanisms underlying such contextual effects remain poorly understood. By computational modeling of human behavior and electrical brain activity, we demonstrate that gaze direction enhances the perceptual sensitivity to threat-signaling emotions – anger paired with direct gaze, and fear paired with averted gaze. This effect arises simultaneously in ventral face-selective and dorsal motor cortices at 200 ms following face presentation, dissociates across individuals as a function of anxiety, and does not reflect increased attention to threat-signaling emotions. These findings reveal that threat tunes neural processing in fast, selective, yet attention-independent fashion in sensory and motor systems, for different adaptive purposes. DOI: http://dx.doi.org/10.7554/eLife.10274.001 PMID:26712157

  2. People with diabetic peripheral neuropathy display a decreased stepping accuracy during walking: potential implications for risk of tripping.

    PubMed

    Handsaker, J C; Brown, S J; Bowling, F L; Marple-Horvat, D E; Boulton, A J M; Reeves, N D

    2016-05-01

    The aim was to examine the stepping accuracy of people with diabetes and diabetic peripheral neuropathy. Fourteen patients with diabetic peripheral neuropathy (DPN), 12 patients with diabetes but no neuropathy (D) and 10 healthy non-diabetic control participants (C) took part. Accuracy of stepping was measured whilst the participants walked along a walkway consisting of 18 stepping targets. Preliminary data on visual gaze characteristics were also captured in a subset of participants (diabetic peripheral neuropathy group: n = 4; diabetes-alone group: n = 4; and control group: n = 4) during the same task. Patients in the diabetic peripheral neuropathy group and patients in the diabetes-alone group were significantly less accurate at stepping on targets than were control subjects (P < 0.05). Preliminary visual gaze analysis identified that patients with diabetic peripheral neuropathy were slower to look between targets, resulting in less time being spent looking at a target before foot-target contact. Impaired motor control is theorized to be a major factor underlying the changes in stepping accuracy, and altered visual gaze behaviour may potentially also play a role. Reduced stepping accuracy may indicate a decreased ability to control the placement of the lower limbs, leading to patients with neuropathy potentially being less able to avoid observed obstacles during walking. © 2015 Diabetes UK.

  3. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex.

    PubMed

    Morin, Elyse L; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G; Bell, Andrew H

    2015-09-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways, and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS) while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which were sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  4. Facial emotion processing in patients with social anxiety disorder and Williams–Beuren syndrome: an fMRI study

    PubMed Central

    Binelli, Cynthia; Muñiz, Armando; Subira, Susana; Navines, Ricard; Blanco-Hinojo, Laura; Perez-Garcia, Debora; Crippa, Jose; Farré, Magi; Pérez-Jurado, Luis; Pujol, Jesus; Martin-Santos, Rocio

    2016-01-01

    Background Social anxiety disorder (SAD) and Williams–Beuren syndrome (WBS) are 2 conditions with major differences in terms of genetics, development and cognitive profiles. Both conditions are associated with compromised abilities in overlapping areas, including social approach, processing of social emotional cues and gaze behaviour, and to some extent they are associated with opposite behaviours in these domains. We examined common and distinct patterns of brain activation during a facial emotion processing paradigm in patients with SAD and WBS. Methods We examined patients with SAD and WBS and healthy controls matched by age and laterality using functional MRI during the processing of happy, fearful and angry faces. Results We included 20 patients with SAD and 20 with WBS as well as 20 matched controls in our study. Patients with SAD and WBS did not differ in the pattern of limbic activation. We observed differences in early visual areas of the face processing network in patients with WBS and differences in the cortical prefrontal regions involved in the top–down regulation of anxiety and in the fusiform gyrus for patients with SAD. Compared with those in the SAD and control groups, participants in the WBS group did not activate the right lateral inferior occipital cortex. In addition, compared with controls, patients with WBS hypoactivated the posterior primary visual cortex and showed significantly less deactivation in the right temporal operculum. Participants in the SAD group showed decreased prefrontal activation compared with those in the WBS and control groups. In addition, compared with controls, participants with SAD showed decreased fusiform activation. Participants with SAD and WBS also differed in the pattern of activation in the superior temporal gyrus, a region that has been linked to gaze processing. Limitations The results observed in the WBS group are limited by the IQ of the WBS sample; however, the specificity of findings suggests that the pattern of brain activation observed for WBS is more likely to reflect a neurobiological substrate rather than intellectual impairment per se. Conclusion Patients with SAD and WBS showed common and specific patterns of brain activation. Our results highlight the role of cortical regions during facial emotion processing in individuals with SAD and WBS. PMID:26624523

  5. The German Version of the Gaze Anxiety Rating Scale (GARS): Reliability and Validity

    PubMed Central

    Domes, Gregor; Marx, Lisa; Spenthof, Ines; Heinrichs, Markus

    2016-01-01

    Objective Fear of eye gaze and avoidance of eye contact are core features of social anxiety disorders (SAD). To measure self-reported fear and avoidance of eye gaze, the Gaze Anxiety Rating Scale (GARS) has been developed and validated in recent years in its English version. The main objectives of the present study were to psychometrically evaluate the German translation of the GARS concerning its reliability, factorial structure, and validity. Methods Three samples of participants were enrolled in the study. (1) A non-patient sample (n = 353) completed the GARS and a set of trait questionnaires to assess internal consistency, test-retest reliability, factorial structure, and concurrent and divergent validity. (2) A sample of patients with SAD (n = 33) was compared to a healthy control group (n = 30) regarding their scores on the GARS and the trait measures. Results The German GARS fear and avoidance scales exhibited excellent internal consistency and high stability over 2 and 4 months, as did the original version. The English version’s factorial structure was replicated, yielding two categories of situations: (1) everyday situations and (2) situations involving high evaluative threat. GARS fear and avoidance displayed convergent validity with trait measures of social anxiety and were markedly higher in patients with GSAD than in healthy controls. Fear and avoidance of eye contact in situations involving high levels of evaluative threat related more closely to social anxiety than to gaze anxiety in everyday situations. Conclusions The German version of the GARS has demonstrated reliability and validity similar to the original version, and is thus well suited to capture fear and avoidance of eye contact in different social situations as a valid self-report measure of social anxiety and related disorders in the social domain for use in both clinical practice and research. PMID:26937638

  6. The German Version of the Gaze Anxiety Rating Scale (GARS): Reliability and Validity.

    PubMed

    Domes, Gregor; Marx, Lisa; Spenthof, Ines; Heinrichs, Markus

    2016-01-01

    Fear of eye gaze and avoidance of eye contact are core features of social anxiety disorders (SAD). To measure self-reported fear and avoidance of eye gaze, the Gaze Anxiety Rating Scale (GARS) has been developed and validated in recent years in its English version. The main objectives of the present study were to psychometrically evaluate the German translation of the GARS concerning its reliability, factorial structure, and validity. Three samples of participants were enrolled in the study. (1) A non-patient sample (n = 353) completed the GARS and a set of trait questionnaires to assess internal consistency, test-retest reliability, factorial structure, and concurrent and divergent validity. (2) A sample of patients with SAD (n = 33) was compared to a healthy control group (n = 30) regarding their scores on the GARS and the trait measures. The German GARS fear and avoidance scales exhibited excellent internal consistency and high stability over 2 and 4 months, as did the original version. The English version's factorial structure was replicated, yielding two categories of situations: (1) everyday situations and (2) situations involving high evaluative threat. GARS fear and avoidance displayed convergent validity with trait measures of social anxiety and were markedly higher in patients with GSAD than in healthy controls. Fear and avoidance of eye contact in situations involving high levels of evaluative threat related more closely to social anxiety than to gaze anxiety in everyday situations. The German version of the GARS has demonstrated reliability and validity similar to the original version, and is thus well suited to capture fear and avoidance of eye contact in different social situations as a valid self-report measure of social anxiety and related disorders in the social domain for use in both clinical practice and research.

  7. To Gaze or Not to Gaze: Visual Communication in Eastern Zaire. Sociolinguistic Working Paper Number 87.

    ERIC Educational Resources Information Center

    Blakely, Thomas D.

    The nature of gazing at someone or something, as a form of communication among the Bahemba people in eastern Zaire, is analyzed across a range of situations. Variations of steady gazing, a common eye contact routine, are outlined, including: (1) negative non-gazing or glance routines, especially in situations in which gazing would ordinarily…

  8. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm.

    PubMed

    Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun

    2018-05-19

    Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking. We proposed a combined gaze-tracking algorithm, in which a convolutional neural network is used to remove blinking images and predict coarse gaze position, and a geometric model is then defined for accurate human gaze tracking. Moreover, we proposed using the mean value of gazes to resolve pupil center changes caused by nystagmus in the calibration algorithm, so that an individual user only needs to calibrate once, which makes our system more convenient. Experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and that Etracker can provide an average gaze accuracy of 0.53° at a rate of 30–60 Hz.
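
    The calibration idea described in this abstract (averaging gaze samples so that nystagmus-related pupil-centre jitter cancels out, then mapping the averaged pupil position to screen coordinates) can be illustrated with a short, hedged sketch. The function names, the linear form of the mapping, and the data layout below are assumptions for illustration only, not the Etracker implementation.

        # Hypothetical sketch: average pupil-centre samples per calibration target,
        # then fit a least-squares affine map from pupil coordinates to screen
        # coordinates. A real system may use a richer geometric model.
        import numpy as np

        def mean_pupil_centres(samples_per_target):
            """samples_per_target: list of (N_i, 2) arrays of pupil-centre samples."""
            return np.array([s.mean(axis=0) for s in samples_per_target])

        def fit_linear_gaze_map(pupil_xy, screen_xy):
            """Least-squares affine map [x, y, 1] -> screen coordinates."""
            A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
            coeffs, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)
            return coeffs  # (3, 2) coefficient matrix

        def predict_gaze(coeffs, pupil_xy):
            A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
            return A @ coeffs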

  9. Reward Value-Contingent Changes of Visual Responses in the Primate Caudate Tail Associated with a Visuomotor Skill

    PubMed Central

    Kim, Hyoung F.; Hikosaka, Okihide

    2013-01-01

    A goal-directed action aiming at an incentive outcome, if repeated, becomes a skill that may be initiated automatically. We now report that the tail of the caudate nucleus (CDt) may serve to control a visuomotor skill. Monkeys looked at many fractal objects, half of which were always associated with a large reward (high-valued objects) and the other half with a small reward (low-valued objects). After several daily sessions, they developed a gaze bias, looking at high-valued objects even when no reward was associated. CDt neurons developed a response bias, typically showing stronger responses to high-valued objects. In contrast, their responses showed no change when object values were reversed frequently, although monkeys showed a strong gaze bias, looking at high-valued objects in a goal-directed manner. The biased activity of CDt neurons may be transmitted to the oculomotor region so that animals can choose high-valued objects automatically based on stable reward experiences. PMID:23825426

  10. Seductive eyes: attractiveness and direct gaze increase desire for associated objects.

    PubMed

    Strick, Madelijn; Holland, Rob W; van Knippenberg, Ad

    2008-03-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel objects were associated with either attractive or unattractive female faces, either displaying direct or averted gaze. An affective priming task showed more positive automatic evaluations of objects that were paired with attractive faces with direct gaze than attractive faces with averted gaze and unattractive faces, irrespective of gaze direction. Participants' self-reported desire for the objects matched the affective priming data. The results are discussed against the background of recent findings on affective consequences of gaze cueing.

  11. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    When driving cross-country, detection of and state estimation relative to negative obstacles such as ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision System (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray-value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. In particular, ditches are often very extended, so, given the cameras' restricted field of view, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurements of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The approach has been tested with the 5-ton van VaMoRs. Experimental results will be shown for driving in a typical off-road scenario.
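
    The cue combination described here, a photometric change coinciding with a range discontinuity, can be illustrated with a short, hedged sketch. The array names, the neighbour-difference test, and the thresholds below are simplifying assumptions for illustration, not the EMS-Vision implementation.

        import numpy as np

        def negative_obstacle_candidates(disparity, intensity,
                                         disp_jump=4.0, intensity_jump=30.0):
            """disparity, intensity: 2-D arrays aligned pixel-for-pixel.
            Flags pixels where a vertical disparity discontinuity coincides with a
            photometric change (e.g. a soil/vegetation boundary)."""
            d_disp = np.abs(np.diff(disparity.astype(float), axis=0))
            d_int = np.abs(np.diff(intensity.astype(float), axis=0))
            mask = (d_disp > disp_jump) & (d_int > intensity_jump)
            return np.pad(mask, ((1, 0), (0, 0)))   # restore original row count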

  12. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    PubMed

    Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael

    2015-01-01

    Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration divided by the sum of saccade and fixation durations) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest that gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
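
    The two gaze parameters defined in this abstract translate directly into simple computations. The sketch below assumes events have already been classified as saccades or fixations and uses hypothetical data structures; it illustrates the definitions, not the SMI analysis pipeline.

        # Index of ocular mobility: saccade time divided by (saccade + fixation) time.
        def index_of_ocular_mobility(events):
            """events: list of (kind, duration_s) tuples, kind in {'saccade', 'fixation'}."""
            saccade = sum(d for k, d in events if k == 'saccade')
            fixation = sum(d for k, d in events if k == 'fixation')
            total = saccade + fixation
            return saccade / total if total > 0 else 0.0

        # Midline crossings: how often horizontal gaze position crosses the screen midline.
        def midline_crossings(gaze_x, screen_width_px):
            mid = screen_width_px / 2.0
            sides = [x > mid for x in gaze_x]
            return sum(1 for a, b in zip(sides, sides[1:]) if a != b)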

  13. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    PubMed

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions where mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task, in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye-contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone; it did, however, affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by patients' altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder; they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. 4-aminopyridine restores vertical and horizontal neural integrator function in downbeat nystagmus.

    PubMed

    Kalla, Roger; Glasauer, Stefan; Büttner, Ulrich; Brandt, Thomas; Strupp, Michael

    2007-09-01

    Downbeat nystagmus (DBN), the most common form of acquired fixation nystagmus, is often caused by cerebellar degeneration, especially if the vestibulo-cerebellum is involved. The upward ocular drift in DBN has a spontaneous and a vertical gaze-evoked component. Since cerebellar involvement is suspected to be the underlying pathomechanism of DBN, we tested in 15 patients with DBN whether the application of the potassium-channel blocker 4-aminopyridine (4-AP), which increases the excitability of cerebellar Purkinje cells as shown in animal experiments, reduces the vertical ocular drift leading to nystagmus. Fifteen age-matched healthy subjects served as the control group. 4-AP may affect spontaneous drift or gaze-evoked drift by either enhancing visual fixation ability or restoring vision-independent gaze holding. We therefore recorded 3D slow-phase eye movements using search coils during attempted fixation in nine different eye positions, with or without a continuously visible target, before and 45 min after ingestion of 10 mg of 4-AP. Since the effect of 4-AP may depend on the associated etiology, we divided our patients into three groups (cerebellar atrophy, n = 4; idiopathic DBN, n = 5; other etiology, n = 6). 4-AP decreased DBN during gaze straight ahead in 12 of 15 patients. Statistical analysis showed that improvement occurred predominantly in patients with cerebellar atrophy, in whom the drift was reduced from -4.99 +/- 1.07 deg/s (mean +/- SE) before treatment to -0.60 +/- 0.82 deg/s afterwards. Regression analysis of slow-phase velocity (SPV) in different eye positions revealed that vertical and horizontal gaze-evoked drift was significantly reduced independently of the patient group and resulted, on average, in perfect gaze holding. Since the observed improvements were independent of target visibility, 4-AP improved fixation by restoring gaze-holding ability. Overall, the present study demonstrates that 4-AP has a differential effect on DBN: drift with gaze straight ahead was predominantly reduced in patients with cerebellar atrophy, but less so in the remaining patients; 4-AP improved neural integrator function (i.e., reduced gaze-evoked drift) on average, regardless of etiology. Our results thus show that 4-AP was a successful treatment option in the majority of DBN patients, possibly by increasing Purkinje cell excitability in the cerebellar flocculi. It may work best when DBN is associated with cerebellar atrophy. Furthermore, 4-AP may be a promising treatment option for patients with a dominant gaze-evoked component of nystagmus, regardless of its etiology.
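
    The regression described here, slow-phase velocity as a function of eye position, is a standard way to quantify gaze-evoked drift: the slope reflects how leaky the neural integrator is, and the intercept estimates the drift with gaze straight ahead. The sketch below, with made-up numbers, shows one way such a fit could be computed; it is not the authors' analysis code.

        import numpy as np

        def fit_gaze_evoked_drift(eye_position_deg, spv_deg_per_s):
            # Linear fit: SPV = slope * eye_position + intercept.
            # slope (1/s) quantifies the gaze-evoked component; intercept (deg/s)
            # estimates the drift with gaze straight ahead.
            slope, intercept = np.polyfit(eye_position_deg, spv_deg_per_s, 1)
            return slope, intercept

        # Hypothetical pre-treatment data for one patient (eye position vs. SPV).
        slope, intercept = fit_gaze_evoked_drift([-20, -10, 0, 10, 20],
                                                 [-8.0, -6.4, -5.0, -3.6, -2.2])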

  15. Modeling eye-head gaze shifts in multiple contexts without motor planning

    PubMed Central

    Haji-Abolhassani, Iman; Guitton, Daniel

    2016-01-01

    During gaze shifts, the eyes and head collaborate to rapidly capture a target (saccade) and fixate it. Accordingly, models of gaze shift control should embed both saccadic and fixation modes and a mechanism for switching between them. We demonstrate a model in which the eye and head platforms are driven by a shared gaze error signal. To limit the number of free parameters, we implement a model reduction approach in which steady-state cerebellar effects at each of their projection sites are lumped with the parameter of that site. The model topology is consistent with anatomy and neurophysiology, and can replicate eye-head responses observed in multiple experimental contexts: 1) observed gaze characteristics across species and subjects can emerge from this structure with minor parametric changes; 2) gaze can move to a goal while in the fixation mode; 3) ocular compensation for head perturbations during saccades could rely on vestibular-only cells in the vestibular nuclei with postulated projections to burst neurons; 4) two nonlinearities suffice, i.e., the experimentally-determined mapping of tectoreticular cells onto brain stem targets and the increased recruitment of the head for larger target eccentricities; 5) the effects of initial conditions on eye/head trajectories are due to neural circuit dynamics, not planning; and 6) “compensatory” ocular slow phases exist even after semicircular canal plugging, because of interconnections linking eye-head circuits. Our model structure also simulates classical vestibulo-ocular reflex and pursuit nystagmus, and provides novel neural circuit and behavioral predictions, notably that both eye-head coordination and segmental limb coordination are possible without trajectory planning. PMID:27440248
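
    The core idea of the model, a single gaze error signal driving both the eye and head plants, can be illustrated with a deliberately simplified discrete-time loop. The parameter values, the hard oculomotor-range limit, and the absence of vestibular interconnections below are simplifying assumptions for illustration; this is not a re-implementation of the published model.

        # Toy feedback loop: eye and head velocities are both proportional to the
        # shared gaze error; the eye saturates at its oculomotor range and the
        # head carries the remainder of the gaze shift.
        def simulate_gaze_shift(target_deg, steps=300, dt=0.001,
                                k_eye=60.0, k_head=15.0, eye_range_deg=35.0):
            eye, head, trajectory = 0.0, 0.0, []
            for _ in range(steps):
                gaze_error = target_deg - (eye + head)
                eye_vel = k_eye * gaze_error if abs(eye) < eye_range_deg else 0.0
                head_vel = k_head * gaze_error
                eye += eye_vel * dt
                head += head_vel * dt
                trajectory.append((eye, head, eye + head))
            return trajectory

        final_eye, final_head, final_gaze = simulate_gaze_shift(60.0)[-1]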

  16. Eye-gaze independent EEG-based brain-computer interfaces for communication.

    PubMed

    Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.

  17. Eye-gaze independent EEG-based brain-computer interfaces for communication

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  18. Inhibitory control in mind and brain 2.0: Blocked-input models of saccadic countermanding

    PubMed Central

    Logan, Gordon D.; Yamaguchi, Motonori; Schall, Jeffrey D.; Palmeri, Thomas J.

    2015-01-01

    The interactive race model of saccadic countermanding assumes that response inhibition results from an interaction between a go unit, identified with gaze-shifting neurons, and a stop unit, identified with gaze-holding neurons, in which activation of the stop unit inhibits the growth of activation in the go unit to prevent it from reaching threshold. The interactive race model accounts for behavioral data and predicts physiological data in monkeys performing the stop-signal task. We propose an alternative model that assumes that response inhibition results from blocking the input to the go unit. We show that the blocked-input model accounts for behavioral data as accurately as the original interactive race model and predicts aspects of the physiological data more accurately. We extend the models to address the steady-state fixation period before the go stimulus is presented and find that the blocked-input model fits better than the interactive race model. We consider a model in which fixation activity is boosted when a stop signal occurs and find that it fits as well as the blocked input model but predicts very high steady-state fixation activity after the response is inhibited. We discuss the alternative linking propositions that connect computational models to neural mechanisms, the lessons to be learned from model mimicry, and generalization from countermanding saccades to countermanding other kinds of responses. PMID:25706403
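
    The contrast drawn in this abstract, stop-unit inhibition of the go unit versus blocking the go unit's input, can be made concrete with a toy accumulator simulation. All parameter values below are arbitrary and purely illustrative; this is not the authors' model code.

        import random

        def go_accumulator_trial(stop_time=None, mode='blocked', threshold=1.0,
                                 drift=0.010, noise=0.005, inhibition=0.030, max_t=300):
            go = 0.0
            for t in range(max_t):
                stop_active = stop_time is not None and t >= stop_time
                if mode == 'blocked':
                    # Blocked input: the go unit simply stops receiving its drive.
                    go += (0.0 if stop_active else drift) + random.gauss(0.0, noise)
                else:
                    # Interactive race: the stop unit actively inhibits go activation.
                    go += drift + random.gauss(0.0, noise)
                    if stop_active:
                        go -= inhibition
                go = max(go, 0.0)
                if go >= threshold:
                    return t        # saccade initiated at time step t
            return None             # response successfully inhibited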

  19. Reduction in Dynamic Visual Acuity Reveals Gaze Control Changes Following Spaceflight

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Miller, Chris; Lawrence, Emily L.; Mulavara Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    INTRODUCTION: Exposure to microgravity causes adaptive changes in eye-head coordination that can lead to altered gaze control. This could affect postflight visual acuity during head and body motion. The goal of this study was to characterize changes in dynamic visual acuity after long-duration spaceflight. METHODS: Dynamic Visual Acuity (DVA) data from 14 astro/cosmonauts were collected after long-duration (6 months) spaceflight. The difference in acuity between seated and walking conditions provided a metric of change in the subjects' ability to maintain gaze fixation during self-motion. In each condition, a psychophysical threshold detection algorithm was used to display Landolt ring optotypes at a size that was near each subject's acuity threshold. Verbal responses regarding the orientation of the gap were recorded as the optotypes appeared sequentially on a computer display 4 meters away. During the walking trials, subjects walked at 6.4 km/h on a motorized treadmill. RESULTS: A decrement in mean postflight DVA was found, with mean values returning to baseline within 1 week. The population mean showed a consistent improvement in DVA performance, but it was accompanied by high variability. A closer examination of the individual subjects' recovery curves revealed that many did not follow a pattern of continuous improvement with each passing day. When adjusted on the basis of previous long-duration flight experience, the population mean shows a "bounce" in the re-adaptation curve. CONCLUSION: Gaze control during self-motion is altered following long-duration spaceflight and changes in postflight DVA performance indicate that vestibular re-adaptation may be more complex than a gradual return to normal.
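
    The outcome measure described here reduces to a simple difference score. A hedged sketch, assuming acuity thresholds expressed in logMAR units (higher = worse), is shown below; the names and numbers are hypothetical.

        def dva_decrement(seated_logmar, walking_logmar):
            """Positive values mean acuity is worse while walking than while seated."""
            return walking_logmar - seated_logmar

        # Hypothetical preflight vs. early-postflight values for one crewmember.
        preflight = dva_decrement(seated_logmar=-0.10, walking_logmar=0.00)    # 0.10
        postflight = dva_decrement(seated_logmar=-0.10, walking_logmar=0.15)   # 0.25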

  20. Homonymous Visual Field Loss and Its Impact on Visual Exploration: A Supermarket Study

    PubMed Central

    Kasneci, Enkelejda; Sippel, Katrin; Heister, Martin; Aehling, Katrin; Rosenstiel, Wolfgang; Schiefer, Ulrich; Papageorgiou, Elena

    2014-01-01

    Purpose Homonymous visual field defects (HVFDs) may critically interfere with quality of life. The aim of this study was to assess the impact of HVFDs on a supermarket search task and to investigate the influence of visual search on task performance. Methods Ten patients with HVFDs (four with a right-sided [HR] and six with a left-sided defect [HL]) and 10 healthy-sighted, sex-, and age-matched control subjects were asked to collect 20 products placed on two supermarket shelves as quickly as possible. Task performance was rated as “passed” or “failed” with regard to the time per correctly collected item (TC-failed = 4.84 seconds, based on the performance of healthy subjects). Eye movements were analyzed regarding the horizontal gaze activity, glance frequency, and glance proportion for different VF areas. Results Seven of 10 HVFD patients (three HR, four HL) passed the supermarket search task. Patients who passed needed significantly less time per correctly collected item and looked more frequently toward the VFD area than patients who failed. HL patients who passed the test showed a higher percentage of glances beyond the 60° VF (P < 0.05). Conclusion A considerable number of HVFD patients performed successfully and could compensate for the HVFD by shifting the gaze toward the peripheral VF and the VFD area. Translational Relevance These findings provide new insights on gaze adaptations in patients with HVFDs during activities of daily living and will enhance the design and development of realistic examination tools for use in the clinical setting to improve daily functioning. (http://www.clinicaltrials.gov, NCT01372319, NCT01372332) PMID:25374771
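
    The pass/fail rating used in this study is a simple ratio check, illustrated below under the stated cut-off of 4.84 seconds per correctly collected item (function names and example numbers are hypothetical).

        def rate_supermarket_task(total_time_s, items_correct, cutoff_s=4.84):
            time_per_item = total_time_s / items_correct if items_correct else float('inf')
            return time_per_item, ('passed' if time_per_item < cutoff_s else 'failed')

        # Example: 18 items collected correctly in 78 seconds -> about 4.33 s/item, "passed".
        time_per_item, verdict = rate_supermarket_task(total_time_s=78.0, items_correct=18)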

  1. Dynamic Visual Acuity: a Functionally Relevant Research Tool

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Miller, Chris A.; Mulavara, Ajitkumar P.; Wood, Scott J.; Cohen, Helen S.; Bloomberg, Jacob J.

    2010-01-01

    Coordinated movements between the eyes and head are required to maintain a stable retinal image during head and body motion. The vestibulo-ocular reflex (VOR) plays a significant role in this gaze control system that functions well for most daily activities. However, certain environmental conditions or interruptions in normal VOR function can lead to inadequate ocular compensation, resulting in oscillopsia, or blurred vision. It is therefore possible to use acuity to determine when the environmental conditions, VOR function, or the combination of the two is not conducive to maintaining clear vision. Over several years we have designed and tested several tests of dynamic visual acuity (DVA). Early tests used the difference between standing and walking acuity to assess decrements in the gaze stabilization system after spaceflight. Supporting ground-based studies measured the responses from patients with bilateral vestibular dysfunction and explored the effects of visual target viewing distance and gait cycle events on walking acuity. Results from these studies show that DVA is affected by spaceflight, is degraded in patients with vestibular dysfunction, changes with target distance, and is not consistent across the gait cycle. We have recently expanded our research to include studies in which seated subjects are translated or rotated passively. Preliminary results from this work indicate that gaze stabilization ability may differ between similar active and passive conditions, may change with age, and can be affected by the location of the visual target with respect to the axis of motion. Use of DVA as a diagnostic tool is becoming more popular, but the functional nature of the acuity outcome measure also makes it ideal for identifying conditions that could lead to degraded vision. Once such conditions are identified, steps can be taken to alter the problematic environments to improve the man-machine interface and optimize performance.

  2. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    PubMed

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  3. Sex-related differences in behavioral and amygdalar responses to compound facial threat cues.

    PubMed

    Im, Hee Yeon; Adams, Reginald B; Cushing, Cody A; Boshyan, Jasmine; Ward, Noreen; Kveraga, Kestutis

    2018-03-08

    During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both threat presence and its location, whereas fear with direct gaze sends an incongruent cue, leaving threat location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via magnocellular (M) pathway and reflective processing via parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotional perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity both to fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to functional reactivity differences, females had proportionately greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with observer's sex. © 2018 Wiley Periodicals, Inc.

  4. Oxytocin enhances gaze-following responses to videos of natural social behavior in adult male rhesus monkeys

    PubMed Central

    Putnam, P.T.; Roman, J.M.; Zimmerman, P.E.; Gothard, K.M.

    2017-01-01

    Gaze following is a basic building block of social behavior that has been observed in multiple species, including primates. The absence of gaze following is associated with abnormal development of social cognition, such as in autism spectrum disorders (ASD). Some social deficits in ASD, including the failure to look at eyes and the inability to recognize facial expressions, are ameliorated by intranasal administration of oxytocin (IN-OT). Here we tested the hypothesis that IN-OT might enhance social processes that require active engagement with a social partner, such as gaze following. Alternatively, IN-OT may only enhance the perceptual salience of the eyes, and may not modify behavioral responses to social signals. To test this hypothesis, we presented four monkeys with videos of conspecifics displaying natural behaviors. Each video was viewed multiple times before and after the monkeys received intranasally either 50 IU of OT or saline. We found that despite a gradual decrease in attention to the repeated viewing of the same videos (habituation), IN-OT consistently increased the frequency of gaze following saccades. Further analysis confirmed that these behaviors did not occur randomly, but rather predictably in response to the same segments of the videos. These findings suggest that in response to more naturalistic social stimuli IN-OT enhances the propensity to interact with a social partner rather than merely elevating the perceptual salience of the eyes. In light of these findings, gaze following may serve as a metric for pro-social effects of oxytocin that target social action more than social perception. PMID:27343726

  5. Acquisition of joint attention by olive baboons gesturing toward humans.

    PubMed

    Lamaury, Augustine; Cochet, Hélène; Bourjade, Marie

    2017-07-10

    Joint attention is a core ability of human social cognition that broadly refers to the coordination of attention with both the presence and activity of social partners. In both human and non-human primates, joint attention can be assessed from behaviour; gestures and gaze alternation between the partner and a distal object are standard behavioural manifestations of joint attention. Here we examined the acquisition of joint attention in olive baboons as a function of their individual experience of a human partner's attentional states during training regimes. Eleven olive baboons (Papio anubis) were observed during their training to perform food-requesting gestures, with training conducted either by (1) a human facing them (face condition) or (2) a human positioned in profile who never turned to them (profile condition). We found that neither gestures nor gaze alternation were present at the start of training; both developed over the training period. Only baboons in the face condition showed an increase in the number of gaze alternations, and their gaze pattern progressively shifted to a sequence in which gazes and gestures were coordinated in time. In contrast, baboons trained by a human in profile showed significantly less coordination of gazes with gestures but still learned to request food with their gestures. These results suggest that the partner's social attention plays an important role in the acquisition of visual joint attention and, to a lesser extent, in gesture learning in baboons. Interspecific interactions appear to offer rich opportunities to manipulate and thus identify the social contexts in which socio-communicative skills develop.

  6. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed

    Duranton, Charlotte; Range, Friederike; Virányi, Zsófia

    2017-07-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following into distant space and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs' following gaze with gaze as well as their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or they alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour.

  7. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access facilities, remain limited. In this paper, we propose a novel method for optimizing the position of the displayed items for gaze-controlled tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access to type 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best-performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and a low workload (NASA TLX score of 17) for the letter-frequency- and time-based organization with script-specific arrangement. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
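
    The layout principle described here, placing frequent characters at the cheapest-to-select menu positions, can be sketched as a greedy assignment. The dictionaries, function names, and toy frequencies below are hypothetical, and the paper's actual optimization may combine the two criteria differently.

        def optimise_layout(char_freq, slot_time):
            """char_freq: {character: relative frequency}; slot_time: {slot: selection time (s)}."""
            chars = sorted(char_freq, key=char_freq.get, reverse=True)   # most frequent first
            slots = sorted(slot_time, key=slot_time.get)                 # fastest slot first
            return dict(zip(slots, chars))

        def expected_selection_cost(layout, char_freq, slot_time):
            return sum(char_freq[c] * slot_time[s] for s, c in layout.items())

        # Toy example with three characters and three menu slots.
        layout = optimise_layout({'क': 0.06, 'र': 0.05, 'त': 0.04},
                                 {'slot1': 1.2, 'slot2': 1.5, 'slot3': 1.9})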

  8. Effect that Smell Presentation Has on an Individual in Regards to Eye Catching and Memory

    NASA Astrophysics Data System (ADS)

    Tomono, Akira; Kanda, Koyori; Otake, Syunya

    If viewers' eyes can be drawn more strongly to target objects by matching a smell to an important scene of a movie or commercial, the value of the image content will rise. In this paper, we describe an image presentation system that can also deliver smells, and we use gaze-point analysis to examine why the sense of presence improves when a smell is matched to the image. The relationship between eye-catching ability and the position of the viewed object was examined using an image of a scene in which someone eats three kinds of fruit. These objects were gazed at for longer once their smells were released. When no smell was released, gaze moved actively across the entire screen to gather information. When a smell was presented, on the other hand, the subject became interested in the corresponding object, and gaze tended to stay within the narrow area surrounding it. Moreover, we investigated the effect on memory by adding smells to the flowers in a virtual flower shop built with an immersive virtual reality system (HoloStageTM). Scented flowers were memorized more easily than scentless ones. It appears that viewers actively acquire information in response to smell.

  9. Is eye to eye contact really threatening and avoided in social anxiety?--An eye-tracking and psychophysiology study.

    PubMed

    Wieser, Matthias J; Pauli, Paul; Alpers, Georg W; Mühlberger, Andreas

    2009-01-01

    The effects of direct and averted gaze on autonomic arousal and gaze behavior in social anxiety were investigated using a new paradigm including animated movie stimuli and eye-tracking methodology. While high, medium, and low socially anxious (HSA vs. MSA vs. LSA) women watched animated movie clips, in which faces responded to the gaze of the participants with either direct or averted gaze, their eye movements, heart rate (HR) and skin conductance responses (SCR) were continuously recorded. Groups did not differ in their gaze behavior concerning direct vs. averted gaze, but high socially anxious women tended to fixate the eye region of the presented face longer than MSA and LSA, respectively. Furthermore, they responded to direct gaze with more pronounced cardiac acceleration. This physiological finding indicates that direct gaze may be a fear-relevant feature for socially anxious individuals in social interaction. However, this seems not to result in gaze avoidance. Future studies should examine the role of gaze direction and its interaction with facial expressions in social anxiety and its consequences for avoidance behavior and fear responses. Additionally, further research is needed to clarify the role of gaze perception in social anxiety.

  10. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2016-01-01

    The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.

  11. Activity of long-lead burst neurons in pontine reticular formation during head-unrestrained gaze shifts.

    PubMed

    Walton, Mark M G; Freedman, Edward G

    2014-01-01

    Primates explore a visual scene through a succession of saccades. Much of what is known about the neural circuitry that generates these movements has come from neurophysiological studies using subjects with their heads restrained. Horizontal saccades and the horizontal components of oblique saccades are associated with high-frequency bursts of spikes in medium-lead burst neurons (MLBs) and long-lead burst neurons (LLBNs) in the paramedian pontine reticular formation. For LLBNs, the high-frequency burst is preceded by a low-frequency prelude that begins 12-150 ms before saccade onset. In terms of the lead time between the onset of prelude activity and saccade onset, the anatomical projections, and the movement field characteristics, LLBNs are a heterogeneous group of neurons. Whether this heterogeneity is endemic of multiple functional subclasses is an open question. One possibility is that some may carry signals related to head movement. We recorded from LLBNs while monkeys performed head-unrestrained gaze shifts, during which the kinematics of the eye and head components were dissociable. Many cells had peak firing rates that never exceeded 200 spikes/s for gaze shifts of any vector. The activity of these low-frequency cells often persisted beyond the end of the gaze shift and was usually related to head-movement kinematics. A subset was tested during head-unrestrained pursuit and showed clear modulation in the absence of saccades. These "low-frequency" cells were intermingled with MLBs and traditional LLBNs and may represent a separate functional class carrying signals related to head movement.

  12. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals.

    PubMed

    Ikeda, Akitsu; Miyamoto, Jun J; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    According to the theory of incentive sensitization, exposure to food stimuli sensitizes the brain's reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has been increasingly implicated as a method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as well as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed tasks identical to those in Experiments 1 and 2 but rested between tasks instead of performing sham feeding (gum chewing) or actual feeding. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in a significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze direction bias, or gaze duration bias. In conclusion, chewing stimulation reduced subjective appetite and attentional bias to food, particularly initial attentional orientation to food. These findings suggest that chewing stimulation, even without taste, odor, or ingestion, may affect reward circuits and help prevent impulsive eating.
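
    The two bias indices reported here follow common conventions in attentional-bias research; the sketch below illustrates those conventions with hypothetical data formats (the paper's exact scoring may differ). Positive values indicate a bias toward food.

        from statistics import mean

        def reaction_time_bias(rt_probe_at_nonfood_ms, rt_probe_at_food_ms):
            """Visual probe task: slower responses when the probe replaces the non-food
            image imply that attention was held by the food image."""
            return mean(rt_probe_at_nonfood_ms) - mean(rt_probe_at_food_ms)

        def gaze_direction_bias(first_fixation_targets):
            """Proportion of trials whose first fixation landed on the food image,
            relative to the 50% chance level."""
            n_food = sum(1 for target in first_fixation_targets if target == 'food')
            return n_food / len(first_fixation_targets) - 0.5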

  13. Chewing Stimulation Reduces Appetite Ratings and Attentional Bias toward Visual Food Stimuli in Healthy-Weight Individuals

    PubMed Central

    Ikeda, Akitsu; Miyamoto, Jun J.; Usui, Nobuo; Taira, Masato; Moriyama, Keiji

    2018-01-01

    According to the theory of incentive sensitization, exposure to food stimuli sensitizes the brain’s reward circuits and enhances attentional bias toward food. Therefore, reducing attentional bias to food could possibly be beneficial in preventing impulsive eating. Chewing has been increasingly implicated as a method for reducing appetite; however, no studies have investigated the effect of chewing on attentional bias to food. In this study, we investigated whether chewing stimulation (i.e., chewing tasteless gum) reduces attentional bias to food as well as actual feeding (i.e., ingesting a standardized meal) does. We measured reaction time, gaze direction and gaze duration to assess attentional bias toward food images in pairs of food and non-food images that were presented in a visual probe task (Experiment 1, n = 21) and/or eye-tracking task (Experiment 2, n = 20). We also measured appetite ratings using a visual analog scale. In addition, we conducted a control study in which the same number of participants performed tasks identical to those in Experiments 1 and 2 but rested between tasks instead of performing sham feeding (gum chewing) or actual feeding. Two-way ANOVA revealed that after actual feeding, subjective ratings of hunger, preoccupation with food, and desire to eat significantly decreased, whereas fullness significantly increased. Sham feeding showed the same trends, but to a lesser degree. Results of the visual probe task in Experiment 1 showed that both sham feeding and actual feeding reduced reaction time bias significantly. Eye-tracking data showed that both sham and actual feeding resulted in a significant reduction in gaze direction bias, indexing initial attentional orientation. Gaze duration bias was unaffected. In both control experiments, one-way ANOVAs showed no significant differences between immediately before and after the resting state for any of the appetite ratings, reaction time bias, gaze direction bias, or gaze duration bias. In conclusion, chewing stimulation reduced subjective appetite and attentional bias to food, particularly initial attentional orientation to food. These findings suggest that chewing stimulation, even without taste, odor, or ingestion, may affect reward circuits and help prevent impulsive eating. PMID:29472880

  14. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    PubMed Central

    Torres-Marín, Jorge; Carretero-Dios, Hugo; Acosta, Alberto; Lupiáñez, Juan

    2017-01-01

    Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistent with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery situations. Although research on gelotophobia has significantly progressed over the past two decades, no evidence exists concerning the potential effects of gelotophobia in reaction to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others' intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two different experiments, participants performed an experimental gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes' directions on faces displaying a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait-gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes did in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not have any special role in the observed differences between individuals with high vs. low trait-gelotophobia. In Study 2 (N = 40), we replicated the pattern of data concerning gaze discrimination ability, even after controlling for individuals' scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others' emotions, or a general incorrect attribution of affective features, such as valence, intensity, or arousal. Therefore, this bias in processing gaze might be related to the global processes of social cognition. Further research is needed to explore how eye contact relates to the fear of being laughed at. PMID:29167652

  15. Differences in gaze anticipation for locomotion with and without vision

    PubMed Central

    Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel

    2015-01-01

    Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313

  16. Optimal eye movement strategies: a comparison of neurosurgeons gaze patterns when using a surgical microscope.

    PubMed

    Eivazi, Shahram; Hafez, Ahmad; Fuhl, Wolfgang; Afkari, Hoorieh; Kasneci, Enkelejda; Lehecka, Martin; Bednarik, Roman

    2017-06-01

    Previous studies have consistently demonstrated gaze behaviour differences related to expertise during various surgical procedures. In micro-neurosurgery, however, there is a lack of empirically demonstrated individual differences associated with visual attention. It is unknown exactly how neurosurgeons see a stereoscopic magnified view in the context of micro-neurosurgery and what this implies for medical training. We report on an investigation of the eye movement patterns in micro-neurosurgery using a state-of-the-art eye tracker. We studied the eye movements of nine neurosurgeons while performing cutting and suturing tasks under a surgical microscope. Eye-movement characteristics, such as fixation (focus level) and saccade (visual search pattern), were analysed. The results show a strong relationship between the level of microsurgical skill and the gaze pattern, with greater expertise associated with greater eye control, stability, and focus. For example, in the cutting task, well-trained surgeons' fixation durations on the operating field were roughly twice as long as those of the novices (expert, 848 ms; novice, 402 ms). Maintaining steady visual attention on the target (fixation), as well as being able to make quick eye jumps from one target to another (saccades), are two important elements for the success of neurosurgery. The captured gaze patterns can be used to improve medical education, as part of an assessment system or in a gaze-training application.
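
    Editor's note: the abstract reports fixation durations but not the detection algorithm; the sketch below uses a generic dispersion-threshold (I-DT) approach, with illustrative thresholds, to show how such durations are typically extracted from raw gaze samples.

      def fixation_durations(xs, ys, ts, max_dispersion=30.0, min_duration=0.100):
          """Return fixation durations (s) from gaze samples using a simple dispersion threshold (pixels)."""
          durations = []
          i, n = 0, len(ts)
          while i < n:
              j = i
              while j + 1 < n:
                  wx, wy = xs[i:j + 2], ys[i:j + 2]
                  # grow the window while the samples stay within the dispersion threshold
                  if (max(wx) - min(wx)) + (max(wy) - min(wy)) > max_dispersion:
                      break
                  j += 1
              if ts[j] - ts[i] >= min_duration:
                  durations.append(ts[j] - ts[i])
                  i = j + 1
              else:
                  i += 1
          return durations

      # Mean fixation duration could then be compared between expert and novice recordings.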

  17. Group Differences in the Mutual Gaze of Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Bard, Kim A.; Myowa-Yamakoshi, Masako; Tomonaga, Masaki; Tanaka, Masayuki; Costall, Alan; Matsuzawa, Tetsuro

    2005-01-01

    A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic…

  18. 3D gaze tracking method using Purkinje images on eye optical model and pupil

    NASA Astrophysics Data System (ADS)

    Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung

    2012-05-01

    Gaze tracking detects the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the features for calculating the Z gaze position. The pupil size is used because the pupil accommodates according to the gaze position in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated by the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.
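
    Editor's note: a hedged sketch of the feature-to-depth step described above, using scikit-learn's MLPRegressor as a stand-in for the paper's MLP; the feature layout, network size, and calibration data are placeholders, not values from the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Hypothetical per-sample features: 1st and 4th Purkinje-image positions relative to
      # the pupil centre, their inter-distance, and the pupil size.
      X_calib = np.random.rand(200, 6)              # placeholder calibration features
      z_calib = np.random.uniform(0.3, 1.0, 200)    # placeholder gaze depths (m)

      mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      mlp.fit(X_calib, z_calib)
      z_hat = mlp.predict(X_calib[:1])              # estimated Z gaze position for one sample
      print(z_hat)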

  19. SU-E-J-187: Management of Optic Organ Motion in Fractionated Stereotactic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manning, M; Maurer, J

    2015-06-15

    Purpose: Fractionated stereotactic radiotherapy (FSRT) for optic nerve tumors can potentially use planning target volume (PTV) expansions as small as 1–5 mm. However, the motion of the intraorbital segment of the optic nerve has not been studied. Methods: A subject with a right optic nerve sheath meningioma underwent CT simulation in three fixed gaze positions: right, left, and fixed forward at a marker. The gross tumor volume (GTV) and the organs-at-risk (OAR) were contoured on all three scans. An IMRT plan using 10 static non-coplanar fields to 50.4 Gy in 28 fractions was designed to treat the fixed-forward-gazing GTV with a 1 mm PTV, and the resulting coverage was evaluated for the GTV in the three positions. As an alternative, the composite structures were computed to generate the internal target volume (ITV), a 1 mm expansion free-gazing PTV, and planning organ-at-risk volumes (PRVs) for free-gazing treatment. A comparable IMRT plan was created for the free-gazing PTV. Results: If the patient were treated using the fixed forward gaze plan while looking straight, right, and left, the V100% for the GTV was 100.0%, 33.1%, and 0.1%, respectively. The volumes of the PTVs for the fixed gaze and free-gazing plans were 0.79 and 2.21 cc, respectively, increasing the PTV by a factor of 2.6. The V100% for the fixed gaze and free-gazing plans were 0.85 cc and 2.8 cc, respectively, increasing the treated volume by a factor of 3.3. Conclusion: Fixed gaze treatment appears to provide greater organ sparing than free-gazing. However, unanticipated intrafraction right or left gaze can produce a geometric miss. Further study of optic nerve motion appears to be warranted in areas such as intrafraction optical confirmation of fixed gaze and optimized gaze directions to minimize lens and other normal organ dose in cranial radiotherapy.

  20. Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the "live lab" paradigm.

    PubMed

    Gregory, Nicola J; Antolin, Jastine V

    2018-05-01

    Research has shown that people's gaze is biased away from faces in the real world but towards them when they are viewed onscreen. However, non-equivalent stimulus conditions may have confounded this research: onscreen stimuli were viewed as pre-recordings, where interaction was not possible, whereas real-world stimuli were viewed in real time, where interaction was possible. We assessed the independent contributions of online social presence and the ability to interact on social gaze by developing the "live lab" paradigm. Participants in three groups (N = 132) viewed a confederate as (1) a live webcam stream where interaction was not possible (one-way), (2) a live webcam stream where an interaction was possible (two-way), or (3) a pre-recording. Potential for interaction, rather than online social presence, was the primary influence on gaze behaviour: participants in the pre-recorded and one-way conditions looked more to the face than those in the two-way condition, particularly when the confederate made "eye contact." Fixation durations to the face were shorter when the scene was viewed live, particularly during a bid for eye contact. Our findings support the dual function of gaze but suggest that online social presence alone is not sufficient to activate social norms of civil inattention. Implications for the reinterpretation of previous research are discussed.

  1. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    PubMed

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for and documenting information. A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors: patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. Published by Elsevier Ireland Ltd.
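
    Editor's note: a toy illustration of lag-1 sequential analysis on coded gaze states; the state codes and statistics are simplified assumptions (the study also applies significance testing to the transitions).

      from collections import Counter
      from itertools import pairwise  # Python 3.10+

      # Toy sequence of coded gaze states (codes are illustrative, not the study's scheme)
      codes = ["dr_gazes_pt", "pt_gazes_dr", "dr_gazes_ehr", "pt_gazes_dr", "dr_gazes_pt", "pt_gazes_dr"]

      pair_counts = Counter(pairwise(codes))       # lag-1 transition counts
      antecedent_counts = Counter(codes[:-1])

      for (a, b), n in sorted(pair_counts.items()):
          print(f"P({b} | {a}) = {n / antecedent_counts[a]:.2f}")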

  2. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    PubMed Central

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for and documenting information. Methods A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors: patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. PMID:24380671

  3. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.
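
    Editor's note: a back-of-the-envelope sketch of how a measured head-movement range could constrain the camera's required viewing angle and depth of field; all numbers are assumed for illustration and are not taken from the paper.

      import math

      lateral_range_cm = 20.0    # assumed side-to-side head movement while seated
      eye_margin_cm = 4.0        # margin so the eye region stays inside the frame
      working_dist_cm = 60.0     # assumed camera-to-user distance
      depth_range_cm = 15.0      # assumed forward/backward head movement

      half_width = lateral_range_cm / 2 + eye_margin_cm
      viewing_angle_deg = 2 * math.degrees(math.atan(half_width / working_dist_cm))
      near_cm = working_dist_cm - depth_range_cm / 2
      far_cm = working_dist_cm + depth_range_cm / 2

      print(f"Required horizontal viewing angle: about {viewing_angle_deg:.1f} degrees")
      print(f"Required depth of field: about {near_cm:.0f}-{far_cm:.0f} cm")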

  4. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest. PMID:27589768

  5. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2017-01-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space, even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following-into-distant-space task and in an object-choice task in which no choice was allowed, in order to allow a direct comparison between tasks while varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated both the dogs' gaze following and their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour. PMID:28791164

  6. On the possible roles of microsaccades and drifts in visual perception.

    PubMed

    Ahissar, Ehud; Arieli, Amos; Fried, Moshe; Bonneh, Yoram

    2016-01-01

    During natural viewing, large saccades shift the visual gaze from one target to another every few hundred milliseconds. The role of microsaccades (MSs), small saccades that occur during long fixations, is still debated. A major question is whether MSs are used to redirect the visual gaze to a new location or to encode visual information through their movement. We argue that these two functions cannot be optimized simultaneously and present several pieces of evidence suggesting that MSs redirect the visual gaze and that visual details are sampled and encoded by ocular drifts. We show that drift movements are indeed suitable for visual encoding. Yet, it is not clear to what extent drift movements are controlled by the visual system, and to what extent they interact with saccadic movements. We analyze several possible control schemes for saccadic and drift movements and propose experiments that can discriminate between them. We present the results of preliminary analyses of existing data as a sanity check of the testability of our predictions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Visual search, movement behaviour and boat control during the windward mark rounding in sailing.

    PubMed

    Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Savelsbergh, Geert J P

    2015-01-01

    In search of key performance predictors in sailing, we examined to what degree visual search, movement behaviour, and boat control contribute to skilled performance while rounding the windward mark. To this end, we analysed 62 windward mark roundings sailed without opponents and 40 windward mark roundings sailed with opponents while competing in small regattas. Across conditions, results revealed that better performances were related to gazing more to the tangent point during the actual rounding. More specifically, in the condition without opponents, skilled performance was associated with gazing more outside the dinghy during the actual rounding, while in the condition with opponents, superior performance was related to gazing less outside the dinghy. With respect to movement behaviour, superior performance was associated with the release of the trimming lines close to rounding the mark. In addition, better performances were related to approaching the mark with little heel, yet heeling the boat more to the windward side when close to the mark. Potential implications for practice are suggested for each phase of the windward mark rounding.

  8. Visual Representation of Eye Gaze Is Coded by a Nonopponent Multichannel System

    ERIC Educational Resources Information Center

    Calder, Andrew J.; Jenkins, Rob; Cassel, Anneli; Clifford, Colin W. G.

    2008-01-01

    To date, there is no functional account of the visual perception of gaze in humans. Previous work has demonstrated that left gaze and right gaze are represented by separate mechanisms. However, these data are consistent with either a multichannel system comprising separate channels for distinct gaze directions (e.g., left, direct, and right) or an…

  9. High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
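
    Editor's note: a simplified sketch of step (4), mapping the pupil-centroid-minus-corneal-reflection vector to screen coordinates with a calibration fit; the centroids and coefficients are placeholders, not values from the NASA system.

      import numpy as np

      def gaze_offset(pupil_centroid, glint_centroid):
          """Pupil-centre minus corneal-reflection-centre vector (pixels)."""
          return np.asarray(pupil_centroid, float) - np.asarray(glint_centroid, float)

      def to_screen(offset, coeffs):
          """Map the offset vector to screen coordinates with a first-order calibration fit."""
          ax, bx, cx, ay, by, cy = coeffs
          return ax * offset[0] + bx * offset[1] + cx, ay * offset[0] + by * offset[1] + cy

      v = gaze_offset((312.4, 208.1), (305.9, 211.6))
      print(to_screen(v, coeffs=(40.0, 2.0, 960.0, 1.5, 38.0, 540.0)))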

  10. Atypical Processing of Gaze Cues and Faces Explains Comorbidity between Autism Spectrum Disorder (ASD) and Attention Deficit/Hyperactivity Disorder (ADHD).

    PubMed

    Groom, Madeleine J; Kochhar, Puja; Hamilton, Antonia; Liddle, Elizabeth B; Simeou, Marina; Hollis, Chris

    2017-05-01

    This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured effects of ASD, ADHD and their interaction on the EDAN, an ERP marker of orienting visual attention towards a spatially cued location and the N170, a right-hemisphere lateralised ERP linked to face processing. We identified atypical gaze cue and face processing in children with ASD and ADHD+ASD compared with the ADHD and CTRL groups. The findings indicate a neurobiological basis for the presence of comorbid ASD symptoms in ADHD. Further research using larger samples is needed.

  11. Perceptual impairment and psychomotor control in virtual laparoscopic surgery.

    PubMed

    Wilson, Mark R; McGrath, John S; Vine, Samuel J; Brewer, James; Defriend, David; Masters, Richard S W

    2011-07-01

    It is recognised that one of the major difficulties in performing laparoscopic surgery is the translation of two-dimensional video image information to a three-dimensional working area. However, research has tended to ignore the gaze and eye-hand coordination strategies employed by laparoscopic surgeons as they attempt to overcome these perceptual constraints. This study sought to examine if measures related to tool movements, gaze strategy, and eye-hand coordination (the quiet eye) differentiate between experienced and novice operators performing a two-handed manoeuvres task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Twenty-five right-handed surgeons were categorised as being either experienced (having led more than 60 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The 10 experienced and 15 novice surgeons completed the "two-hand manoeuvres" task from the LAP Mentor basic skills learning environment while wearing a gaze registration system. Performance, movement, gaze, and eye-hand coordination parameters were recorded and compared between groups. The experienced surgeons completed the task significantly more quickly than the novices, used significantly fewer movements, and displayed shorter tool paths. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. A more detailed analysis of a difficult subcomponent of the task revealed that experienced operators used a significantly longer aiming fixation (the quiet eye period) to guide precision grasping movements and hence needed fewer grasp attempts. The findings of the study provide further support for the utility of examining strategic gaze behaviour and eye-hand coordination measures to help further our understanding of how experienced surgeons attempt to overcome the perceptual difficulties inherent in the laparoscopic environment.

  12. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  13. "Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.

    PubMed

    Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano

    2013-08-01

    We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry, or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with the stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.

  14. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    PubMed

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

    We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step and then stepped more quickly through the approach area and onto the lines, exiting them more slowly than the controls, who stepped immediately but then slowed their gait to navigate the line and exited it faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than that of the controls (1,327.64 ms). The control group favoured a proximal gaze strategy, allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy, allocating 55.49% of their QE fixations to the exit/VSA and 44.51% to the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when tasks become more constrained.

  15. Neural synchrony examined with magnetoencephalography (MEG) during eye gaze processing in autism spectrum disorders: preliminary findings

    PubMed Central

    2014-01-01

    Background Gaze processing deficits are a seminal, early, and enduring behavioral deficit in autism spectrum disorder (ASD); however, a comprehensive characterization of the neural processes mediating abnormal gaze processing in ASD has yet to be conducted. Methods This study investigated whole-brain patterns of neural synchrony during passive viewing of direct and averted eye gaze in ASD adolescents and young adults (mean age = 16.6) compared to neurotypicals (NT) (mean age = 17.5) while undergoing magnetoencephalography. Coherence between each pair of 54 brain regions within each of three frequency bands (low frequency (0 to 15 Hz), beta (15 to 30 Hz), and low gamma (30 to 45 Hz)) was calculated. Results Significantly higher coherence and synchronization in posterior brain regions (temporo-parietal-occipital) across all frequencies was evident in ASD, particularly within the low 0 to 15 Hz frequency range. Higher coherence in fronto-temporo-parietal regions was noted in NT. A significantly higher number of low frequency cross-hemispheric synchronous connections and a near absence of right intra-hemispheric coherence in the beta frequency band were noted in ASD. Significantly higher low frequency coherent activity in bilateral temporo-parieto-occipital cortical regions and higher gamma band coherence in right temporo-parieto-occipital brain regions during averted gaze were related to more severe symptomology as reported on the Autism Diagnostic Interview-Revised (ADI-R). Conclusions The preliminary results suggest a pattern of aberrant connectivity that includes higher low frequency synchronization in posterior cortical regions, lack of long-range right hemispheric beta and gamma coherence, and decreased coherence in fronto-temporo-parietal regions necessary for orienting to shifts in eye gaze in ASD, a critical behavior essential for social communication. PMID:24976870
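
    Editor's note: an illustrative sketch of pairwise coherence averaged within the three frequency bands named above, using scipy.signal.coherence on synthetic time courses; the sampling rate and segment length are assumptions, not the study's MEG pipeline.

      import numpy as np
      from scipy.signal import coherence

      fs = 600.0                                  # assumed sampling rate (Hz)
      bands = {"low": (0, 15), "beta": (15, 30), "low_gamma": (30, 45)}

      region_a = np.random.randn(6000)            # placeholder source time courses for two regions
      region_b = np.random.randn(6000)

      freqs, cxy = coherence(region_a, region_b, fs=fs, nperseg=1024)
      band_means = {name: float(cxy[(freqs >= lo) & (freqs < hi)].mean()) for name, (lo, hi) in bands.items()}
      print(band_means)                           # one mean coherence value per band for this region pair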

  16. Matching the oculomotor drive during head-restrained and head-unrestrained gaze shifts in monkey.

    PubMed

    Bechara, Bernard P; Gandhi, Neeraj J

    2010-08-01

    High-frequency burst neurons in the pons provide the eye velocity command (equivalently, the primary oculomotor drive) to the abducens nucleus for generation of the horizontal component of both head-restrained (HR) and head-unrestrained (HU) gaze shifts. We sought to characterize how gaze and its eye-in-head component differ when an "identical" oculomotor drive is used to produce HR and HU movements. To address this objective, the activities of pontine burst neurons were recorded during horizontal HR and HU gaze shifts. The burst profile recorded on each HU trial was compared with the burst waveform of every HR trial obtained for the same neuron. The oculomotor drive was assumed to be comparable for the pair yielding the lowest root-mean-square error. For matched pairs of HR and HU trials, the peak eye-in-head velocity was substantially smaller in the HU condition, and the reduction was usually greater than the peak head velocity of the HU trial. A time-varying attenuation index, defined as the difference between the HR and HU eye velocity waveforms divided by head velocity [alpha = (E(hr) - E(hu))/H], was computed. The index was variable at the onset of the gaze shift, but it settled at values several times greater than 1. The index then decreased gradually during the movement and stabilized at 1 around the end of the gaze shift. These results imply that substantial attenuation in eye velocity occurs, at least partially, downstream of the burst neurons. We speculate on the potential roles of burst-tonic neurons in the neural integrator and various cell types in the vestibular nuclei in mediating the attenuation in eye velocity in the presence of head movements.
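
    Editor's note: a small sketch of the time-varying attenuation index alpha = (E(hr) - E(hu))/H computed on matched, time-aligned velocity traces; the array names and the head-velocity mask are my own choices, not the authors' code.

      import numpy as np

      def attenuation_index(eye_vel_hr, eye_vel_hu, head_vel_hu, min_head_vel=10.0):
          """alpha(t) = (HR eye velocity - HU eye velocity) / HU head velocity, in deg/s.

          The traces must already be time-aligned (matched trials); samples with near-zero
          head velocity are returned as NaN to avoid dividing by noise.
          """
          e_hr = np.asarray(eye_vel_hr, float)
          e_hu = np.asarray(eye_vel_hu, float)
          h_hu = np.asarray(head_vel_hu, float)
          alpha = np.full_like(h_hu, np.nan)
          ok = np.abs(h_hu) > min_head_vel
          alpha[ok] = (e_hr[ok] - e_hu[ok]) / h_hu[ok]
          return alpha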

  17. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    PubMed

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

    Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In individual differences analysis, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach. (c) 2016 APA, all rights reserved.

  18. Conflict Tasks of Different Types Divergently Affect the Attentional Processing of Gaze and Arrow.

    PubMed

    Fan, Lingxia; Yu, Huan; Zhang, Xuemin; Feng, Qing; Sun, Mengdan; Xu, Mengsi

    2018-01-01

    The present study explored the attentional processing mechanisms of gaze and arrow cues in two different types of conflict tasks. In Experiment 1, participants performed a flanker task in which gaze and arrow cues were presented as central targets or bilateral distractors. The congruency between the direction of the target and the distractors was manipulated. Results showed that arrow distractors greatly interfered with the attentional processing of gaze, while the processing of arrow direction was immune to conflict from gaze distractors. Using a spatial compatibility task, Experiment 2 explored the conflict effects exerted on gaze and arrow processing by their relative spatial locations. When the direction of the arrow was in conflict with its spatial layout on screen, response times were slowed; however, the encoding of gaze was unaffected by spatial location. In general, processing of an arrow cue is less influenced by bilateral gaze cues but is affected by irrelevant spatial information, while processing of a gaze cue is greatly disturbed by bilateral arrows but is unaffected by irrelevant spatial information. The different effects of these conflict types on gaze and arrow cues may reflect two relatively distinct modes of attentional processing.

  19. The efference cascade, consciousness, and its self: naturalizing the first person pivot of action control

    PubMed Central

    Merker, Bjorn

    2013-01-01

    The 20 billion neurons of the neocortex have a mere hundred thousand motor neurons by which to express cortical contents in overt behavior. Implemented through a staggered cortical “efference cascade” originating in the descending axons of layer five pyramidal cells throughout the neocortical expanse, this steep convergence accomplishes final integration for action of cortical information through a system of interconnected subcortical way stations. Coherent and effective action control requires the inclusion of a continually updated joint “global best estimate” of current sensory, motivational, and motor circumstances in this process. I have previously proposed that this running best estimate is extracted from cortical probabilistic preliminaries by a subcortical neural “reality model” implementing our conscious sensory phenomenology. As such it must exhibit first person perspectival organization, suggested to derive from formatting requirements of the brain's subsystem for gaze control, with the superior colliculus at its base. Gaze movements provide the leading edge of behavior by capturing targets of engagement prior to contact. The rotation-based geometry of directional gaze movements places their implicit origin inside the head, a location recoverable by cortical probabilistic source reconstruction from the rampant primary sensory variance generated by the incessant play of collicularly triggered gaze movements. At the interface between cortex and colliculus lies the dorsal pulvinar. Its unique long-range inhibitory circuitry may precipitate the brain's global best estimate of its momentary circumstances through multiple constraint satisfaction across its afferents from numerous cortical areas and colliculus. As phenomenal content of our sensory awareness, such a global best estimate would exhibit perspectival organization centered on a purely implicit first person origin, inherently incapable of appearing as a phenomenal content of the sensory space it serves. PMID:23950750

  20. Visual exploration during locomotion limited by fear of heights.

    PubMed

    Kugler, Günter; Huppert, Doreen; Eckl, Maria; Schneider, Erich; Brandt, Thomas

    2014-01-01

    Visual exploration of the surroundings during locomotion at heights has not yet been investigated in subjects suffering from fear of heights. Eye and head movements were recorded separately in 16 subjects susceptible to fear of heights and in 16 non-susceptible controls while walking on an emergency escape balcony 20 meters above ground level. Participants wore mobile infrared eye-tracking goggles with a head-fixed scene camera and integrated 6-degrees-of-freedom inertial sensors for recording head movements. Video recordings of the subjects were simultaneously made to correlate gaze and gait behavior. Susceptibles exhibited a limited visual exploration of the surroundings, particularly the depth. Head movements were significantly reduced in all three planes (yaw, pitch, and roll) with less vertical head oscillations, whereas total eye movements (saccade amplitudes, frequencies, fixation durations) did not differ from those of controls. However, there was an anisotropy, with a preference for the vertical as opposed to the horizontal direction of saccades. Comparison of eye and head movement histograms and the resulting gaze-in-space revealed a smaller total area of visual exploration, which was mainly directed straight ahead and covered vertically an area from the horizon to the ground in front of the feet. This gaze behavior was associated with a slow, cautious gait. The visual exploration of the surroundings by susceptibles to fear of heights differs during locomotion at heights from the earlier investigated behavior of standing still and looking from a balcony. During locomotion, anisotropy of gaze-in-space shows a preference for the vertical as opposed to the horizontal direction during stance. Avoiding looking into the abyss may reduce anxiety in both conditions; exploration of the "vertical strip" in the heading direction is beneficial for visual control of balance and avoidance of obstacles during locomotion.
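
    Editor's note: a simplified sketch of combining eye-in-head and head-in-space angles into gaze-in-space and summarising the explored area as a 2D histogram; the additive small-angle approximation and all numbers are assumptions, not the study's computation.

      import numpy as np

      eye_yaw, eye_pitch = np.random.randn(2, 5000) * 5      # placeholder eye-in-head angles (deg)
      head_yaw, head_pitch = np.random.randn(2, 5000) * 8    # placeholder head-in-space angles (deg)

      gaze_yaw = eye_yaw + head_yaw                          # gaze-in-space (small-angle approximation)
      gaze_pitch = eye_pitch + head_pitch

      counts, yaw_edges, pitch_edges = np.histogram2d(gaze_yaw, gaze_pitch, bins=36, range=[[-90, 90], [-90, 90]])
      print(f"{np.count_nonzero(counts)} of {counts.size} gaze-direction bins visited")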

  1. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze over the one with a disjoint attention behavior, and rated it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.
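
    Editor's note: a minimal sketch of the return-saccade latency measure, i.e., the time from the avatar's gaze shift until the participant's gaze first re-enters a face area of interest; the AOI format and sample layout are assumptions.

      import math

      def return_saccade_latency(ts, xs, ys, shift_time, face_aoi):
          """Seconds from the avatar's gaze shift until gaze first re-enters the face AOI."""
          x0, y0, x1, y1 = face_aoi
          for t, x, y in zip(ts, xs, ys):
              if t >= shift_time and x0 <= x <= x1 and y0 <= y <= y1:
                  return t - shift_time
          return math.nan  # gaze never returned to the face within this trial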

  2. Abnormal center-periphery gradient in spatial attention in simultanagnosia.

    PubMed

    Balslev, Daniela; Odoj, Bartholomaeus; Rennig, Johannes; Karnath, Hans-Otto

    2014-12-01

    Patients suffering from simultanagnosia cannot perceive more than one object at a time. The underlying mechanism is incompletely understood. One hypothesis is that simultanagnosia reflects "tunnel vision," a constricted attention window around gaze, which precludes the grouping of individual objects. Although this idea has a long history in neuropsychology, the question whether the patients indeed have an abnormal attention gradient around the gaze has so far not been addressed. Here we tested this hypothesis in two simultanagnosia patients with bilateral parieto-occipital lesions and two control groups, with and without brain damage. We assessed the participants' ability to discriminate letters presented briefly at fixation with and without a peripheral distractor or in the visual periphery, with or without a foveal distractor. A constricted span of attention around gaze would predict an increased susceptibility to foveated versus peripheral distractors. Contrary to this prediction and unlike both control groups, the patients' ability to discriminate the target decreased more in the presence of peripheral compared with foveated distractors. Thus, the attentional spotlight in simultanagnosia does not fall on foveated objects as previously assumed, but rather abnormally highlights the periphery. Furthermore, we found the same center-periphery gradient in the patients' ability to recognize multiple objects. They detected multiple, but not single objects more accurately in the periphery than at fixation. These results suggest that an abnormal allocation of attention around the gaze can disrupt the grouping of individual objects into an integrated visual scene.

  3. Gaze transfer in remote cooperation: is it always helpful to see what your partner is attending to?

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian; Velichkovsky, Boris M

    2013-01-01

    Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support.

  4. Owners' direct gazes increase dogs' attention-getting behaviors.

    PubMed

    Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about humans' attention from their gaze and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and of looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in the duration of pawing or the likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gaze, that this sensitivity may act as an attachment signal to humans, and that it may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  6. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    ERIC Educational Resources Information Center

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  7. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    PubMed Central

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  8. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying

    2016-01-01

    Abstract The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T–G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T–G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T–G delay codes to a “pure” G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory–memory–motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation. PMID:27092335
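
    Editor's note: a toy sketch of the intermediate T–G model idea, in which candidate spatial models are placed along the continuum between target (T) and final gaze (G) positions and scored by how well they fit trial-by-trial activity; the data and the fitting step are placeholders, not the authors' response-field analysis.

      import numpy as np

      rng = np.random.default_rng(0)
      T = rng.uniform(-20, 20, (80, 2))                 # eye-centred target positions per trial (deg)
      G = T + rng.normal(0, 3, (80, 2))                 # final gaze positions, with variable errors (deg)
      rates = rng.poisson(20, 80).astype(float)         # delay-period firing rates per trial (placeholder)

      def fit_error(positions, rates):
          """Placeholder response-field fit: residual variance of a linear rate-vs-position fit."""
          A = np.column_stack([positions, np.ones(len(positions))])
          coef, *_ = np.linalg.lstsq(A, rates, rcond=None)
          return float(np.var(rates - A @ coef))

      steps = np.linspace(0.0, 1.0, 11)                 # 0 = pure T model ... 1 = pure G model
      errors = [fit_error(T + s * (G - T), rates) for s in steps]
      print(f"Best-fitting intermediate model: {steps[int(np.argmin(errors))]:.1f} along the T-to-G continuum")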

  9. Attentional deployment is not necessary for successful emotion regulation via cognitive reappraisal or expressive suppression.

    PubMed

    Bebko, Genna M; Franconeri, Steven L; Ochsner, Kevin N; Chiao, Joan Y

    2014-06-01

    According to appraisal theories of emotion, cognitive reappraisal is a successful emotion regulation strategy because it involves cognitively changing our thoughts, which, in turn, change our emotions. However, recent evidence has challenged the importance of cognitive change and, instead, has suggested that attentional deployment may at least partly explain the emotion regulation success of cognitive reappraisal. The purpose of the current study was to examine the causal relationship between attentional deployment and emotion regulation success. We examined two commonly used emotion regulation strategies, cognitive reappraisal and expressive suppression, because both depend on attention but have divergent behavioral, experiential, and physiological outcomes. Participants were instructed to regulate emotions during either free-viewing (unrestricted image viewing) or gaze-controlled (restricted image viewing) conditions and to self-report negative emotional experience. For both emotion regulation strategies, emotion regulation success was not altered by changes in participant control over the (a) direction of attention (free-viewing vs. gaze-controlled) during image viewing and (b) valence (negative vs. neutral) of visual stimuli viewed when gaze was controlled. Taken together, these findings provide convergent evidence that attentional deployment does not alter subjective negative emotional experience during either cognitive reappraisal or expressive suppression, suggesting that strategy-specific processes, such as cognitive appraisal and response modulation, respectively, may have a greater impact on emotion regulation success than processes common to both strategies, such as attention.

  10. Contribution of olivofloccular circuitry developmental defects to atypical gaze in autism

    PubMed Central

    Wegiel, Jerzy; Kuchna, Izabela; Nowicki, Krzysztof; Imaki, Humi; Wegiel, Jarek; Ma, Shuang Yong; Azmitia, Efrain C.; Banerjee, Probal; Flory, Michael; Cohen, Ira L.; London, Eric; Brown, W. Ted; Hare, Carolyn Komich; Wisniewski, Thomas

    2014-01-01

    Individuals with autism demonstrate atypical gaze, impairments in smooth pursuit, altered movement perception and deficits in facial perception. The olivofloccular neuronal circuit is a major contributor to eye movement control. This study of the cerebellum in 12 autistic and 10 control subjects revealed dysplastic changes in the flocculus of eight autistic (67%) and two control (20%) subjects. Defects of the oculomotor system, including avoidance of eye contact and poor or no eye contact, were reported in 88% of autistic subjects with postmortem-detected floccular dysplasia. Focal disorganization of the flocculus cytoarchitecture with deficit, altered morphology, and spatial disorientation of Purkinje cells (PCs); deficit and abnormalities of granule, basket, stellate and unipolar brush cells; and structural defects and abnormal orientation of Bergmann glia are indicators of profound disruption of flocculus circuitry in a dysplastic area. The average volume of PCs was 26% less in the dysplastic region than in the unaffected region of the flocculus (p<0.01) in autistic subjects. Moreover, the average volume of PCs in the entire cerebellum was 25% less in the autistic subjects than in the control subjects (p<0.001). Findings from this study and a parallel study of the inferior olive (IO) suggest that focal floccular dysplasia combined with IO neurons and PC developmental defects may contribute to oculomotor system dysfunction and atypical gaze in autistic subjects. PMID:23558308

  11. Gaze leading is associated with liking.

    PubMed

    Grynszpan, Ouriel; Martin, Jean-Claude; Fossati, Philippe

    2017-02-01

    Gaze plays a pivotal role in human communication, especially for coordinating attention. The ability to guide the gaze orientation of others forms the backbone of joint attention. Recent research has raised the possibility that gaze following behaviors could induce liking. The present study seeks to investigate this hypothesis. We designed two physically different human avatars that could follow the gaze of users via eye-tracking technology. In a preliminary experiment, 20 participants assessed the baseline appeal of the two avatars and confirmed that the avatars differed in this respect. In the main experiment, we compared how 19 participants rated the two avatars in terms of pleasantness, trustworthiness and closeness when the avatars were following their gaze versus when the avatar generated gaze movements autonomously. Although the same avatar as in the preliminary experiment was rated more favorably, the pleasantness attributed to the two avatars increased when they followed the gaze of the participants. This outcome provides evidence that gaze following fosters liking independently of the baseline appeal of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Eye size and visual acuity influence vestibular anatomy in mammals.

    PubMed

    Kemp, Addison D; Christopher Kirk, E

    2014-04-01

    The semicircular canals of the inner ear detect head rotations and trigger compensatory movements that stabilize gaze and help maintain visual fixation. Mammals with large eyes and high visual acuity require precise gaze stabilization mechanisms because they experience diminished visual functionality at low thresholds of uncompensated motion. Because semicircular canal radius of curvature is a primary determinant of canal sensitivity, species with large canal radii are expected to be capable of more precise gaze stabilization than species with small canal radii. Here, we examine the relationship between mean semicircular canal radius of curvature, eye size, and visual acuity in a large sample of mammals. Our results demonstrate that eye size and visual acuity both explain a significant proportion of the variance in mean canal radius of curvature after statistically controlling for the effects of body mass and phylogeny. These findings suggest that variation in mean semicircular canal radius of curvature among mammals is partly the result of selection for improved gaze stabilization in species with large eyes and acute vision. Our results also provide a possible functional explanation for the small semicircular canal radii of fossorial mammals and plesiadapiforms. Copyright © 2014 Wiley Periodicals, Inc.

  13. Enabling Disabled Persons to Gain Access to Digital Media

    NASA Technical Reports Server (NTRS)

    Beach, Glenn; O'Grady, Ryan

    2011-01-01

    A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
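
    As a rough, hypothetical illustration of the head-movement-to-cursor mapping described above, the sketch below turns tracked head displacement into cursor motion through user-adjustable gain and dead-zone parameters and maps a detected blink to a mouse click. The callables get_head_position(), blink_detected(), move_cursor(), and click() are stand-ins to be supplied by a hosting application; they are not NaviGaze APIs.

      import time

      def run_head_cursor(get_head_position, blink_detected, move_cursor, click,
                          gain=8.0, deadzone=2.0, poll_s=0.03):
          """Poll a head tracker and drive the cursor hands-free.

          gain     -- pixels of cursor motion per unit of head displacement
          deadzone -- ignore head jitter smaller than this displacement
          """
          ref_x, ref_y = get_head_position()          # neutral head pose at start-up
          while True:
              x, y = get_head_position()
              dx, dy = x - ref_x, y - ref_y
              if abs(dx) > deadzone or abs(dy) > deadzone:
                  move_cursor(gain * dx, gain * dy)   # relative cursor movement
              if blink_detected():
                  click()                             # an eye blink acts as a mouse click
              time.sleep(poll_s)

    Exposing gain, deadzone, and poll_s as settings mirrors the kind of per-user parameter selection the report attributes to the added graphical user interface.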

  14. A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.

    PubMed

    Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro

    2017-08-01

    This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and clustering of gaze points, registered with eye-tracking technology, was studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen to overcome any possible methodological problems related to the subjective expectations of the experimenters about the informative contents of the image in addition to a computational model to simulate group differences. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.

  15. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology

    PubMed Central

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze location on the screen. Despite recent technological developments that enabled more affordable hardware, gaze data are still costly and time consuming to collect, therefore some propose using mouse movements instead. These are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experimental task mimics route tracing on a map, it is more than a data collection exercise and simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information as eye-tracking and therefore be used as a proxy for attention. However, more research is needed to confirm this. PMID:28777822

  16. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology.

    PubMed

    Demšar, Urška; Çöltekin, Arzu

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze location on the screen. Despite recent technological developments that enabled more affordable hardware, gaze data are still costly and time consuming to collect, therefore some propose using mouse movements instead. These are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experimental task mimics route tracing on a map, it is more than a data collection exercise and simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information as eye-tracking and therefore be used as a proxy for attention. However, more research is needed to confirm this.
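
    The two records above quantify gaze-mouse coupling with volumetric space-time densities. As a much simpler, hedged stand-in for that idea, the sketch below computes a proximity-based dynamic-interaction index, the fraction of time-aligned samples in which gaze and mouse pointer lie within a threshold distance, in the spirit of movement-ecology interaction measures rather than the authors' method; the threshold and synthetic data are illustrative.

      import numpy as np

      def proximity_interaction(gaze_xy, mouse_xy, radius_px=100.0):
          """Fraction of time-aligned samples in which gaze and mouse pointer
          are within radius_px of each other.

          gaze_xy, mouse_xy : (n_samples, 2) screen coordinates on shared timestamps.
          """
          d = np.linalg.norm(gaze_xy - mouse_xy, axis=1)     # per-sample distance
          return float(np.mean(d <= radius_px))

      # Synthetic example: the mouse loosely trails the gaze.
      rng = np.random.default_rng(0)
      gaze = np.cumsum(rng.normal(0, 5, size=(1000, 2)), axis=0) + 500
      mouse = gaze + rng.normal(0, 30, size=gaze.shape)
      print(proximity_interaction(gaze, mouse))              # near 1 => strong coupling

    Computed separately for free-viewing and instructed-movement segments, such an index would be expected to drop in the latter, consistent with the coupling break-down the authors describe.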

  17. The Development of Joint Visual Attention: A Longitudinal Study of Gaze following during Interactions with Mothers and Strangers

    ERIC Educational Resources Information Center

    Gredeback, Gustaf; Fikke, Linn; Melinder, Annika

    2010-01-01

    Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers,…

  18. Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces

    NASA Astrophysics Data System (ADS)

    Abbott, W. W.; Faisal, A. A.

    2012-08-01

    Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as control interface for movement in impaired patients due to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with comparable performance to commercial systems, yet 800 times cheaper. Our head-mounted system has 30 USD material costs and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits/s, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark—the control of the video arcade game ‘Pong’.
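
    The reported throughput invites a back-of-the-envelope check. The sketch below uses a generic Shannon-formulation Fitts' law estimate (index of difficulty divided by movement time); it is illustrative only and not necessarily how the authors computed their 43 bits/s, and the example amplitudes are assumptions.

      import math

      def fitts_throughput(distance, width, movement_time_s):
          """Throughput in bits/s: ID = log2(distance / width + 1), divided by movement time."""
          index_of_difficulty = math.log2(distance / width + 1.0)
          return index_of_difficulty / movement_time_s

      # e.g. a 20 degree gaze shift onto a 1 degree target completed in about 0.1 s
      print(fitts_throughput(20.0, 1.0, 0.1))   # roughly 44 bits/s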

  19. Gaze Control in One Versus One Defensive Situations in Soccer Players With Various Levels of Expertise.

    PubMed

    Krzepota, Justyna; Stępiński, Miłosz; Zwierko, Teresa

    2016-12-01

    Experienced and less experienced soccer players were compared in terms of their gaze behavior (number of fixations, fixation duration, number of fixation regions, and distribution of fixations across specific regions) during frontal 1 vs. 1 defensive situations. Twenty-four men (eight experienced soccer players, eight less experienced players and eight non-players) watched 20 video clips. Gaze behavior was registered with an Eye Tracking System. The video scenes were analyzed frame-by-frame. A significant main effect of group (experience) was observed for the number of fixation regions. Experienced soccer players had a lower number of fixation regions than the non-soccer players. Moreover, the former group showed a significantly larger percentage of fixations in the ball/foot region. These findings suggest that experienced players may use a more efficient search strategy than novices, involving fixations on a smaller number of areas in specific locations. © The Author(s) 2016.

  20. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
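
    The core calibration idea, choosing angle offsets that bring the left- and right-eye visual axes together on the display, can be caricatured in a few lines. The sketch below works with on-screen points of gaze rather than full 3D axes and assumes a single offset applied with opposite signs to the two eyes; the grid search, the pixels-per-degree constant, and all names are assumptions, not the authors' implementation.

      import numpy as np

      def calibrate_angle_offset(left_pog, right_pog, search_deg=3.0, step_deg=0.1,
                                 px_per_deg=40.0):
          """Toy calibration: find a constant angular offset (dx, dy), applied with
          opposite sign to each eye, that minimizes the mean on-screen distance
          between the corrected left- and right-eye points of gaze.

          left_pog, right_pog : (n_samples, 2) uncalibrated points of gaze in pixels,
                                collected while the subject watches arbitrary content.
          Returns the left-eye offset in degrees; the right eye gets the negated offset.
          """
          grid = np.arange(-search_deg, search_deg + step_deg, step_deg)
          best_offset, best_err = (0.0, 0.0), np.inf
          for dx in grid:
              for dy in grid:
                  shift_px = np.array([dx, dy]) * px_per_deg
                  err = np.linalg.norm((left_pog + shift_px) - (right_pog - shift_px),
                                       axis=1).mean()
                  if err < best_err:
                      best_offset, best_err = (float(dx), float(dy)), err
          return best_offset

    As in the record above, the wider the range of viewing angles in the data, the better such a minimization can separate a genuine axis offset from ordinary gaze variability.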

  1. On the use of hidden Markov models for gaze pattern modeling

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions on the screen are not as important as some other portions on the screen. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI based metrics: firstly, the definition of AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOIs on the screen and of intermittent casual gazing, which is modeled as random gazes on the screen.
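
    A minimal sketch of the modeling idea, treating gaze samples as 2D Gaussian emissions of hidden states whose fitted means act as data-driven AOI centers, might look as follows using the hmmlearn package; the number of states and the synthetic data are illustrative assumptions, not the authors' configuration.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM

      # Synthetic gaze samples (x, y) in screen pixels: two AOIs plus casual gazing.
      rng = np.random.default_rng(1)
      aoi_a = rng.normal([400, 300], 30, size=(300, 2))
      aoi_b = rng.normal([1200, 600], 30, size=(300, 2))
      casual = rng.uniform([0, 0], [1600, 900], size=(100, 2))
      gaze = np.vstack([aoi_a, casual, aoi_b])

      # One hidden state per putative AOI plus a diffuse state for casual gazing.
      model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
      model.fit(gaze)

      states = model.predict(gaze)     # state (AOI) assignment of each gaze sample
      print(model.means_)              # fitted state means, i.e. candidate AOI centers
      print(np.bincount(states))       # samples per state, a rough dwell measure

    Metrics such as an AOI-aware NNI could then be computed per state instead of over the whole screen, which is the adjustment the record argues for.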

  2. Specificity of Age-Related Differences in Eye-Gaze Following: Evidence From Social and Nonsocial Stimuli.

    PubMed

    Slessor, Gillian; Venturini, Cristina; Bonny, Emily J; Insch, Pauline M; Rokaszewicz, Anna; Finnerty, Ailbhe N

    2016-01-01

    Eye-gaze following is a fundamental social skill, facilitating communication. The present series of studies explored adult age-related differences in this key social-cognitive ability. In Study 1 younger and older adult participants completed a cueing task in which eye-gaze cues were predictive or non-predictive of target location. Another eye-gaze cueing task, assessing the influence of congruent and incongruent eye-gaze cues relative to trials which provided no cue to target location, was administered in Study 2. Finally, in Study 3 the eye-gaze cue was replaced by an arrow. In Study 1 older adults showed less evidence of gaze following than younger participants when required to strategically follow predictive eye-gaze cues and when making automatic shifts of attention to non-predictive eye-gaze cues. Findings from Study 2 suggested that, unlike younger adults, older participants showed no facilitation effect and thus did not follow congruent eye-gaze cues. They also had significantly weaker attentional costs than their younger counterparts. These age-related differences were not found in the non-social arrow cueing task. Taken together these findings suggest older adults do not use eye-gaze cues to engage in joint attention, and have specific social difficulties decoding critical information from the eye region. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Improving the effectiveness of an interruption lag by inducing a memory-based strategy.

    PubMed

    Morgan, Phillip L; Patrick, John; Tiley, Leyanne

    2013-01-01

    The memory for goals model (Altmann & Trafton, 2002) posits the importance of a short delay (the 'interruption lag') before an interrupting task to encode suspended goals for retrieval post-interruption. Two experiments used the theory of soft constraints (Gray, Simms, Fu & Schoelles, 2006) to investigate whether the efficacy of an interruption lag could be improved by increasing goal-state access cost to induce a more memory-based encoding strategy. Both experiments used a copying task with three access cost conditions (Low, Medium, and High) and a 5-s interruption lag with a no lag control condition. Experiment 1 found that the participants in the High access cost condition resumed more interrupted trials and executed more actions correctly from memory when coupled with an interruption lag. Experiment 2 used a prospective memory test post-interruption and an eyetracker recorded gaze activity during the interruption lag. The participants in the High access cost condition with an interruption lag were best at encoding target information during the interruption lag, evidenced by higher scores on the prospective memory measure and more gaze activity on the goal-state during the interruption lag. Theoretical and practical issues regarding the use of goal-state access cost and an interruption lag are discussed. Copyright © 2012. Published by Elsevier B.V.

  4. Social and Non-Social Cueing of Visuospatial Attention in Autism and Typical Development

    PubMed Central

    Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.

    2013-01-01

    Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n=26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous, or unique; experiment 2 (total n=80: male and female children and adults) studied age and sex effects on gaze cueing. Gaze cueing appears endogenous and may strengthen in typical development. Experiment 3 tested exogenous, endogenous, and/or gaze-based orienting in 25 typical and 27 Autistic Spectrum Disorder (ASD) children. ASD children made more saccades, slowing their reaction times; however, exogenous and endogenous orienting, including gaze cueing, appear intact in ASD. PMID:20809377

  5. Mind Your Step: the Effects of Mobile Phone Use on Gaze Behavior in Stair Climbing.

    PubMed

    Ioannidou, Flora; Hermens, Frouke; Hodgson, Timothy L

    2017-01-01

    Stair walking is a hazardous activity and a common cause of fatal and non-fatal falls. Previous studies have assessed the role of eye movements in stair walking by asking people to repeatedly go up and down stairs in quiet and controlled conditions, while the role of peripheral vision was examined by giving participants specific fixation instructions or working memory tasks. We here extend this research to stair walking in a natural environment with other people present on the stairs and a now common secondary task: using one's mobile phone. Results show that using the mobile phone strongly draws one's attention away from the stairs, but that the distribution of gaze locations away from the phone is little influenced by using one's phone. Phone use also increased the time needed to walk the stairs, but handrail use remained low. These results indicate that limited foveal vision suffices for adequate stair walking in normal environments, but that mobile phone use has a strong influence on attention, which may pose problems when unexpected obstacles are encountered.

  6. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    PubMed

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to help trainees understand the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be brought to the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefineder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil and the corneal reflection by dark-pupil eye tracking. This system provides gaze point images and Excel file data. The subjects were radiological technologists who were either experienced or inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined the observers' image interpretation process using the gaze point data. Furthermore, we performed a frequency analysis of the eye tracking data using the Fast Fourier Transform (FFT). This frequency analysis allowed us to characterize the difference in gaze points between experts and trainees. The trainee's result contained a large amount of both high-frequency and low-frequency components, whereas both components were relatively low for the expert. Regarding the amount of eye movement in each 0.02-second interval, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved the eyes quickly and scanned wide areas. The difference in gaze points on CTC between experts and trainees can thus be assessed with the eye gaze point sensing system and the frequency analysis, and potential improvements in CTC interpretation for trainees can be evaluated using the gaze point data.
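
    As a hedged sketch of the kind of frequency analysis described above, one can take the frame-to-frame gaze displacement sampled every 0.02 s (50 Hz) and examine its FFT amplitude spectrum; the function below is an illustrative reconstruction, not the authors' pipeline.

      import numpy as np

      def gaze_amplitude_spectrum(gaze_xy, dt=0.02):
          """Amplitude spectrum of frame-to-frame gaze displacement.

          gaze_xy : (n_samples, 2) gaze coordinates sampled every dt seconds.
          Returns (frequencies in Hz, amplitudes). Large high-frequency content
          indicates rapid, wide-ranging eye movements such as those reported
          for the trainee.
          """
          displacement = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)
          spectrum = np.abs(np.fft.rfft(displacement - displacement.mean()))
          freqs = np.fft.rfftfreq(len(displacement), d=dt)
          return freqs, spectrum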

  8. Gaze-cueing effect depends on facial expression of emotion in 9- to 12-month-old infants

    PubMed Central

    Niedźwiecka, Alicja; Tomalski, Przemysław

    2015-01-01

    Efficient processing of gaze direction and facial expression of emotion is crucial for early social and emotional development. Toward the end of the first year of life infants begin to pay more attention to negative expressions, but it remains unclear to what extent emotion expression is processed jointly with gaze direction at this age. This study sought to establish the interactions of gaze direction and emotion expression in visual orienting in 9- to 12-month-olds. In particular, we tested whether these interactions can be explained by the negativity bias hypothesis and the shared signal hypothesis. We measured saccadic latencies in response to peripheral targets in a gaze-cueing paradigm with happy, angry, and fearful female faces. In the Pilot Experiment three gaze directions were used (direct, congruent with target location, incongruent with target location). In the Main Experiment we sought to replicate the results of the Pilot experiment using a simpler design without the direct gaze condition. In both experiments we found a robust gaze-cueing effect for happy faces, i.e., facilitation of orienting toward the target in the gaze-cued location, compared with the gaze-incongruent location. We found more rapid orienting to targets cued by happy relative to angry and fearful faces. We did not find any gaze-cueing effect for angry or fearful faces. These results are not consistent with the shared signal hypothesis. While our results show differential processing of positive and negative emotions, they do not support a general negativity bias. On the contrary, they indicate that toward the age of 12 months infants show a positivity bias in gaze-cueing tasks. PMID:25713555

  9. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters

    PubMed Central

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother’s and infant’s nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers’ and infants’ gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother’s depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother’s emotion dysregulation. PMID:26657941

  10. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters.

    PubMed

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother's and infant's nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers' and infants' gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother's depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother's emotion dysregulation.
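
    The synchrony computation in these two records relies on microanalytic coding followed by time-series analysis. As a simplified, hypothetical illustration of one common synchrony measure, the sketch below computes the lagged cross-correlation between two coded gaze series (here binary gaze-at-partner indicators per time bin); it is not the authors' exact procedure.

      import numpy as np

      def lagged_gaze_synchrony(mother, infant, max_lag=5):
          """Lagged cross-correlation between two equal-length coded gaze series.

          mother, infant : 1D numpy arrays (e.g. 1 = gazing at partner, 0 = away).
          Returns a dict mapping lag (positive = infant follows mother) to the
          correlation of the standardized series at that lag.
          """
          m = (mother - mother.mean()) / mother.std()
          i = (infant - infant.mean()) / infant.std()
          out = {}
          for lag in range(-max_lag, max_lag + 1):
              if lag >= 0:
                  a, b = m[:len(m) - lag], i[lag:]
              else:
                  a, b = m[-lag:], i[:len(i) + lag]
              out[lag] = float(np.mean(a * b))
          return out

    Higher peak correlations across lags would correspond to the heightened mother-infant gaze synchrony that the studies relate to maternal emotion dysregulation.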

  11. Surface coverage with single vs. multiple gaze surface topography to fit scleral lenses.

    PubMed

    DeNaeyer, Gregory; Sanders, Donald R; Farajian, Timothy S

    2017-06-01

    The aim was to determine the surface coverage of measurements using the sMap3D® corneo-scleral topographer in patients presenting for scleral lens fitting. Twenty-five eyes of 23 scleral lens patients were examined. Up-gaze, straight-gaze, and down-gaze positions of each eye were "stitched" into a single map. The percentage surface coverage between 10 mm and 20 mm diameter circles from corneal center was compared between the straight-gaze and stitched images. Scleral toricity magnitude was calculated at 100% coverage and at the same diameter after 50% of the data was removed. At a 10 mm diameter from corneal center, the straight-gaze and stitched images both had 100% coverage. At the 14, 15, 16, 18, and 20 mm diameters, the straight-gaze image covered only 68%, 53%, 39%, 18%, and 6% of the ocular surface diameters, while the stitched image covered 98%, 96%, 93%, 75%, and 32%, respectively. In the case showing the most scleral coverage at 16 mm, the straight-gaze image gave only 75% coverage compared with 100% for the stitched image; the case with the least coverage had 7% (straight gaze) and 92% (stitched image). The 95% limits of agreement between the 50% and 100% coverage scleral toricity were between -1.4 D (50% coverage value larger) and 1.2 D (100% coverage larger), a 2.6 D spread. The absolute difference between the 50% and 100% coverage scleral toricity was ≥0.50 D in 28% and ≥1.0 D in 16% of cases. It appears that a single straight-gaze image would introduce significant measurement inaccuracy in fitting scleral lenses using the sMap3D, while a 3-gaze stitched image would not. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  12. Timing of gazes in child dialogues: a time-course analysis of requests and back channelling in referential communication.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2012-01-01

    This study investigates gaze behaviour in child dialogues. In earlier studies the authors investigated the use of requests for clarification and responses in order to study the co-creation of understanding in a referential communication task. By adding eye tracking, this line of research is now expanded to include non-verbal contributions in conversation. The aims were to investigate the timing of gazes in face-to-face interaction and to relate gaze behaviour to the use of requests for clarification. Eight conversational pairs of typically developing 10- to 15-year-olds participated. The pairs (director and executor) performed a referential communication task requiring the description of faces. During the dialogues both participants wore head-mounted eye trackers. All gazes were recorded and categorized according to the area fixated (Task, Face, Off). The verbal context for all instances of gaze at the partner's face was identified and categorized using time-course analysis. The results showed that the executor spent almost 90% of the time fixating on the task, 10% on the director's face, and less than 0.5% elsewhere. Turn shifts, primarily requests for clarification, and back channelling significantly predicted the executors' gaze to the face of the task director. The distribution of types of requests showed that requests for previously unmentioned information were significantly more likely to be associated with gaze at the director. The study shows that the executors' gaze at the director accompanies important dynamic shifts in the dialogue. The association with requests for clarification indicates that gaze at the director can be used to monitor the response with two modalities. Furthermore, the significantly higher association with requests for previously unmentioned information indicates that gaze may be used to emphasize the verbal content. The results will be used as a reference for studies of gaze behaviour in clinical populations with hearing and language impairments. © 2012 Royal College of Speech and Language Therapists.

  13. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with disjoint attention behavior, rating it as more human-like and more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  14. The Malleability of Age-Related Positive Gaze Preferences: Training to Change Gaze and Mood

    PubMed Central

    Isaacowitz, Derek M.; Choi, YoonSun

    2010-01-01

    Older adults show positive gaze preferences, but to what extent are these preferences malleable? Examining the plasticity of age-related gaze preferences may provide a window into their origins. We therefore designed an attentional training procedure to assess the degree to which we could shift gaze and gaze-related mood in both younger and older adults. Participants completed either a positive or negative dot-probe training. Before and after the attentional training, we obtained measures of fixations to negatively-valenced images along with concurrent mood ratings. We found differential malleability of gaze and mood by age: for young adults, negative training resulted in fewer post-training fixations to the most negative areas of the images, whereas positive training appeared more successful in changing older adults’ fixation patterns. Young adults did not differ in their moods as a function of training, whereas older adults in the train negative group had the worst moods after training. Implications for the etiology of age-related positive gaze preferences are considered. PMID:21401229

  15. Audience gaze while appreciating a multipart musical performance.

    PubMed

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

    Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence that the melody part dominates auditory attention when listening to multipart music containing different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions playing a spotlight role mediated performer-audience visual interaction, and (3) the musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.

  16. Rebound upbeat nystagmus after lateral gaze in episodic ataxia type 2.

    PubMed

    Kim, Hyo-Jung; Kim, Ji-Soo; Choi, Jae-Hwan; Shin, Jin-Hong; Choi, Kwang-Dong; Zee, David S

    2014-06-01

    Rebound nystagmus is a transient nystagmus that occurs on resuming the straight-ahead position after prolonged eccentric gaze. Even though rebound nystagmus is commonly associated with gaze-evoked nystagmus (GEN), development of rebound nystagmus in a different plane of gaze has not been described. We report a patient with episodic ataxia type 2 who showed transient upbeat nystagmus on resuming the straight-ahead position after sustained lateral gaze that had induced GEN and downbeat nystagmus. The rebound upbeat nystagmus may be ascribed to a shifting null in the vertical plane as a result of an adaptation to the downbeat nystagmus that developed during lateral gaze.

  17. Development of Gaze Following Abilities in Wolves (Canis Lupus)

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2011-01-01

    The ability to coordinate with others' head and eye orientation to look in the same direction is considered a key step towards an understanding of others' mental states like attention and intention. Here, we investigated the ontogeny and habituation patterns of gaze following into distant space and behind barriers in nine hand-raised wolves. We found that these wolves could use conspecific as well as human gaze cues even in the barrier task, which is thought to be more cognitively advanced than gazing into distant space. Moreover, while gaze following into distant space was already present at the age of 14 weeks and subjects did not habituate to repeated cues, gazing around a barrier developed considerably later and animals quickly habituated, supporting the hypothesis that different cognitive mechanisms may underlie the two gaze following modalities. More importantly, this study demonstrated that following another individual's gaze around a barrier is not restricted to primates and corvids but is also present in canines, with remarkable between-group similarities in the ontogeny of this behaviour. This sheds new light on the evolutionary origins of and selective pressures on gaze following abilities as well as on the sensitivity of domestic dogs towards human communicative cues. PMID:21373192

  18. Autistic Traits Influence Gaze-Oriented Attention to Happy but Not Fearful Faces

    PubMed Central

    Lassalle, Amandine; Itier, Roxane J.

    2017-01-01

    The relationship between autistic traits and gaze-oriented attention to fearful and happy faces was investigated at the behavioral and neuronal levels. Upright and inverted dynamic face stimuli were used in a gaze-cueing paradigm while ERPs were recorded. Participants responded faster to gazed-at than to non-gazed-at targets and this Gaze Orienting Effect (GOE) diminished with inversion, suggesting it relies on facial configuration. It was also larger for fearful than happy faces but only in participants with high Autism Quotient (AQ) scores. While the GOE to fearful faces was of similar magnitude regardless of AQ scores, a diminished GOE to happy faces was found in participants with high AQ scores. At the ERP level, a congruency effect on target-elicited P1 component reflected enhanced visual processing of gazed-at targets. In addition, cue-triggered early directing attention negativity and anterior directing attention negativity reflected, respectively, attention orienting and attention holding at gazed-at locations. These neural markers of spatial attention orienting were not modulated by emotion and were not found in participants with high AQ scores. Together these findings suggest that autistic traits influence attention orienting to gaze and its modulation by social emotions such as happiness. PMID:25222883

  19. Just one look: Direct gaze briefly disrupts visual working memory.

    PubMed

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  20. Metacognitive Monitoring of Executive Control Engagement during Childhood

    ERIC Educational Resources Information Center

    Chevalier, Nicolas; Blaye, Agnès

    2016-01-01

    Emerging executive control supports greater autonomy and increasingly adaptive behavior during childhood. The present study addressed whether children's greater monitoring of how they engage control drives executive control development. Gaze position was recorded while twenty-five 6-year-olds and twenty-eight 10-year-olds performed a self-paced…

  1. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  2. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, or trackball. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  3. Objectified Body Consciousness in Relation to Recovery from an Eating Disorder

    PubMed Central

    Fitzsimmons, Ellen E.; Bardone-Cone, Anna M.; Kelly, Kathleen A.

    2011-01-01

    In Western society, the feminine body has been positioned as an object to be looked at and sexually gazed upon; thus, females often learn to view themselves as objects to be observed (i.e., objectified body consciousness (OBC)). This study examined the relation between OBC and eating disorder recovery by comparing its components across non-eating disorder controls, fully recovered, partially recovered, and active eating disorder cases. Results revealed that non-eating disorder controls and fully recovered individuals had similarly low levels of two components of OBC, body surveillance and body shame. Partially recovered individuals looked more similar to those with an active eating disorder on these constructs. The third component of OBC, control beliefs, and a conceptually similar construct, weight/shape self-efficacy, did not differ across groups. Results provide support for the importance of measuring aspects of self-objectification, particularly body surveillance and body shame, across the course of an eating disorder. PMID:22051364

  4. Gaze direction affects the magnitude of face identity aftereffects.

    PubMed

    Kloth, Nadine; Jeffery, Linda; Rhodes, Gillian

    2015-02-20

    The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly. © 2015 ARVO.

  5. Steering by hearing: a bat's acoustic gaze is linked to its flight motor output by a delayed, adaptive linear law.

    PubMed

    Ghose, Kaushik; Moss, Cynthia F

    2006-02-08

    Adaptive behaviors require sensorimotor computations that convert information represented initially in sensory coordinates to commands for action in motor coordinates. Fundamental to these computations is the relationship between the region of the environment sensed by the animal (gaze) and the animal's locomotor plan. Studies of visually guided animals have revealed an anticipatory relationship between gaze direction and the locomotor plan during target-directed locomotion. Here, we study an acoustically guided animal, an echolocating bat, and relate acoustic gaze (direction of the sonar beam) to flight planning as the bat searches for and intercepts insect prey. We show differences in the relationship between gaze and locomotion as the bat progresses through different phases of insect pursuit. We define acoustic gaze angle, θ_gaze, to be the angle between the sonar beam axis and the bat's flight path. We show that there is a strong linear linkage between acoustic gaze angle at time t [θ_gaze(t)] and flight turn rate at time t + τ into the future [θ_flight(t + τ)], which can be expressed by the formula θ_flight(t + τ) = k·θ_gaze(t). The gain, k, of this linkage depends on the bat's behavioral state, which is indexed by its sonar pulse rate. For high pulse rates, associated with insect attacking behavior, k is twice as high compared with low pulse rates, associated with searching behavior. We suggest that this adjustable linkage between acoustic gaze and motor output in a flying echolocating bat simplifies the transformation of auditory information to flight motor commands.
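
    The steering law above is simple enough to state directly in code. The sketch below is a hedged illustration of the reported relationship; the gain values, the pulse-rate threshold, and the delay are placeholders, since the abstract specifies only that the gain roughly doubles during attack compared with search.

```python
# Minimal sketch of the delayed linear steering law described above: flight
# turn rate at time t + tau is proportional to acoustic gaze angle at time t,
# with a gain k that depends on behavioural state (indexed here by sonar pulse
# rate). The threshold, gains, and delay are illustrative values only.

def predicted_turn_rate(gaze_angle_deg, pulse_rate_hz,
                        k_search=0.5, k_attack=1.0, attack_pulse_rate=100.0):
    """Turn rate (deg/s) predicted from the gaze angle theta_gaze (deg)."""
    k = k_attack if pulse_rate_hz >= attack_pulse_rate else k_search
    return k * gaze_angle_deg

# The prediction applies tau seconds into the future: pair theta_gaze(t)
# with the turn rate observed at t + tau.
tau = 0.1  # s, illustrative delay
print(predicted_turn_rate(20.0, pulse_rate_hz=150.0))  # attack phase -> 20.0
print(predicted_turn_rate(20.0, pulse_rate_hz=30.0))   # search phase -> 10.0
```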

  6. The effects of simulated vision impairments on the cone of gaze.

    PubMed

    Hecht, Heiko; Hörichs, Jenny; Sheldon, Sarah; Quint, Jessilin; Bowers, Alex

    2015-10-01

    Detecting the gaze direction of others is critical for many social interactions. We explored factors that may make the perception of mutual gaze more difficult, including the degradation of the stimulus and simulated vision impairment. To what extent do these factors affect the complex assessment of mutual gaze? Using an interactive virtual head whose eye direction could be manipulated by the subject, we conducted two experiments to assess the effects of simulated vision impairments on mutual gaze. Healthy subjects had to demarcate the center and the edges of the cone of gaze, that is, the range of gaze directions that are accepted for mutual gaze. When vision was impaired by adding a semitransparent white contrast reduction mask to the display (Exp. 1), judgments became more variable and more influenced by the head direction (indicative of a compensation strategy). When refractive blur was added (Exp. 1), the gaze cone shrank from 12.9° (no blur) to 11.3° (3-diopter lens), which cannot be explained by a low-level process but might reflect a tightening of the criterion for mutual gaze as a response to the increased uncertainty. However, the overall effects of the impairments were relatively modest. Elderly subjects (Exp. 2) produced more variability but did not differ qualitatively from the younger subjects. In the face of artificial vision impairments, compensation mechanisms and criterion changes allow us to perform better in mutual gaze perception than would be predicted by a simple extrapolation from the losses in basic visual acuity and contrast sensitivity.

  7. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  8. Atypical Gaze Cueing Pattern in a Complex Environment in Individuals with ASD

    ERIC Educational Resources Information Center

    Zhao, Shuo; Uono, Shota; Yoshimura, Sayaka; Kubota, Yasutaka; Toichi, Motomi

    2017-01-01

    Clinically, social interaction, including gaze-triggered attention, has been reported to be impaired in autism spectrum disorder (ASD), but psychological studies have generally shown intact gaze-triggered attention in ASD. These studies typically examined gaze-triggered attention under simple environmental conditions. In real life, however, the…

  9. Orienting to Eye Gaze and Face Processing

    ERIC Educational Resources Information Center

    Tipples, Jason

    2005-01-01

    The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…

  10. Observing Shared Attention Modulates Gaze Following

    ERIC Educational Resources Information Center

    Bockler, Anne; Knoblich, Gunther; Sebanz, Natalie

    2011-01-01

    Humans' tendency to follow others' gaze is considered to be rather resistant to top-down influences. However, recent evidence indicates that gaze following depends on prior eye contact with the observed agent. Does observing two people engaging in eye contact also modulate gaze following? Participants observed two faces looking at each other or…

  11. Gaze Direction Detection in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  12. A novel approach to training attention and gaze in ASD: A feasibility and efficacy pilot study.

    PubMed

    Chukoskie, Leanne; Westerfield, Marissa; Townsend, Jeanne

    2018-05-01

    In addition to the social, communicative and behavioral symptoms that define the disorder, individuals with ASD have difficulty re-orienting attention quickly and accurately. Similarly, fast re-orienting saccadic eye movements are also inaccurate and more variable in both endpoint and timing. Atypical gaze and attention are among the earliest symptoms observed in ASD. Disruption of these foundation skills critically affects the development of higher level cognitive and social behavior. We propose that interventions aimed at these early deficits that support social and cognitive skills will be broadly effective. We conducted a pilot clinical trial designed to demonstrate the feasibility and preliminary efficacy of using gaze-contingent video games for low-cost in-home training of attention and eye movement. Eight adolescents with ASD participated in an 8-week training, with pre-, mid- and post-testing of eye movement and attention control. Six of the eight adolescents completed the 8 weeks of training, and all six showed improvement in attention (orienting, disengagement), eye movement control, or both. All game systems remained intact for the duration of training and all participants could use the system independently. We delivered a robust, low-cost, gaze-contingent game system for home use that, in our pilot training sample, improved the attention orienting and eye movement performance of adolescent participants in 8 weeks of training. We are currently conducting a clinical trial to replicate these results and to examine what, if any, aspects of training transfer to more real-world tasks. Develop Neurobiol 78: 546-554, 2018. © 2017 Wiley Periodicals, Inc.

  13. Can upbeat nystagmus increase in downward, but not upward, gaze?

    PubMed

    Kim, Hyun-Ah; Yi, Hyon-Ah; Lee, Hyung

    2012-04-01

    Upbeat nystagmus (UBN) is typically increased with upward gaze and decreased with downward gaze. We describe a patient with acute multiple sclerosis who developed primary position UBN with a linear slow phase waveform, in which the velocity of nystagmus was intensified in downward gaze and decreased during upward gaze. Brain MRI showed high signal lesions in the paramedian dorsal area of the caudal medulla encompassing the most caudal part of the perihypoglossal nuclei. Clinicians should be aware of the possibility of a caudal medullary lesion in a patient with UBN, especially when the velocity of the UBN is increased in downward gaze. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. A testimony to Muzil: Hervé Guibert, Foucault, and the medical gaze.

    PubMed

    Rendell, Joanne

    2004-01-01

    Testimony to Muzil: Hervé Guibert, Michel Foucault, and the "Medical Gaze" examines the fictional/autobiographical AIDS writings of the French writer Hervé Guibert. Locating Guibert's writings alongside the work of his friend Michel Foucault, the article explores how they echo Foucault's evolving notions of the "medical gaze." The article also explores how Guibert's narrators and Guibert himself (as writer) resist and challenge the medical gaze; a gaze which, particularly in the era of AIDS, has subjected, objectified, and even sometimes punished the body of the gay man. It is argued that these resistances to the gaze offer a literary extension to Foucault's later work on power and resistance strategies.

  15. Mental state attribution and the gaze cueing effect.

    PubMed

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  16. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  17. Seductive Eyes: Attractiveness and Direct Gaze Increase Desire for Associated Objects

    ERIC Educational Resources Information Center

    Strick, Madelijn; Holland, Rob W.; van Knippenberg, Ad

    2008-01-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel…

  18. Mobile gaze tracking system for outdoor walking behavioral studies

    PubMed Central

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R.; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511
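
    The core computation described here, recovering gaze orientation relative to the heading direction from an eye tracker plus a head/trunk IMU pair, can be summarized in a few lines. The following is a minimal sketch of that idea under the assumption that all signals are expressed as yaw angles in degrees; function and variable names are illustrative, not taken from the published system.

```python
# Hedged sketch of the idea described above: gaze direction relative to the
# heading (trunk) direction is obtained by combining the eye-in-head angle from
# the eye tracker with the head-on-trunk angle, taken as the difference between
# the head-mounted and trunk-mounted IMU yaw readings so that disturbances
# common to both sensors cancel. Names and values are illustrative only.

def gaze_re_heading(eye_in_head_yaw, head_imu_yaw, trunk_imu_yaw):
    """All angles in degrees; positive = rightward."""
    head_re_trunk = head_imu_yaw - trunk_imu_yaw   # shared drift/interference cancels
    return eye_in_head_yaw + head_re_trunk

# Eye 5 deg right in the head, head turned 12 deg right of the trunk:
print(gaze_re_heading(5.0, head_imu_yaw=47.0, trunk_imu_yaw=35.0))  # 17.0
```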

  19. A closer look at the size of the gaze-liking effect: a preregistered replication.

    PubMed

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (d_z = 0.02) than that of the original study (d_z = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (d_z = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward vs. away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
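
    For readers unfamiliar with the statistic, d_z for a within-subject design is the mean of the per-participant difference scores divided by their standard deviation. The snippet below simply illustrates that computation on invented numbers; it does not reproduce data from either study.

```python
# Standardised effect size d_z for paired samples: mean of the difference
# scores divided by their standard deviation. The ratings below are invented
# solely to show the computation.

import statistics

def cohens_dz(scores_toward, scores_away):
    """d_z for a within-subject comparison of two conditions."""
    diffs = [a - b for a, b in zip(scores_toward, scores_away)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

liking_gazed_at = [5.1, 4.8, 5.3, 4.9, 5.0]
liking_gazed_away = [5.0, 4.9, 5.2, 4.8, 5.1]
print(round(cohens_dz(liking_gazed_at, liking_gazed_away), 2))
```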

  20. Gaze and viewing angle influence visual stabilization of upright posture

    PubMed Central

    Ustinova, KI; Perkins, J

    2011-01-01

    Focusing gaze on a target helps stabilize upright posture. We investigated how this visual stabilization can be affected by observing a target presented under different gaze and viewing angles. In a series of 10-second trials, participants (N = 20, 29.3 ± 9 years of age) stood on a force plate and fixed their gaze on a figure presented on a screen at a distance of 1 m. The figure changed position (gaze angle: eye level (0°), 25° up or down), vertical body orientation (viewing angle: at eye level but rotated 25° as if leaning toward or away from the participant), or both (gaze and viewing angle: 25° up or down with the rotation equivalent of a natural visual perspective). Amplitude of participants’ sagittal displacement, surface area, and angular position of the center of gravity (COG) were compared. Results showed decreased COG velocity and amplitude for up and down gaze angles. Changes in viewing angles resulted in altered body alignment and increased amplitude of COG displacement. No significant changes in postural stability were observed when both gaze and viewing angles were altered. Results suggest that both the gaze angle and viewing perspective may be essential variables of the visuomotor system modulating postural responses. PMID:22398978

  1. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    PubMed

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. A sLORETA study for gaze-independent BCI speller.

    PubMed

    Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming

    2017-07-01

    The EEG-based BCI (brain-computer interface) speller, especially the gaze-independent BCI speller, has become a hot topic in recent years. It provides a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. For such speller applications, the brain must deploy both stimulus-driven and stimulus-related attention in rapidly presented BCI paradigms. Few researchers have studied the mechanism of the brain's response to such rapidly presented BCI applications. In this study, we compared the distribution of brain activation in visual, auditory, and combined audio-visual stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of both visual and auditory stimuli in the combined audio-visual paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant driver. Brain regions related to visual stimuli were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future studies of ERP spellers as well as for understanding the mechanisms of rapidly presented stimuli.

  3. Age differences in conscious versus subconscious social perception: the influence of face age and valence on gaze following.

    PubMed

    Bailey, Phoebe E; Slessor, Gillian; Rendell, Peter G; Bennetts, Rachel J; Campbell, Anna; Ruffman, Ted

    2014-09-01

    Gaze following is the primary means of establishing joint attention with others and is subject to age-related decline. In addition, young but not older adults experience an own-age bias in gaze following. The current research assessed the effects of subconscious processing on these age-related differences. Participants responded to targets that were either congruent or incongruent with the direction of gaze displayed in supraliminal and subliminal images of young and older faces. These faces displayed either neutral (Study 1) or happy and fearful (Study 2) expressions. In Studies 1 and 2, both age groups demonstrated gaze-directed attention by responding faster to targets that were congruent as opposed to incongruent with gaze-cues. In Study 1, subliminal stimuli did not attenuate the age-related decline in gaze-cuing, but did result in an own-age bias among older participants. In Study 2, gaze-cuing was reduced for older relative to young adults in response to supraliminal stimuli, and this could not be attributed to reduced visual acuity or age group differences in the perceived emotional intensity of the gaze-cue faces. Moreover, there were no age differences in gaze-cuing when responding to subliminal faces that were emotionally arousing. In addition, older adults demonstrated an own-age bias for both conscious and subconscious gaze-cuing when faces expressed happiness but not fear. We discuss growing evidence for age-related preservation of subconscious relative to conscious social perception, as well as an interaction between face age and valence in social perception. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  4. Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision

    PubMed Central

    Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.

    2018-01-01

    Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
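
    The abstract does not give implementation details, but the essence of gaze compensation in a camera-based prosthesis is to shift the sampled region-of-interest within the camera image by the current eye position instead of always sampling the image centre. The sketch below illustrates that idea only; the image size and pixels-per-degree scaling are made-up values, and the function is not the authors' code.

```python
# Illustrative sketch of the "gaze compensation" idea: instead of always
# sampling the centre of the head-mounted camera image, shift the sampled
# region-of-interest by the current eye position so that the phosphene image
# follows gaze. All parameter values are placeholders.

def roi_centre(gaze_azimuth_deg, gaze_elevation_deg,
               image_w=640, image_h=480, px_per_deg=10.0, compensate=True):
    """Return (x, y) pixel coordinates of the region-of-interest centre."""
    cx, cy = image_w / 2, image_h / 2
    if not compensate:               # uncompensated: ROI fixed at the image centre
        return cx, cy
    return (cx + gaze_azimuth_deg * px_per_deg,
            cy + gaze_elevation_deg * px_per_deg)

print(roi_centre(8.0, -3.0))                   # gaze 8 deg right, 3 deg up
print(roi_centre(8.0, -3.0, compensate=False))
```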

  5. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex.

    PubMed

    Constantin, A G; Wang, H; Martinez-Trujillo, J C; Crawford, J D

    2007-08-01

    Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.

  6. Communicative interactions between visually impaired mothers and their sighted children: analysis of gaze, facial expressions, voice and physical contacts.

    PubMed

    Chiesa, S; Galati, D; Schmidt, S

    2015-11-01

    Social and emotional development of infants and young children is largely based on the communicative interaction with their mother, or principal caretaker (Trevarthen). The main modalities implied in this early communication are voice, facial expressions and gaze (Stern). This study aims at analysing early mother-child interactions in the case of visually impaired mothers who do not have access to their children's gaze and facial expressions. Spontaneous play interactions between seven visually impaired mothers and their sighted children aged between 6 months and 3 years were filmed. These dyads were compared with a control group of sighted mothers and children by analysing four modalities of communication and interaction regulation: gaze, physical contacts, verbal productions and facial expressions. The visually impaired mothers' facial expressions differed from those of sighted mothers mainly with respect to forehead movements, leading to an impoverishment of conveyed meaning. Regarding the other communicative modalities, results suggest that visually impaired mothers and their children use compensatory strategies to guarantee harmonious interaction despite the mother's impairment: whereas gaze is the main factor of interaction regulation in sighted dyads, physical contacts and verbal productions assume a prevalent role in dyads with visually impaired mothers. Moreover, the children of visually impaired mothers seem to be able to differentiate between their mother and sighted interaction partners, adopting different modes of communication. The results of this study show that, in spite of the obvious differences in the modes of communication, visual impairment does not prevent a harmonious interaction with the child. © 2015 John Wiley & Sons Ltd.

  7. Gazing into Thin Air: The Dual-Task Costs of Movement Planning and Execution during Adaptive Gait

    PubMed Central

    Ellmers, Toby J.; Cocks, Adam J.; Doumas, Michail; Williams, A. Mark; Young, William R.

    2016-01-01

    We examined the effect of increased cognitive load on visual search behavior and measures of gait performance during locomotion. Also, we investigated how personality traits, specifically the propensity to consciously control or monitor movements (trait movement ‘reinvestment’), impacted the ability to maintain effective gaze under conditions of cognitive load. Healthy young adults traversed a novel adaptive walking path while performing a secondary serial subtraction task. Performance was assessed using correct responses to the cognitive task, gaze behavior, stepping accuracy, and time to complete the walking task. When walking while simultaneously carrying out the secondary serial subtraction task, participants visually fixated on task-irrelevant areas ‘outside’ the walking path more often and for longer durations of time, and fixated on task-relevant areas ‘inside’ the walkway for shorter durations. These changes were most pronounced in high-trait-reinvesters. We speculate that reinvestment-related processes placed an additional cognitive demand upon working memory. These increased task-irrelevant ‘outside’ fixations were accompanied by slower completion rates on the walking task and greater gross stepping errors. Findings suggest that attention is important for the maintenance of effective gaze behaviors, supporting previous claims that the maladaptive changes in visual search observed in high-risk older adults may be a consequence of inefficiencies in attentional processing. Identifying the underlying attentional processes that disrupt effective gaze behaviour during locomotion is an essential step in the development of rehabilitation, with this information allowing for the emergence of interventions that reduce the risk of falling. PMID:27824937

  8. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings

    PubMed Central

    2011-01-01

    Background Rapid discrete goal-directed movements are characterized by a well-known coordination pattern between the gaze and the hand displacements. The gaze always starts prior to the hand movement and reaches the target before the hand velocity peak. Surprisingly, the effect of the target size on temporal gaze-hand coordination has not been directly investigated. Moreover, goal-directed movements are often produced in a reciprocal rather than in a discrete manner. The objectives of this work were to assess the effect of the target size on temporal gaze-hand coordination during fast 1) discrete and 2) reciprocal pointings. Methods Subjects performed fast discrete (experiment 1) and reciprocal (experiment 2) pointings with an amplitude of 50 cm and four target diameters (7.6, 3.8, 1.9 and 0.95 cm) leading to indexes of difficulty (ID = log2[2A/D]) of 3.7, 4.7, 5.7 and 6.7 bits. Gaze and hand displacements were synchronously recorded. Temporal gaze-hand coordination parameters were compared between experiments (discrete and reciprocal pointings) and IDs using analyses of variance (ANOVAs). Results Data showed that the magnitude of the gaze-hand lead pattern was much higher for discrete than for reciprocal pointings. Moreover, while it was constant for discrete pointings, it decreased systematically with an increasing ID for reciprocal pointings because of the longer duration of gaze anchoring on target. Conclusion Overall, the temporal gaze-hand coordination analysis revealed that even for high IDs, fast reciprocal pointings could not be considered as a concatenation of discrete units. Moreover, our data clearly illustrate the smooth adaptation of temporal gaze-hand coordination to terminal accuracy requirements during fast reciprocal pointings. It will be interesting for further research to investigate whether the methodology used in experiment 2 allows assessing the effect of sensorimotor deficits on gaze-hand coordination. PMID:21320315
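
    The index-of-difficulty values quoted above follow directly from the stated formula with A = 50 cm; the short check below reproduces them.

```python
# Quick check of the index-of-difficulty values quoted in the abstract, using
# ID = log2(2A / D) with movement amplitude A = 50 cm and the four target
# diameters D (cm).

from math import log2

A = 50.0
for D in (7.6, 3.8, 1.9, 0.95):
    print(f"D = {D:>4} cm  ->  ID = {log2(2 * A / D):.1f} bits")
# Prints 3.7, 4.7, 5.7 and 6.7 bits, matching the values reported above.
```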

  9. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    PubMed

    2017-02-01

    Reports an error in "Wolves ( Canis lupus ) and dogs ( Canis familiaris ) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range ( Journal of Comparative Psychology , 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/ Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Functionally referential and intentional communication in the domestic dog: effects of spatial and social contexts.

    PubMed

    Gaunet, Florence; Deputte, Bertrand L

    2011-11-01

    In apes, four criteria are set to explore referential and intentional communication: (1) successive visual orienting between a partner and distant targets, (2) the presence of apparent attention-getting behaviours, (3) the requirement of an audience to exhibit the behaviours, and (4) the influence of the direction of attention of an observer on the behaviours. The present study aimed at identifying these criteria in behaviours used by dogs in communicative episodes with their owner when their toy is out of reach, i.e. gaze at a hidden target or at the owner, gaze alternation between a hidden target and the owner, vocalisations and contacts. In this study, an additional variable was analysed: the position of the dog in relation to the location of the target. Dogs witnessed the hiding of a favourite toy in a place they could not access. We analysed how dogs engaged in communicative deictic behaviours in the presence of their owner; four heights of the target were tested. To control for the motivational effects of the toy on the dogs' behaviour and for the referential nature of the behaviours, observations were staged where only the toy or only the owner was present, for one of the four heights. The results show that gazing at the container and gaze alternation were used as functionally referential and intentional communicative behaviours. Behavioural patterns of dog position, the new variable, fulfilled the operational criteria for functionally referential behaviour and a subset of operational criteria for intentional communication: the dogs used their own position as a local enhancement signal. Finally, our results suggest that the dogs gazed at their owner at optimal locations in the experimental area, with respect to the target height and their owner's (or their own) line of gaze. © Springer-Verlag 2011

  11. A Metric to Quantify Shared Visual Attention in Two-Person Teams

    NASA Technical Reports Server (NTRS)

    Gontar, Patrick; Mulligan, Jeffrey B.

    2015-01-01

    1) Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar & Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12-dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window duration (T) as a variable parameter. For example, when we average with a T of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one-minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors for each of the two crew members, at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods: first, a permutation test was applied to the subjects' data. When one crew member's gaze proportion vector is correlated with a random time sample from the other crew member's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention, and can distinguish between activities involving different instruments. We plan to analyze whether pilots' strategies of sharing visual attention can predict performance. Possible measurements of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups, using intra-class correlation coefficients instead of the Pearson product-moment correlation.
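
    A minimal sketch of the metric described above is given below: one-hot AOI samples are averaged over a sliding window to form gaze-proportion vectors for each crew member, and the Pearson correlation between the two vectors is computed at each time step. The AOI sequences, sampling rate, and window length are stand-ins for illustration; the permutation-test and Markov-process baselines described in the abstract are noted only in a comment.

```python
# Minimal sketch (not the authors' code) of the shared-attention metric:
# one-hot AOI samples are averaged over a sliding window to give a
# gaze-proportion vector per crew member, and the Pearson correlation between
# the two vectors is taken at each time step. Requires Python 3.10+ for
# statistics.correlation. The gaze sequences below are random placeholders.

import random
import statistics

N_AOIS = 12

def proportion_vector(aoi_sequence, end, window):
    """Fraction of the last `window` samples spent in each AOI."""
    counts = [0] * N_AOIS
    for aoi in aoi_sequence[end - window:end]:
        counts[aoi] += 1
    return [c / window for c in counts]

def correlation_series(gaze_a, gaze_b, window):
    """Time-varying correlation between the two crew members' gaze proportions."""
    return [statistics.correlation(proportion_vector(gaze_a, t, window),
                                    proportion_vector(gaze_b, t, window))
            for t in range(window, len(gaze_a) + 1)]

random.seed(0)
pilot = [random.randrange(N_AOIS) for _ in range(600)]     # e.g. 10 Hz samples
copilot = [random.randrange(N_AOIS) for _ in range(600)]
r = correlation_series(pilot, copilot, window=100)          # ~10 s window here
print(min(r), max(r))
# A criterion for "coordinated" values of r could be set by a permutation test
# (correlating temporally misaligned windows) or by simulating independent
# first-order Markov gaze processes, as described in the abstract.
```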

  12. The role of central and peripheral vision in expert decision making.

    PubMed

    Ryu, Donghyun; Abernethy, Bruce; Mann, David L; Poolton, Jamie M; Gorman, Adam D

    2013-01-01

    The purpose of this study was to investigate the role of central and peripheral vision in expert decision making. A gaze-contingent display was used to selectively present information to the central and peripheral areas of the visual field while participants performed a decision-making task. Eleven skilled and eleven less-skilled male basketball players watched video clips of basketball scenarios in three different viewing conditions: full-image control, moving window (central vision only), and moving mask (peripheral vision only). At the conclusion of each clip participants were required to decide whether it was more appropriate for the ball-carrier to pass the ball or to drive to the basket. The skilled players showed significantly higher response accuracy and faster response times compared with their lesser-skilled counterparts in all three viewing conditions, demonstrating superiority in information extraction that held irrespective of whether they were using central or peripheral vision. The gaze behaviour of the skilled players was less influenced by the gaze-contingent manipulations, suggesting they were better able to use the remaining information to sustain their normal gaze behaviour. The superior capacity of experts to interpret dynamic visual information is evident regardless of whether the visual information is presented across the whole visual field or selectively to either central or peripheral vision alone.

  13. Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans

    ERIC Educational Resources Information Center

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-01-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predicative gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…

  14. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    ERIC Educational Resources Information Center

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…

  15. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    PubMed Central

    Hardiess, Gregor; Hansmann-Roth, Sabrina; Mallot, Hanspeter A.

    2013-01-01

    Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements) the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective of the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) which sensory variables can be identified supporting adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss? In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined operation of external (bottom-up) and internal (top-down) cues in a traffic intersection task. PMID:23760667

  16. Selective looking at natural scenes: Hedonic content and gender.

    PubMed

    Bradley, Margaret M; Costa, Vincent D; Lang, Peter J

    2015-10-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Visuomotor properties of corticotectal cells in area 17 and posteromedial lateral suprasylvian (PMLS) cortex of the cat.

    PubMed

    Weyand, T G; Gafka, A C

    2001-01-01

    We studied the visuomotor activity of corticotectal (CT) cells in two visual cortical areas [area 17 and the posteromedial lateral suprasylvian cortex (PMLS)] of the cat. The cats were trained in simple oculomotor tasks, and head position was fixed. Most CT cells in both cortical areas gave a vigorous discharge to a small stimulus used to control gaze when it fell within the retinotopically defined visual field. However, the vigor of the visual response did not predict latency to initiate a saccade, saccade velocity, amplitude, or even if a saccade would be made, minimizing any potential role these cells might have in premotor or attentional processes. Most CT cells in both areas were selective for direction of stimulus motion, and cells in PMLS showed a direction preference favoring motion away from points of central gaze. CT cells did not discharge with eye movements in the dark. During eye movements in the light, many CT cells in area 17 increased their activity. In contrast, cells in PMLS, including CT cells, were generally unresponsive during saccades. Paradoxically, cells in PMLS responded vigorously to stimuli moving at saccadic velocities, indicating that the oculomotor system suppresses visual activity elicited by moving the retina across an illuminated scene. Nearly all CT cells showed oscillatory activity in the frequency range of 20-90 Hz, especially in response to visual stimuli. However, this activity was capricious; strong oscillations in one trial could disappear in the next despite identical stimulus conditions. Although the CT cells in both of these regions share many characteristics, the direction anisotropy and the suppression of activity during eye movements which characterize the neurons in PMLS suggests that these two areas have different roles in facilitating perceptual/motor processes at the level of the superior colliculus.

  18. Physical and sporting activities improve vestibular afferent usage and balance in elderly human subjects.

    PubMed

    Gauchard, G C; Jeandel, C; Perrin, P P

    2001-01-01

    Ageing is associated with a reduction in balance, in particular through dysfunction of each level of postural control, which results in an increased risk of falling. Conversely, the practice of physical activities has been shown to modulate postural control in elderly people. This study examined the potential positive effects of two types of regular physical and sporting activities on vestibular information and their relation to posture. Gaze and postural stabilisation was evaluated by caloric and rotational vestibular tests on 18 healthy subjects over the age of 60 who regularly practised low-energy or bioenergetic physical activities and on 18 controls of a similar age who only walked on a regular basis. These subjects were also submitted to static and dynamic posturographic tests. The control group displayed less balance control, with a lower vestibular sensitivity and a relatively high dependency on vision compared to the group practising low-energy physical activities, which had better postural control with good vestibular sensitivity and less dependency on vision. The postural control and vestibular sensitivity of subjects practising bioenergetic activities was average, and required higher visual afferent contribution. Low-energy exercises, already shown to have the most positive impact on balance control by relying more on proprioception, also appear to develop or maintain a high level of vestibular sensitivity allowing elderly people practising such exercises to reduce the weight of vision. Copyright 2001 S. Karger AG, Basel

  19. Women gaze behaviour in assessing female bodies: the effects of clothing, body size, own body composition and body satisfaction.

    PubMed

    Cundall, Amelia; Guo, Kun

    2017-01-01

    Often with minimally clothed figures depicting extreme body sizes, previous studies have shown women tend to gaze at evolutionary determinants of attractiveness when viewing female bodies, possibly for self-evaluation purposes, and their gaze distribution is modulated by own body dissatisfaction level. To explore to what extent women's body-viewing gaze behaviour is affected by clothing type, dress size, subjective measurements of regional body satisfaction and objective measurements of own body composition (e.g., chest size, body mass index, waist-to-hip ratio), in this self-paced body attractiveness and body size judgement experiment, we compared healthy, young women's gaze distributions when viewing female bodies in tight and loose clothing of different dress sizes. In contrast to tight clothing, loose clothing biased gaze away from the waist-hip to the leg region, and subsequently led to enhanced body attractiveness ratings and body size underestimation for larger female bodies, indicating the important role of clothing in mediating women's body perception. When viewing preferred female bodies, women's higher satisfaction of a specific body region was associated with an increased gaze towards neighbouring body areas, implying satisfaction might reduce the need for comparison of confident body parts; furthermore undesirable body composition measurements were correlated with a gaze avoidance process if the construct was less changeable (i.e. chest size) but a gaze comparison process if the region was more changeable (i.e. body mass index, dress size). Clearly, own body satisfaction and body composition measurements had an evident impact on women's body-viewing gaze allocation, possibly through different cognitive processes.

  20. Assessing Self-Awareness through Gaze Agency

    PubMed Central

    Crespi, Sofia Allegra; de’Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one's own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïve observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real time by their saccades or by other events (Discovery Task). We manipulated observers' self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention on their own eyes. These manipulations increased the propensity for agency discovery. In a second task, which served to monitor agency sensitivity at the sensorimotor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measuring both increases and decreases in gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity, which is difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development. PMID:27812138

  1. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  2. E-ducating the Gaze: The Idea of a Poor Pedagogy

    ERIC Educational Resources Information Center

    Masschelein, Jan

    2010-01-01

    Educating the gaze is easily understood as becoming conscious about what is "really" happening in the world and becoming aware of the way our gaze is itself bound to a perspective and particular position. However, the paper explores a different idea. It understands educating the gaze not in the sense of "educare" (teaching) but of "e-ducere" as…

  3. Can Infants Use a Nonhuman Agent's Gaze Direction to Establish Word-Object Relations?

    ERIC Educational Resources Information Center

    O'Connell, Laura; Poulin-Dubois, Diane; Demke, Tamara; Guay, Amanda

    2009-01-01

    Adopting a procedure developed with human speakers, we examined infants' ability to follow a nonhuman agent's gaze direction and subsequently to use its gaze to learn new words. When a programmable robot acted as the speaker (Experiment 1), infants followed its gaze toward the word referent whether or not it coincided with their own focus of…

  4. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  5. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients.

    PubMed

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference.

  6. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients

    PubMed Central

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference. PMID:26300815

  7. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  8. A model of face selection in viewing video stories.

    PubMed

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-19

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; 2) gaze behaviours remained unchanged whether the sound was provided or not; 3) the gaze behaviours were sensitive to time reversal; and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment.
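    The saliency rule summarized above (a per-face score built from size, novelty, head motion, and mouth motion, with the most salient face predicting the gaze target) lends itself to a compact illustration. The sketch below is a minimal toy version: the feature scores and weights are assumptions for the example, not the values estimated in the study.

```python
import numpy as np

def face_saliency(size, novelty, head_motion, mouth_motion,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    """Toy face-saliency score: a weighted sum of four per-face features.

    The weights are illustrative placeholders, not the fitted model.
    """
    features = np.array([size, novelty, head_motion, mouth_motion])
    return float(np.dot(weights, features))

def predicted_gaze_target(faces):
    """Return the index of the most salient face in one video frame.

    `faces` is a list of dicts holding the four feature scores (0-1 range).
    """
    scores = [face_saliency(f["size"], f["novelty"],
                            f["head_motion"], f["mouth_motion"])
              for f in faces]
    return int(np.argmax(scores))

# Example frame with two faces: the larger, talking face wins.
frame = [
    {"size": 0.8, "novelty": 0.1, "head_motion": 0.5, "mouth_motion": 0.9},
    {"size": 0.3, "novelty": 0.6, "head_motion": 0.1, "mouth_motion": 0.0},
]
print(predicted_gaze_target(frame))  # -> 0
```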

  9. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    PubMed Central

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606

  10. Gaze and visual search strategies of children with Asperger syndrome/high functioning autism viewing a magic trick.

    PubMed

    Joosten, Annette; Girdler, Sonya; Albrecht, Matthew A; Horlin, Chiara; Falkmer, Marita; Leung, Denise; Ordqvist, Anna; Fleischer, Håkan; Falkmer, Torbjörn

    2016-01-01

    The aim was to examine visual search patterns and strategies used by children with and without Asperger syndrome/high functioning autism (AS/HFA) while watching a magic trick. Limited responsivity to gaze cues is hypothesised to contribute to social deficits in children with AS/HFA. Twenty-one children with AS/HFA and 31 matched peers viewed a video of a gaze-cued magic trick twice. Between the viewings, they were informed about how the trick was performed. Participants' eye movements were recorded using a head-mounted eye-tracker. During both viewings, children with AS/HFA looked less frequently and fixated for shorter durations on the magician's direct and averted gazes, and looked more frequently at non-gaze-cued objects and at areas outside the magician's face. After being informed of how the trick was conducted, both groups made fewer fixations on gaze-cued objects and direct gaze. Information may enhance effective visual strategies in children with and without AS/HFA.

  11. Neural bases of eye and gaze processing: The core of social cognition

    PubMed Central

    Itier, Roxane J.; Batty, Magali

    2014-01-01

    Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition. PMID:19428496

  12. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction. Copyright © 2015 Cognitive Science Society, Inc.

  13. ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.

    PubMed

    Orlov, Pavel A; Bednarik, Roman

    2016-09-01

    The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of visual gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments. It also provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
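    For readers unfamiliar with the moving-window paradigm that ScreenMasker supports, the sketch below shows the core idea in plain NumPy: everything outside a window centred on the current gaze sample is masked on each frame. This is a conceptual illustration only; it does not use ScreenMasker's actual API, texture system, or CUDA rendering path, and the window radius and grey mask value are arbitrary choices.

```python
import numpy as np

def moving_window_mask(frame, gaze_xy, radius=80):
    """Return a copy of `frame` (H x W x 3, uint8) with everything outside
    a circular window around the gaze position replaced by mid-grey.

    In a real gaze-contingent study the gaze sample would come from an
    eye tracker at every screen refresh; here it is just an argument.
    """
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    outside = (xx - gx) ** 2 + (yy - gy) ** 2 > radius ** 2
    masked = frame.copy()
    masked[outside] = 128  # simple grey "mask texture"
    return masked

# Example: a synthetic 480x640 frame, gaze at the image centre.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
visible = moving_window_mask(frame, gaze_xy=(320, 240))
```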

  14. Gaze stability of observers watching Op Art pictures.

    PubMed

    Zanker, Johannes M; Doyle, Melanie; Walker, Robin

    2003-01-01

    It has been a matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of Op Art artists, such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept we studied the strength of the illusion under different viewing conditions, and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed.

  15. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  16. No Evidence of Emotional Dysregulation or Aversion to Mutual Gaze in Preschoolers with Autism Spectrum Disorder: An Eye-Tracking Pupillometry Study

    ERIC Educational Resources Information Center

    Nuske, Heather J.; Vivanti, Giacomo; Dissanayake, Cheryl

    2015-01-01

    The "gaze aversion hypothesis", suggests that people with Autism Spectrum Disorder (ASD) avoid mutual gaze because they experience it as hyper-arousing. To test this hypothesis we showed mutual and averted gaze stimuli to 23 mixed-ability preschoolers with ASD ("M" Mullen DQ = 68) and 21 typically-developing preschoolers, aged…

  17. The effect of challenge and threat states on performance: An examination of potential mechanisms

    PubMed Central

    Moore, Lee J; Vine, Samuel J; Wilson, Mark R; Freeman, Paul

    2012-01-01

    Challenge and threat states predict future performance; however, no research has examined their immediate effect on motor task performance. The present study examined the effect of challenge and threat states on golf putting performance and several possible mechanisms. One hundred twenty-seven participants were assigned to a challenge or threat group and performed six putts during which emotions, gaze, putting kinematics, muscle activity, and performance were recorded. Challenge and threat states were successfully manipulated via task instructions. The challenge group performed more accurately, reported more favorable emotions, and displayed more effective gaze, putting kinematics, and muscle activity than the threat group. Multiple putting kinematic variables mediated the relationship between group and performance, suggesting that challenge and threat states impact performance at a predominately kinematic level. PMID:22913339

  18. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces

    PubMed Central

    Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed

    2017-01-01

    The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification, and argued for socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age have remained under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was applied as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in greater depth using data-driven analyses (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for “joy” and “disgust,” and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, consisting of focusing only on the lower part of the face throughout the whole stimulus display time. This consistency may constitute a robust and distinctive “social signature” of emotional identification in aging. Younger adults, however, were more dispersed in terms of gaze behavior and used a more exploratory-gaze strategy, consisting of repeatedly visiting both facial areas. PMID:28450841

  20. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gérard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  1. Use of Robotic Pets in Providing Stimulation for Nursing Home Residents with Dementia.

    PubMed

    Naganuma, M; Ohkubo, E; Kato, N

    2015-01-01

    Trial experiments utilized robotic pets to facilitate self-reliance in nursing home residents. A remote-control robot modeled clear and meaningful behaviors to elderly residents. Special attention was paid to its effects on mental and social domains. Employing the robot as a gaze target and center of attention created a cue to initiate a communication channel between residents who normally show no interest in each other. The Sony AIBO robot in this study uses commercially available wireless equipment, and all its components are easily accessible to any medical or welfare institution interested in additional practice of these activities.

  2. Oculomotor Apraxia

    MedlinePlus

    ... a defect in, the control of voluntary purposeful eye movement. Children with this condition have difficulty moving their ... to compensate for this inability to initiate horizontal eye movements away from the straight-ahead gaze position. Typically, ...

  3. Salience network dynamics underlying successful resistance of temptation

    PubMed Central

    Nomi, Jason S; Calhoun, Vince D; Stelzel, Christine; Paschke, Lena M; Gaschler, Robert; Goschke, Thomas; Walter, Henrik; Uddin, Lucina Q

    2017-01-01

    Self-control and the ability to resist temptation are critical for successful completion of long-term goals. Contemporary models in cognitive neuroscience emphasize the primary role of prefrontal cognitive control networks in aligning behavior with such goals. Here, we use gaze pattern analysis and dynamic functional connectivity fMRI data to explore how individual differences in the ability to resist temptation are related to intrinsic brain dynamics of the cognitive control and salience networks. Behaviorally, individuals exhibit greater gaze distance from target location (e.g. higher distractibility) during presentation of tempting erotic images compared with neutral images. Individuals whose intrinsic dynamic functional connectivity patterns gravitate toward configurations in which salience detection systems are less strongly coupled with visual systems resist tempting distractors more effectively. The ability to resist tempting distractors was not significantly related to intrinsic dynamics of the cognitive control network. These results suggest that susceptibility to temptation is governed in part by individual differences in salience network dynamics and provide novel evidence for involvement of brain systems outside canonical cognitive control networks in contributing to individual differences in self-control. PMID:29048582

  4. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    PubMed

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. In both tests, the resulting trajectories were significantly smoother and could reach fixed or moving targets with relative ease, within a 5% error margin or deviation from the desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
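    As a rough illustration of the general approach described above (training a multilayer perceptron to map noisy gaze coordinates to a smoother cursor position), the sketch below uses scikit-learn's MLPRegressor on synthetic data. The window length, network size, and jitter model are assumptions made for the example, not the configuration used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "training session": a smooth target trajectory plus jitter
# stands in for the user's recorded eye-gaze behavior.
t = np.linspace(0, 4 * np.pi, 2000)
target = np.column_stack([np.cos(t), np.sin(t)])            # desired cursor path
gaze = target + rng.normal(scale=0.08, size=target.shape)   # jittery gaze samples

# Each input is a short window of recent gaze samples (flattened);
# the output is the corresponding clean cursor position.
WIN = 5
X = np.array([gaze[i - WIN:i].ravel() for i in range(WIN, len(gaze))])
y = target[WIN:]

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

smoothed = model.predict(X)
raw_dev = np.mean(np.abs(gaze[WIN:] - y))
out_dev = np.mean(np.abs(smoothed - y))
print(f"mean deviation: raw {raw_dev:.3f} -> smoothed {out_dev:.3f}")
```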

  5. Self-Regulation and Infant-Directed Singing in Infants with Down Syndrome.

    PubMed

    de l'Etoile, Shannon K

    2015-01-01

    Infants learn how to regulate internal states and subsequent behavior through dyadic interactions with caregivers. During infant-directed (ID) singing, mothers help infants practice attentional control and arousal modulation, thus providing critical experience in self-regulation. Infants with Down syndrome are known to have attention deficits and delayed information processing as well as difficulty managing arousability, factors that may disrupt their efforts at self-regulation. The researcher explored responses to ID singing in infants with Down syndrome (DS) and compared them with those of typically developing (TD) infants. Behaviors measured included infant gaze and affect as indicators of self-regulation. Participants included 3- to 9-month-old infants with and without DS who were videotaped throughout a 2-minute face-to-face interaction during which their mothers sang to them any song(s) of their choosing. Infant behavior was then coded for percentage of time spent demonstrating a specific gaze or affect type. All infants displayed sustained gaze more than any other gaze type. TD infants demonstrated intermittent gaze significantly more often than infants with DS. Infant status had no effect on affect type, and all infants showed predominantly neutral affect. Findings suggest that ID singing effectively maintains infant attention for both TD infants and infants with DS. However, infants with DS may have difficulty shifting attention during ID singing as needed to adjust arousal levels and self-regulate. High levels of neutral affect for all infants imply that ID singing is likely to promote a calm, curious state, regardless of infant status. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Does the 'P300' speller depend on eye gaze?

    NASA Astrophysics Data System (ADS)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  7. Vestibulo-Cervico-Ocular Responses and Tracking Eye Movements after Prolonged Exposure to Microgravity

    NASA Technical Reports Server (NTRS)

    Kornilova, L. N.; Naumov, I. A.; Azarov, K. A.; Sagalovitch, S. V.; Reschke, Millard F.; Kozlovskaya, I. B.

    2007-01-01

    The vestibular function and tracking eye movements were investigated in 12 Russian crew members of ISS missions on days 1(2), 4(5-6), and 8(9-10) after prolonged exposure to microgravity (126 to 195 days). The spontaneous oculomotor activity, static torsional otolith-cervico-ocular reflex, dynamic vestibulo-cervico-ocular responses, vestibular reactivity, tracking eye movements, and gaze-holding were studied using videooculography (VOG) and electrooculography (EOG) for parallel eye movement recording. On post-flight days 1-2 (R+1-2) some cosmonauts demonstrated: - an increased spontaneous oculomotor activity (floating eye movements, spontaneous nystagmus of the typical and atypical form, square wave jerks, gaze nystagmus) with the head held in the vertical position; - suppressed otolith function (absent or reduced by one half amplitude of torsional compensatory eye counter-rolling) with the head inclined statically right- or leftward by 30°; - increased vestibular reactivity (lowered threshold and increased intensity of the vestibular nystagmus) during head turns around the longitudinal body axis at 0.125 Hz; - a significant change in the accuracy, velocity, and temporal characteristics of the eye tracking. The pattern, depth, dynamics, and velocity of the vestibular function and tracking eye movements recovery varied with individual participants in the investigation. However, there were also regular responses during readaptation to the normal gravity: - suppression of the otolith function was typically accompanied by an exaggerated vestibular reactivity; - the structure of visual tracking (the accuracy of fixational eye rotations, smooth tracking, and gaze-holding) was disturbed (the appearance of correcting saccades, the transition of smooth tracking to saccadic tracking) only in those cosmonauts who, in parallel to an increased reactivity of the vestibular input, also had central changes in the oculomotor system (spontaneous nystagmus, gaze nystagmus).

  8. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath, and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
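    The pipeline described in this record (per-scanpath shape features stacked into vectors, followed by cluster analysis) can be illustrated with a few lines of NumPy and scikit-learn. The particular features and the choice of k-means below are placeholders for the example; GazeAppraise's actual feature set and clustering are richer than this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def scanpath_features(points):
    """Simple shape descriptors for one scanpath (N x 2 array of gaze points):
    total path length, bounding-box width and height, and mean step size."""
    steps = np.diff(points, axis=0)
    step_len = np.linalg.norm(steps, axis=1)
    bbox = points.max(axis=0) - points.min(axis=0)
    return [step_len.sum(), bbox[0], bbox[1], step_len.mean()]

rng = np.random.default_rng(1)
# Two synthetic "strategies": tight local scanning vs. wide sweeping search.
scanpaths = [rng.normal(0, 5, (50, 2)) for _ in range(10)] + \
            [np.cumsum(rng.normal(0, 20, (50, 2)), axis=0) for _ in range(10)]

X = StandardScaler().fit_transform([scanpath_features(p) for p in scanpaths])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # the two strategies should largely separate into two clusters
```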

  9. Considerations for the Use of Remote Gaze Tracking to Assess Behavior in Flight Simulators

    NASA Technical Reports Server (NTRS)

    Kalar, Donald J.; Liston, Dorion; Mulligan, Jeffrey B.; Beutter, Brent; Feary, Michael

    2016-01-01

    Complex user interfaces (such as those found in an aircraft cockpit) may be designed from first principles, but inevitably must be evaluated with real users. User gaze data can provide valuable information that can help to interpret other actions that change the state of the system. However, care must be taken to ensure that any conclusions drawn from gaze data are well supported. Through a combination of empirical and simulated data, we identify several considerations and potential pitfalls when measuring gaze behavior in high-fidelity simulators. We show that physical layout, behavioral differences, and noise levels can all substantially alter the quality of fit for algorithms that segment gaze measurements into individual fixations. We provide guidelines to help investigators ensure that conclusions drawn from gaze tracking data are not artifactual consequences of data quality or analysis techniques.
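    One concrete way to see why noise level matters for fixation segmentation, as this record cautions, is a minimal dispersion-threshold (I-DT-style) segmenter: as measurement noise approaches the dispersion threshold, detected fixations shrink or disappear. This is a generic sketch under assumed thresholds, not the algorithm evaluated in the paper.

```python
import numpy as np

def idt_fixations(gaze, max_dispersion=1.0, min_samples=10):
    """Tiny dispersion-threshold segmenter: grow a window while the summed
    x + y dispersion stays below `max_dispersion`; windows that reach
    `min_samples` are returned as fixations (start, end sample indices)."""
    fixations, start = [], 0
    while start < len(gaze) - min_samples:
        end = start + min_samples
        if np.ptp(gaze[start:end, 0]) + np.ptp(gaze[start:end, 1]) <= max_dispersion:
            while end < len(gaze):
                win = gaze[start:end + 1]
                if np.ptp(win[:, 0]) + np.ptp(win[:, 1]) > max_dispersion:
                    break
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

rng = np.random.default_rng(2)
# Three "true" fixation locations, 100 samples each, plus measurement noise.
true_fix = np.repeat([[0, 0], [5, 5], [10, 0]], 100, axis=0).astype(float)
for noise in (0.05, 0.3):
    noisy = true_fix + rng.normal(scale=noise, size=true_fix.shape)
    print(noise, len(idt_fixations(noisy)))  # noisier data -> fewer, shorter fixations
```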

  10. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion.

    PubMed

    Klostermann, André; Küng, Philip

    2017-03-01

    This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a plane surface. The stable gaze at 5 different areas of interest was calculated regarding its relative duration as well as its temporal order. During the approach phase, an interaction between area of interest and obstacle condition, F(3, 24) = 12.91, p < .05, ηp² = .62, was found with longer stable-gaze locations at the takeoff area in attempts with an obstacle (p < .05, ηp² = .47). In contrast, in attempts over a plane surface, longer stable-gaze locations at the skateboard were revealed (p < .05, ηp² = .73). Regarding the trick difficulty factor, the skateboarders descriptively showed longer stable-gaze locations at the skateboard for the "Kickflip" than for the "Ollie" in the no-obstacle condition only (p > .05, d = 0.74). Finally, during the jump phase, neither obstacle condition nor trick difficulty affected gaze behavior differentially. This study underlines the functional adaptability of the visuomotor system to changing demands in highly dynamic situations. As a function of certain constraints, different gaze strategies were observed that can be considered as highly relevant for successfully performing skateboard jumps.

  11. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    PubMed

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eye tracking to determine 2-D gaze positions is common practice, and several approaches to the detection of 2-D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. To this end, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for the real as well as for the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic linked libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) that is designed for importing 2-D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix .
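    The core test in a dispersion-based 3D algorithm of the kind described above is whether a new gaze point still lies inside an ellipsoidal volume around the running fixation centroid (typically with a larger tolerance in depth than laterally). The sketch below shows only that membership test and a greedy grow step, with assumed semi-axis values; it is not the Gaze3DFix implementation.

```python
import numpy as np

def inside_ellipsoid(point, centroid, radii):
    """True if `point` lies within the axis-aligned ellipsoid of semi-axes
    `radii` centred on `centroid`."""
    d = (np.asarray(point) - np.asarray(centroid)) / np.asarray(radii)
    return float(np.sum(d ** 2)) <= 1.0

def grow_3d_fixation(points, radii=(15.0, 15.0, 40.0), min_len=5):
    """Greedily grow one fixation from the start of `points` (N x 3, mm):
    keep adding samples while they stay inside the ellipsoid around the
    current centroid; return the number of samples absorbed (0 if too few)."""
    fixation = [points[0]]
    for p in points[1:]:
        if inside_ellipsoid(p, np.mean(fixation, axis=0), radii):
            fixation.append(p)
        else:
            break
    return len(fixation) if len(fixation) >= min_len else 0

rng = np.random.default_rng(3)
# Synthetic 3D gaze points: noisier along the depth (z) axis, as is typical.
samples = rng.normal([0, 0, 400], [5, 5, 20], size=(30, 3))
print(grow_3d_fixation(samples))
```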

  12. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  13. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  14. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods. PMID:29401681

  15. Evaluating gaze-driven power wheelchair with navigation support for persons with disabilities.

    PubMed

    Wästlund, Erik; Sponseller, Kay; Pettersson, Ola; Bared, Anders

    2015-01-01

    This article describes a novel add-on for powered wheelchairs that is composed of a gaze-driven control system and a navigation support system. The add-on was tested by three users. All of the users were individuals with severe disabilities and no possibility of moving independently. The system is an add-on to a standard power wheelchair and can be customized for different levels of support according to the cognitive level, motor control, perceptual skills, and specific needs of the user. The primary aim of this study was to test the functionality and safety of the system in the user's home environment. The secondary aim was to evaluate whether access to a gaze-driven powered wheelchair with navigation support is perceived as meaningful in terms of independence and participation. The results show that the system has the potential to provide safe, independent indoor mobility and that the users perceive doing so as fun, meaningful, and a way to reduce dependency on others. Independent mobility has numerous benefits in addition to psychological and emotional well-being. By observing users' actions, caregivers and healthcare professionals can assess the individual's capabilities, which was not previously possible. Rehabilitation can be better adapted to the individual's specific needs, and driving a wheelchair independently can be a valuable, motivating training tool.

  16. The complex duration perception of emotional faces: effects of face direction.

    PubMed

    Kliegl, Katrin M; Limbrecht-Ecklundt, Kerstin; Dürr, Lea; Traue, Harald C; Huckauf, Anke

    2015-01-01

    The perceived duration of emotional face stimuli strongly depends on the expressed emotion. But, emotional faces also differ regarding a number of other features like gaze, face direction, or sex. Usually, these features have been controlled by only using pictures of female models with straight gaze and face direction. Doi and Shinohara (2009) reported that an overestimation of angry faces could only be found when the model's gaze was oriented toward the observer. We aimed at replicating this effect for face direction. Moreover, we explored the effect of face direction on the duration perception of sad faces. Controlling for the sex of the face model and the participant, female and male participants rated the duration of neutral, angry, and sad face stimuli of both sexes photographed from different perspectives in a bisection task. In line with current findings, we report a significant overestimation of angry compared to neutral face stimuli that was modulated by face direction. Moreover, the perceived duration of sad face stimuli did not differ from that of neutral faces and was not influenced by face direction. Furthermore, we found that faces of the opposite sex appeared to last longer than those of the same sex. This outcome is discussed with regard to stimulus parameters like the induced arousal, social relevance, and an evolutionary context.

  17. Bumetanide for autism: more eye contact, less amygdala activation.

    PubMed

    Hadjikhani, Nouchine; Åsberg Johnels, Jakob; Lassalle, Amandine; Zürcher, Nicole R; Hippolyte, Loyse; Gillberg, Christopher; Lemonnier, Eric; Ben-Ari, Yehezkel

    2018-02-26

    We recently showed that constraining eye contact leads to exaggerated increase of amygdala activation in autism. Here, in a proof of concept pilot study, we demonstrate that administration of bumetanide (a NKCC1 chloride importer antagonist that restores GABAergic inhibition) normalizes the level of amygdala activation during constrained eye contact with dynamic emotional face stimuli in autism. In addition, eye-tracking data reveal that bumetanide administration increases the time spent in spontaneous eye gaze in a free-viewing mode of the same face stimuli. In keeping with clinical trials, our data support the Excitatory/Inhibitory dysfunction hypothesis in autism, and indicate that bumetanide may improve specific aspects of social processing in autism. Future double-blind placebo controlled studies with larger cohorts of participants will help clarify the mechanisms of bumetanide action in autism.

  18. Visuo-spatial orienting during active exploratory behavior: Processing of task-related and stimulus-related signals.

    PubMed

    Macaluso, Emiliano; Ogawa, Akitoshi

    2018-05-01

    Functional imaging studies have associated dorsal and ventral fronto-parietal regions with the control of visuo-spatial attention. Previous studies demonstrated that the activity of both the dorsal and the ventral attention systems can be modulated by many different factors, related both to the stimuli and the task. However, the vast majority of this work utilized stereotyped paradigms with simple and repeated stimuli. This is at odds with real-life situations, which instead involve complex combinations of different types of co-occurring signals, thus raising the question of the ecological significance of the previous findings. Here we investigated how the brain responds to task-related and stimulus-related signals using an innovative approach that involved active exploration of a virtual environment. This enabled us to study visuo-spatial orienting in conditions entailing a dynamic and coherent flow of visual signals, to some extent analogous to real life situations. The environment comprised colored/textured spheres and cubes, which allowed us to implement a standard feature-conjunction search task (task-related signals), and included one physically salient object that served to track the processing of stimulus-related signals. The imaging analyses showed that the posterior parietal cortex (PPC) activated when the participants' gaze was directed towards the salient object. By contrast, the right inferior parietal cortex was associated with the processing of the target objects and of distractors that shared the target color and shape, consistent with goal-directed template-matching operations. The study highlights the possibility of combining measures of gaze orienting and functional imaging to investigate the processing of different types of signals during active behavior in complex environments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Intentional gaze shift to neglected space: a compensatory strategy during recovery after unilateral spatial neglect.

    PubMed

    Takamura, Yusaku; Imanishi, Maho; Osaka, Madoka; Ohmatsu, Satoko; Tominaga, Takanori; Yamanaka, Kentaro; Morioka, Shu; Kawashima, Noritaka

    2016-11-01

    Unilateral spatial neglect is a common neurological syndrome following predominantly right hemispheric stroke. While most patients lack insight into their neglect behaviour and do not initiate compensatory behaviours in the early recovery phase, some patients recognize it and start to pay attention towards the neglected space. We aimed to characterize visual attention capacity in patients with unilateral spatial neglect with specific focus on cortical processes underlying compensatory gaze shift towards the neglected space during the recovery process. Based on the Behavioural Inattention Test score and presence or absence of experience of neglect in their daily life from stroke onset to the enrolment date, participants were divided into USN++ (do not compensate, n = 15), USN+ (compensate, n = 10), and right hemisphere damage groups (no neglect, n = 24). The patients participated in eye pursuit-based choice reaction tasks and were asked to pursue one of five horizontally located circular objects flashed on a computer display. The task consisted of 25 trials with 4-s intervals, and the order of highlighted objects was randomly determined. From the recorded eye tracking data, eye movement onset and gaze shift were calculated. To elucidate the cortical mechanism underlying behavioural results, electroencephalagram activities were recorded in three USN++, 13 USN+ and eight patients with right hemisphere damage. We found that while lower Behavioural Inattention Test scoring patients (USN++) showed gaze shift to non-neglected space, some higher scoring patients (USN+) showed clear leftward gaze shift at visual stimuli onset. Moreover, we found a significant correlation between Behavioural Inattention Test score and gaze shift extent in the unilateral spatial neglect group (r = -0.62, P < 0.01). Electroencephalography data clearly demonstrated that the extent of increase in theta power in the frontal cortex strongly correlated with the leftward gaze shift extent in the USN++ and USN+ groups. Our results revealed a compensatory strategy (continuous attention to the neglected space) and its neural correlates in patients with unilateral spatial neglect. In conclusion, patients with unilateral spatial neglect who recognized their own neglect behaviour intentionally focused on the neglected space as a compensatory strategy to avoid careless oversight. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Brief Report: Using a Point-of-View Camera to Measure Eye Gaze in Young Children with Autism Spectrum Disorder during Naturalistic Social Interactions--A Pilot Study

    ERIC Educational Resources Information Center

    Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.

    2017-01-01

    Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…

  1. Neurons in the human amygdala encode face identity, but not gaze direction.

    PubMed

    Mormann, Florian; Niediek, Johannes; Tudusciuc, Oana; Quesada, Carlos M; Coenen, Volker A; Elger, Christian E; Adolphs, Ralph

    2015-11-01

    The amygdala is important for face processing, and direction of eye gaze is one of the most socially salient facial signals. Recording from over 200 neurons in the amygdala of neurosurgical patients, we found robust encoding of the identity of neutral-expression faces, but not of their direction of gaze. Processing of gaze direction may rely on a predominantly cortical network rather than the amygdala.

  2. The Brainstem Switch for Gaze Shifts in Humans

    DTIC Science & Technology

    2001-10-25

    Kumar, A. N.; Leigh, R. J.; Ramat, S. (Department of Biomedical Engineering, Case...) ...omnipause neurons during gaze shifts. Using the scleral search coil technique, eye movements were measured in seven normal subjects, as they made ...voluntary, disjunctive gaze shifts comprising saccades and vergence movements. Conjugate oscillations of small amplitude and high frequency were identified

  3. Learning under your gaze: the mediating role of affective arousal between perceived direct gaze and memory performance.

    PubMed

    Helminen, Terhi M; Pasanen, Tytti P; Hietanen, Jari K

    2016-03-01

    Previous studies have shown that cognitive performance can be affected by the presence of an observer and by self-directed gaze. We investigated whether the effect of gaze direction (direct vs. downcast) on verbal memory is mediated by autonomic arousal. Male participants responded with enhanced affective arousal to both male and female storytellers' direct gaze, which, according to a path analysis, was negatively associated with performance. In parallel to this arousal-mediated effect, males' performance was also influenced by a second process that affected performance positively and appeared to be related to effort allocation on the task. The effect of this process was observed only when the storyteller was a male. The participants remembered more details from a story told by a male with a direct vs. downcast gaze. The effect of gaze direction on performance was the opposite for female storytellers, which was explained by the arousal-mediated process. Surprisingly, these results were restricted to male participants only, and no effects of gaze were observed among female participants. We also investigated whether the participants' belief of being seen or not (through an electronic window) by the storyteller influenced memory and arousal, but this manipulation had no effect on the results.

  4. Comparison of dogs and humans in visual scanning of social interaction.

    PubMed

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M; Vainio, Outi; Kujala, Miiamaaria V

    2015-09-01

    Previous studies have demonstrated similarities in gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away: the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than human social interaction images, whereas dogs gazed longer at the actors in human than dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing social interaction of non-conspecifics may be more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experiences.

  5. A model of face selection in viewing video stories

    PubMed Central

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. Here we show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621
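
    The abstract states that face saliency is modeled as a function of a face's size, novelty, head movements, and mouth movements, and that gaze is predicted to land on the most salient face at each moment. The sketch below illustrates that idea with a simple weighted sum; the linear form, the weights, and the feature scaling are assumptions for illustration, not the article's fitted model.

```python
import numpy as np

def face_saliency(size, novelty, head_motion, mouth_motion,
                  weights=(0.4, 0.2, 0.2, 0.2)):
    """Combine the four face features named in the abstract into one score.
    The weights here are placeholders, not fitted values."""
    w = np.asarray(weights)
    feats = np.asarray([size, novelty, head_motion, mouth_motion])
    return float(w @ feats)

def most_salient_face(faces):
    """Index of the face the model predicts gaze will land on in this frame
    (the 'peak' face)."""
    scores = [face_saliency(**f) for f in faces]
    return int(np.argmax(scores))

frame_faces = [
    {"size": 0.30, "novelty": 0.0, "head_motion": 0.1, "mouth_motion": 0.8},  # talking face
    {"size": 0.45, "novelty": 1.0, "head_motion": 0.0, "mouth_motion": 0.0},  # newly appeared face
]
print(most_salient_face(frame_faces))
```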

  6. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to more accurately capture individual behavior. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
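
    The methods describe predictive models that combine computer-extracted image features with radiologists' gaze metrics and opinions to flag likely diagnostic errors. The sketch below shows one way such a model could be assembled with scikit-learn; the feature set, the random-forest choice, and the synthetic data are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-region feature table: texture features from the image,
# the reading radiologist's gaze metrics, and their stated opinion, used to
# predict whether the final call on that region was a diagnostic error.
rng = np.random.default_rng(0)
n_regions = 200
texture = rng.normal(size=(n_regions, 5))          # computer-extracted image features
gaze = rng.normal(size=(n_regions, 2))             # e.g. dwell time, number of fixations
opinion = rng.integers(0, 2, size=(n_regions, 1))  # radiologist's rating (benign/suspicious)
X = np.hstack([texture, gaze, opinion])
y = rng.integers(0, 2, size=n_regions)             # 1 = diagnostic error on this region

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())   # chance-level here; real features are not random
```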

  7. Examining the influence of a spatially irrelevant working memory load on attentional allocation.

    PubMed

    McDonnell, Gerald P; Dodd, Michael D

    2013-08-01

    The present study examined the influence of holding task-relevant gaze cues in working memory during a target detection task. Gaze cues shift attention in gaze-consistent directions, even when they are irrelevant to a primary detection task. It is unclear, however, whether gaze cues need to be perceived online to elicit these effects, or how these effects may be moderated if the gaze cues are relevant to a secondary task. In Experiment 1, participants encoded a face for a subsequent memory task, after which they performed an unrelated target detection task. Critically, gaze direction was irrelevant to the target detection task, but memory for the perceived face was tested at trial conclusion. Surprisingly, participants exhibited inhibition-of-return (IOR) and not facilitation, with slower response times for the gazed-at location. In Experiment 2, presentation duration and cue-target stimulus-onset asynchrony were manipulated, and we continued to observe IOR with no early facilitation. Experiment 3 revealed facilitation but not IOR when the memory task was removed; Experiment 4 also revealed facilitation when the gaze cue memory task was replaced with arrow cues. The present experiments provide an important dissociation between perceiving cues online versus holding them in memory as it relates to attentional allocation. (c) 2013 APA, all rights reserved

  8. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.

  9. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-06-01

    To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
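
    Group differences above are expressed with Kaplan-Meier survival functions (time to gaze-to-partner) and odds ratios (verbal events with vs. without gaze-to-partner). A minimal sketch of both computations is shown below; the hand-rolled product-limit estimator and the example counts are illustrative, not the study's data.

```python
import numpy as np

def kaplan_meier(times, event_observed):
    """Product-limit estimate of S(t), the probability that no gaze-to-partner
    has yet occurred t seconds after a verbal event starts.
    times: seconds from event onset to gaze-to-partner (or to censoring)
    event_observed: 1 if gaze-to-partner occurred, 0 if the event ended first"""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    event_observed = np.asarray(event_observed)[order]
    at_risk, survival, curve = len(times), 1.0, []
    for t, d in zip(times, event_observed):
        if d:                                      # gaze-to-partner at time t
            survival *= (at_risk - 1) / at_risk
        curve.append((t, survival))
        at_risk -= 1
    return curve

def odds_ratio(hi_with, hi_without, nh_with, nh_without):
    """Odds of a verbal event being accompanied by gaze-to-partner,
    HI listeners relative to NH listeners."""
    return (hi_with / hi_without) / (nh_with / nh_without)

# Illustrative counts only (not the study's data)
print(odds_ratio(60, 40, 45, 55))                  # about 1.8, within the reported 1.2-2.1 range
print(kaplan_meier([0.4, 0.9, 1.5, 2.0], [1, 1, 0, 1])[-1])
```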

  10. Selective Visual Attention during Mirror Exposure in Anorexia and Bulimia Nervosa.

    PubMed

    Tuschen-Caffier, Brunna; Bender, Caroline; Caffier, Detlef; Klenner, Katharina; Braks, Karsten; Svaldi, Jennifer

    2015-01-01

    Cognitive theories suggest that body dissatisfaction results from the activation of maladaptive appearance schemata, which guide mental processes such as selective attention to shape and weight-related information. In line with this, the present study hypothesized that patients with anorexia nervosa (AN) and bulimia nervosa (BN) are characterized by increased visual attention for the most dissatisfying/ugly body part compared to their most satisfying/beautiful body part, while a more balanced viewing pattern was expected for controls without eating disorders (CG). Eye movements were recorded in a group of patients with AN (n = 16), BN (n = 16) and a CG (n = 16) in an ecologically valid setting, i.e., during a 3-min mirror exposure. Evidence was found that patients with AN and BN display longer and more frequent gazes towards the most dissatisfying relative to the most satisfying and towards their most ugly compared to their most beautiful body parts, whereas the CG showed a more balanced gaze pattern. The results converge with theoretical models that emphasize the role of information processing in the maintenance of body dissatisfaction. Given the etiological importance of body dissatisfaction in the development of eating disorders, future studies should focus on the modification of the reported patterns.

  11. [Case of acute ophthalmoparesis with gaze nystagmus].

    PubMed

    Ikuta, Naomi; Tada, Yukiko; Koga, Michiaki

    2012-01-01

    A 61-year-old man developed double vision subsequent to a diarrheal illness. Mixed horizontal-vertical gaze palsy in both eyes, diminution of tendon reflexes, and gaze nystagmus were noted. His horizontal gaze palsy was accompanied by gaze nystagmus in the abducent direction, indicative of a disturbance in the central nervous system. Neither limb weakness nor ataxia was noted. Serum anti-GQ1b antibody was detected. Brain magnetic resonance imaging (MRI) findings were normal. The patient was diagnosed as having acute ophthalmoparesis. The ophthalmoparesis and nystagmus gradually disappeared over 3 months. The accompanying nystagmus suggests that central nervous system disturbance may also be present in acute ophthalmoparesis.

  12. A novel video-based paradigm to study the mechanisms underlying age- and falls risk-related differences in gaze behaviour during walking.

    PubMed

    Stanley, Jennifer; Hollands, Mark

    2014-07-01

    The current study aimed to quantitatively assess differences in gaze behaviour between participants grouped on the basis of their age and measures of functional mobility during a virtual walking paradigm. The gaze behaviour of nine young adults, seven older adults with a relatively low risk of falling and seven older adults with a relatively higher risk of falling was measured while they watched five first-person perspective movies representing the viewpoint of a pedestrian walking through various environments. Participants also completed a number of cognitive tests: Stroop task, visual search, trail making task, Mini Mental Status Examination, and reaction time, visual tests (visual acuity and contrast sensitivity) and assessments of balance (Activities Balance Confidence Scale and Berg Balance Scale) to aid in the interpretation of differences in gaze behaviour. The high risk older adult group spent significantly more time fixating aspects of the travel path than the low risk and young adult groups. High risk older adults were also significantly slower in performing a number of the cognitive tasks than young adults. Correlations were conducted to compare the extent to which travel path fixation durations co-varied with scores on the tests of visual search, motor, and cognitive function. A significant positive correlation was found between the speed of response to the incongruent Stroop task and travel path fixation duration (r(21) = 0.44, p < 0.05). The results indicate that our movie-viewing paradigm can identify differences in gaze behaviour between participants grouped on the basis of their age and measures of functional mobility and that these differences are associated with cognitive decline. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  13. Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study

    PubMed Central

    Miyasike-daSilva, Veronica; McIlroy, William E.

    2012-01-01

    Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297

  14. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    PubMed

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  15. Anxiety and sensitivity to gaze direction in emotionally expressive faces.

    PubMed

    Fox, Elaine; Mathews, Andrew; Calder, Andrew J; Yiend, Jenny

    2007-08-01

    This study investigated the role of neutral, happy, fearful, and angry facial expressions in enhancing orienting to the direction of eye gaze. Photographs of faces with either direct or averted gaze were presented. A target letter (T or L) appeared unpredictably to the left or the right of the face, either 300 ms or 700 ms after gaze direction changed. Response times were faster in congruent conditions (i.e., when the eyes gazed toward the target) relative to incongruent conditions (when the eyes gazed away from the target letter). Facial expression did influence reaction times, but these effects were qualified by individual differences in self-reported anxiety. High trait-anxious participants showed an enhanced orienting to the eye gaze of faces with fearful expressions relative to all other expressions. In contrast, when the eyes stared straight ahead, trait anxiety was associated with slower responding when the facial expressions depicted anger. Thus, in anxiety-prone people attention is more likely to be held by an expression of anger, whereas attention is guided more potently by fearful facial expressions. ((c) 2007 APA, all rights reserved).

  16. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements

    PubMed Central

    Kessler, Luise; Schweinberger, Stefan R.

    2016-01-01

    A speaker’s gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., “sniffer dogs cannot smell the difference between identical twins”). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze. PMID:27643789

  17. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection

    PubMed Central

    Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole

    2016-01-01

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
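
    The fusion technique described above combines probabilistic models of each input to estimate the user's intended target. A minimal sketch of that idea is shown below, assuming each channel yields a per-target probability and that the two channels can be treated as conditionally independent; the normalization, the example probabilities, and the threshold-based selection note are assumptions, not the authors' exact formulation.

```python
import numpy as np

def fuse_gaze_and_bci(p_gaze, p_bci):
    """Combine per-target probabilities from two input channels.

    p_gaze: P(target_i | gaze evidence), e.g. from gaze-to-target distance
    p_bci:  P(target_i | BCI evidence),  e.g. from a classifier output
    Returns the normalized product, a naive-Bayes style fusion under an
    assumption of conditional independence between the channels."""
    fused = np.asarray(p_gaze, dtype=float) * np.asarray(p_bci, dtype=float)
    return fused / fused.sum()

# Four on-screen targets: gaze strongly favours target 1, BCI weakly favours target 2
p_gaze = [0.05, 0.70, 0.20, 0.05]
p_bci = [0.20, 0.25, 0.40, 0.15]
fused = fuse_gaze_and_bci(p_gaze, p_bci)
print(fused.argmax(), fused.round(3))   # selection could fire once the max exceeds a threshold
```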

  18. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed Central

    Galvez-Ruiz, Alberto

    2015-01-01

    Purpose: The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). Methods: The clinical characteristics and management are presented of nine patients with a history of CET secondary to motor vehicle accidents with associated vertical gaze paresis. Results: Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region, corresponding to the territory of the artery of Percheron, in all nine patients; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of the upward and downward gaze to a slight limitation of upward gaze. Discussion: Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of injury in CET. PMID:26180479

  19. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed

    Galvez-Ruiz, Alberto

    2015-01-01

    The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). The clinical characteristics and management are presented of nine patients with a history of CET secondary to motor vehicle accidents with associated vertical gaze paresis. Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region, corresponding to the territory of the artery of Percheron, in all nine patients; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of the upward and downward gaze to a slight limitation of upward gaze. Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of injury in CET.

  20. Age differences in emotion regulation effort: Pupil response distinguishes reappraisal and distraction for older but not younger adults.

    PubMed

    Martins, Bruna; Florjanczyk, Jan; Jackson, Nicholas J; Gatz, Margaret; Mather, Mara

    2018-03-01

    In previous research, older adults show greater emotional benefits from distracting themselves than from reappraising an event when strategically regulating emotion. Older adults also demonstrate an attentional preference to avoid, while younger adults show a bias toward approaching negative stimuli. This suggests a possible age-related differentiation of cognitive effort across approach and avoidance of negative stimuli during emotion regulation. In this study, we tracked cognitive effort via pupil dilation during the use of distraction (avoidance) and reappraisal (approach) strategies across age. Forty-eight younger adults (M = 20.94, SD = 1.78; 19 men) and 48 older adults (M = 68.82, SD = 5.40; 15 men) viewed a slideshow of negative images and were instructed to distract, reappraise, or passively view each image. Older adults showed greater pupil dilation during reappraisal than distraction, but younger adults displayed no difference between conditions, an effect that survived when controlling for gaze patterns. Gaze findings revealed that older adults looked less within images during active emotion regulation compared with passive viewing (no difference between distraction and reappraisal), and younger adults showed no difference across strategies. Younger adults gazed less within the most emotional image areas during distraction, but this did not significantly contribute to pupil response. Our findings support the idea that distraction is less cognitively effortful than reinterpreting negative information in later life. These findings could be explained by older adults' motivational bias to disengage from negative information because of the age-related positivity effect, or compensation for decreased working memory resources across the life span. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP).

    PubMed

    Acqualagna, Laura; Blankertz, Benjamin

    2013-05-01

    A Brain Computer Interface (BCI) speller is a communication device, which can be used by patients suffering from neurodegenerative diseases to select symbols in a computer application. For patients unable to overtly fixate the target symbol, it is crucial to develop a speller independent of gaze shifts. In the present online study, we investigated rapid serial visual presentation (RSVP) as a paradigm for mental typewriting. We investigated the RSVP speller in three conditions, regarding the Stimulus Onset Asynchrony (SOA) and the use of color features. A vocabulary of 30 symbols was presented one-by-one in a pseudo random sequence at the same location of the display. All twelve participants were able to successfully operate the RSVP speller. The results show a mean online spelling rate of 1.43 symb/min and a mean symbol selection accuracy of 94.8% in the best condition. We conclude that the RSVP is a promising paradigm for BCI spelling and its performance is competitive with the fastest gaze-independent spellers in the literature. The RSVP speller does not require gaze shifts towards different target locations and can be operated by non-spatial visual attention; it can therefore be considered a valid paradigm for applications with patients who have impaired oculo-motor control. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
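
    In the RSVP paradigm described above, a 30-symbol vocabulary is presented one symbol at a time at a single display location with a fixed stimulus onset asynchrony (SOA), in pseudo-random order. The sketch below generates and "presents" one such sequence; the symbol set, the SOA value, and the use of print in place of screen drawing and EEG triggering are assumptions for illustration.

```python
import random
import time

SYMBOLS = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ_.,<")   # a 30-symbol vocabulary (assumed set)

def rsvp_sequence(soa_s=0.25, seed=None):
    """Present all symbols one-by-one at a single location with a fixed SOA.
    'Presentation' is just a print here; in a real speller each onset would be
    drawn on screen and time-locked to an EEG marker."""
    rng = random.Random(seed)
    order = SYMBOLS[:]
    rng.shuffle(order)                 # pseudo-random order, each symbol shown once
    for symbol in order:
        print(symbol, flush=True)      # replace with stimulus drawing + EEG trigger
        time.sleep(soa_s)
    return order

rsvp_sequence(soa_s=0.25, seed=1)
```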

  2. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    PubMed Central

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-01-01

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift of actual pixels to target pixels. This method consisted of a sequence of five geometry processes: translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked the gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable in building communication between humans and machines using EOGs. PMID:24919013
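
    The proposed correction is a sequence of five geometric operations, translation-1, rotation, translation-2, shear and dilatation, applied to the pixel coordinates obtained from the EOG signals. The sketch below composes those operations as homogeneous 3x3 matrices; the parameter values and the helper names are placeholders, since in the paper the transform is derived from the recorded target points for each operator.

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta_rad):
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def shear(shx, shy):
    return np.array([[1, shx, 0], [shy, 1, 0], [0, 0, 1]], dtype=float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def correct_gaze_pixels(points_xy, t1, theta, t2, sh, scale):
    """Map 'actual' pixel coordinates (from the EOG-to-pixel conversion) towards
    target pixel coordinates with the five-step sequence named in the abstract:
    translation-1, then rotation, translation-2, shear, and dilatation."""
    A = (dilatation(*scale) @ shear(*sh) @ translation(*t2)
         @ rotation(theta) @ translation(*t1))
    pts = np.c_[points_xy, np.ones(len(points_xy))]      # homogeneous coordinates
    return (A @ pts.T).T[:, :2]

# Illustrative parameter values only
raw = np.array([[102.0, 48.0], [210.0, 55.0]])
print(correct_gaze_pixels(raw, t1=(-5, 3), theta=0.02, t2=(4, -2),
                          sh=(0.01, 0.0), scale=(1.05, 0.98)))
```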

  3. Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study

    PubMed Central

    Iorizzo, Dana B.; Riley, Meghan E.; Hayhoe, Mary; Huxlin, Krystel R.

    2011-01-01

    The present experiments aimed to characterize the visual performance of subjects with long-standing, unilateral cortical blindness when walking in a naturalistic, virtual environment. Under static, seated testing conditions, cortically blind subjects are known to exhibit compensatory eye movement strategies. However, they still complain of significant impairment in visual detection during navigation. To assess whether this is due to a change in compensatory eye movement strategy between sitting and walking, we measured eye and head movements in subjects asked to detect peripherally-presented, moving basketballs. When seated, cortically blind subjects detected ~80% of balls, while controls detected almost all balls. Seated blind subjects did not make larger head movements than controls, but they consistently biased their fixation distribution towards their blind hemifield. When walking, head movements were similar in the two groups, but the fixation bias decreased to the point that fixation distribution in cortically blind subjects became similar to that in controls - with one major exception: at the time of basketball appearance, walking controls looked primarily at the far ground, in upper quadrants of the virtual field of view; cortically blind subjects looked significantly more at the near ground, in lower quadrants of the virtual field. Cortically blind subjects detected only 58% of the balls when walking while controls detected ~90%. Thus, the adaptive gaze strategies adopted by cortically blind individuals as a compensation for their visual loss are strongest and most effective when seated and stationary. Walking significantly alters these gaze strategies in a way that seems to favor walking performance, but impairs peripheral target detection. It is possible that this impairment underlies the experienced difficulty of those with cortical blindness when navigating in real life. PMID:21414339

  4. Differential impact of partial cortical blindness on gaze strategies when sitting and walking - an immersive virtual reality study.

    PubMed

    Iorizzo, Dana B; Riley, Meghan E; Hayhoe, Mary; Huxlin, Krystel R

    2011-05-25

    The present experiments aimed to characterize the visual performance of subjects with long-standing, unilateral cortical blindness when walking in a naturalistic, virtual environment. Under static, seated testing conditions, cortically blind subjects are known to exhibit compensatory eye movement strategies. However, they still complain of significant impairment in visual detection during navigation. To assess whether this is due to a change in compensatory eye movement strategy between sitting and walking, we measured eye and head movements in subjects asked to detect peripherally-presented, moving basketballs. When seated, cortically blind subjects detected ∼80% of balls, while controls detected almost all balls. Seated blind subjects did not make larger head movements than controls, but they consistently biased their fixation distribution towards their blind hemifield. When walking, head movements were similar in the two groups, but the fixation bias decreased to the point that fixation distribution in cortically blind subjects became similar to that in controls - with one major exception: at the time of basketball appearance, walking controls looked primarily at the far ground, in upper quadrants of the virtual field of view; cortically blind subjects looked significantly more at the near ground, in lower quadrants of the virtual field. Cortically blind subjects detected only 58% of the balls when walking while controls detected ∼90%. Thus, the adaptive gaze strategies adopted by cortically blind individuals as a compensation for their visual loss are strongest and most effective when seated and stationary. Walking significantly alters these gaze strategies in a way that seems to favor walking performance, but impairs peripheral target detection. It is possible that this impairment underlies the experienced difficulty of those with cortical blindness when navigating in real life. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. A Formal Investigation of Human Spatial Control Skills: Mathematical Formalization, Skill Development, and Skill Assessment

    NASA Astrophysics Data System (ADS)

    Li, Bin

    Spatial control behaviors account for a large proportion of human everyday activities, from normal daily tasks, such as reaching for objects, to specialized tasks, such as driving, surgery, or operating equipment. These behaviors involve intensive interactions among internal processes (i.e., cognitive, perceptual, and motor control) and with the physical world. This dissertation builds on a concept of interaction pattern and a hierarchical functional model. An interaction pattern represents a type of behavioral synergy by which humans coordinate cognitive, perceptual, and motor control processes. It contributes to the construction of the hierarchical functional model, which delineates humans' spatial control behaviors as the coordination of three functional subsystems: planning, guidance, and tracking/pursuit. This dissertation formalizes and validates these two theories and extends them to the investigation of human spatial control skills, encompassing development and assessment. Specifically, the dissertation first presents an overview of studies of human spatial control skills, covering definition, characteristics, development, and assessment, to provide theoretical evidence for the concept of interaction pattern and the hierarchical functional model. Next, the human experiments for collecting motion and gaze data, and the techniques used to register and classify gaze data, are described. The dissertation then elaborates and mathematically formalizes the hierarchical functional model and the concept of interaction pattern. These theories then enable the construction of a succinct simulation model that can reproduce a range of human performance with a minimal set of hypotheses. This validates the hierarchical functional model as a normative framework for interpreting human spatial control behaviors. The dissertation then investigates human skill development and captures the emergence of interaction patterns. The final part of the dissertation applies the hierarchical functional model to skill assessment and introduces techniques to capture interaction patterns both from the top down, using their geometric features, and from the bottom up, using their dynamical characteristics. The validity and generality of the skill assessment are illustrated using two experiments: remote-control flight and laparoscopic surgical training.

  6. Auditory, Vestibular and Cognitive Effects due to Repeated Blast Exposure on the Warfighter

    DTIC Science & Technology

    2012-10-01

    Gaze Horizontal (Left and Right) Description: The primary purpose of the Gaze Horizontal subtest was to detect nystagmus when the head is fixed and the eyes are gazing off center from the primary (straight ahead) gaze position. This test is designed... physiological target area and examiner instructions for testing): Spontaneous Nystagmus, Smooth Harmonic Acceleration (.01, .08, .32, .64, 1.75

  7. An Experimental Evaluation of a Field Sobriety Test Battery in the Marine Environment

    DTIC Science & Technology

    1990-06-01

    Turn, Horizontal Gaze Nystagmus, Finger to Nose, Finger Count, and Tracing. Of these six tests, Walk and Turn, One-Leg Stand, and Horizontal Gaze ...served as the lead officer, administering the tests while the other two officers observed. All officers administered the Horizontal Gaze Nystagmus ...administered the Horizontal Gaze Nystagmus (HGN) individually. After giving a test or pair of tests (as designated) each officer on the team gave a

  8. Radiologically defining horizontal gaze using EOS imaging-a prospective study of healthy subjects and a retrospective audit.

    PubMed

    Hey, Hwee Weng Dennis; Tan, Kimberly-Anne; Ho, Vivienne Chien-Lin; Azhar, Syifa Bte; Lim, Joel-Louis; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2018-06-01

    As sagittal alignment of the cervical spine is important for maintaining horizontal gaze, it is important to determine the former for surgical correction. However, horizontal gaze remains poorly-defined from a radiological point of view. The objective of this study was to establish radiological criteria to define horizontal gaze. This study was conducted at a tertiary health-care institution over a 1-month period. A prospective cohort of healthy patients was used to determine the best radiological criteria for defining horizontal gaze. A retrospective cohort of patients without rigid spinal deformities was used to audit the incidence of horizontal gaze. Two categories of radiological parameters for determining horizontal gaze were tested: (1) the vertical offset distances of key identifiable structures from the horizontal gaze axis and (2) imaginary lines convergent with the horizontal gaze axis. Sixty-seven healthy subjects underwent whole-body EOS radiographs taken in a directed standing posture. Horizontal gaze was radiologically defined using each parameter, as represented by their means, 95% confidence intervals (CIs), and associated 2 standard deviations (SDs). Subsequently, applying the radiological criteria, we conducted a retrospective audit of such radiographs (before the implementation of a strict radioimaging standardization). The mean age of our prospective cohort was 46.8 years, whereas that of our retrospective cohort was 37.2 years. Gender was evenly distributed across both cohorts. The four parameters with the lowest 95% CI and 2 SD were the distance offsets of the midpoint of the hard palate (A) and the base of the sella turcica (B), the horizontal convergents formed by the tangential line to the hard palate (C), and the line joining the center of the orbital orifice with the internal occipital protuberance (D). In the prospective cohort, good sensitivity (>98%) was attained when two or more parameters were used. Audit using Criterion B+D yielded compliance rates of 76.7%, a figure much closer to that of A+B+C+D (74.8%). From a practical viewpoint, Criterion B+D were most suitable for clinical use and could be simplified to the "3-6-12 rule" as a form of cursory assessment. Verbal instructions in the absence of stringent postural checks only ensured that ~75% of subjects achieved horizontal gaze. Fulfillment of Criterion B+D is sufficient to evaluate for horizontal gaze. Further criteria can be added to increase sensitivity. Verbal instructions alone yield high rates of inaccuracy when attempting to image patients in horizontal gaze. Apart from improving methods for obtaining radiographs, a radiological definition of horizontal gaze should be routinely applied for better evaluation of sagittal spinal alignment. Copyright © 2017 Elsevier Inc. All rights reserved.
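
    The radiological definition above rests on parameter-specific reference ranges (a mean with an associated 2 SD band) and, in the audit, accepts horizontal gaze when two or more parameters, in practice Criterion B+D, fall within range. The sketch below encodes that decision rule; the parameter keys and the placeholder mean/SD values are assumptions, since the abstract does not list the numerical reference ranges.

```python
def meets_horizontal_gaze(measurements, criteria):
    """Count how many radiological parameters fall within mean +/- 2 SD and
    accept horizontal gaze when at least two do.

    measurements: values read off the lateral radiograph, keyed by parameter
                  (e.g. 'B' = offset of the sella turcica base, 'D' = convergent
                  of the orbital-centre to internal-occipital-protuberance line)
    criteria:     {parameter: (mean, sd)} from a reference cohort"""
    within = [abs(measurements[p] - mean) <= 2 * sd
              for p, (mean, sd) in criteria.items() if p in measurements]
    return sum(within) >= 2

criteria = {"B": (0.0, 6.0), "D": (0.0, 12.0)}      # placeholder mean/SD pairs, not the paper's values
print(meets_horizontal_gaze({"B": 4.5, "D": -10.0}, criteria))   # True
```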

  9. A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus)

    PubMed Central

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  10. Surgical planning and innervation in pontine gaze palsy with ipsilateral esotropia.

    PubMed

    Somer, Deniz; Cinar, Fatma Gul; Kaderli, Ahmet; Ornek, Firdevs

    2016-10-01

    To discuss surgical intervention strategies among patients with horizontal gaze palsy with concurrent esotropia. Five consecutive patients with dorsal pontine lesions are presented. Each patient had horizontal gaze palsy with symptomatic diplopia as a consequence of esotropia in primary gaze and an anomalous head turn to attain single binocular vision. Clinical findings in the first 2 patients led us to presume there was complete loss of rectus muscle function from rectus muscle palsy. Based on this assumption, medial rectus recessions with simultaneous partial vertical muscle transposition (VRT) on the ipsilateral eye of the gaze palsy and recession-resection surgery on the contralateral eye were performed, resulting in significant motility limitation. Sequential recession-resection surgery without simultaneous VRT on the 3rd patient created an unexpected motility improvement to the side of gaze palsy, an observation differentiating rectus muscle palsy from paresis. Recession combined with VRT approach in the esotropic eye was abandoned on subsequent patients. Simultaneous recession-resection surgery without VRT in the next 2 patients resulted in alleviation of head postures, resolution of esotropia, and also substantial motility improvements to the ipsilateral hemifield of gaze palsy without limitations in adduction and vertical deviations. Ocular misalignment and abnormal head posture as a result of conjugate gaze palsy can be successfully treated by basic recession-resection surgery, with the advantage of increasing versions to the ipsilateral side of the gaze palsy. Improved motility after surgery presumably represents paresis, not "paralysis," with residual innervation in rectus muscles. Copyright © 2016 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  11. Combined Effects of Gaze and Orientation of Faces on Person Judgments in Social Situations

    PubMed Central

    Kaisler, Raphaela E.; Leder, Helmut

    2017-01-01

    In social situations, faces of others can vary simultaneously in gaze and orientation. How these variations affect different kinds of social judgments, such as attractiveness or trustworthiness, is only partly understood. Therefore, we studied how different gaze directions, head angles, but also levels of facial attractiveness affect perceived attractiveness and trustworthiness. We always presented pairs of faces – either two average attractive faces or a highly attractive together with a less attractive face. We also varied gaze and head angles, showing faces in three different orientations: front, three-quarter, and profile view. In Experiment 1 (N = 62), participants rated averted gaze in three-quarter views as more attractive than in front and profile views, and evaluated faces with direct gaze in front views as most trustworthy. Moreover, faces that were being looked at by another face were seen as more attractive. Independent of head orientation or gaze direction, highly attractive faces were rated as more attractive and more trustworthy. In Experiment 2 (N = 54), we found that the three-quarter advantage vanished when the second face was blurred during judgments, which demonstrates the importance of the presence of another person, as in a triadic social situation, as well as the importance of their visible gaze. The findings emphasize that social evaluations such as trustworthiness are unaffected by the esthetic advantage of three-quarter views of two average attractive faces, and that the effect of a face's attractiveness is more powerful than the more subtle effects of gaze and orientation. PMID:28275364

  12. Multisensory Control of Stabilization Reflexes

    DTIC Science & Technology

    2012-08-22

    Dr Simon Schultz (Neural Coding), Dr Manos Drakakis (Low-power VLSI technology), and Dr Reiko Tanaka (Compound Control). To study the functional...Krapp H.G., and Schultz S.R.: Spike-triggered independent component analysis: application to a fly motion-sensitive neuron. Visual Neuroscience, 8...Tanaka, RI.: Characterization of insect gaze control systems. 18th World Congress of International Federation of Automated Control (IFAC), Milan

  13. Upward gaze-evoked nystagmus with organoarsenic poisoning.

    PubMed

    Nakamagoe, Kiyotaka; Ishii, Kazuhiro; Tamaoka, Akira; Shoji, Shin'ichi

    2006-01-10

    The authors report assessment of abnormal ocular movements in three patients after organoarsenic poisoning from diphenylarsinic acid. The characteristic and principal sign is upward gaze-evoked nystagmus. Moreover, vertical gaze holding impairment was shown by electronystagmography on direct current recording.

  14. Gaze Cueing of Attention

    PubMed Central

    Frischen, Alexandra; Bayliss, Andrew P.; Tipper, Steven P.

    2007-01-01

    During social interactions, people’s eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience. PMID:17592962

  15. Social evolution. Oxytocin-gaze positive loop and the coevolution of human-dog bonds.

    PubMed

    Nagasawa, Miho; Mitsui, Shouhei; En, Shiori; Ohtani, Nobuyo; Ohta, Mitsuaki; Sakuma, Yasuo; Onaka, Tatsushi; Mogi, Kazutaka; Kikusui, Takefumi

    2015-04-17

    Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.

  16. Gaze failure, drifting eye movements, and centripetal nystagmus in cerebellar disease.

    PubMed Central

    Leech, J; Gresty, M; Hess, K; Rudge, P

    1977-01-01

    Three abnormalities of eye movement in man are described which are indicative of cerebellar system disorder, namely, centripetally beating nystagmus, failure to maintain lateral gaze either in darkness or with eye closure, and slow drifting movements of the eyes in the absence of fixation. Similar eye movement signs follow cerebellectomy in the primate and the cat. These abnormalities of eye movement, together with other signs of cerebellar disease, such as rebound, alternating, and gaze-paretic nystagmus, are explained by the hypothesis that the cerebellum helps to maintain lateral gaze and that brain stem mechanisms which monitor gaze position generate compensatory biases in the absence of normal cerebellar function. PMID:603785

  17. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eyetracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eyetracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by repeatedly reading out from the ROI that contains the cornea and pupil (but not from the rest of the image). One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen. The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate.
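
    The first algorithm above scans horizontal row slices for the dark pupil region, takes the end coordinates of the dark run on each valid slice, and combines slices into a centroid; the second maps centroid coordinates directly to a gaze point on the display. The sketch below illustrates both steps on a synthetic image; the threshold, the minimum run length, the linear mapping form, and the coefficients are assumptions for illustration, not the NASA implementation.

```python
import numpy as np

def pupil_centroid(gray, dark_threshold=50, min_run=5):
    """Scan horizontal row slices for a contiguous dark run (the pupil) and
    combine the per-slice run midpoints into a width-weighted centroid."""
    xs, ys, widths = [], [], []
    for y, row in enumerate(gray):
        dark = np.flatnonzero(row < dark_threshold)
        if dark.size >= min_run and dark[-1] - dark[0] == dark.size - 1:
            xs.append((dark[0] + dark[-1]) / 2.0)    # midpoint of the run on this slice
            ys.append(y)
            widths.append(dark.size)
    if not xs:
        return None
    w = np.asarray(widths, dtype=float)
    return float(np.average(xs, weights=w)), float(np.average(ys, weights=w))

def gaze_point(cx, cy, coeffs):
    """Map centroid coordinates (not centroid differences) to screen coordinates
    with a simple linear calibration; coeffs would come from a calibration run."""
    ax, bx, x0, ay, by, y0 = coeffs
    return ax * cx + bx * cy + x0, ay * cx + by * cy + y0

img = np.full((120, 160), 200, dtype=np.uint8)
img[40:70, 60:90] = 20                               # synthetic dark "pupil"
centroid = pupil_centroid(img)
print(centroid)                                      # approximately (74.5, 54.5)
print(gaze_point(*centroid, coeffs=(8.0, 0.0, -200.0, 0.0, 6.0, -120.0)))
```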

  18. Gaze direction differentially affects avoidance tendencies to happy and angry faces in socially anxious individuals.

    PubMed

    Roelofs, Karin; Putman, Peter; Schouten, Sonja; Lange, Wolf-Gero; Volman, Inge; Rinck, Mike

    2010-04-01

    Increasing evidence indicates that eye gaze direction affects the processing of emotional faces in anxious individuals. However, the effects of eye gaze direction on the behavioral responses elicited by emotional faces, such as avoidance behavior, remain largely unexplored. We administered an Approach-Avoidance Task (AAT) in high (HSA) and low socially anxious (LSA) individuals. All participants responded to photographs of angry, happy and neutral faces (presented with direct and averted gaze), by either pushing a joystick away from them (avoidance) or pulling it towards them (approach). Compared to LSA, HSA were faster in avoiding than approaching angry faces. Most crucially, this avoidance tendency was only present when the perceived anger was directed towards the subject (direct gaze) and not when the gaze of the face-stimulus was averted. In contrast, HSA individuals tended to avoid happy faces irrespectively of gaze direction. Neutral faces elicited no approach-avoidance tendencies. Thus avoidance of angry faces in social anxiety as measured by AA-tasks reflects avoidance of subject-directed anger and not of negative stimuli in general. In addition, although both anger and joy are considered to reflect approach-related emotions, gaze direction did not affect HSA's avoidance of happy faces, suggesting differential mechanisms affecting responses to happy and angry faces in social anxiety. 2009 Elsevier Ltd. All rights reserved.

  19. Temporal dynamics underlying the modulation of social status on social attention.

    PubMed

    Dalmaso, Mario; Galfano, Giovanni; Coricelli, Carol; Castelli, Luigi

    2014-01-01

    Fixating someone who suddenly moves their eyes is known to trigger a corresponding shift of attention in the observer. This phenomenon, known as the gaze-cueing effect, can be modulated as a function of the social status of the individual depicted in the cueing face. Here, in two experiments, we investigated the temporal dynamics underlying this modulation. To this end, a gaze-cueing paradigm was implemented in which centrally-placed faces depicting high- and low-status individuals suddenly shifted the eyes towards a location either spatially congruent or incongruent with that occupied by a subsequent target stimulus. Social status was manipulated by presenting fictive curricula vitae before the experimental phase. In Experiment 1, in which two temporal intervals (50 ms vs. 900 ms) occurred between the direct-gaze face and the averted-gaze face onsets, a stronger gaze-cueing effect was observed in response to high-status faces than to low-status faces, irrespective of the time participants were allowed for extracting social information. In Experiment 2, in which two temporal intervals (200 ms vs. 1000 ms) occurred between the averted-gaze face and target onset, a stronger gaze-cueing effect for high-status faces was observed at the shorter interval only. Taken together, these results suggest that information regarding social status is extracted from faces rapidly (Experiment 1), and that the tendency to selectively attend to the locations gazed at by high-status individuals may decay with time (Experiment 2).

  20. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    PubMed

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history, the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war, German technology was combined with the British heuristic in America to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take the best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
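
    The heuristic described above steers by monitoring a single input, deviation from a constant angle of approach, and turning so that the bearing to the target stops drifting. The sketch below simulates a pursuer using that rule against a moving target; the gain, time step, geometry, and interpretation of "small closest approach" as interception are illustrative assumptions, not a reconstruction of the RAF or Sidewinder implementations.

```python
import math

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def pursue_constant_bearing(pursuer, target, target_vel, speed,
                            gain=4.0, dt=0.05, steps=400):
    """Pursuer turns in proportion to the drift of the absolute bearing to the
    target, driving that bearing towards a constant value (the single input the
    gaze heuristic monitors). Returns the closest approach distance; a small
    value indicates interception."""
    px, py = pursuer
    tx, ty = target
    heading = math.atan2(ty - py, tx - px)        # start roughly facing the target
    prev_bearing = heading
    min_d = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        tx += target_vel[0] * dt                  # target moves on a straight path
        ty += target_vel[1] * dt
        bearing = math.atan2(ty - py, tx - px)
        drift = wrap(bearing - prev_bearing)      # how much the gaze angle moved
        heading = wrap(heading + gain * drift)    # turn so the angle stops drifting
        prev_bearing = bearing
        px += speed * math.cos(heading) * dt
        py += speed * math.sin(heading) * dt
        min_d = min(min_d, math.hypot(tx - px, ty - py))
    return min_d

print(pursue_constant_bearing(pursuer=(0.0, 0.0), target=(30.0, 20.0),
                              target_vel=(-2.0, 0.0), speed=6.0))
```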
