Sample records for active gaze control

  1. Contribution of the cerebellar flocculus to gaze control during active head movements

    NASA Technical Reports Server (NTRS)

    Belton, T.; McCrea, R. A.; Peterson, B. W. (Principal Investigator)

    1999-01-01

    The flocculus and ventral paraflocculus are adjacent regions of the cerebellar cortex that are essential for controlling smooth pursuit eye movements and for altering the performance of the vestibulo-ocular reflex (VOR). The question addressed in this study is whether these regions of the cerebellum are more globally involved in controlling gaze, regardless of whether eye or active head movements are used to pursue moving visual targets. Single-unit recordings were obtained from Purkinje (Pk) cells in the floccular region of squirrel monkeys that were trained to fixate and pursue small visual targets. Cell firing rate was recorded during smooth pursuit eye movements, cancellation of the VOR, combined eye-head pursuit, and spontaneous gaze shifts in the absence of targets. Pk cells were found to be much less sensitive to gaze velocity during combined eye-head pursuit than during ocular pursuit. They were not sensitive to gaze or head velocity during gaze saccades. Temporary inactivation of the floccular region by muscimol injection compromised ocular pursuit but had little effect on the ability of monkeys to pursue visual targets with head movements or to cancel the VOR during active head movements. Thus the signals produced by Pk cells in the floccular region are necessary for controlling smooth pursuit eye movements but not for coordinating gaze during active head movements. The results imply that individual functional modules in the cerebellar cortex are less involved in the global organization and coordination of movements than in the parametric control of movements produced by a specific part of the body.

  2. Getting a Grip on Social Gaze: Control over Others' Gaze Helps Gaze Detection in High-Functioning Autism

    ERIC Educational Resources Information Center

    Dratsch, Thomas; Schwartz, Caroline; Yanev, Kliment; Schilbach, Leonhard; Vogeley, Kai; Bente, Gary

    2013-01-01

    We investigated the influence of control over a social stimulus on the ability to detect direct gaze in high-functioning autism (HFA). In a pilot study, 19 participants with and 19 without HFA were compared on a gaze detection and a gaze setting task. Participants with HFA were less accurate in detecting direct gaze in the detection task, but did…

  3. Seeing direct and averted gaze activates the approach-avoidance motivational brain systems.

    PubMed

    Hietanen, Jari K; Leppänen, Jukka M; Peltola, Mikko J; Linna-Aho, Kati; Ruuhiala, Heidi J

    2008-01-01

    Gaze direction is known to be an important factor in regulating social interaction. Recent evidence suggests that direct and averted gaze can signal the sender's motivational tendencies of approach and avoidance, respectively. We aimed at determining whether seeing another person's direct vs. averted gaze has an influence on the observer's neural approach-avoidance responses. We also examined whether it would make a difference if the participants were looking at the face of a real person or a picture. Measurements of hemispheric asymmetry in the frontal electroencephalographic activity indicated that another person's direct gaze elicited a relative left-sided frontal EEG activation (indicative of a tendency to approach), whereas averted gaze activated right-sided asymmetry (indicative of avoidance). Skin conductance responses were larger to faces than to control objects and to direct relative to averted gaze, indicating that faces, in general, and faces with direct gaze, in particular, elicited more intense autonomic activation and strength of the motivational tendencies than did control stimuli. Gaze direction also influenced subjective ratings of emotional arousal and valence. However, all these effects were observed only when participants were facing a real person, not when looking at a picture of a face. This finding was suggested to be due to the motivational responses to gaze direction being activated in the context of enhanced self-awareness by the presence of another person. The present results, thus, provide direct evidence that eye contact and gaze aversion between two persons influence the neural mechanisms regulating basic motivational-emotional responses and differentially activate the motivational approach-avoidance brain systems.

  4. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can grasp the traffic situation earlier. To image distant objects clearly, the camera's focal length must be long; however, a long focal length leaves an on-vehicle camera with too narrow a field of view to cover the traffic scene. Therefore, to obtain the necessary images from a long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera then captured a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which interferes with driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so that the system does not disturb driving. The proposed system consists of a gaze detector and an active telephoto camera with a controllable shooting direction. To avoid hindering driving, we adopt a non-wearable gaze detection method: the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners and can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images of the region straight ahead at which the subject is gazing.

  5. Brain stem omnipause neurons and the control of combined eye-head gaze saccades in the alert cat.

    PubMed

    Paré, M; Guitton, D

    1998-06-01

    When the head is unrestrained, rapid displacements of the visual axis-gaze shifts (eye-re-space)-are made by coordinated movements of the eyes (eye-re-head) and head (head-re-space). To address the problem of the neural control of gaze shifts, we studied and contrasted the discharges of omnipause neurons (OPNs) during a variety of combined eye-head gaze shifts and head-fixed eye saccades executed by alert cats. OPNs discharged tonically during intersaccadic intervals and at a reduced level during slow perisaccadic gaze movements sometimes accompanying saccades. Their activity ceased for the duration of the saccadic gaze shifts the animal executed, either by head-fixed eye saccades alone or by combined eye-head movements. This was true for all types of gaze shifts studied: active movements to visual targets; passive movements induced by whole-body rotation or by head rotation about stationary body; and electrically evoked movements by stimulation of the caudal part of the superior colliculus (SC), a central structure for gaze control. For combined eye-head gaze shifts, the OPN pause was therefore not correlated to the eye-in-head trajectory. For instance, in active gaze movements, the end of the pause was better correlated with the gaze end than with either the eye saccade end or the time of eye counterrotation. The hypothesis that cat OPNs participate in controlling gaze shifts is supported by these results, and also by the observation that the movements of both the eyes and the head were transiently interrupted by stimulation of OPNs during gaze shifts. However, we found that the OPN pause could be dissociated from the gaze-motor-error signal producing the gaze shift. First, OPNs resumed discharging when perturbation of head motion briefly interrupted a gaze shift before its intended amplitude was attained. Second, stimulation of caudal SC sites in head-free cat elicited large head-free gaze shifts consistent with the creation of a large gaze-motor-error signal

  6. Hierarchical control of two-dimensional gaze saccades

    PubMed Central

    Optican, Lance M.; Blohm, Gunnar; Lefèvre, Philippe

    2014-01-01

    Coordinating the movements of different body parts is a challenging process for the central nervous system because of several problems. Four of these main difficulties are: first, moving one part can move others; second, the parts can have different dynamics; third, some parts can have different motor goals; and fourth, some parts may be perturbed by outside forces. Here, we propose a novel approach for the control of linked systems with feedback loops for each part. The proximal parts have separate goals, but critically the most distal part has only the common goal. We apply this new control policy to eye-head coordination in two dimensions, specifically head-unrestrained gaze saccades. Paradoxically, the hierarchical structure has controllers for the gaze and the head, but not for the eye (the most distal part). Our simulations demonstrate that the proposed control structure reproduces much of the published empirical data about gaze movements, e.g., it compensates for perturbations, accurately reaches goals for gaze and head from arbitrary initial positions, simulates the nine relationships of the head-unrestrained main sequence, and reproduces observations from lesion and single-unit recording experiments. We conclude by showing how our model can be easily extended to control structures with more linked segments, such as the control of coordinated eye on head on trunk movements. PMID:24062206

  7. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments, in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the effectiveness of the gaze-contingent system was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction. © The Author(s) 2015.

  8. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
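    The abstract above describes a proportional derivative (PD) control scheme on the unit sphere whose gain is designed via an LMI; that gain design is not reproduced here. The following is a minimal sketch of a depth-free PD gaze-steering update on the unit sphere. The gains, time step, and rotation bookkeeping are illustrative assumptions, not the paper's method.

    ```python
    import numpy as np

    def pd_gaze_step(target_dir, gaze_dir, prev_error, kp=2.0, kd=0.5, dt=0.02):
        """One PD gaze-steering update on the unit sphere (illustrative sketch).

        target_dir, gaze_dir: 3D unit vectors (target and current optical-axis
        directions). kp, kd, and dt are arbitrary placeholders, not the
        LMI-designed values from the paper.
        """
        # Error as the axis-angle rotation taking gaze_dir toward target_dir
        # (no depth estimate is required for this quantity).
        axis = np.cross(gaze_dir, target_dir)
        sin_a = np.linalg.norm(axis)
        cos_a = np.clip(np.dot(gaze_dir, target_dir), -1.0, 1.0)
        angle = np.arctan2(sin_a, cos_a)
        error = angle * (axis / sin_a) if sin_a > 1e-9 else np.zeros(3)

        # PD law: angular-velocity command from the error and its discrete derivative.
        omega = kp * error + kd * (error - prev_error) / dt

        # Rotate the gaze direction by omega*dt (Rodrigues' formula) and renormalize.
        theta = np.linalg.norm(omega) * dt
        if theta > 1e-9:
            k = omega / np.linalg.norm(omega)
            gaze_dir = (gaze_dir * np.cos(theta)
                        + np.cross(k, gaze_dir) * np.sin(theta)
                        + k * np.dot(k, gaze_dir) * (1 - np.cos(theta)))
            gaze_dir /= np.linalg.norm(gaze_dir)
        return gaze_dir, error
    ```

    A robustly designed gain, as in the LMI approach the abstract refers to, would replace the fixed kp and kd used in this sketch.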

  9. Gaze-Contingent Music Reward Therapy for Social Anxiety Disorder: A Randomized Controlled Trial.

    PubMed

    Lazarov, Amit; Pine, Daniel S; Bar-Haim, Yair

    2017-07-01

    Patients with social anxiety disorder exhibit increased attentional dwelling on social threats, providing a viable target for therapeutics. This randomized controlled trial examined the efficacy of a novel gaze-contingent music reward therapy for social anxiety disorder designed to reduce attention dwelling on threats. Forty patients with social anxiety disorder were randomly assigned to eight sessions of either gaze-contingent music reward therapy, designed to divert patients' gaze toward neutral stimuli rather than threat stimuli, or to a control condition. Clinician and self-report measures of social anxiety were acquired pretreatment, posttreatment, and at 3-month follow-up. Dwell time on socially threatening faces was assessed during the training sessions and at pre- and posttreatment. Gaze-contingent music reward therapy yielded greater reductions of symptoms of social anxiety disorder than the control condition on both clinician-rated and self-reported measures. Therapeutic effects were maintained at follow-up. Gaze-contingent music reward therapy, but not the control condition, also reduced dwell time on threat, which partially mediated clinical effects. Finally, gaze-contingent music reward therapy, but not the control condition, also altered dwell time on socially threatening faces not used in training, reflecting near-transfer training generalization. This is the first randomized controlled trial to examine a gaze-contingent intervention in social anxiety disorder. The results demonstrate target engagement and clinical effects. This study sets the stage for larger randomized controlled trials and testing in other emotional disorders.

  10. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

    Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body’s speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats’ gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to them, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior “gaze stepping”. Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
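    As a reading aid for the four gaze behaviors defined above, here is a small hypothetical classifier that labels a gaze sample from the speed of the gaze point along the walking surface relative to the body's speed; the threshold values are placeholders, not those used in the study.

    ```python
    def classify_gaze_sample(gaze_speed, body_speed,
                             fix_tol=5.0, shift_min=100.0, body_tol=5.0):
        """Label one gaze sample using the four categories described above.

        gaze_speed: speed of the gaze point along the walking surface (cm/s).
        body_speed: the animal's forward speed (cm/s).
        The numeric thresholds are illustrative placeholders only.
        """
        if gaze_speed <= fix_tol:
            return "gaze fixation"      # essentially no movement of the gaze point
        if gaze_speed >= shift_min:
            return "gaze shift"         # fast movement along the surface
        if abs(gaze_speed - body_speed) <= body_tol:
            return "constant gaze"      # gaze point travels with the body
        return "slow gaze"              # the remainder
    ```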

  11. Social decisions affect neural activity to perceived dynamic gaze

    PubMed Central

    Latinus, Marianne; Love, Scott A.; Rossi, Alejandra; Parada, Francisco J.; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin

    2015-01-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces e.g. N170. However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances saliency of gaze motion toward the observer, even those that did not result in gaze contact. These data and that of previous studies suggest two modes of processing visual information: a ‘default mode’ that may focus on spatial information; a ‘socially aware mode’ that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified. PMID:25925272

  12. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    PubMed

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as a part of effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real-time and the real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
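    The aggregation step described above, partial evaluations per criterion combined through fuzzy measures on the power set of criteria, is commonly realized with a Choquet integral. Below is a generic sketch of that aggregation for choosing among candidate gaze directions; the criterion names, measure values, and scores are invented for illustration, and the paper's seven criteria and inhibition-of-return adjustment are not modeled.

    ```python
    def choquet_integral(scores, mu):
        """Choquet fuzzy integral of per-criterion scores w.r.t. fuzzy measure mu.

        scores: dict criterion -> partial evaluation in [0, 1].
        mu: dict frozenset(criteria) -> measure in [0, 1]; it must be defined on
        every subset used below (monotone, mu(empty set) = 0, mu(all) = 1).
        """
        items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending scores
        total, prev = 0.0, 0.0
        remaining = set(scores)
        for criterion, value in items:
            total += (value - prev) * mu[frozenset(remaining)]
            prev = value
            remaining.remove(criterion)
        return total

    # Hypothetical example: two candidate gaze directions scored on three
    # invented criteria (the paper uses seven).
    mu = {frozenset(): 0.0,
          frozenset({"face"}): 0.5, frozenset({"sound"}): 0.3, frozenset({"novelty"}): 0.2,
          frozenset({"face", "sound"}): 0.8, frozenset({"face", "novelty"}): 0.6,
          frozenset({"sound", "novelty"}): 0.4,
          frozenset({"face", "sound", "novelty"}): 1.0}
    candidates = {"toward speaker": {"face": 0.9, "sound": 0.8, "novelty": 0.2},
                  "toward door":    {"face": 0.1, "sound": 0.3, "novelty": 0.9}}
    best = max(candidates, key=lambda c: choquet_integral(candidates[c], mu))
    ```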

  13. Gaze Control in Complex Scene Perception

    DTIC Science & Technology

    2004-01-01

    Only snippet fragments are available for this record: "…retained in memory from previously attended objects in natural scenes." Psychonomic Bulletin & Review, 8, 761-768; "The nature of the internal memory…"; Henderson, J. M., Falk, R. J., Minut, S., Dyer, F. C., & Mahadevan, S. (2001). Gaze control for face…

  14. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    NASA Astrophysics Data System (ADS)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
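    The abstract does not spell out the modalities or rules of the fuzzy logic algorithm, so the sketch below only illustrates the general shape of such a discriminator: two hypothetical inputs (fixation dwell time and gaze-point dispersion), triangular memberships, and two rules combined into an "intentional fixation" score. All names and numbers are assumptions for illustration.

    ```python
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def intentional_fixation_score(dwell_ms, dispersion_deg):
        long_dwell = tri(dwell_ms, 300, 800, 2000)        # placeholder membership
        small_disp = tri(dispersion_deg, -0.1, 0.0, 1.0)  # tight fixation
        large_disp = tri(dispersion_deg, 0.5, 2.0, 5.0)
        # Rule 1: long dwell AND small dispersion -> intentional.
        r_intentional = min(long_dwell, small_disp)
        # Rule 2: large dispersion -> unintentional.
        r_unintentional = large_disp
        # Simple defuzzification: normalized weight of the "intentional" rule.
        denom = r_intentional + r_unintentional
        return r_intentional / denom if denom > 0 else 0.0

    # Example: a 900 ms fixation with 0.3 deg dispersion scores as likely intentional.
    print(intentional_fixation_score(900, 0.3))
    ```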

  15. Interactions between gaze-evoked blinks and gaze shifts in monkeys.

    PubMed

    Gandhi, Neeraj J

    2012-02-01

    Rapid eyelid closure, or a blink, often accompanies head-restrained and head-unrestrained gaze shifts. This study examines the interactions between such gaze-evoked blinks and gaze shifts in monkeys. Blink probability increases with gaze amplitude and at a faster rate for head-unrestrained movements. Across animals, blink likelihood is inversely correlated with the average gaze velocity of large-amplitude control movements. Gaze-evoked blinks induce robust perturbations in eye velocity. Peak and average velocities are reduced, duration is increased, but accuracy is preserved. The temporal features of the perturbation depend on factors such as the time of blink relative to gaze onset, inherent velocity kinematics of control movements, and perhaps initial eye-in-head position. Although variable across animals, the initial effect is a reduction in eye velocity, followed by a reacceleration that yields two or more peaks in its waveform. Interestingly, head velocity is not attenuated; instead, it peaks slightly later and with a larger magnitude. Gaze latency is slightly reduced on trials with gaze-evoked blinks, although the effect was more variable during head-unrestrained movements; no reduction in head latency is observed. Preliminary data also demonstrate a similar perturbation of gaze-evoked blinks during vertical saccades. The results are compared with previously reported effects of reflexive blinks (evoked by air-puff delivered to one eye or supraorbital nerve stimulation) and discussed in terms of effects of blinks on saccadic suppression, neural correlates of the altered eye velocity signals, and implications for the hypothesis that the attenuation in eye velocity is produced by a head movement command.

  16. Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments

    PubMed Central

    Taya, Shuichiro; Windridge, David; Osman, Magda

    2013-01-01

    Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around ‘events’ critical to the scene context (i.e. hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147

  17. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    PubMed

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  18. Gaze-based assistive technology in daily activities in children with severe physical impairments-An intervention study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Ahlsten, Gunnar; Falkmer, Torbjörn; Hemmingsson, Helena

    2017-04-01

    To establish the impact of a gaze-based assistive technology (AT) intervention on activity repertoire, autonomous use, and goal attainment in children with severe physical impairments, and to examine parents' satisfaction with the gaze-based AT and with services related to the gaze-based AT intervention. Non-experimental multiple case study with before, after, and follow-up design. Ten children with severe physical impairments without speaking ability (aged 1-15 years) participated in gaze-based AT intervention for 9-10 months, during which period the gaze-based AT was implemented in daily activities. Repertoire of computer activities increased for seven children. All children had sustained usage of gaze-based AT in daily activities at follow-up, all had attained goals, and parents' satisfaction with the AT and with services was high. The gaze-based AT intervention was effective in guiding parents and teachers to continue supporting the children to perform activities with the AT after the intervention program.

  19. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    PubMed

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. This current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed-up attentional resources that could be applied to concurrent cognitive tasks.

  20. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts.

    PubMed

    Wilson, Mark; McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-10-01

    Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed.

  1. Psychomotor control in a virtual laparoscopic surgery training environment: gaze control parameters differentiate novices from experts

    PubMed Central

    McGrath, John; Vine, Samuel; Brewer, James; Defriend, David; Masters, Richard

    2010-01-01

    Background: Surgical simulation is increasingly used to facilitate the adoption of technical skills during surgical training. This study sought to determine if gaze control parameters could differentiate between the visual control of experienced and novice operators performing an eye-hand coordination task on a virtual reality laparoscopic surgical simulator (LAP Mentor™). Typically adopted hand movement metrics reflect only one half of the eye-hand coordination relationship; therefore, little is known about how hand movements are guided and controlled by vision. Methods: A total of 14 right-handed surgeons were categorised as being either experienced (having led more than 70 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The eight experienced and six novice surgeons completed the eye-hand coordination task from the LAP Mentor basic skills package while wearing a gaze registration system. A variety of performance, movement, and gaze parameters were recorded and compared between groups. Results: The experienced surgeons completed the task significantly more quickly than the novices, but only the economy of movement of the left tool differentiated skill level from the LAP Mentor parameters. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. Conclusion: The findings of the study provide support for the utility of assessing strategic gaze behaviour to better understand the way in which surgeons utilise visual information to plan and control tool movements in a virtual reality laparoscopic environment. It is hoped that by better understanding the limitations of the psychomotor system, effective gaze training programs may be developed. PMID:20333405

  2. Flexible Coordination of Stationary and Mobile Conversations with Gaze: Resource Allocation among Multiple Joint Activities

    PubMed Central

    Mayor, Eric; Bangerter, Adrian

    2016-01-01

    Gaze is instrumental in coordinating face-to-face social interactions. But little is known about gaze use when social interactions co-occur with other joint activities. We investigated the case of walking while talking. We assessed how gaze gets allocated among various targets in mobile conversations, whether allocation of gaze to other targets affects conversational coordination, and whether reduced availability of gaze for conversational coordination affects conversational performance and content. In an experimental study, pairs were videotaped in four conditions of mobility (standing still, talking while walking along a straight-line itinerary, talking while walking along a complex itinerary, or walking along a complex itinerary with no conversational task). Gaze to partners was substantially reduced in mobile conversations, but gaze was still used to coordinate conversation via displays of mutual orientation, and conversational performance and content was not different between stationary and mobile conditions. Results expand the phenomena of multitasking to joint activities. PMID:27822189

  3. Eye-gaze control of the computer interface: Discrimination of zoom intent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-10-01

    An analysis methodology and associated experiment were developed to assess whether definable and repeatable signatures of eye-gaze characteristics are evident, preceding a decision to zoom-in, zoom-out, or not to zoom at a computer interface. This user intent discrimination procedure can have broad application in disability aids and telerobotic control. Eye-gaze was collected from 10 subjects in a controlled experiment, requiring zoom decisions. The eye-gaze data were clustered, then fed into a multiple discriminant analysis (MDA) for optimal definition of heuristics separating the zoom-in, zoom-out, and no-zoom conditions. Confusion matrix analyses showed that a number of variable combinations classified at a statistically significant level, but practical significance was more difficult to establish. Composite contour plots demonstrated the regions in parameter space consistently assigned by the MDA to unique zoom conditions. Peak classification occurred at about 1200-1600 msec. Improvements in the methodology to achieve practical real-time zoom control are considered.
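    The analysis pipeline described above (clustered eye-gaze features fed into a multiple discriminant analysis, evaluated with confusion matrices) can be sketched with scikit-learn's linear discriminant analysis. The feature matrix and labels below are random placeholders, so the resulting confusion matrix is chance-level; only the shape of the pipeline is illustrated, not the study's variables.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 4))        # placeholder gaze-cluster features
    y = rng.integers(0, 3, size=300)     # 0 = zoom-in, 1 = zoom-out, 2 = no-zoom

    # Discriminant analysis separating the three zoom conditions.
    lda = LinearDiscriminantAnalysis()
    lda.fit(X, y)
    pred = lda.predict(X)

    # Confusion matrix, as used in the study to judge classification quality.
    print(confusion_matrix(y, pred))
    ```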

  4. Control over the processing of the opponent's gaze direction in basketball experts.

    PubMed

    Weigelt, Matthias; Güldenpenning, Iris; Steggemann-Weinrich, Yvonne; Alhaj Ahmad Alaboud, Mustafa; Kunde, Wilfried

    2017-06-01

    Basketball players' responses to an opposing player's pass direction are typically delayed when the opposing player gazes in a direction other than the pass direction. Here, we studied the role of basketball expertise in this so-called head-fake effect in three groups of participants (basketball experts, soccer players, and non-athletes). The specific focus was on the dependency of the head-fake effect on previous fake experience as an index of control over the processing of task-irrelevant gaze information. Whereas (overall) the head-fake effect was of similar size in all expertise groups, preceding fake experience removed the head-fake effect in basketball players, but not in non-experts. Accordingly, basketball expertise allows for higher levels of control over the processing of task-irrelevant gaze information.

  5. Does gaze cueing produce automatic response activation: a lateralized readiness potential (LRP) study.

    PubMed

    Vainio, L; Heimola, M; Heino, H; Iljin, I; Laamanen, P; Seesjärvi, E; Paavilainen, P

    2014-05-01

    Previous research has shown that gaze cues facilitate responses to an upcoming target if the target location is compatible with the direction of the cue. Similar cueing effects have also been observed with central arrow cues. Both of these cueing effects have been attributed to a reflexive orienting of attention triggered by the cue. In addition, orienting of attention has been proposed to result in a partial response activation of the corresponding hand that, in turn, can be observed in the lateralized readiness potential (LRP), an electrophysiological indicator of automatic hand-motor response preparation. For instance, a central arrow cue has been observed to produce automatic hand-motor activation as indicated by the LRPs. The present study investigated whether gaze cues could also produce similar activation patterns in LRP. Although the standard gaze cueing effect was observed in the behavioural data, the LRP data did not reveal any consistent automatic hand-motor activation. The study suggests that motor processes associated with gaze cueing effect may operate exclusively at the level of oculomotor programming. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Gazing at me: the importance of social meaning in understanding direct-gaze cues

    PubMed Central

    Hamilton, Antonia F. de C.

    2016-01-01

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. PMID:26644598

  7. Gazing at me: the importance of social meaning in understanding direct-gaze cues.

    PubMed

    de C Hamilton, Antonia F

    2016-01-19

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. © 2015 The Author(s).

  8. Postural control and head stability during natural gaze behaviour in 6- to 12-year-old children.

    PubMed

    Schärli, A M; van de Langenberg, R; Murer, K; Müller, R M

    2013-06-01

    We investigated how the influence of natural exploratory gaze behaviour on postural control develops from childhood into adulthood. In a cross-sectional design, we compared four age groups: 6-, 9-, 12-year-olds and young adults. Two experimental trials were performed: quiet stance with a fixed gaze (fixed) and quiet stance with natural exploratory gaze behaviour (exploratory). The latter was elicited by having participants watch an animated short film on a large screen in front of them. 3D head rotations in space and centre of pressure (COP) excursions on the ground plane were measured. Across conditions, both head rotation and COP displacement decreased with increasing age. Head movement was greater in the exploratory condition in all age groups. In all children, but not in adults, COP displacement was markedly greater in the exploratory condition. Bivariate correlations across groups showed highly significant positive correlations between COP displacement in ML direction and head rotation in yaw, roll, and pitch in both conditions. The regularity of COP displacements did not show a clear developmental trend, which indicates that COP dynamics were qualitatively similar across age groups. Together, the results suggest that the contribution of head movement to eye-head saccades decreases with age and that head instability, in part resulting from such gaze-related head movements, is an important limiting factor in children's postural control. The lack of head stabilisation might particularly affect children in everyday activities in which both postural control and visual exploration are required.

  9. Sustained neural activity to gaze and emotion perception in dynamic social scenes

    PubMed Central

    Ulloa, José Luis; Puce, Aina; Hugueville, Laurent; George, Nathalie

    2014-01-01

    To understand social interactions, we must decode dynamic social cues from seen faces. Here, we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms, left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others. PMID:23202662

  10. Gaze-contingent control for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Darzi, Ara; Yang, Guang Zhong

    2006-09-01

    Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.
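    Resolving the 3D fixation point from binocular eye tracking, as described above, amounts to triangulating two calibrated gaze rays. A common least-squares formulation is sketched below; the eye positions, gaze directions, and units are made up, and the stereoscopic-console calibration used in the paper is not modeled.

    ```python
    import numpy as np

    def fixation_point(eye_positions, gaze_dirs):
        """Least-squares 3D fixation point from two (or more) calibrated gaze rays.

        eye_positions: (N, 3) ray origins (e.g., left/right eye centres).
        gaze_dirs: (N, 3) gaze direction vectors for each eye.
        Returns the point minimizing the summed squared distance to all rays.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, d in zip(np.asarray(eye_positions, float), np.asarray(gaze_dirs, float)):
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ p
        return np.linalg.solve(A, b)

    # Toy usage with made-up geometry (millimetres): eyes 60 mm apart, both
    # looking at a point roughly 300 mm ahead and slightly to the right.
    eyes = [[-30.0, 0.0, 0.0], [30.0, 0.0, 0.0]]
    dirs = [[0.12, 0.0, 1.0], [-0.08, 0.0, 1.0]]
    print(fixation_point(eyes, dirs))
    ```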

  11. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  12. Full-Body Gaze Control Mechanisms Elicited During Locomotion: Effects Of VOR Adaptation

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Houser, J.; Peters, B.; Miller, C.; Richards, J.; Marshburn, A.; Brady, R.; Cohen, H.; Bloomberg, J. J.

    2004-01-01

    Control of locomotion requires precise interaction between several sensorimotor subsystems. During locomotion the performer must satisfy two performance criteria: to maintain stable forward translation and to stabilize gaze (McDonald et al., 1997). Precise coordination demands integration of multiple sensorimotor subsystems for fulfilling both criteria. In order to test the general hypothesis that the whole body can serve as an integrated gaze stabilization system, we have previously investigated how the multiple, interdependent full-body sensorimotor subsystems respond to changes in gaze stabilization task constraints during locomotion (Mulavara and Bloomberg, 2003). The results suggest that the full body contributes to gaze stabilization during locomotion, and that its different functional elements respond to changes in visual task constraints. The goal of this study was to determine how the multiple, interdependent, full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated after the vestibulo-ocular reflex (VOR) gain has been altered. We investigated the potential of adaptive remodeling of the full-body gaze control system following exposure to visual-vestibular conflict known to adaptively reduce the VOR. Subjects (n=14) walked (6.4 km/h) on the treadmill before and after they were exposed to 0.5X minifying lenses worn for 30 minutes during self-generated sinusoidal vertical head rotations performed while seated. In this study we measured: temporal parameters of gait, full body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. Results indicate that, following exposure to the 0.5X minifying lenses, there was a significant increase in the duration of stance and stride times, alteration in the amplitude of head movement with respect to space and a significant increase in

  13. The Stationary-Gaze Task Should Not Be Systematically Used as the Control Task in Studies of Postural Control.

    PubMed

    Bonnet, Cédrick T; Szaffarczyk, Sébastien

    2017-01-01

    In studies of postural control, a control task is often used to understand significant effects obtained with experimental manipulations. This task should be the easiest task and (therefore) engage the lowest behavioral variability and cognitive workload. Since 1983, the stationary-gaze task has been considered the most relevant control task. Instead, the authors expected that free looking at small targets (white paper or images; visual angle: 12°) could be an easier task. To verify this assumption, 16 young individuals performed stationary-gaze, white-panel, and free-viewing 12° tasks in steady and relaxed stances. The stationary-gaze task led to significantly higher cognitive workload (mean score on the National Aeronautics and Space Administration Task Load Index questionnaire), higher interindividual body (head, neck, and lower back) linear variability, and higher interindividual body angular variability (though not systematically) than both other tasks. There was more cognitive workload in steady than in relaxed stances. The authors also tested if a free-viewing 24° task could lead to greater angular displacement, and hence greater body sway, than could the other tasks in relaxed stance. Unexpectedly, the participants mostly moved their eyes and not their body in this task. In the discussion, the authors explain why the stationary-gaze task may not be an ideal control task and how to choose this neutral task.

  14. Gaze-evoked nystagmus induced by alcohol intoxication.

    PubMed

    Romano, Fausto; Tarnutzer, Alexander A; Straumann, Dominik; Ramat, Stefano; Bertolini, Giovanni

    2017-03-15

    The cerebellum is the core structure controlling gaze stability. Chronic cerebellar diseases and acute alcohol intoxication affect cerebellar function, inducing, among others, gaze instability as gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by increased centripetal eye-drift. It is used as an important diagnostic sign for patients with cerebellar degeneration and to assess the 'driving while intoxicated' condition. We quantified the effect of alcohol on gaze-holding using an approach allowing, for the first time, the comparison of deficits induced by alcohol intoxication and cerebellar degeneration. Our results showed that alcohol intoxication induces a two-fold increase of centripetal eye-drift. We establish analysis techniques for using controlled alcohol intake as a model to support the study of cerebellar deficits. The observed similarity between the effect of alcohol and the clinical signs observed in cerebellar patients suggests a possible pathomechanism for gaze-holding deficits. Gaze-evoked nystagmus (GEN) is an ocular-motor finding commonly observed in cerebellar disease, characterized by increased centripetal eye-drift with centrifugal correcting saccades at eccentric gaze. With cerebellar degeneration being a rare and clinically heterogeneous disease, data from patients are limited. We hypothesized that a transient inhibition of cerebellar function by defined amounts of alcohol may provide a suitable model to study gaze-holding deficits in cerebellar disease. We recorded gaze-holding at varying horizontal eye positions in 15 healthy participants before and 30 min after alcohol intake required to reach 0.6‰ blood alcohol content (BAC). Changes in ocular-motor behaviour were quantified measuring eye-drift velocity as a continuous function of gaze eccentricity over a large range (±40 deg) of horizontal gaze angles and characterized using a two-parameter tangent model. The effect of alcohol on gaze stability was assessed analysing: (1
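    The abstract mentions characterizing eye-drift velocity as a function of gaze eccentricity with a "two-parameter tangent model" but does not give the parameterization. The sketch below assumes a plausible form, v(E) = a * tan(b * E), and fits it to synthetic data purely to illustrate the analysis; the form, parameter values, and data are assumptions, not the paper's.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def tangent_model(ecc_deg, a, b):
        # Assumed two-parameter tangent relation between eccentricity and drift velocity.
        return a * np.tan(b * np.deg2rad(ecc_deg))

    ecc = np.linspace(-40, 40, 81)                    # gaze eccentricity (deg)
    true_drift = tangent_model(ecc, 1.5, 0.9)         # synthetic drift velocity (deg/s)
    noisy = true_drift + np.random.default_rng(1).normal(0, 0.3, ecc.size)

    # Fit the two parameters to the (synthetic) drift-velocity measurements.
    (a_hat, b_hat), _ = curve_fit(tangent_model, ecc, noisy, p0=(1.0, 1.0))
    print(a_hat, b_hat)
    ```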

  15. Altered activity of the primary visual area during gaze processing in individuals with high-functioning autistic spectrum disorder: a magnetoencephalography study.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki

    2013-01-01

    Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.

  16. Modification of Eccentric Gaze-Holding

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Paloski, W. H.; Somers, J. T.; Leigh, R. J.; Wood, S. J.; Kornilova, L.

    2006-01-01

    Clear vision and accurate localization of objects in the environment are prerequisites for reliable performance of motor tasks. Space flight confronts the crewmember with a stimulus rearrangement that requires adaptation to function effectively with the new requirements of altered spatial orientation and motor coordination. Adaptation and motor learning driven by the effects of cerebellar disorders may share some of the same demands that face our astronauts. One measure of spatial localization shared by the astronauts and those suffering from cerebellar disorders that is easily quantified, and for which a neurobiological substrate has been identified, is the control of the angle of gaze (the "line of sight"). The disturbances of gaze control that have been documented to occur in astronauts and cosmonauts, both in-flight and postflight, can be directly related to changes in the extrinsic gravitational environment and intrinsic proprioceptive mechanisms, thus lending themselves to description by simple non-linear statistical models. Because of the necessity of developing robust normal response populations and normative populations against which abnormal responses can be evaluated, the basic models can be formulated using normal, non-astronaut test subjects and subsequently extended using centrifugation techniques to alter the gravitational and proprioceptive environment of these subjects. Further tests and extensions of the models can be made by studying abnormalities of gaze control in patients with cerebellar disease. A series of investigations were conducted in which a total of 62 subjects were tested to: (1) Define eccentric gaze-holding parameters in a normative population, and (2) explore the effects of linear acceleration on gaze-holding parameters. For these studies gaze-holding was evaluated with the subjects seated upright (the normative values), rolled 45 degrees to both the left and right, or pitched back 30 and 90 degrees. In a separate study the further

  17. Contribution of the frontal eye field to gaze shifts in the head-unrestrained rhesus monkey: neuronal activity.

    PubMed

    Knight, T A

    2012-12-06

    The frontal eye field (FEF) has a strong influence on saccadic eye movements with the head restrained. With the head unrestrained, eye saccades combine with head movements to produce large gaze shifts, and microstimulation of the FEF evokes both eye and head movements. To test whether the dorsomedial FEF provides commands for the entire gaze shift or its separate eye and head components, we recorded extracellular single-unit activity in monkeys trained to make large head-unrestrained gaze shifts. We recorded 80 units active during gaze shifts, and closely examined 26 of these that discharged a burst of action potentials that preceded horizontal gaze movements. These units were movement or visuomovement related and most exhibited open movement fields with respect to amplitude. To reveal the relations of burst parameters to gaze, eye, and/or head movement metrics, we used behavioral dissociations of gaze, eye, and head movements and linear regression analyses. The burst number of spikes (NOS) was strongly correlated with movement amplitude and burst temporal parameters were strongly correlated with movement temporal metrics for eight gaze-related burst neurons and five saccade-related burst neurons. For the remaining 13 neurons, the NOS was strongly correlated with the head movement amplitude, but burst temporal parameters were most strongly correlated with eye movement temporal metrics (head-eye-related burst neurons, HEBNs). These results suggest that FEF units do not encode a command for the unified gaze shift only; instead, different units may carry signals related to the overall gaze shift or its eye and/or head components. Moreover, the HEBNs exhibit bursts whose magnitude and timing may encode a head displacement signal and a signal that influences the timing of the eye saccade, thereby serving as a mechanism for coordinating the eye and head movements of a gaze shift. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. Look over there! Unilateral gaze increases geographical memory of the 50 United States.

    PubMed

    Propper, Ruth E; Brunyé, Tad T; Christman, Stephen D; Januszewskia, Ashley

    2012-02-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the contralateral hemisphere. Compared to a no-unilateral-gaze control, right gaze/left hemisphere activation resulted in better recall, demonstrating left hemisphere superiority in recall of general world knowledge and offering equivocal support for the hemispheric encoding asymmetry model of memory. Unilateral gaze, regardless of direction, improved recall of spatial, but not verbal, information. Future research could investigate the conditions under which unilateral gaze increases recall. Sustained unilateral gaze can be used as a simple, inexpensive means for testing theories of hemispheric specialization of cognitive functions. Results support an overall deficit in US geographical knowledge in undergraduate college students. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    PubMed Central

    Klostermann, André; Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2015-01-01

    For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called “color-cueing method”, which aims at superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially with regard to its actual effects on gaze behavior. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance, with the largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behavior, but they also question the method's benefit for enhancing decision-making. However, before completely denying the method's value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback. PMID:26648894

  20. Functional Coordination of Full-Body Gaze Control Mechanisms Elicited During Locomotion

    NASA Technical Reports Server (NTRS)

    Bloomberg, Jacob J.; Mulavara, Ajitkumar P.; Cohen, Helen S.

    2003-01-01

    Control of locomotion requires precise interaction between several sensorimotor subsystems. Exposure to the microgravity environment of spaceflight leads to postflight adaptive alterations in these multiple subsystems, producing postural and gait disturbances. Countermeasures designed to mitigate these postflight gait alterations will need to be assessed with a new generation of functional tests that evaluate the interaction of various elements central to locomotor control. The goal of this study is to determine how the multiple, interdependent, full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated. To explore this question two experiments were performed. In the first study (Study 1) we investigated how alteration in gaze tasking changes full-body locomotor control strategies. Subjects (n=9) performed two discrete gaze stabilization tasks while walking at 6.4 km/h on a motorized treadmill: 1) focusing on a central point target; 2) reading numeral characters; both presented 2 m in front of the subject at eye level. The second study (Study 2) investigated the potential of adaptive remodeling of the full-body gaze control systems following exposure to visual-vestibular conflict. Subjects (n=14) walked (6.4 km/h) on the treadmill before and after they were exposed to 0.5X minifying lenses worn for 30 minutes during self-generated sinusoidal vertical head rotations performed while seated. In both studies we measured: temporal parameters of gait, full-body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. Results from Study 1 showed that, while reading numeral characters as compared to the central point target: 1) compensatory head pitch movements were on average 22% greater, and 2) the peak acceleration measured at the head was significantly reduced by an average of 13% in four of the six

  1. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  2. Teachers' Experiences of Using Eye Gaze-Controlled Computers for Pupils with Severe Motor Impairments and without Speech

    ERIC Educational Resources Information Center

    Rytterström, Patrik; Borgestig, Maria; Hemmingsson, Helena

    2016-01-01

    The purpose of this study is to explore teachers' experiences of using eye gaze-controlled computers with pupils with severe disabilities. Technology to control a computer with eye gaze is a fast growing field and has promising implications for people with severe disabilities. This is a new assistive technology and a new learning situation for…

  3. The EyeHarp: A Gaze-Controlled Digital Musical Instrument

    PubMed Central

    Vamvakousis, Zacharias; Ramirez, Rafael

    2016-01-01

    We present and evaluate the EyeHarp, a new gaze-controlled Digital Musical Instrument, which aims to enable people with severe motor disabilities to learn, perform, and compose music using only their gaze as the control mechanism. It consists of (1) a step-sequencer layer, which serves for constructing chords/arpeggios, and (2) a melody layer, for playing melodies and changing the chords/arpeggios. We have conducted a pilot evaluation of the EyeHarp involving 39 participants with no disabilities, from both a performer and an audience perspective. In the first case, eight people with normal vision and no motor disability participated in a music-playing session in which both quantitative and qualitative data were collected. In the second case, 31 people qualitatively evaluated the EyeHarp in a concert setting consisting of two parts: a solo performance part, and an ensemble (EyeHarp, two guitars, and flute) performance part. The obtained results indicate that, similarly to traditional musical instruments, the proposed digital musical instrument has a steep learning curve, and allows expressive performances to be produced, from both the performer and audience perspectives. PMID:27445885

  4. Parent Perception of Two Eye-Gaze Control Technology Systems in Young Children with Cerebral Palsy: Pilot Study.

    PubMed

    Karlsson, Petra; Wallen, Margaret

    2017-01-01

    Eye-gaze control technology enables people with significant physical disability to access computers for communication, play, learning and environmental control. This pilot study used a multiple case study design with repeated baseline assessment and parents' evaluations to compare two eye-gaze control technology systems to identify any differences in factors such as ease of use and impact of the systems for their young children. Five children, aged 3 to 5 years, with dyskinetic cerebral palsy, and their families participated. Overall, families were satisfied with both the Tobii PCEye Go and myGaze® eye tracker, found them easy to position and use, and children learned to operate them quickly. This technology provides young children with important opportunities for learning, play, leisure, and developing communication.

  5. Spatiotemporal commonalities of fronto-parietal activation in attentional orienting triggered by supraliminal and subliminal gaze cues: An event-related potential study.

    PubMed

    Uono, Shota; Sato, Wataru; Sawada, Reiko; Kochiyama, Takanori; Toichi, Motomi

    2018-05-04

    Eye gaze triggers attentional shifts with and without conscious awareness. It remains unclear whether the spatiotemporal patterns of electric neural activity are the same for conscious and unconscious attentional shifts. Thus, the present study recorded event-related potentials (ERPs) and evaluated the neural activation involved in attentional orienting induced by subliminal and supraliminal gaze cues. Nonpredictive gaze cues were presented in the central field of vision, and participants were asked to detect a subsequent peripheral target. The mean reaction time was shorter for congruent gaze cues than for incongruent gaze cues under both presentation conditions, indicating that both types of cues reliably trigger attentional orienting. The ERP analysis revealed that averted versus straight gaze induced greater negative deflection in the bilateral fronto-central and temporal regions between 278 and 344 ms under both supraliminal and subliminal presentation conditions. Supraliminal cues, irrespective of gaze direction, induced a greater negative amplitude than did subliminal cues at the right posterior cortices, at a peak of approximately 170 ms and in the 200-300 ms window. These results suggest that similar spatial and temporal fronto-parietal activity is involved in attentional orienting triggered by both supraliminal and subliminal gaze cues, although inputs from different visual processing routes (cortical and subcortical regions) may trigger activity in the attentional network. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Learning visuomotor transformations for gaze-control and grasping.

    PubMed

    Hoffmann, Heiko; Schenck, Wolfram; Möller, Ralf

    2005-08-01

    For reaching to and grasping of an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity in having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object with arbitrary orientation and at arbitrary position on a table in 94% of trials.
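
    The arm controller described above completes a partially given sensorimotor pattern from a learned density model. Purely as an illustration of that idea (not the authors' implementation), the sketch below fits a single joint Gaussian to concatenated sensor-motor samples and recovers the motor part by conditioning on the sensor part; the toy forward model, dimensions, and all values are made up.

```python
import numpy as np

# Minimal sketch of completing a partially given sensorimotor pattern from a
# joint density model (here a single Gaussian; the paper's model is richer).
# Training data: rows are concatenated [sensor, motor] vectors.
rng = np.random.default_rng(0)
motor = rng.uniform(-1.0, 1.0, size=(500, 2))                  # hypothetical 2-DOF arm posture
sensor = np.tanh(motor @ np.array([[0.8, 0.1], [0.2, 0.9]]))   # hypothetical forward model
data = np.hstack([sensor, motor])

mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

def complete(sensor_obs, mu, cov, n_sensor=2):
    """Condition the joint Gaussian on the observed sensor part and
    return the expected motor part (the 'completed' pattern)."""
    mu_s, mu_m = mu[:n_sensor], mu[n_sensor:]
    S_ss = cov[:n_sensor, :n_sensor]
    S_ms = cov[n_sensor:, :n_sensor]
    return mu_m + S_ms @ np.linalg.solve(S_ss, sensor_obs - mu_s)

print(complete(np.array([0.3, -0.2]), mu, cov))   # motor posture inferred from a sensor reading
```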

  7. Countermanding eye-head gaze shifts in humans: marching orders are delivered to the head first.

    PubMed

    Corneil, Brian D; Elsley, James K

    2005-07-01

    The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged approximately 120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged approximately 190 ms or approximately 70-75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were approximately 25 and 90 ms
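
    To make the second (single-race, ballistic-interval) account concrete, here is a rough Monte Carlo sketch. Only the approximate ballistic intervals (25 ms gaze, 90 ms head), the roughly 120 ms gaze SSRT, and the idea that the head threshold is reached earlier come from the abstract; the latency distributions and all other numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Terminal intervals not under inhibitory control (ms), from the abstract.
BALLISTIC_GAZE, BALLISTIC_HEAD = 25.0, 90.0
SSRT = 120.0   # approximate gaze stop-signal reaction time (ms), from the abstract

def trial(stop_signal_delay):
    """Single-race sketch: one GO process reaches the (lower) head threshold
    before the gaze threshold; a movement escapes inhibition once its
    ballistic interval has begun."""
    go_onset_gaze = rng.normal(250, 30)                  # assumed gaze-shift onset latency (ms)
    go_onset_head = go_onset_gaze - rng.normal(30, 10)   # head threshold reached earlier
    stop_finish = stop_signal_delay + rng.normal(SSRT, 15)
    gaze_moves = stop_finish > go_onset_gaze - BALLISTIC_GAZE
    head_moves = stop_finish > go_onset_head - BALLISTIC_HEAD
    return gaze_moves, head_moves

trials = [trial(ssd) for ssd in rng.uniform(50, 250, 5000)]
head_only = sum(h and not g for g, h in trials) / len(trials)
print(f"proportion of stop trials with head-only movement: {head_only:.2f}")
```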

  8. Stimulus exposure and gaze bias: a further test of the gaze cascade model.

    PubMed

    Glaholt, Mackenzie G; Reingold, Eyal M

    2009-04-01

    We tested predictions derived from the gaze cascade model of preference decision making (Shimojo, Simion, Shimojo, & Scheier, 2003; Simion & Shimojo, 2006, 2007). In each trial, participants' eye movements were monitored while they performed an eight-alternative decision task in which four of the items in the array were preexposed prior to the trial. Replicating previous findings, we found a gaze bias toward the chosen item prior to the response. However, contrary to the prediction of the gaze cascade model, preexposure of stimuli decreased, rather than increased, the magnitude of the gaze bias in preference decisions. Furthermore, unlike the prediction of the model, preexposure did not affect the likelihood of an item being chosen, and the pattern of looking behavior in preference decisions and in a non-preference control task was remarkably similar. Implications of the present findings for multistage models of decision making are discussed.

  9. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    PubMed Central

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457

  10. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention.

    PubMed

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2017-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals.

  11. Subliminal gaze cues increase preference levels for items in the gaze direction.

    PubMed

    Mitsuda, Takashi; Masaki, Syuta

    2017-08-29

    Another individual's gaze automatically shifts an observer's attention to a location. This reflexive response occurs even when the gaze is presented subliminally over a short period. Another's gaze also increases the preference level for items in the gaze direction; however, it was previously unclear if this effect occurs when the gaze is presented subliminally. This study showed that the preference levels for nonsense figures looked at by a subliminal gaze were significantly greater than those for items that were subliminally looked away from (Task 1). Targets that were looked at by a subliminal gaze were detected faster (Task 2); however, the participants were unable to detect the gaze direction (Task 3). These results indicate that another individual's gaze automatically increases the preference levels for items in the gaze direction without conscious awareness.

  12. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    PubMed

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.

  13. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load

    PubMed Central

    Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information. PMID:27959925

  14. Participation Through Gaze Controlled Computer for Children with Severe Multiple Disabilities.

    PubMed

    Holmqvist, Eva; Derbring, Sandra; Wallin, Sofia

    2017-01-01

    This paper presents work on developing methodology material for use of gaze controlled computers. The target group is families and professionals around children with severe multiple disabilities. The material includes software grids for children at various levels, aimed for communication, leisure and learning and will be available for download.

  15. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    PubMed Central

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most primary form of social contact and it is thought to capture attention. A very early-differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces, as opposed to high-pass or low-pass filtered faces, were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct gaze than to the averted gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  16. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

    PubMed Central

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Because of the restricted field of view of EVES, the need for magnification conflicts with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television) systems (conditions 2 and 3). In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable used to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof-of-concept study suggests that the principles underlying the gaze-controlled enhancement system might be further developed and fruitfully incorporated into different kinds of EVES for low vision reading. PMID:28380004
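
    The central manipulation here is the gaze-contingent artificial scotoma: on every frame a mask is redrawn at the current gaze position. A minimal sketch of that per-frame masking step is shown below; the 10° size comes from the abstract, while the pixel geometry, the grey fill, and the get_gaze_px()/render/display calls are hypothetical placeholders.

```python
import numpy as np

# Sketch of a gaze-contingent artificial scotoma: every frame, a disc of fixed
# angular size is blanked out at the current gaze position.
SCOTOMA_RADIUS_DEG = 5.0     # half of the 10-degree scotoma diameter from the abstract
PIXELS_PER_DEG = 35.0        # assumed display geometry

def scotoma_mask(frame, gaze_xy_px):
    """Return a copy of the frame with a grey disc drawn at the gaze point."""
    h, w = frame.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    r_px = SCOTOMA_RADIUS_DEG * PIXELS_PER_DEG
    inside = (xx - gaze_xy_px[0]) ** 2 + (yy - gaze_xy_px[1]) ** 2 < r_px ** 2
    out = frame.copy()
    out[inside] = 128        # grey mask over the fovea-simulating region
    return out

# Per-frame loop (pseudocode level; get_gaze_px(), render_text_page() and
# display() are hypothetical stand-ins for the tracker and drawing backend):
# while reading_trial_running():
#     frame = render_text_page()
#     display(scotoma_mask(frame, get_gaze_px()))
```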

  17. Quality control of 3D Geological Models using an Attention Model based on Gaze

    NASA Astrophysics Data System (ADS)

    Busschers, Freek S.; van Maanen, Peter-Paul; Brouwer, Anne-Marie

    2014-05-01

    The Geological Survey of the Netherlands (GSN) produces 3D stochastic geological models of the upper 50 meters of the Dutch subsurface. The voxel models are regarded essential in answering subsurface questions on, for example, aggregate resources, groundwater flow, land subsidence studies and the planning of large-scale infrastructural works such as tunnels. GeoTOP is the most recent and detailed generation of 3D voxel models. This model describes 3D lithological variability up to a depth of 50 m using voxels of 100*100*0.5m. Due to the expected increase in data-flow, model output and user demands, the development of (semi-)automated quality control systems is getting more important in the near future. Besides numerical control systems, capturing model errors as seen from the expert geologist viewpoint is of increasing interest. We envision the use of eye gaze to support and speed up detection of errors in the geological voxel models. As a first step in this direction we explore gaze behavior of 12 geological experts from the GSN during quality control of part of the GeoTOP 3D geological model using an eye-tracker. Gaze is used as input of an attention model that results in 'attended areas' for each individual examined image of the GeoTOP model and each individual expert. We compared these attended areas to errors as marked by the experts using a mouse. Results show that: 1) attended areas as determined from experts' gaze data largely match with GeoTOP errors as indicated by the experts using a mouse, and 2) a substantial part of the match can be reached using only gaze data from the first few seconds of the time geologists spend to search for errors. These results open up the possibility of faster GeoTOP model control using gaze if geologists accept a small decrease of error detection accuracy. Attention data may also be used to make independent comparisons between different geologists varying in focus and expertise. This would facilitate a more effective use of
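
    As an illustration of how "attended areas" can be derived from raw gaze samples and compared with expert-marked errors, the sketch below smooths fixation counts into a density map, thresholds it, and scores its overlap with an error mask. The Gaussian smoothing and the coverage threshold are assumptions for illustration, not the attention model used by the authors.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def attended_area(gaze_xy, image_shape, sigma_px=40, coverage=0.1):
    """Binary map covering the top `coverage` fraction of a smoothed fixation density."""
    density = np.zeros(image_shape)
    for x, y in gaze_xy:
        density[int(y), int(x)] += 1
    density = gaussian_filter(density, sigma=sigma_px)
    thresh = np.quantile(density, 1 - coverage)
    return density >= thresh

def overlap(attended, error_mask):
    """Fraction of expert-marked error pixels that fall inside the attended area."""
    return (attended & error_mask).sum() / max(error_mask.sum(), 1)

if __name__ == "__main__":
    # Toy gaze samples (x, y) and a toy error mask; all values are invented.
    gaze = [(120, 80), (122, 84), (300, 200), (305, 198)]
    errors = np.zeros((400, 600), dtype=bool)
    errors[190:210, 290:320] = True
    att = attended_area(gaze, (400, 600))
    print(f"error pixels inside attended area: {overlap(att, errors):.2f}")
```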

  18. Gaze perception in social anxiety and social anxiety disorder

    PubMed Central

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S.

    2013-01-01

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed. PMID:24379776

  19. Reading as Active Sensing: A Computational Model of Gaze Planning in Word Recognition

    PubMed Central

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting. PMID:20577589

  20. Reading as active sensing: a computational model of gaze planning in word recognition.

    PubMed

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.
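
    The active sensing strategy amounts to choosing the next character position to fixate based on the model's current lexical predictions. A toy sketch of one such criterion (fixate the position whose identity is most uncertain among words still consistent with what has been seen) is given below; the miniature lexicon and the entropy rule are illustrative assumptions rather than the paper's network.

```python
import math
from collections import defaultdict

# Toy word list standing in for the learned lexicon (illustrative only).
LEXICON = ["cane", "casa", "cosa", "come", "care"]

def consistent(word, observed):
    """True if the word agrees with every character fixated so far."""
    return all(word[i] == c for i, c in observed.items() if i < len(word))

def next_fixation(observed, word_len=4):
    """Pick the unfixated position with maximal character entropy over the
    remaining lexical candidates (an active-sensing selection criterion)."""
    candidates = [w for w in LEXICON if len(w) == word_len and consistent(w, observed)]
    best_pos, best_entropy = None, -1.0
    for pos in range(word_len):
        if pos in observed:
            continue
        counts = defaultdict(int)
        for w in candidates:
            counts[w[pos]] += 1
        total = sum(counts.values())
        h = -sum(c / total * math.log2(c / total) for c in counts.values())
        if h > best_entropy:
            best_pos, best_entropy = pos, h
    return best_pos

print(next_fixation({0: "c"}))   # which character position to fixate next
```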

  1. Attention to gaze and emotion in schizophrenia.

    PubMed

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  2. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance, because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria, based on local map confidence, waypoint, self-localization, and obstacles, are defined along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for the criteria, a fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
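
    As a rough sketch of how a fuzzy integral can aggregate the four partial evaluations into one score per candidate gaze direction, the code below uses a Choquet integral with a capped additive fuzzy measure. The criterion names follow the abstract; the measure values, partial scores, and the specific integral variant are assumptions, since the paper does not state them here.

```python
CRITERIA = ["map_confidence", "waypoint", "self_localization", "obstacles"]

def fuzzy_measure(subset):
    """Importance of a coalition of criteria (monotone, capped at 1); the
    individual degrees of consideration below are assumed values."""
    weights = {"map_confidence": 0.3, "waypoint": 0.25,
               "self_localization": 0.25, "obstacles": 0.35}
    return min(1.0, sum(weights[c] for c in subset))

def choquet(scores):
    """Choquet integral of the partial evaluations w.r.t. the fuzzy measure."""
    order = sorted(CRITERIA, key=lambda c: scores[c])   # ascending partial scores
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        coalition = frozenset(order[i:])                # criteria satisfied at least this much
        total += (scores[c] - prev) * fuzzy_measure(coalition)
        prev = scores[c]
    return total

# Hypothetical partial evaluations for two candidate gaze directions.
candidates = {
    "look_left":  {"map_confidence": 0.2, "waypoint": 0.9, "self_localization": 0.4, "obstacles": 0.3},
    "look_ahead": {"map_confidence": 0.6, "waypoint": 0.7, "self_localization": 0.5, "obstacles": 0.8},
}
best = max(candidates, key=lambda k: choquet(candidates[k]))
print(best)
```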

  3. Effects of galvanic skin response feedback on user experience in gaze-controlled gaming: A pilot study.

    PubMed

    Larradet, Fanny; Barresi, Giacinto; Mattos, Leonardo S

    2017-07-01

    Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload - using gaze-based controls for an extensive period of time can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to its aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in the perceived session time duration, with the latter being shorter for subjects in BF than for the ones in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of user's engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.

  4. Speaker gaze increases information coupling between infant and adult brains

    PubMed Central

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah

    2017-01-01

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers’ and listeners’ neural activity. However, it is not known whether similar neural contingencies exist within adult–infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult–infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants’ neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult–infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. PMID:29183980

  5. Training for eye contact modulates gaze following in dogs.

    PubMed

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  6. Reduction in Dynamic Visual Acuity Reveals Gaze Control Changes Following Spaceflight

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Miller, Chris; Lawrence, Emily L.; Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2010-01-01

    INTRODUCTION: Exposure to microgravity causes adaptive changes in eye-head coordination that can lead to altered gaze control. This could affect postflight visual acuity during head and body motion. The goal of this study was to characterize changes in dynamic visual acuity after long-duration spaceflight. METHODS: Dynamic Visual Acuity (DVA) data from 14 astro/cosmonauts were collected after long-duration (6 months) spaceflight. The difference in acuity between seated and walking conditions provided a metric of change in the subjects' ability to maintain gaze fixation during self-motion. In each condition, a psychophysical threshold detection algorithm was used to display Landolt ring optotypes at a size that was near each subject's acuity threshold. Verbal responses regarding the orientation of the gap were recorded as the optotypes appeared sequentially on a computer display 4 meters away. During the walking trials, subjects walked at 6.4 km/h on a motorized treadmill. RESULTS: A decrement in mean postflight DVA was found, with mean values returning to baseline within 1 week. The population mean showed a consistent improvement in DVA performance, but it was accompanied by high variability. A closer examination of the individual subjects' recovery curves revealed that many did not follow a pattern of continuous improvement with each passing day. When adjusted on the basis of previous long-duration flight experience, the population mean shows a "bounce" in the re-adaptation curve. CONCLUSION: Gaze control during self-motion is altered following long-duration spaceflight and changes in postflight DVA performance indicate that vestibular re-adaptation may be more complex than a gradual return to normal.
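
    The DVA metric is the difference between acuity thresholds measured while seated and while walking, with each threshold found by an adaptive psychophysical procedure. The sketch below uses a plain 1-up/1-down staircase on simulated psychometric functions purely to illustrate that computation; the study's actual adaptive algorithm and parameters are not given here, and every number below is invented.

```python
import random

def staircase_threshold(p_correct_at, start=0.6, step=0.05, n_trials=40):
    """Estimate the optotype size (logMAR) at which responses flip between
    correct and incorrect, using a simple 1-up/1-down staircase."""
    size = start
    reversals, last_correct = [], None
    for _ in range(n_trials):
        correct = random.random() < p_correct_at(size)
        if last_correct is not None and correct != last_correct:
            reversals.append(size)
        size += -step if correct else step   # shrink after correct, enlarge after errors
        last_correct = correct
    tail = reversals[-6:]
    return sum(tail) / max(len(tail), 1)

# Hypothetical psychometric functions (probability correct vs. optotype size)
# for the seated and walking conditions.
seated = staircase_threshold(lambda s: 1 / (1 + pow(10, (0.0 - s) / 0.1)))
walking = staircase_threshold(lambda s: 1 / (1 + pow(10, (0.15 - s) / 0.1)))
print(f"DVA decrement (walking - seated): {walking - seated:.2f} logMAR")
```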

  7. Speaker gaze increases information coupling between infant and adult brains.

    PubMed

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.

  8. Gaze stability, dynamic balance and participation deficits in people with multiple sclerosis at fall-risk.

    PubMed

    Garg, Hina; Dibble, Leland E; Schubert, Michael C; Sibthorp, Jim; Foreman, K Bo; Gappmaier, Eduard

    2018-05-05

    Despite the common complaints of dizziness and the demyelination of afferent or efferent pathways to and from the vestibular nuclei, which may adversely affect the angular Vestibulo-Ocular Reflex (aVOR) and vestibulo-spinal function in persons with Multiple Sclerosis (PwMS), few studies have examined gaze and dynamic balance function in PwMS. The aims were to: 1) determine the differences in gaze stability, dynamic balance and participation measures between PwMS and controls, and 2) examine the relationships between gaze stability, dynamic balance and participation. Nineteen ambulatory PwMS at fall-risk and 14 age-matched controls were recruited. Outcomes included (a) gaze stability [angular Vestibulo-Ocular Reflex (aVOR) gain (ratio of eye to head velocity); number of Compensatory Saccades (CS) per head rotation; CS latency; gaze position error; Coefficient of Variation (CV) of aVOR gain], (b) dynamic balance [Functional Gait Assessment, FGA; four square step test], and (c) participation [dizziness handicap inventory; activities-specific balance confidence scale]. Separate independent t-tests and Pearson's correlations were calculated. PwMS were aged 53 ± 11.7 years and had 4.2 ± 3.3 falls/year. PwMS demonstrated significant (p<0.05) impairments in gaze stability, dynamic balance and participation measures compared to controls. CV of aVOR gain and CS latency were significantly correlated with FGA. Deficits and correlations across a spectrum of disability measures highlight the relevance of gaze and dynamic balance assessment in PwMS. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
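
    For readers unfamiliar with the gaze-stability outcomes listed above, the sketch below computes aVOR gain as the ratio of eye to head velocity, its coefficient of variation across head impulses, and compensatory saccades per head rotation, all on synthetic traces. The trace shapes, the saccade counts, and every numeric value are illustrative assumptions.

```python
import numpy as np

def avor_gain(eye_vel, head_vel):
    """Gain as the ratio of (desaccaded) eye to head velocity magnitude."""
    return np.sum(np.abs(eye_vel)) / np.sum(np.abs(head_vel))

rng = np.random.default_rng(2)
gains, cs_counts = [], []
for _ in range(20):                                   # 20 hypothetical head impulses
    head = np.sin(np.linspace(0, np.pi, 100)) * 150   # deg/s head impulse profile
    eye = -0.85 * head + rng.normal(0, 3, 100)        # slightly hypometric slow phase
    gains.append(avor_gain(eye, head))
    cs_counts.append(rng.poisson(1.2))                # compensatory saccades detected per impulse

gains = np.array(gains)
print(f"mean aVOR gain:        {gains.mean():.2f}")
print(f"CV of aVOR gain:       {gains.std() / gains.mean():.2f}")
print(f"CS per head rotation:  {np.mean(cs_counts):.2f}")
```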

  9. Home-Based Computer Gaming in Vestibular Rehabilitation of Gaze and Balance Impairment.

    PubMed

    Szturm, Tony; Reimer, Karen M; Hochman, Jordan

    2015-06-01

    Disease or damage of the vestibular sense organs causes a range of distressing symptoms and functional problems that can include loss of balance, gaze instability, disorientation, and dizziness. A novel computer-based rehabilitation system with a therapeutic gaming application has been developed. This method allows different gaze and head movement exercises to be coupled to a wide range of inexpensive, commercial computer games. It can be used in standing, and thus graded balance demands using a sponge pad can be incorporated into the program. A pre- and postintervention case series study was conducted with nine adults diagnosed with peripheral vestibular dysfunction who received a 12-week home rehabilitation program. The feasibility and usability of the home computer-based therapeutic program were established. Study findings revealed that using head rotation to interact with computer games, when coupled to demanding balance conditions, resulted in significant improvements in standing balance, dynamic visual acuity, gaze control, and walking performance. Perception of dizziness as measured by the Dizziness Handicap Inventory also decreased significantly. These preliminary findings provide support that a low-cost home game-based exercise program is well suited to train standing balance and gaze control (with active and passive head motion).

  10. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate at which area of a display one or more people are looking at a time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead-mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.

  11. Electrical stimulation of rhesus monkey nucleus reticularis gigantocellularis. II. Effects on metrics and kinematics of ongoing gaze shifts to visual targets.

    PubMed

    Freedman, Edward G; Quessy, Stephan

    2004-06-01

    Saccade kinematics are altered by ongoing head movements. The hypothesis that a head movement command signal, proportional to head velocity, transiently reduces the gain of the saccadic burst generator (Freedman 2001, Biol Cybern 84:453-462) can account for this observation. Using electrical stimulation of the rhesus monkey nucleus reticularis gigantocellularis (NRG) to alter the head contribution to ongoing gaze shifts, two critical predictions of this gaze control hypothesis were tested. First, this hypothesis predicts that activation of the head command pathway will cause a transient reduction in the gain of the saccadic burst generator. This should alter saccade kinematics by initially reducing velocity without altering saccade amplitude. Second, because this hypothesis does not assume that gaze amplitude is controlled via feedback, the added head contribution (produced by NRG stimulation on the side ipsilateral to the direction of an ongoing gaze shift) should lead to hypermetric gaze shifts. At every stimulation site tested, saccade kinematics were systematically altered in a way that was consistent with transient reduction of the gain of the saccadic burst generator. In addition, gaze shifts produced during NRG stimulation were hypermetric compared with control movements. For example, when targets were briefly flashed 30 degrees from an initial fixation location, gaze shifts during NRG stimulation were on average 140% larger than control movements. These data are consistent with the predictions of the tested hypothesis, and may be problematic for gaze control models that rely on feedback control of gaze amplitude, as well as for models that do not posit an interaction between head commands and the saccade burst generator.
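
    The hypothesis under test (a head-velocity signal transiently lowering the gain of the saccadic burst generator) predicts slower but equally large saccades during head movement. The toy local-feedback simulation below reproduces that qualitative prediction; the burst-generator form and every parameter value are invented for illustration and are not the authors' model.

```python
def simulate_saccade(amplitude_deg, head_vel_fn, k=0.004, b=25.0, dt=0.001, t_max=0.6):
    """Local-feedback burst generator whose gain is scaled by (1 - k * head velocity);
    the loop runs until the remaining motor error is nulled."""
    eye_pos, t = 0.0, 0.0
    while t < t_max:
        motor_error = amplitude_deg - eye_pos
        if motor_error <= 0.1:
            break
        gain = max(0.0, 1.0 - k * head_vel_fn(t))   # transient gain reduction
        eye_vel = gain * b * motor_error            # simplistic burst: velocity ~ error
        eye_pos += eye_vel * dt
        t += dt
    return eye_pos, t

no_head = simulate_saccade(30.0, lambda t: 0.0)
with_head = simulate_saccade(30.0, lambda t: 100.0)   # 100 deg/s ongoing head movement
print("amplitude, duration without head movement:", no_head)
print("amplitude, duration with head movement:   ", with_head)
# Both runs reach ~30 deg, but the second takes longer (lower eye velocity),
# i.e. velocity is reduced without a change in saccade amplitude.
```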

  12. Closed-Loop Hybrid Gaze Brain-Machine Interface Based Robotic Arm Control with Augmented Reality Feedback

    PubMed Central

    Zeng, Hong; Wang, Yanxin; Wu, Changcheng; Song, Aiguo; Liu, Jia; Ji, Peng; Xu, Baoguo; Zhu, Lifeng; Li, Huijun; Wen, Pengcheng

    2017-01-01

    A brain-machine interface (BMI) can be used to control a robotic arm to assist people with paralysis in performing activities of daily living. However, it is still a complex task for BMI users to control the process of grasping and lifting objects with the robotic arm. It is hard to achieve high efficiency and accuracy even after extensive training. One important reason is the lack of sufficient feedback information for the user to perform closed-loop control. In this study, we proposed a method of augmented reality (AR) guiding assistance to provide enhanced visual feedback to the user for closed-loop control with a hybrid Gaze-BMI, which combines an electroencephalography (EEG)-based BMI and eye tracking for an intuitive and effective control of the robotic arm. Experiments on object manipulation tasks, while avoiding an obstacle in the workspace, were designed to evaluate the performance of our method for controlling the robotic arm. According to the experimental results obtained from eight subjects, the advantages of the proposed closed-loop system (with AR feedback) over the open-loop system (with visual inspection only) have been verified. The number of trigger commands used for controlling the robotic arm to grasp and lift the objects with AR feedback was reduced significantly, and the height gaps of the gripper in the lifting process decreased by more than 50% compared to those trials with normal visual inspection only. The results reveal that the hybrid Gaze-BMI user can benefit from the information provided by the AR interface, improving the efficiency and reducing the cognitive load during the grasping and lifting processes. PMID:29163123

  13. Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts

    PubMed Central

    Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John

    2012-01-01

    How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883

  14. Elevated amygdala response to faces and gaze aversion in autism spectrum disorder.

    PubMed

    Tottenham, Nim; Hertzig, Margaret E; Gillespie-Lynch, Kristen; Gilhooly, Tara; Millner, Alexander J; Casey, B J

    2014-01-01

    Autism spectrum disorders (ASD) are often associated with impairments in judgment of facial expressions. This impairment is often accompanied by diminished eye contact and atypical amygdala responses to face stimuli. The current study used a within-subjects design to examine the effects of natural viewing and an experimental eye-gaze manipulation on amygdala responses to faces. Individuals with ASD showed less gaze toward the eye region of faces relative to a control group. Among individuals with ASD, reduced eye gaze was associated with higher threat ratings of neutral faces. Amygdala signal was elevated in the ASD group relative to controls. This elevated response was further potentiated by experimentally manipulating gaze to the eye region. Potentiation by the gaze manipulation was largest for those individuals who exhibited the least amount of naturally occurring gaze toward the eye region and was associated with their subjective threat ratings. Effects were largest for neutral faces, highlighting the importance of examining neutral faces in the pathophysiology of autism and questioning their use as control stimuli with this population. Overall, our findings provide support for the notion that gaze direction modulates affective response to faces in ASD.

  15. What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues

    PubMed Central

    Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann J.

    2014-01-01

    For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes. PMID:24722348

  16. New perspectives in gaze sensitivity research.

    PubMed

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  17. Culture, gaze and the neural processing of fear expressions

    PubMed Central

    Franklin, Robert G.; Rule, Nicholas O.; Freeman, Jonathan B.; Kveraga, Kestutis; Hadjikhani, Nouchine; Yoshikawa, Sakiko; Ambady, Nalini

    2010-01-01

    The direction of others’ eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direct- versus averted-fear expressions, finding clear evidence for a culturally determined role of gaze in the processing of fear. Greater neural responsivity was apparent to averted- versus direct-gaze fear in several regions related to face and emotion processing, including bilateral amygdalae, when posed on same-culture faces, whereas greater response to direct- versus averted-gaze fear was apparent in these same regions when posed on other-culture faces. We also found preliminary evidence for intercultural variation including differential responses across participants to Japanese versus US Caucasian stimuli, and to a lesser degree differences in how Japanese and US Caucasian participants responded to these stimuli. These findings reveal a meaningful role of culture in the processing of eye gaze and emotion, and highlight their interactive influences in neural processing. PMID:20019073

  18. Gaze stabilization in chronic vestibular-loss and in cerebellar ataxia: interactions of feedforward and sensory feedback mechanisms.

    PubMed

    Sağlam, M; Lehnen, N

    2014-01-01

    During gaze shifts, humans can use visual, vestibular, and proprioceptive feedback, as well as feedforward mechanisms, for stabilization against active and passive head movements. The contributions of feedforward and sensory feedback control, and the role of the cerebellum, are still under debate. To quantify these contributions, we increased the head moment of inertia in three groups (ten healthy subjects, five chronic vestibular-loss patients and nine cerebellar-ataxia patients) while they performed large gaze shifts to flashed targets in darkness. This induces undesired head oscillations. Consequently, both active (desired) and passive (undesired) head movements had to be compensated for to stabilize gaze. All groups compensated for active and passive head movements, with vestibular-loss patients compensating less than the other groups (P < 0.001; passive/active compensatory gains: vestibular-loss 0.23 ± 0.09/0.43 ± 0.12, healthy 0.80 ± 0.17/0.83 ± 0.15, cerebellar-ataxia 0.68 ± 0.17/0.77 ± 0.30, mean ± SD). The ratio of passive to active compensation gains was smaller than one in vestibular-loss patients (0.54 ± 0.10, P = 0.001). Healthy subjects and cerebellar-ataxia patients did not differ in active or passive compensation. In summary, vestibular-loss patients stabilize gaze better against active than against passive head movements. Therefore, feedforward mechanisms contribute substantially to gaze stabilization; proprioception alone is not sufficient (gain 0.2). Stabilization against active and passive head movements was not impaired in our cerebellar-ataxia patients.
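
    One plausible way to compute a compensation gain of the kind reported here, sketched below on synthetic traces (illustrative only, not the authors' exact analysis), is to regress gaze velocity on head velocity during the oscillation: the gain is one minus the regression slope, so perfect stabilization gives 1 and no compensation gives 0.

      import numpy as np

      def compensation_gain(head_vel, gaze_vel):
          """Fraction of head velocity cancelled by compensatory eye movement.

          Perfect stabilization (gaze velocity ~ 0 during head movement) gives gain 1;
          no compensation (gaze moves with the head) gives gain 0.
          Illustrative definition, not the authors' exact analysis.
          """
          slope = np.polyfit(head_vel, gaze_vel, 1)[0]   # d(gaze)/d(head)
          return 1.0 - slope

      # synthetic traces: 100 deg/s head oscillation, 80% of it compensated
      t = np.linspace(0, 1, 1000)
      head_vel = 100 * np.sin(2 * np.pi * 3 * t)
      gaze_vel = 0.2 * head_vel + np.random.normal(0, 2, t.size)   # residual gaze slip
      print("compensation gain: %.2f" % compensation_gain(head_vel, gaze_vel))  # ~0.80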

  19. Head eye co-ordination and gaze stability in subjects with persistent whiplash associated disorders.

    PubMed

    Treleaven, Julia; Jull, Gwendolen; Grip, Helena

    2011-06-01

    Symptoms of dizziness, unsteadiness and visual disturbances are frequent complaints in persons with persistent whiplash associated disorders. This study investigated eye-head co-ordination and gaze stability in subjects with persistent whiplash (n = 20) and asymptomatic controls (n = 20). Wireless motion sensors and electro-oculography were used to measure head rotation during unconstrained head movement, head rotation during a gaze stability task, and sequential head and eye movements. Ten control subjects participated in a repeatability study (two occasions one week apart). Between-day repeatability was acceptable (ICC > 0.6) for most measures. Compared to the control group, the whiplash group showed significantly smaller maximal eye angle to the left, reduced range of head movement during the gaze stability task, and decreased velocity of head movement in the head-eye co-ordination and gaze stability tasks (p < 0.01). In the whiplash group, there were significant correlations (r > 0.55) between both unrestrained neck movement and neck pain, on the one hand, and head movement range and velocity, on the other. Deficits in gaze stability and head-eye co-ordination may be related to disturbed reflex activity associated with decreased head range of motion and/or neck pain. Further research is required to explore the mechanisms behind these deficits, the nature of changes over time and the tests' ability to measure change in response to rehabilitation. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  20. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  1. Visual–Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-01-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118

  2. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  3. Aberrant face and gaze habituation in fragile x syndrome.

    PubMed

    Bruno, Jennifer Lynn; Garrett, Amy S; Quintin, Eve-Marie; Mazaika, Paul K; Reiss, Allan L

    2014-10-01

    The authors sought to investigate neural system habituation to face and eye gaze in fragile X syndrome, a disorder characterized by eye-gaze aversion, among other social and cognitive deficits. Participants (ages 15-25 years) were 30 individuals with fragile X syndrome (females, N=14) and a comparison group of 25 individuals without fragile X syndrome (females, N=12) matched for general cognitive ability and autism symptoms. Functional MRI (fMRI) was used to assess brain activation during a gaze habituation task. Participants viewed repeated presentations of four unique faces with either direct or averted eye gaze and judged the direction of eye gaze. Four participants (all male; three with fragile X syndrome) were excluded because of excessive head motion during fMRI scanning. Behavioral performance did not differ between the groups. Less neural habituation (and significant sensitization) in the fragile X syndrome group was found in the cingulate gyrus, fusiform gyrus, and frontal cortex in response to all faces (direct and averted gaze). Left fusiform habituation in female participants was directly correlated with higher, more typical levels of the fragile X mental retardation protein and inversely correlated with autism symptoms. There was no evidence for differential habituation to direct gaze compared with averted gaze within or between groups. Impaired habituation and accentuated sensitization in response to face/eye gaze was distributed across multiple levels of neural processing. These results could help inform interventions, such as desensitization therapy, which may help patients with fragile X syndrome modulate anxiety and arousal associated with eye gaze, thereby improving social functioning.

  4. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    PubMed

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target ("matching" goal-directed gaze shift), congruent or incongruent with the instructed direction, (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift), or (c) an empty spatial location (no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions, especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset, indicating automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  5. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways. First, we construct a new control device for the illuminator, which includes four illuminators that are positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image that is captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape that is defined by the three SR positions, and the gaze position is then calculated. Experimental results showed that the average gaze detection error for 20 persons was about 0.70° and the processing time was 63.72 ms per frame. PMID:24473283
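
    The fourth-SR estimation step lends itself to a small geometric sketch: with illuminators at the corners of the monitor, the four corneal SRs form an approximate parallelogram, so a missing corner can be recovered from the other three (hypothetical pixel coordinates below; the paper's exact procedure may differ in detail).

      import numpy as np

      def estimate_missing_sr(sr_a, sr_b, sr_c):
          """Estimate the fourth specular reflection of a parallelogram A-B-C-D.

          The vertices are assumed to be given in order around the parallelogram,
          so the diagonals share a midpoint: A + C = B + D  =>  D = A + C - B.
          (Hypothetical coordinates; the published procedure may differ in detail.)
          """
          return np.asarray(sr_a) + np.asarray(sr_c) - np.asarray(sr_b)

      # three detected corneal reflections (pixel coordinates), one occluded
      sr_a, sr_b, sr_c = [100.0, 80.0], [140.0, 82.0], [138.0, 112.0]
      print("estimated 4th SR:", estimate_missing_sr(sr_a, sr_b, sr_c))   # ~[98, 110]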

  6. Attention and Gaze Control in Picture Naming, Word Reading, and Word Categorizing

    ERIC Educational Resources Information Center

    Roelofs, Ardi

    2007-01-01

    The trigger for shifting gaze between stimuli requiring vocal and manual responses was examined. Participants were presented with picture-word stimuli and left- or right-pointing arrows. They vocally named the picture (Experiment 1), read the word (Experiment 2), or categorized the word (Experiment 3) and shifted their gaze to the arrow to…

  7. Gaze holding deficits discriminate early from late onset cerebellar degeneration.

    PubMed

    Tarnutzer, Alexander A; Weber, K P; Schuknecht, B; Straumann, D; Marti, S; Bertolini, G

    2015-08-01

    The vestibulo-cerebellum calibrates the output of the inherently leaky brainstem neural velocity-to-position integrator to provide stable gaze holding. In healthy humans, small-amplitude centrifugal nystagmus is present at extreme gaze-angles, with a non-linear relationship between eye-drift velocity and eye eccentricity. In cerebellar degeneration this calibration is impaired, resulting in pathological gaze-evoked nystagmus (GEN). For cerebellar dysfunction, increased eye drift may be present at any gaze angle (reflecting pure scaling of the eye drift found in controls) or restricted to far-lateral gaze (reflecting changes in shape of the non-linear relationship), and the resulting eye-drift patterns could be related to specific disorders. We recorded horizontal eye positions in 21 patients with cerebellar neurodegeneration (gaze-angle = ±40°) and clinically confirmed GEN. Eye-drift velocity, linearity and symmetry of drift were determined. MR-images were assessed for cerebellar atrophy. In our patients, the relation between eye-drift velocity and gaze eccentricity was non-linear, yielding (compared to controls) significant GEN at gaze-eccentricities ≥20°. Pure scaling was most frequently observed (n = 10/18), followed by pure shape-changing (n = 4/18) and a mixed pattern (n = 4/18). Pure shape-changing patients were significantly (p = 0.001) younger at disease-onset compared to pure scaling patients. Atrophy centered around the superior/dorsal vermis, flocculus/paraflocculus and dentate nucleus and did not correlate with the specific drift behaviors observed. Eye drift in cerebellar degeneration varies in magnitude; however, it retains its non-linear properties. With different drift patterns being linked to age at disease-onset, we propose that the gaze-holding pattern (scaling vs. shape-changing) may discriminate early- from late-onset cerebellar degeneration. Whether this allows a distinction among specific cerebellar disorders remains to be determined.

  8. Gaze shifts during dual-tasking stair descent.

    PubMed

    Miyasike-daSilva, Veronica; McIlroy, William E

    2016-11-01

    To investigate the role of vision in stair locomotion, young adults descended a seven-step staircase during unrestricted walking (CONTROL), and while performing a concurrent visual reaction time (RT) task displayed on a monitor. The monitor was located at either 3.5 m (HIGH) or 0.5 m (LOW) above ground level at the end of the stairway, which either restricted (HIGH) or facilitated (LOW) the view of the stairs in the lower field of view as participants walked downstairs. Downward gaze shifts (recorded with an eye tracker) and gait speed were significantly reduced in HIGH and LOW compared with CONTROL. Gaze and locomotor behaviour were not different between HIGH and LOW. However, inter-individual variability increased in HIGH, in which participants combined different response characteristics including slower walking, handrail use, downward gaze, and/or increasing RTs. The fastest RTs occurred in the midsteps (non-transition steps). While gait and visual task performance were not statistically different prior to the top and bottom transition steps, gaze behaviour and RT were more variable prior to transition steps in HIGH. This study demonstrated that, in the presence of a visual task, people do not look down as often when walking downstairs and require only minimal adjustments provided that the view of the stairs is available in the lower field of view. The middle of the stairs seems to demand less of executive function, whereas visual attention appears to be required to detect the last transition via gaze shifts or peripheral vision.

  9. Where We Look When We Drive with or without Active Steering Wheel Control

    PubMed Central

    Mars, Franck; Navarro, Jordan

    2012-01-01

    Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which often happens to be situated in the vicinity of the tangent point. The present study contributed to this debate through analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination control between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving. PMID:22928043

  10. Automatic attentional orienting to other people's gaze in schizophrenia.

    PubMed

    Langdon, Robyn; Seymour, Kiley; Williams, Tracey; Ward, Philip B

    2017-08-01

    Explicit tests of social cognition have revealed pervasive deficits in schizophrenia. Less is known of automatic social cognition in schizophrenia. We used a spatial orienting task to investigate automatic shifts of attention cued by another person's eye gaze in 29 patients and 28 controls. Central photographic images of a face with eyes shifted left or right, or looking straight ahead, preceded targets that appeared left or right of the cue. To examine automatic effects, cue direction was non-predictive of target location. Cue-target intervals were 100, 300, and 800 ms. In non-social control trials, arrows replaced eye-gaze cues. Both groups showed automatic attentional orienting indexed by faster reaction times (RTs) when arrows were congruent with target location across all cue-target intervals. Similar congruency effects were seen for eye-shift cues at 300 and 800 ms intervals, but patients showed significantly larger congruency effects at 800 ms, which were driven by delayed responses to incongruent target locations. At short 100-ms cue-target intervals, neither group showed faster RTs for congruent than for incongruent eye-shift cues, but patients were significantly slower to detect targets after direct-gaze cues. These findings conflict with previous studies using schematic line drawings of eye-shifts that have found automatic attentional orienting to be reduced in schizophrenia. Instead, our data indicate that patients display abnormalities in responding to gaze direction at various stages of gaze processing, reflected by a stronger preferential capture of attention by another person's direct eye contact at initial stages of gaze processing and difficulties disengaging from a gazed-at location once shared attention is established.

  11. Decline of vertical gaze and convergence with aging.

    PubMed

    Oguro, Hiroaki; Okada, Kazunori; Suyama, Nobuo; Yamashita, Kazuya; Yamaguchi, Shuhei; Kobayashi, Shotai

    2004-01-01

    Disturbance of vertical eye movement and ocular convergence is often observed in elderly people, but little is known about its frequency. The purpose of this study was to investigate age-associated changes in vertical eye movement and convergence in healthy elderly people, using a digital video camera system. We analyzed vertical eye movements and convergence in 113 neurologically normal elderly subjects (mean age 70 years) in comparison with 20 healthy young controls (mean age 32 years). The range of vertical eye movement was analyzed quantitatively and convergence was analyzed qualitatively. In the elderly subjects, the angle of vertical gaze decreased with advancing age and it was significantly smaller than that of the younger subjects. The mean angle of upward gaze was significantly smaller than that of downward gaze for both young and elderly subjects. Upward gaze impairment became apparent in subjects in their 70s, and downward gaze impairment in subjects in their 60s. Disturbance in convergence also increased with advancing age, and was found in 40.7% of the elderly subjects. These findings indicate that the mechanisms of age-related change are different for upward and downward vertical gaze. Digital video camera monitoring was useful for assessing and monitoring eye movements. Copyright 2004 S. Karger AG, Basel

  12. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to help trainees understand the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the differences in gaze points between experts and trainees. In this study, we attempted to understand how trainees can reach the level achieved by experts in viewing CTC. We used an eye gaze point sensing system, Gazefinder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil point and corneal reflection point by dark-pupil eye tracking. This system can provide gaze-point images and data in Excel files. The subjects were radiological technologists who were either experienced or inexperienced in reading CTC. We performed observer studies on the reading of virtual pathology images and examined the observers' image interpretation processes using gaze-point data. Furthermore, we performed frequency analysis of the eye-tracking data using the fast Fourier transform (FFT). The frequency analysis allowed us to understand the differences in gaze points between experts and trainees. The trainee's result contained large amounts of both high-frequency and low-frequency components, whereas both components were relatively low for the expert. Regarding the amount of eye movement in every 0.02-second interval, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved the eyes quickly and scanned wide areas. We can assess the differences in gaze points on CTC between experts and trainees by using the eye gaze point sensing system and frequency analysis. The potential improvements in CTC interpretation for trainees can be evaluated by using gaze-point data.
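
    A minimal sketch of this kind of frequency analysis is shown below on synthetic gaze traces (the sampling rate and signal shapes are assumptions, not the study's data): the gaze-point stream is differentiated into velocities and the spectrum of the velocity signal is inspected with an FFT, so that rapid, wide-ranging scanning shows up as extra high-frequency content.

      import numpy as np

      fs = 50.0                                   # sampling rate of the gaze recorder (Hz), assumed
      t = np.arange(0, 10, 1 / fs)

      # synthetic horizontal gaze traces (deg): slow drift for the "expert",
      # slow drift plus rapid jitter for the "trainee"
      expert  = 5 * np.sin(2 * np.pi * 0.3 * t)
      trainee = 5 * np.sin(2 * np.pi * 0.3 * t) + 2 * np.sin(2 * np.pi * 8 * t)

      def velocity_spectrum(gaze, fs):
          """Gaze velocity (deg/s) and its one-sided amplitude spectrum."""
          vel = np.diff(gaze) * fs
          spectrum = np.abs(np.fft.rfft(vel)) / vel.size
          freqs = np.fft.rfftfreq(vel.size, d=1 / fs)
          return freqs, spectrum

      for label, trace in [("expert", expert), ("trainee", trainee)]:
          freqs, spec = velocity_spectrum(trace, fs)
          high = spec[freqs > 5].sum()            # crude index of high-frequency content
          print(f"{label}: high-frequency content = {high:.1f}")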

  13. Variation in gaze-following between two Asian colobine monkeys.

    PubMed

    Chen, Tao; Gao, Jie; Tan, Jingzhi; Tao, Ruoting; Su, Yanjie

    2017-10-01

    Gaze-following is a basic cognitive ability found in numerous primate and nonprimate species. However, little is known about this ability and its variation in colobine monkeys. We compared gaze-following of two Asian colobines: François' langurs (Trachypithecus francoisi) and golden snub-nosed monkeys (Rhinopithecus roxellana). Although both species live in small polygynous family units, units of the latter form multilevel societies with up to hundreds of individuals. François' langurs (N = 15) were less sensitive to the gaze of a human experimenter than were golden snub-nosed monkeys (N = 12). We then tested the two species using two classic inhibitory control tasks: the cylinder test and the A-not-B test. We found no difference between species in inhibitory control, which called into question the nonsocial explanation for François' langurs' weaker sensitivity to human gaze. These findings are consistent with the social intelligence hypothesis, which predicted that golden snub-nosed monkeys would outperform François' langurs in gaze-following because of the greater size and complexity of their social groups. Furthermore, our results underscore the need for more comparative studies of cognition in colobines, which should provide valuable opportunities to test hypotheses of cognitive evolution.

  14. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.

  15. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  16. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm.

    PubMed

    Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun

    2018-05-19

    Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking, and we proposed a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blinking images and predict a coarse gaze position, and a geometric model is then defined for accurate gaze tracking. Moreover, we proposed using the mean value of gaze samples in the calibration algorithm to resolve pupil center changes caused by nystagmus, so that an individual user only needs to calibrate the system the first time it is used, which makes our system more convenient. Experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and Etracker can provide an average gaze accuracy of 0.53° at a rate of 30-60 Hz.
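
    The calibration idea of averaging gaze samples to suppress nystagmus-driven jitter in the detected pupil centre can be sketched as follows (synthetic samples and magnitudes; illustrative only, not the Etracker implementation).

      import numpy as np

      rng = np.random.default_rng(0)

      # synthetic pupil-centre samples (pixels) collected while the user fixates one
      # calibration point; a small oscillation stands in for nystagmus-driven jitter
      n = 120
      jitter = 3 * np.sin(np.linspace(0, 12 * np.pi, n))
      samples = np.column_stack([200 + jitter + rng.normal(0, 0.5, n),
                                 150 + 0.5 * jitter + rng.normal(0, 0.5, n)])

      calib_point = samples.mean(axis=0)        # mean gaze used as the calibration anchor
      print("single sample:", samples[0].round(1))
      print("mean-gaze calibration point:", calib_point.round(1))   # ~[200, 150]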

  17. The Expressive Gaze Model: Using Gaze to Express Emotion

    DTIC Science & Technology

    2010-07-01

    World of Warcraft or Oblivion, have thousands of computer-controlled nonplayer characters with which users can interact. Producing hand-generated...increasing to the right and the vertical increasing upward. In both cases, 0 degrees is straight ahead. Although the mechanical limits of human eye...to gaze from a target directly in front of her to one 60 degrees to her right, while performing these behaviors in a manner that expressed the de

  18. To Gaze or Not to Gaze: Visual Communication in Eastern Zaire. Sociolinguistic Working Paper Number 87.

    ERIC Educational Resources Information Center

    Blakely, Thomas D.

    The nature of gazing at someone or something, as a form of communication among the Bahemba people in eastern Zaire, is analyzed across a range of situations. Variations of steady gazing, a common eye contact routine, are outlined, including: (1) negative non-gazing or glance routines, especially in situations in which gazing would ordinarily…

  19. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-06-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  20. Eye-gaze and intent: Application in 3D interface control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Goldberg, J.H.

    1993-01-01

    Computer interface control is typically accomplished with an input "device" such as a keyboard, mouse, trackball, etc. An input device translates a user's input actions, such as mouse clicks and key presses, into appropriate computer commands. To control the interface, the user must first convert intent into the syntax of the input device. A more natural means of computer control is possible when the computer can directly infer user intent, without need of intervening input devices. We describe an application of eye-gaze-contingent control of an interactive three-dimensional (3D) user interface. A salient feature of the user interface is natural input, with a heightened impression of controlling the computer directly by the mind. With this interface, input of rotation and translation is intuitive, whereas other abstract features, such as zoom, are more problematic to match with user intent. This paper describes successes with implementation to date, and ongoing efforts to develop a more sophisticated intent inferencing methodology.

  1. Beliefs about human agency influence the neural processing of gaze during joint attention.

    PubMed

    Caruana, Nathan; de Lissa, Peter; McArthur, Genevieve

    2017-04-01

    The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition, n = 19); others believed it was controlled by a computer program ("agent" condition, n = 19). In each trial, participants initiated joint attention in order to direct the character's gaze toward a target. In 50% of trials, the character gazed toward the target (congruent responses), and in 50% of trials the character gazed to a different location (incongruent response). In the avatar condition, the character's incongruent gaze responses generated significantly larger P350 peaks at centro-parietal sites than congruent gaze responses. In the agent condition, the P350 effect was strikingly absent. Left occipitotemporal N170 responses were significantly smaller in the agent condition compared to the avatar condition for both congruent and incongruent gaze shifts. These data suggest that beliefs about human agency may recruit mechanisms that discriminate the social outcome of a gaze shift after approximately 350 ms, and that these mechanisms may modulate the early perceptual processing of gaze. These findings also suggest that the ecologically valid measurement of social cognition may depend upon paradigms that simulate genuine social interactions.

  2. Type of gesture, valence, and gaze modulate the influence of gestures on observer's behaviors

    PubMed Central

    De Stefani, Elisa; Innocenti, Alessandro; Secchi, Claudio; Papa, Veronica; Gentilucci, Maurizio

    2013-01-01

    The present kinematic study aimed to determine whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In three experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects of words corresponding in meaning to the gestures on the same action. In Experiment 1, the investigated variables were type of gesture, valence and actor's gaze. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were either requests or symbolic, and their valence was positive or negative. They were presented by a conspecific who was either blindfolded or not. In control Experiment 2 we searched for effects of gaze alone, and in Experiment 3 for the effects of the same characteristics of words corresponding in meaning to the gestures, presented visually by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar to, but not the same as, those induced by words. We proposed that the signal activated a response that made the actual action faster when the gesture's valence was negative, whereas for request signals and available gaze the response interfered with the actual action more than for symbolic signals and unavailable gaze. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them. PMID:24046742

  3. Perceived Gaze Direction Modulates Neural Processing of Prosocial Decision Making

    PubMed Central

    Sun, Delin; Shao, Robin; Wang, Zhaoxin; Lee, Tatia M. C.

    2018-01-01

    Gaze direction is a common social cue implying potential interpersonal interaction. However, little is known about the neural processing of social decision making influenced by perceived gaze direction. Here, we employed functional magnetic resonance imaging (fMRI) to investigate 27 females while they were engaging in an economic exchange game task during which photos of direct or averted eye gaze were shown. We found that, when averted but not direct gaze was presented, prosocial vs. selfish choices were associated with stronger activations in the right superior temporal gyrus (STG) as well as larger functional couplings between the right STG and the posterior cingulate cortex (PCC). Moreover, stronger activation in the right STG was associated with quicker actions when making a prosocial choice accompanied by averted gaze. The findings suggest that, when the cue implying social contact is absent, the processing of understanding others' intentions and the relationship between self and others is more involved in making prosocial than selfish decisions. These findings could advance our understanding of the roles of subtle cues in influencing prosocial decision making, as well as shedding light on deficient social cue processing and functioning among individuals with autism spectrum disorder (ASD). PMID:29487516

  4. The Malleability of Age-Related Positive Gaze Preferences: Training to Change Gaze and Mood

    PubMed Central

    Isaacowitz, Derek M.; Choi, YoonSun

    2010-01-01

    Older adults show positive gaze preferences, but to what extent are these preferences malleable? Examining the plasticity of age-related gaze preferences may provide a window into their origins. We therefore designed an attentional training procedure to assess the degree to which we could shift gaze and gaze-related mood in both younger and older adults. Participants completed either a positive or negative dot-probe training. Before and after the attentional training, we obtained measures of fixations to negatively-valenced images along with concurrent mood ratings. We found differential malleability of gaze and mood by age: for young adults, negative training resulted in fewer post-training fixations to the most negative areas of the images, whereas positive training appeared more successful in changing older adults’ fixation patterns. Young adults did not differ in their moods as a function of training, whereas older adults in the train negative group had the worst moods after training. Implications for the etiology of age-related positive gaze preferences are considered. PMID:21401229

  5. Responding to Other People's Direct Gaze: Alterations in Gaze Behavior in Infants at Risk for Autism Occur on Very Short Timescales

    ERIC Educational Resources Information Center

    Nyström, Pär; Bölte, Sven; Falck-Ytter, Terje; Achermann, Sheila; Andersson Konke, Linn; Brocki, Karin; Cauvet, Elodie; Gredebäck, Gustaf; Lundin Kleberg, Johan; Nilsson Jobs, Elisabeth; Thorup, Emilia; Zander, Eric

    2017-01-01

    Atypical gaze processing has been reported in children with autism spectrum disorders (ASD). Here we explored how infants at risk for ASD respond behaviorally to others' direct gaze. We assessed 10-month-olds with a sibling with ASD (high risk group; n = 61) and a control group (n = 18) during interaction with an adult. Eye-tracking revealed less…

  6. Gaze leading is associated with liking.

    PubMed

    Grynszpan, Ouriel; Martin, Jean-Claude; Fossati, Philippe

    2017-02-01

    Gaze plays a pivotal role in human communication, especially for coordinating attention. The ability to guide the gaze orientation of others forms the backbone of joint attention. Recent research has raised the possibility that gaze following behaviors could induce liking. The present study seeks to investigate this hypothesis. We designed two physically different human avatars that could follow the gaze of users via eye-tracking technology. In a preliminary experiment, 20 participants assessed the baseline appeal of the two avatars and confirmed that the avatars differed in this respect. In the main experiment, we compared how 19 participants rated the two avatars in terms of pleasantness, trustworthiness and closeness when the avatars were following their gaze versus when the avatar generated gaze movements autonomously. Although the same avatar as in the preliminary experiment was rated more favorably, the pleasantness attributed to the two avatars increased when they followed the gaze of the participants. This outcome provides evidence that gaze following fosters liking independently of the baseline appeal of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. ASB clinical biomechanics award winner 2016: Assessment of gaze stability within 24-48 hours post-concussion.

    PubMed

    Murray, Nicholas G; D'Amico, Nathan R; Powell, Douglas; Mormile, Megan E; Grimes, Katelyn E; Munkasy, Barry A; Gore, Russell K; Reed-Jones, Rebecca J

    2017-05-01

    Approximately 90% of athletes with concussion experience a certain degree of visual system dysfunction immediately post-concussion. Of these abnormalities, gaze stability deficits are denoted as among the most common. Little research quantitatively explores these variables post-concussion. As such, the purpose of this study was to investigate and compare gaze stability between a control group of healthy non-injured athletes and a group of athletes with concussions 24-48 hours post-injury. Ten collegiate NCAA Division I athletes with concussions and ten healthy control collegiate athletes completed two trials of a sport-like antisaccade postural control task, the Wii Fit Soccer Heading Game. During play all participants were instructed to minimize gaze deviations away from a central fixed area. Athletes with concussions were assessed within 24-48 hours post-concussion, while healthy control data were collected during pre-season athletic screening. Raw ocular point-of-gaze coordinates were tracked with a monocular eye tracking device (240 Hz) and motion capture during the postural task to determine the instantaneous gaze coordinates. These data were exported and analyzed using a custom algorithm. Independent t-tests analyzed gaze resultant distance, prosaccade errors, mean vertical velocity, and mean horizontal velocity. Athletes with concussions had significantly greater gaze resultant distance (p=0.006), prosaccade errors (p<0.001), and horizontal velocity (p=0.029) when compared to healthy controls. These data suggest that athletes with concussions had less control of gaze during play of the Wii Fit Soccer Heading Game. This could indicate a gaze stability deficit via potentially reduced cortical inhibition that is present within 24-48 hours post-concussion. Copyright © 2017 Elsevier Ltd. All rights reserved.
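
    The named stability metrics can be illustrated with a small sketch on synthetic samples (the definitions and thresholds below are plausible stand-ins, not the authors' custom algorithm): gaze resultant distance as the cumulative point-to-point gaze path length, prosaccade errors as excursions outside the central fixation area, and mean horizontal/vertical velocities from frame-to-frame displacements.

      import numpy as np

      def gaze_stability_metrics(gaze_xy, centre, radius_deg=2.0, fs=240.0):
          """Illustrative gaze-stability metrics (not the authors' exact algorithm).

          resultant distance : cumulative point-to-point gaze path length (deg)
          prosaccade errors  : number of excursions of gaze outside the central area
          mean velocities    : mean absolute horizontal / vertical gaze velocity (deg/s)
          """
          gaze_xy = np.asarray(gaze_xy, float)
          steps = np.diff(gaze_xy, axis=0)
          resultant = np.linalg.norm(steps, axis=1).sum()
          outside = np.linalg.norm(gaze_xy - centre, axis=1) > radius_deg
          errors = np.count_nonzero(np.diff(outside.astype(int)) == 1)   # entries into "outside"
          mean_vx, mean_vy = np.abs(steps).mean(axis=0) * fs
          return resultant, errors, mean_vx, mean_vy

      rng = np.random.default_rng(1)
      gaze = rng.normal(0, 1.5, size=(2400, 2))          # 10 s of noisy fixation at 240 Hz
      print(gaze_stability_metrics(gaze, centre=np.array([0.0, 0.0])))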

  8. Gaze Cueing of Attention

    PubMed Central

    Frischen, Alexandra; Bayliss, Andrew P.; Tipper, Steven P.

    2007-01-01

    During social interactions, people’s eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience. PMID:17592962

  9. Design and control of active vision based mechanisms for intelligent robots

    NASA Technical Reports Server (NTRS)

    Wu, Liwei; Marefat, Michael M.

    1994-01-01

    In this paper, we propose the design of an active vision system for intelligent robot applications. The system has degrees of freedom for pan, tilt, vergence, camera height adjustment, and baseline adjustment, with a hierarchical control system structure. Based on this vision system, we discuss two problems involved in the binocular gaze stabilization process: fixation point selection and vergence disparity extraction. A hierarchical approach is suggested for determining the point of fixation from potential gaze targets, using an evaluation function that represents human visual responses to outside stimuli. We also characterize the different visual tasks of the two cameras for vergence control purposes, and present a phase-based method, operating on binarized images, for extracting vergence disparity. A control algorithm for vergence is also discussed.
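
    A phase-based disparity estimate of the general kind mentioned can be sketched with one-dimensional phase correlation between corresponding rows of the left and right camera images (synthetic signals below; the paper's binarized-image formulation is not reproduced).

      import numpy as np

      def phase_disparity(left_row, right_row):
          """Estimate horizontal disparity between two image rows by phase correlation."""
          L, R = np.fft.fft(left_row), np.fft.fft(right_row)
          cross_power = L * np.conj(R)
          cross_power /= np.abs(cross_power) + 1e-12          # keep phase information only
          correlation = np.fft.ifft(cross_power).real
          shift = int(np.argmax(correlation))
          if shift > left_row.size // 2:                      # wrap negative shifts
              shift -= left_row.size
          return shift

      x = np.linspace(0, 4 * np.pi, 256)
      right = np.sin(x) + 0.5 * np.sin(3 * x)
      left = np.roll(right, 7)                                # left image shifted by 7 pixels
      print("estimated vergence disparity (pixels):", phase_disparity(left, right))   # 7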

  10. Gaze as a biometric

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective, we investigated the gaze patterns of twelve individuals viewing still images with different spatial relationships. Specifically, we created five visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while they took the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
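
    The pipeline described, gaze velocities modelled per person with Hidden Markov Models and identification by model likelihood, can be sketched roughly as below (synthetic one-dimensional velocity sequences; the hmmlearn package is assumed to be available and the model sizes are arbitrary).

      import numpy as np
      from hmmlearn.hmm import GaussianHMM   # assumed available (pip install hmmlearn)

      rng = np.random.default_rng(0)

      def gaze_velocities(mean_speed, n=300):
          """Synthetic 1-D gaze-velocity sequence standing in for one test session."""
          return (mean_speed + rng.normal(0, 5, n)).reshape(-1, 1)

      # enrol two "users" with different characteristic gaze dynamics
      train = {"user_a": gaze_velocities(10.0), "user_b": gaze_velocities(40.0)}
      models = {}
      for user, seq in train.items():
          models[user] = GaussianHMM(n_components=3, covariance_type="diag",
                                     n_iter=50, random_state=0).fit(seq)

      # identify the owner of an unseen sequence by maximum log-likelihood
      probe = gaze_velocities(38.0)
      scores = {user: model.score(probe) for user, model in models.items()}
      print("identified as:", max(scores, key=scores.get))    # expected: user_b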

  11. The impact of visual gaze direction on auditory object tracking.

    PubMed

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.

  12. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath, and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
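
    A compact sketch of the feature-and-cluster step is shown below, with deliberately simple shape features (path length, mean saccade amplitude, bounding-box aspect ratio, fixation count) standing in for GazeAppraise's richer descriptors, and k-means standing in for whatever clustering the tool actually uses.

        import numpy as np
        from sklearn.cluster import KMeans

        def scanpath_features(points):
            """Illustrative shape features for one scanpath given as an (N, 2)
            array of fixation coordinates (N >= 2 assumed)."""
            steps = np.diff(points, axis=0)
            amplitudes = np.linalg.norm(steps, axis=1)
            width, height = points.max(axis=0) - points.min(axis=0)
            return np.array([amplitudes.sum(),            # total path length
                             amplitudes.mean(),           # mean saccade amplitude
                             width / max(height, 1e-9),   # bounding-box aspect ratio
                             len(points)])                # number of fixations

        def cluster_scanpaths(scanpaths, n_clusters=4):
            """Stack one feature vector per scanpath and group similar ones."""
            X = np.vstack([scanpath_features(p) for p in scanpaths])
            return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)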

  13. Biasing moral decisions by exploiting the dynamics of eye gaze.

    PubMed

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved by simply adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
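
    The gaze-contingent timing rule can be illustrated with a small dwell-monitoring loop. The dwell threshold, sampling period, and region format below are placeholders for illustration, not the study's actual parameters.

        import random

        def run_trial(gaze_samples, option_regions, dwell_ms=750, sample_ms=20):
            """Accumulate dwell time on each on-screen option and report when the
            randomly predetermined target has been fixated long enough, i.e. the
            moment at which deliberation would be interrupted and the prompt shown."""
            target = random.choice(list(option_regions))
            dwell = {name: 0 for name in option_regions}
            for i, (gx, gy) in enumerate(gaze_samples):
                for name, (x0, y0, x1, y1) in option_regions.items():
                    if x0 <= gx <= x1 and y0 <= gy <= y1:
                        dwell[name] += sample_ms
                if dwell[target] >= dwell_ms:
                    return target, i * sample_ms          # prompt the choice now
            return target, None                           # threshold never reached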

  14. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes that allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of the iris and sclera and the shape of the eyelids, and, in the case of photographs, they lack depth. Hence, in order to get full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of the gaze following of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    PubMed

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

    One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine the eye gaze patterns of social anxiety disorder patients while they presented different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Type of speech did not influence the patients' gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. The current study confirms that the eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of the topic.

  16. Eye gazing direction inspection based on image processing technique

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Song, Yong

    2005-02-01

    According to findings in neural biology, human eyes obtain high resolution only at the center of the field of view. In our research on a Virtual Reality helmet, we aim to detect the gazing direction of the eyes in real time and feed it back to the control system to improve the resolution of the image at the center of the field of view. With current display instruments, this method can balance the field of view of the virtual scene against resolution, and greatly improve the immersion of the virtual system. Therefore, detecting the gazing direction of the eyes rapidly and exactly is the basis for realizing the design of this novel VR helmet. In this paper, the conventional method of gazing direction detection based on the Purkinje spot is introduced first. To overcome the disadvantages of the Purkinje-spot method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with the gazing direction. With the aid of these changes, by analyzing the eye images captured by the cameras, the gazing direction can be determined. Experiments have been done to validate the efficiency of this method by analyzing the images. The algorithm detects the gazing direction directly from normal eye images and eliminates the need for special hardware. Experimental results show that the method is easy to implement and has high precision.

  17. Activity of long-lead burst neurons in pontine reticular formation during head-unrestrained gaze shifts.

    PubMed

    Walton, Mark M G; Freedman, Edward G

    2014-01-01

    Primates explore a visual scene through a succession of saccades. Much of what is known about the neural circuitry that generates these movements has come from neurophysiological studies using subjects with their heads restrained. Horizontal saccades and the horizontal components of oblique saccades are associated with high-frequency bursts of spikes in medium-lead burst neurons (MLBs) and long-lead burst neurons (LLBNs) in the paramedian pontine reticular formation. For LLBNs, the high-frequency burst is preceded by a low-frequency prelude that begins 12-150 ms before saccade onset. In terms of the lead time between the onset of prelude activity and saccade onset, the anatomical projections, and the movement field characteristics, LLBNs are a heterogeneous group of neurons. Whether this heterogeneity reflects multiple functional subclasses is an open question. One possibility is that some may carry signals related to head movement. We recorded from LLBNs while monkeys performed head-unrestrained gaze shifts, during which the kinematics of the eye and head components were dissociable. Many cells had peak firing rates that never exceeded 200 spikes/s for gaze shifts of any vector. The activity of these low-frequency cells often persisted beyond the end of the gaze shift and was usually related to head-movement kinematics. A subset was tested during head-unrestrained pursuit and showed clear modulation in the absence of saccades. These "low-frequency" cells were intermingled with MLBs and traditional LLBNs and may represent a separate functional class carrying signals related to head movement.

  18. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    NASA Astrophysics Data System (ADS)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  19. Interaction between gaze and visual and proprioceptive position judgements.

    PubMed

    Fiehler, Katja; Rösler, Frank; Henriques, Denise Y P

    2010-06-01

    There is considerable evidence that targets for action are represented in a dynamic gaze-centered frame of reference, such that each gaze shift requires an internal updating of the target. Here, we investigated the effect of eye movements on the spatial representation of targets used for position judgements. Participants had their hand passively placed to a location, and then judged whether this location was left or right of a remembered visual or remembered proprioceptive target, while gaze direction was varied. Estimates of position of the remembered targets relative to the unseen position of the hand were assessed with an adaptive psychophysical procedure. These positional judgements significantly varied relative to gaze for both remembered visual and remembered proprioceptive targets. Our results suggest that relative target positions may also be represented in eye-centered coordinates. This implies similar spatial reference frames for action control and space perception when positions are coded relative to the hand.

  20. Assessing Self-Awareness through Gaze Agency

    PubMed Central

    Crespi, Sofia Allegra; de’Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one's own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïve observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real time by their saccades or by other events (Discovery Task). We manipulated observers' self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention on their own eyes. These manipulations increased the propensity for agency discovery. In a second task, which served to monitor agency sensitivity at the sensorimotor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measure both increases and decreases of gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity, which is difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development. PMID:27812138

  1. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia.

    PubMed

    Oh, Jooyoung; Chun, Ji-Won; Lee, Jung Suk; Kim, Jae-Jin

    2014-04-16

    Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movements and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Patients exhibited a significantly lower correct rate for theme identification and fewer fixations and saccades than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia.

  2. Relationship between abstract thinking and eye gaze pattern in patients with schizophrenia

    PubMed Central

    2014-01-01

    Background Effective integration of visual information is necessary to utilize abstract thinking, but patients with schizophrenia have slow eye movements and usually explore limited visual information. This study examines the relationship between abstract thinking ability and the pattern of eye gaze in patients with schizophrenia using a novel theme identification task. Methods Twenty patients with schizophrenia and 22 healthy controls completed the theme identification task, in which subjects selected which word, out of a set of provided words, best described the theme of a picture. Eye gaze while performing the task was recorded by the eye tracker. Results Patients exhibited a significantly lower correct rate for theme identification and fewer fixations and saccades than controls. The correct rate was significantly correlated with the fixation count in patients, but not in controls. Conclusions Patients with schizophrenia showed impaired abstract thinking and decreased quality of gaze, which were positively associated with each other. Theme identification and eye gaze appear to be useful as tools for the objective measurement of abstract thinking in patients with schizophrenia. PMID:24739356

  3. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    PubMed

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction are significant factors for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination found only in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. Perceived face and gaze direction are also related to fundamental emotional reactions such as fear, aggression, empathy, and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation in response to gaze direction, head orientation, and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast

  4. Electrocortical Reflections of Face and Gaze Processing in Children with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Kemner, C.; Schuller, A-M.; Van Engeland, H.

    2006-01-01

    Background: Children with pervasive developmental disorder (PDD) show behavioral abnormalities in gaze and face processing, but recent studies have indicated that normal activation of face-specific brain areas in response to faces is possible in this group. It is not clear whether the brain activity related to gaze processing is also normal in…

  5. The mesencephalic reticular formation as a conduit for primate collicular gaze control: tectal inputs to neurons targeting the spinal cord and medulla.

    PubMed

    Perkins, Eddie; Warren, Susan; May, Paul J

    2009-08-01

    The superior colliculus (SC), which directs orienting movements of both the eyes and head, is reciprocally connected to the mesencephalic reticular formation (MRF), suggesting the latter is involved in gaze control. The MRF has been provisionally subdivided to include a rostral portion, which subserves vertical gaze, and a caudal portion, which subserves horizontal gaze. Both regions contain cells projecting downstream that may provide a conduit for tectal signals targeting the gaze control centers which direct head movements. We determined the distribution of cells targeting the cervical spinal cord and rostral medullary reticular formation (MdRF), and investigated whether these MRF neurons receive input from the SC by the use of dual tracer techniques in Macaca fascicularis monkeys. Either biotinylated dextran amine or Phaseolus vulgaris leucoagglutinin was injected into the SC. Wheat germ agglutinin conjugated horseradish peroxidase was placed into the ipsilateral cervical spinal cord or medial MdRF to retrogradely label MRF neurons. A small number of medially located cells in the rostral and caudal MRF were labeled following spinal cord injections, and greater numbers were labeled in the same region following MdRF injections. In both cases, anterogradely labeled tectoreticular terminals were observed in close association with retrogradely labeled neurons. These close associations between tectoreticular terminals and neurons with descending projections suggest the presence of a trans-MRF pathway that provides a conduit for tectal control over head orienting movements. The medial location of these reticulospinal and reticuloreticular neurons suggests this MRF region may be specialized for head movement control. (c) 2009 Wiley-Liss, Inc.

  6. The "Social Gaze Space": A Taxonomy for Gaze-Based Communication in Triadic Interactions.

    PubMed

    Jording, Mathis; Hartz, Arne; Bente, Gary; Schulte-Rüther, Martin; Vogeley, Kai

    2018-01-01

    Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a "simultaneous input-output device": while we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role in the coordination of communication. The communicative and coordinative capacities - and their phylogenetic and ontogenetic impacts - become fully apparent in triadic interactions, constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the 'Social Gaze Space' as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely 'partner-oriented,' 'object-oriented,' 'introspective,' 'initiating joint attention,' and 'responding joint attention.' Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and we showcase the implications for prominent social gaze phenomena. The taxonomy makes it possible to identify research desiderata that have been neglected so far. We argue for a systematic investigation of these phenomena and discuss some related methodological issues.

  7. Gaze behaviour during space perception and spatial decision making.

    PubMed

    Wiener, Jan M; Hölscher, Christoph; Büchner, Simon; Konieczny, Lars

    2012-11-01

    A series of four experiments investigating gaze behavior and decision making in the context of wayfinding is reported. Participants were presented with screenshots of choice points taken in large virtual environments. Each screenshot depicted alternative path options. In Experiment 1, participants had to decide between them to find an object hidden in the environment. In Experiment 2, participants were first informed about which path option to take, as if following a guided route. Subsequently, they were presented with the same images in random order and had to indicate which path option they chose during initial exposure. In Experiment 1, we demonstrate (1) that participants have a tendency to choose the path option that featured the longer line of sight, and (2) a robust gaze bias towards the eventually chosen path option. In Experiment 2, systematic differences in gaze behavior towards the alternative path options between encoding and decoding were observed. Based on data from Experiments 1 and 2 and two control experiments ensuring that fixation patterns were specific to the spatial tasks, we develop a tentative model of gaze behavior during wayfinding decision making, suggesting that particular attention was paid to image areas depicting changes in the local geometry of the environments, such as corners, openings, and occlusions. Together, the results suggest that gaze during a wayfinding task is directed toward, and can be predicted by, a subset of environmental features, and that gaze bias effects are a general phenomenon of visual decision making.

  8. Observing Shared Attention Modulates Gaze Following

    ERIC Educational Resources Information Center

    Bockler, Anne; Knoblich, Gunther; Sebanz, Natalie

    2011-01-01

    Humans' tendency to follow others' gaze is considered to be rather resistant to top-down influences. However, recent evidence indicates that gaze following depends on prior eye contact with the observed agent. Does observing two people engaging in eye contact also modulate gaze following? Participants observed two faces looking at each other or…

  9. Deficits in eye gaze during negative social interactions in patients with schizophrenia.

    PubMed

    Choi, Soo-Hee; Ku, Jeonghun; Han, Kiwan; Kim, Eosu; Kim, Sun I; Park, Junyoung; Kim, Jae-Jin

    2010-11-01

    Impaired social functioning has been reported in patients with schizophrenia. This study aimed to examine characteristics of interpersonal behaviors in patients with schizophrenia during various social interactions using a virtual reality system. Twenty-six patients and 26 controls engaged in virtual conversation tasks, including 3 positive and 3 negative emotion-laden conversations. Eye gaze and other behavioral parameters were recorded during the listening and answering phases. The amount of eye gaze was smaller in the patients than in the controls. A significant interaction effect of group status and emotional type was found for the listening phase. The amount of eye gaze in the patients inversely correlated with self-rated scores of assertiveness for the listening phase. These results suggest that the patients displayed inadequate augmentation of eye gaze during negative emotional situations. These deficits should be considered in the treatment and social skills training for patients with schizophrenia.

  10. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    PubMed

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
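
    A bare-bones version of such a cross-correlational analysis, assuming two equally long, frame-aligned binary signals (1 = looking at the partner, 1 = speaking), might look like the sketch below; the sampling rate and sign convention are illustrative choices rather than the study's actual analysis settings.

        import numpy as np

        def gaze_speech_lag(gaze_at_partner, speaking, rate_hz=30):
            """Return the lag (in seconds) at which the two mean-centred signals
            are maximally correlated; the sign indicates which signal leads,
            given the ordering of the arguments."""
            g = np.asarray(gaze_at_partner, float) - np.mean(gaze_at_partner)
            s = np.asarray(speaking, float) - np.mean(speaking)
            corr = np.correlate(g, s, mode="full")
            lags = np.arange(-len(s) + 1, len(g))
            return lags[np.argmax(corr)] / rate_hz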

  11. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions

    PubMed Central

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest. PMID:26309216

  12. The role of emotion in learning trustworthiness from eye-gaze: Evidence from facial electromyography

    PubMed Central

    Manssuer, Luis R.; Pawling, Ralph; Hayes, Amy E.; Tipper, Steven P.

    2016-01-01

    Gaze direction can be used to rapidly and reflexively lead or mislead others' attention as to the location of important stimuli. When perception of gaze direction is congruent with the location of a target, responses are faster compared to when it is incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it is unclear how gaze cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether participants showed learning of trust in the expected direction or did not. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed that the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting. PMID:27153239

  13. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  14. A multimodal interface to resolve the Midas-Touch problem in gaze controlled wheelchair.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, KongFatt; Prasad, Girijesh

    2017-07-01

    Human-computer interaction (HCI) research has been playing an essential role in the field of rehabilitation. The usability of gaze-controlled powered wheelchairs is limited by the Midas-Touch problem. In this work, we propose a multimodal graphical user interface (GUI) to control a powered wheelchair that aims to help upper-limb mobility impaired people in daily living activities. The GUI was designed around a portable and low-cost eye-tracker and a soft-switch, wherein the wheelchair can be controlled in three different ways: 1) with a touchpad, 2) with an eye-tracker only, and 3) with an eye-tracker plus a soft-switch. The interface includes nine different commands (eight directions and stop) and is integrated within a powered wheelchair system. We evaluated the performance of the multimodal interface in terms of lap-completion time, the number of commands, and the information transfer rate (ITR) with eight healthy participants. The analysis of the results showed that the eye-tracker with soft-switch provides superior performance, with an ITR of 37.77 bits/min, among the three different conditions (p < 0.05). Thus, the proposed system provides an effective and economical solution to the Midas-Touch problem and extended usability for the large population of disabled users.
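
    Information transfer rate figures such as the 37.77 bits/min above are commonly computed with the Wolpaw formula, which combines the number of available commands, selection accuracy, and selection time. Whether the paper used exactly this variant is not stated in the abstract, so the example values below are illustrative only.

        import math

        def itr_bits_per_min(n_commands, accuracy, seconds_per_selection):
            """Wolpaw-style ITR: bits conveyed per selection, scaled to a rate."""
            p, n = accuracy, n_commands
            bits = math.log2(n)
            if 0 < p < 1:
                bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
            return bits * (60.0 / seconds_per_selection)

        # Illustrative numbers (not taken from the study): nine commands,
        # 95% accuracy, one selection every 4 s gives roughly 41 bits/min.
        print(itr_bits_per_min(9, 0.95, 4.0))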

  15. Gaze-based assistive technology used in daily life by children with severe physical impairments - parents' experiences.

    PubMed

    Borgestig, Maria; Rytterström, Patrik; Hemmingsson, Helena

    2017-07-01

    The aim was to describe and explore parents' experiences when their children with severe physical impairments receive gaze-based assistive technology (gaze-based AT) for use in daily life. Semi-structured interviews were conducted twice, with one year in between, with parents of eight children with cerebral palsy who used gaze-based AT in their daily activities. To understand the parents' experiences, hermeneutical interpretations were used during data analysis. The findings demonstrate that, for parents, their children's gaze-based AT usage meant that the children demonstrated agency, had opportunities to show their personality and competencies, and were given possibilities to develop. Overall, gaze-based AT gives parents hope for a better future for their children with severe physical impairments; a future in which the children can develop and gain influence in life. Gaze-based AT provides children with new opportunities to perform activities and take the initiative to communicate, giving parents hope about their children's future.

  16. Stereo and photometric image sequence interpretation for detecting negative obstacles using active gaze control and performing an autonomous jink

    NASA Astrophysics Data System (ADS)

    Hofmann, Ulrich; Siedersberger, Karl-Heinz

    2003-09-01

    When driving cross-country, detection of and state estimation relative to negative obstacles such as ditches and creeks are mandatory for safe operation. Very often, ditches can be detected both by different photometric properties (soil vs. vegetation) and by range (disparity) discontinuities. Therefore, algorithms should make use of both the photometric and geometric properties to reliably detect obstacles. This has been achieved in UBM's EMS-Vision System (Expectation-based, Multifocal, Saccadic) for autonomous vehicles. The perception system uses Sarnoff's image processing hardware for real-time stereo vision. This sensor provides both gray value and disparity information for each pixel at high resolution and frame rates. In order to perform an autonomous jink, the boundaries of an obstacle have to be measured accurately for calculating a safe driving trajectory. Ditches in particular are often very extended, so due to the restricted field of view of the cameras, active gaze control is necessary to explore the boundaries of an obstacle. For successful measurement of image features, the system has to satisfy conditions defined by the perception expert. It has to deal with the time constraints of the active camera platform while performing saccades and to keep the geometric conditions defined by the locomotion expert for performing a jink. Therefore, the experts have to cooperate. This cooperation is controlled by a central decision unit (CD), which has knowledge about the mission, the capabilities available in the system, and their limitations. The central decision unit reacts depending on the result of situation assessment by starting, parameterizing, or stopping actions (instances of capabilities). The approach has been tested with the 5-ton van VaMoRs. Experimental results are shown for driving in a typical off-road scenario.
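
    The combination of geometric and photometric cues can be illustrated with a toy rule that flags pixels where a row-wise disparity discontinuity (a range jump, as at the near edge of a ditch) coincides with a simple intensity cue. The thresholds and the single-channel intensity test are illustrative assumptions, not the EMS-Vision system's actual parameters.

        import numpy as np

        def ditch_candidates(disparity, intensity, disp_jump=4.0, soil_thresh=60):
            """Boolean mask combining a large vertical disparity jump with
            darker-than-vegetation intensity; illustrative only."""
            range_jump = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1]))
            photometric = intensity < soil_thresh
            return (range_jump > disp_jump) & photometric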

  17. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  18. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.

  19. Toward Optimization of Gaze-Controlled Human-Computer Interaction: Application to Hindi Virtual Keyboard for Stroke Patients.

    PubMed

    Meena, Yogesh Kumar; Cecotti, Hubert; Wong-Lin, Kongfatt; Dutta, Ashish; Prasad, Girijesh

    2018-04-01

    Virtual keyboard applications and alternative communication devices provide new means of communication to assist disabled people. To date, virtual keyboard optimization schemes based on script-specific information, along with multimodal input access, remain limited. In this paper, we propose a novel method for optimizing the position of the displayed items in gaze-controlled, tree-based menu selection systems by considering a combination of letter frequency and command selection time. The optimized graphical user interface layout has been designed for a Hindi language virtual keyboard based on a menu wherein 10 commands provide access for typing 88 different characters, along with additional text editing commands. The system can be controlled in two different modes: eye-tracking alone and eye-tracking with an access soft-switch. Five different keyboard layouts have been presented and evaluated with ten healthy participants. Furthermore, the two best performing keyboard layouts have been evaluated with eye-tracking alone on ten stroke patients. The overall performance analysis demonstrated significantly superior typing performance, high usability (87% SUS score), and low workload (a NASA-TLX score of 17) for the layout organized by letter frequency and selection time with a script-specific arrangement. This paper presents the first optimized gaze-controlled Hindi virtual keyboard, which can be extended to other languages.
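
    The underlying optimization principle, placing frequent characters at positions that are fast to select with gaze, can be sketched with a greedy assignment. The greedy rule and the example numbers are assumptions for illustration; the paper's actual design additionally respects script-specific arrangement constraints.

        def optimize_layout(letter_freq, position_time):
            """Assign the most frequent letters to the fastest positions and
            return the layout plus the expected selection time per character."""
            letters = sorted(letter_freq, key=letter_freq.get, reverse=True)
            positions = sorted(position_time, key=position_time.get)
            layout = dict(zip(letters, positions))
            expected = sum(letter_freq[ch] * position_time[layout[ch]]
                           for ch in layout)
            return layout, expected

        # e.g. optimize_layout({'a': 0.12, 'e': 0.10, 'k': 0.03},
        #                      {'slot1': 1.1, 'slot2': 1.6, 'slot3': 2.0})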

  20. Eye Gaze Correlates of Motor Impairment in VR Observation of Motor Actions.

    PubMed

    Alves, J; Vourvopoulos, A; Bernardino, A; Bermúdez I Badia, S

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The objective was to identify eye gaze correlates of motor impairment in a virtual reality motor observation task in a study with healthy participants and stroke patients. Participants consisted of a group of healthy subjects (N = 20) and a group of stroke survivors (N = 10). Both groups were required to observe a simple reach-and-grab and place-and-release task in a virtual environment. Additionally, healthy subjects were required to observe the task in a normal condition and in a constrained movement condition. Eye movements were recorded during the observation task for later analysis. For healthy participants, results showed differences in gaze metrics when comparing the normal and arm-constrained conditions. Differences in gaze metrics were also found when comparing the dominant and non-dominant arm for saccades and smooth pursuit events. For stroke patients, results showed longer smooth pursuit segments in action observation when observing the paretic arm, thus providing evidence that the affected circuitry may be activated for eye gaze control during observation of the simulated motor action. This study suggests that neural motor circuits are involved, at multiple levels, in the observation of motor actions displayed in a virtual reality environment. Thus, eye tracking combined with action observation tasks in a virtual reality display can be used to monitor motor deficits derived from stroke, and consequently can also be used for rehabilitation of stroke patients.

  1. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. PMID:28344501

  2. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2014-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.

  3. Gaze Direction Detection in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  4. A Support System for Mouse Operations Using Eye-Gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy. However, it can only classify vertical eye-gaze into 3 directions (up, middle, and down). In this paper, we propose a new method for vertical eye-gaze detection that utilizes limbus tracking. As a result, our new eye-gaze input system can detect the two-dimensional coordinates of the user's gazing point. Using this method, we developed a new support system for mouse operation that can move the mouse cursor to the user's gazing point.
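
    Once a two-dimensional gaze estimate is available, moving the cursor reduces to mapping raw gaze coordinates to screen pixels. A common way to do this, assumed here purely for illustration (the paper's limbus-tracking details are not reproduced), is a least-squares affine calibration fitted from a few known fixation points.

        import numpy as np

        def fit_calibration(gaze_xy, screen_xy):
            """Fit an affine map from raw gaze estimates to screen pixels using
            calibration fixations; returns a 3x2 parameter matrix."""
            G = np.hstack([np.asarray(gaze_xy, float),
                           np.ones((len(gaze_xy), 1))])
            A, *_ = np.linalg.lstsq(G, np.asarray(screen_xy, float), rcond=None)
            return A

        def to_screen(A, gaze_point):
            """Map one raw gaze estimate to a cursor position."""
            x, y = gaze_point
            return np.array([x, y, 1.0]) @ A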

  5. Orienting to Eye Gaze and Face Processing

    ERIC Educational Resources Information Center

    Tipples, Jason

    2005-01-01

    The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…

  6. Audience gaze while appreciating a multipart musical performance.

    PubMed

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

    Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence of the dominance of the musical part in auditory attention when listening to multipart music containing different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions playing a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of audience gazes, while the spotlight effect of gaze was limited to the period just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.

  7. Owners' direct gazes increase dogs' attention-getting behaviors.

    PubMed

    Ohkita, Midori; Nagasawa, Miho; Mogi, Kazutaka; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about humans' attention via their gazes and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and of looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in the duration of pawing or the likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, and that this sensitivity may act as an attachment signal to humans and contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
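
    One standard way to obtain a 3-D fixation point from two gaze rays (for example, one per eye) is to take the midpoint of the shortest segment between the rays. The paper's gaze vector method is not detailed in this abstract, so the following triangulation is only an illustrative stand-in.

        import numpy as np

        def fixation_point(o1, d1, o2, d2):
            """Midpoint of the shortest segment between two gaze rays, each given
            by an eye position o and a unit direction d; returns None when the
            rays are near-parallel and depth cannot be recovered."""
            o1, d1, o2, d2 = (np.asarray(v, float) for v in (o1, d1, o2, d2))
            w0 = o1 - o2
            a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
            d, e = d1 @ w0, d2 @ w0
            denom = a * c - b * b
            if abs(denom) < 1e-9:
                return None
            s = (b * e - c * d) / denom        # parameter along the first ray
            t = (a * e - b * d) / denom        # parameter along the second ray
            return (o1 + s * d1 + o2 + t * d2) / 2.0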

  9. Functional changes of the reward system underlie blunted response to social gaze in cocaine users

    PubMed Central

    Preller, Katrin H.; Herdener, Marcus; Schilbach, Leonhard; Stämpfli, Philipp; Hulka, Lea M.; Vonmoos, Matthias; Ingold, Nina; Vogeley, Kai; Tobler, Philippe N.; Seifritz, Erich; Quednow, Boris B.

    2014-01-01

    Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants’ gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. PMID:24449854

  10. Learning under your gaze: the mediating role of affective arousal between perceived direct gaze and memory performance.

    PubMed

    Helminen, Terhi M; Pasanen, Tytti P; Hietanen, Jari K

    2016-03-01

    Previous studies have shown that cognitive performance can be affected by the presence of an observer and self-directed gaze. We investigated whether the effect of gaze direction (direct vs. downcast) on verbal memory is mediated by autonomic arousal. Male participants responded with enhanced affective arousal to both male and female storytellers' direct gaze, which, according to a path analysis, was negatively associated with performance. In parallel to this arousal-mediated effect, males' performance was also influenced by a second process with a positive effect on performance, suggested to reflect effort allocation to the task. This second effect was observed only when the storyteller was male: participants remembered more details from a story told by a male with a direct versus downcast gaze. For female storytellers, the effect of gaze direction on performance was the opposite, which was explained by the arousal-mediated process. Surprisingly, these results were restricted to male participants, and no effects of gaze were observed among female participants. We also investigated whether the participants' belief of being seen or not (through an electronic window) by the storyteller influenced memory and arousal, but this manipulation had no effect on the results.

  11. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze cue stimuli.

  12. Eye-gaze determination of user intent at the computer interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-12-31

    Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location, at 30 Hz, while looking at controlled stimuli, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments based upon these variables. Future work will enhance the accuracy and precision of the modeling technique and will empirically test users in controlled experiments.
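
    As a rough illustration of the clustering step described above (hypothetical names, assuming SciPy is available), the sketch below builds a minimum spanning tree over the gaze samples of one data frame and splits it into clusters by cutting edges longer than a user-defined threshold; the frame-to-frame cluster mapping and the discriminant analysis are not shown.

    ```python
    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
    from scipy.spatial.distance import pdist, squareform

    def mst_clusters(samples, max_edge):
        """Cluster 2-D gaze samples of one data frame: build a minimum spanning
        tree over pairwise distances, cut edges longer than max_edge, and label
        the resulting connected components."""
        dist = squareform(pdist(samples))           # dense pairwise distance matrix
        mst = minimum_spanning_tree(dist).toarray()
        mst[mst > max_edge] = 0.0                   # cutting long edges splits the tree
        _, labels = connected_components(mst, directed=False)
        return labels

    # toy data frame: two spatial groups of gaze samples (screen pixels)
    frame = np.array([[100, 100], [102, 98], [99, 103],
                      [400, 300], [403, 297]])
    print(mst_clusters(frame, max_edge=50.0))       # e.g. [0 0 0 1 1]
    ```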

  13. Attentional effects on gaze preference for salient loci in traffic scenes.

    PubMed

    Sakai, Hiroyuki; Shin, Duk; Kohama, Takeshi; Uchiyama, Yuji

    2012-01-01

    Alerting drivers for self-regulation of attention might decrease crash risks attributable to absent-minded driving. However, no reliable method exists for monitoring driver attention. Therefore, we examined attentional effects on gaze preference for salient loci (GPS) in traffic scenes. In an active viewing (AV) condition requiring endogenous attention for traffic scene comprehension, participants identified appropriate speeds for driving in presented traffic scene images. In a passive viewing (PV) condition requiring no endogenous attention, participants passively viewed traffic scene images. GPS was quantified by the mean saliency value averaged across fixation locations. Results show that GPS was lower during AV than during PV. Additionally, gaze dwell time on signboards was shorter for AV than for PV. These results suggest that, in the absence of endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. Therefore, increased GPS can indicate absent-minded driving. The present study demonstrated that, without endogenous attention for traffic scene comprehension, gaze tends to concentrate on irrelevant salient loci in a traffic environment. This result suggests that increased gaze preference for salient loci indicates absent-minded driving, which is otherwise difficult to detect.
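
    A minimal sketch of how the GPS measure described above might be computed, assuming a precomputed saliency map and a list of fixation coordinates (function and variable names hypothetical):

    ```python
    import numpy as np

    def gaze_preference_for_salience(saliency_map, fixations):
        """Mean saliency value sampled at each fixation location, with the
        saliency map normalized to [0, 1]."""
        s = saliency_map / saliency_map.max()
        values = [s[int(r), int(c)] for r, c in fixations]
        return float(np.mean(values))

    # toy example: one bright (salient) region and three fixations
    smap = np.zeros((100, 100))
    smap[40:60, 40:60] = 1.0
    print(gaze_preference_for_salience(smap, [(50, 50), (10, 10), (55, 45)]))
    ```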

  14. Stationary gaze entropy predicts lane departure events in sleep-deprived drivers.

    PubMed

    Shiferaw, Brook A; Downey, Luke A; Westlake, Justine; Stevens, Bronwyn; Rajaratnam, Shantha M W; Berlowitz, David J; Swann, Phillip; Howard, Mark E

    2018-02-02

    Performance decrement associated with sleep deprivation is a leading contributor to traffic accidents and fatalities. While current research has focused on eye blink parameters as physiological indicators of driver drowsiness, little is understood of how gaze behaviour alters as a result of sleep deprivation. In particular, the effect of sleep deprivation on gaze entropy has not been previously examined. In this randomised, repeated measures study, 9 (4 male, 5 female) healthy participants completed two driving sessions in a fully instrumented vehicle (1 after a night of sleep deprivation and 1 after normal sleep) on a closed track, during which eye movement activity and lane departure events were recorded. Following sleep deprivation, the rate of fixations reduced while blink rate and duration as well as saccade amplitude increased. In addition, stationary and transition entropy of gaze also increased following sleep deprivation as well as with the amount of time driven. An increase in stationary gaze entropy in particular was associated with higher odds of a lane departure event. These results highlight how fatigue induced by sleep deprivation and time-on-task effects can impair drivers' visual awareness through disruption of gaze distribution and scanning patterns.
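
    The abstract does not give the entropy definitions; the sketch below follows the common formulations of stationary gaze entropy (Shannon entropy of the spatial fixation distribution) and transition entropy (conditional entropy of bin-to-bin transitions), assuming fixations have already been discretized into spatial bins (names hypothetical).

    ```python
    import numpy as np

    def gaze_entropies(fixation_bins, n_bins):
        """Return (stationary, transition) gaze entropy in bits for a sequence of
        fixation locations given as spatial bin indices."""
        # stationary entropy: Shannon entropy of the overall fixation distribution
        p = np.bincount(fixation_bins, minlength=n_bins).astype(float)
        p /= p.sum()
        p_nz = p[p > 0]
        h_stationary = -np.sum(p_nz * np.log2(p_nz))

        # transition entropy: entropy of where gaze goes next, averaged over
        # starting bins weighted by how often each bin starts a transition
        trans = np.zeros((n_bins, n_bins))
        for a, b in zip(fixation_bins[:-1], fixation_bins[1:]):
            trans[a, b] += 1
        h_transition = 0.0
        row_sums = trans.sum(axis=1)
        for i in range(n_bins):
            if row_sums[i] == 0:
                continue
            row = trans[i] / row_sums[i]
            row_nz = row[row > 0]
            h_transition += (row_sums[i] / row_sums.sum()) * -np.sum(row_nz * np.log2(row_nz))
        return h_stationary, h_transition

    # toy scanpath over a 3 x 3 grid of road-scene regions (bins 0..8)
    scanpath = np.array([0, 1, 0, 4, 4, 8, 4, 0, 1])
    print(gaze_entropies(scanpath, n_bins=9))
    ```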

  15. Dissociation of eye and head components of gaze shifts by stimulation of the omnipause neuron region.

    PubMed

    Gandhi, Neeraj J; Sparks, David L

    2007-07-01

    Natural movements often include actions integrated across multiple effectors. Coordinated eye-head movements are driven by a command to shift the line of sight by a desired displacement vector. Yet because extraocular and neck motoneurons are separate entities, the gaze shift command must be separated into independent signals for eye and head movement control. We report that this separation occurs, at least partially, at or before the level of pontine omnipause neurons (OPNs). Stimulation of the OPNs prior to and during gaze shifts temporally decoupled the eye and head components by inhibiting gaze and eye saccades. In contrast, head movements were consistently initiated before gaze onset, and ongoing head movements continued along their trajectories, albeit with some characteristic modulations. After stimulation offset, a gaze shift composed of an eye saccade and a reaccelerated head movement was produced to preserve gaze accuracy. We conclude that signals subject to OPN inhibition produce the eye-movement component of a coordinated eye-head gaze shift and are not the only signals involved in the generation of the head component of the gaze shift.

  16. Creepy White Gaze: Rethinking the Diorama as a Pedagogical Activity

    ERIC Educational Resources Information Center

    Sterzuk, Andrea; Mulholland, Valerie

    2011-01-01

    Drawing on gaze and postcolonial theory, this article provides a theoretical discussion of a problematic photograph published in a provincial teachers' newsletter. The photo consists of a White settler child and two White settler educators gathered around his heritage fair entry diorama entitled "Great Plains Indians." This article…

  17. Eye gaze tracking using correlation filters

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Bolme, David; Boehnen, Chris

    2014-03-01

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine what regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
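
    The abstract names the features (distances between the top point of the eyelid and the eye corners detected by correlation filters) but not the mapping to gaze angles; one plausible sketch, assuming a simple linear calibration fitted by least squares (feature values and all names hypothetical):

    ```python
    import numpy as np

    def fit_gaze_mapping(features, angles):
        """Fit a linear map (with bias) from eyelid/eye-corner distance features
        to (horizontal, vertical) gaze angles using calibration samples."""
        X = np.hstack([features, np.ones((len(features), 1))])
        coeffs, *_ = np.linalg.lstsq(X, angles, rcond=None)
        return coeffs

    def estimate_gaze(coeffs, feature):
        """Predict gaze angles in degrees for one feature vector."""
        return np.append(feature, 1.0) @ coeffs

    # toy calibration: features are [dist(eyelid top, inner corner), dist(eyelid top, outer corner)]
    calib_features = np.array([[12.0, 14.0], [13.5, 12.0], [11.0, 15.5], [14.0, 13.0]])
    calib_angles = np.array([[0.0, 0.0], [10.0, 0.0], [-10.0, 0.0], [5.0, 5.0]])  # degrees
    coeffs = fit_gaze_mapping(calib_features, calib_angles)
    print(estimate_gaze(coeffs, np.array([12.5, 13.5])))
    ```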

  18. Eye Gaze Tracking using Correlation Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine what regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.

  19. A kinematic model for 3-D head-free gaze-shifts

    PubMed Central

    Daemi, Mehdi; Crawford, J. Douglas

    2015-01-01

    Rotations of the line of sight are mainly implemented by coordinated motion of the eyes and head. Here, we propose a model for the kinematics of three-dimensional (3-D) head-unrestrained gaze-shifts. The model was designed to account for major principles in the known behavior, such as gaze accuracy, spatiotemporal coordination of saccades with vestibulo-ocular reflex (VOR), relative eye and head contributions, the non-commutativity of rotations, and Listing's and Fick constraints for the eyes and head, respectively. The internal design of the model was inspired by known and hypothesized elements of gaze control physiology. Inputs included retinocentric location of the visual target and internal representations of initial 3-D eye and head orientation, whereas outputs were 3-D displacements of eye relative to the head and head relative to shoulder. Internal transformations decomposed the 2-D gaze command into 3-D eye and head commands with the use of three coordinated circuits: (1) a saccade generator, (2) a head rotation generator, (3) a VOR predictor. Simulations illustrate that the model can implement: (1) the correct 3-D reference frame transformations to generate accurate gaze shifts (despite variability in other parameters), (2) the experimentally verified constraints on static eye and head orientations during fixation, and (3) the experimentally observed 3-D trajectories of eye and head motion during gaze-shifts. We then use this model to simulate how 2-D eye-head coordination strategies interact with 3-D constraints to influence 3-D orientations of the eye-in-space, and the implications of this for spatial vision. PMID:26113816
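
    The authors' model is a full 3-D formulation with Listing's and Fick constraints and non-commutative rotations; purely as a one-dimensional caricature of the eye-head decomposition idea (not the authors' model, all values hypothetical), the sketch below splits a horizontal gaze shift into eye and head components by letting the eye carry the shift until it reaches its oculomotor range and assigning the remainder to the head.

    ```python
    import numpy as np

    def split_gaze_shift(gaze_shift_deg, eye_start_deg, eye_range_deg=35.0):
        """Split a desired horizontal gaze displacement into eye-in-head and
        head-in-space components, saturating the eye at its oculomotor range."""
        eye_target = np.clip(eye_start_deg + gaze_shift_deg, -eye_range_deg, eye_range_deg)
        eye_component = eye_target - eye_start_deg
        head_component = gaze_shift_deg - eye_component
        return eye_component, head_component

    # a 60 degree rightward gaze shift starting with the eye centered in the orbit
    print(split_gaze_shift(60.0, eye_start_deg=0.0))  # -> (35.0, 25.0)
    ```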

  20. "Beloved" as an Oppositional Gaze

    ERIC Educational Resources Information Center

    Mao, Weiqiang; Zhang, Mingquan

    2009-01-01

    This paper studies the strategy Morrison adopts in "Beloved" to give voice to black Americans long silenced by the dominant white American culture. Instead of being objects passively accepting their aphasia, black Americans become speaking subjects that are able to cast an oppositional gaze to avert the objectifying gaze of white…

  1. Perceiving crowd attention: Gaze following in human crowds with conflicting cues.

    PubMed

    Sun, Zhongqiang; Yu, Wenjun; Zhou, Jifan; Shen, Mowei

    2017-05-01

    People automatically redirect their visual attention by following others' gaze orientation, a phenomenon called "gaze following." This is an evolutionarily generated socio-cognitive process that provides people with information about their environments. Often, however, people in crowds can have rather different gaze orientations. This study investigated how gaze following occurs in situations with many conflicting gazes. In two experiments, we modified the gaze cueing paradigm to use a crowd rather than a single individual. Specifically, participants were presented with a group of human avatars with differing gaze orientations, and the target appeared randomly on the left or right side of a display. We found that (a) when a marked difference existed in the number of avatars with divergent gaze orientations, participants automatically followed the majority's gaze orientation, and (b) the strongest gaze cue effect occurred when all gazes shared the same orientation, with the response superiority of the majority's oriented location monotonically diminishing with the number of gazes with divergent orientations. These findings suggested that the majority rule plays a role in gaze following behavior when individuals are confronted with conflicting multigaze scenes, and that an increasing subgroup size appears to enlarge the strength of the gaze cueing effect.

  2. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    PubMed

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed (a) to determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze

  3. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    PubMed Central

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed (a) to determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers’ attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) to examine whether the gaze of a featured face possesses the ability to direct consumers’ attention toward specific elements (i.e., the product) in an advertisement, and (c) to establish whether the gaze direction of an advertised face influences consumers’ subsequent evaluation of the advertised product. We recorded participants’ eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants’ attention was more attracted by and they looked longer at animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants’ likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product. The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers’ visual

  4. Look at my poster! Active gaze, preference and memory during a poster session.

    PubMed

    Foulsham, Tom; Kingstone, Alan

    2011-01-01

    In science, as in advertising, people often present information on a poster, yet little is known about attention during a poster session. A mobile eye-tracker was used to record participants' gaze during a mock poster session featuring a range of academic psychology posters. Participants spent the most time looking at introductions and conclusions. Larger posters were looked at for longer, as were posters rated more interesting (but not necessarily more aesthetically pleasing). Interestingly, gaze did not correlate with memory for poster details or liking, suggesting that attracting someone towards your poster may not be enough.

  5. EDITORIAL: Special section on gaze-independent brain-computer interfaces Special section on gaze-independent brain-computer interfaces

    NASA Astrophysics Data System (ADS)

    Treder, Matthias S.

    2012-08-01

    Restoring the ability to communicate and interact with the environment in patients with severe motor disabilities is a vision that has been the main catalyst of early brain-computer interface (BCI) research. The past decade has brought a diversification of the field. BCIs have been examined as a tool for motor rehabilitation and their benefit in non-medical applications such as mental-state monitoring for improved human-computer interaction and gaming has been confirmed. At the same time, the weaknesses of some approaches have been pointed out. One of these weaknesses is gaze-dependence, that is, the requirement that the user of a BCI system voluntarily directs his or her eye gaze towards a visual target in order to efficiently operate a BCI. This not only contradicts the main doctrine of BCI research, namely that BCIs should be independent of muscle activity, but it can also limit its real-world applicability both in clinical and non-medical settings. It is only in a scenario devoid of any motor activity that a BCI solution is without alternative. Gaze-dependencies have surfaced at two different points in the BCI loop. Firstly, a BCI that relies on visual stimulation may require users to fixate on the target location. Secondly, feedback is often presented visually, which implies that the user may have to move his or her eyes in order to perceive the feedback. This special section was borne out of a BCI workshop on gaze-independent BCIs held at the 2011 Society for Applied Neurosciences (SAN) Conference and has then been extended with additional contributions from other research groups. It compiles experimental and methodological work that aims toward gaze-independent communication and mental-state monitoring. Riccio et al review the current state-of-the-art in research on gaze-independent BCIs [1]. Van der Waal et al present a tactile speller that builds on the stimulation of the fingers of the right and left hand [2]. Höhne et al analyze the ergonomic aspects

  6. [Case of acute ophthalmoparesis with gaze nystagmus].

    PubMed

    Ikuta, Naomi; Tada, Yukiko; Koga, Michiaki

    2012-01-01

    A 61-year-old man developed double vision subsequent to a diarrheal illness. Mixed horizontal-vertical gaze palsy in both eyes, diminution of tendon reflexes, and gaze nystagmus were noted. His horizontal gaze palsy was accompanied by gaze nystagmus in the abducent direction, indicative of a disturbance of the central nervous system. Neither limb weakness nor ataxia was noted. Serum anti-GQ1b antibody was detected. Brain magnetic resonance imaging (MRI) findings were normal. The patient was diagnosed as having acute ophthalmoparesis. The ophthalmoparesis and nystagmus gradually disappeared in 3 months. The accompanying nystagmus suggests that central nervous system disturbance may also be present with acute ophthalmoparesis.

  7. Gaze entropy reflects surgical task load.

    PubMed

    Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés

    2016-11-01

    Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not been investigated yet. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity level: Clip Applying exercise, Cutting Big exercise, and Translocation of Objects exercise. We also measured performance accuracy and subjective rating of complexity. Gaze entropy and velocity linearly increased with increased task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices to assess residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.

  8. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    PubMed

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    PubMed Central

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  10. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  11. I want to help you, but I am not sure why: gaze-cuing induces altruistic giving.

    PubMed

    Rogers, Robert D; Bayliss, Andrew P; Szepietowska, Anna; Dale, Laura; Reeder, Lydia; Pizzamiglio, Gloria; Czarna, Karolina; Wakeley, Judi; Cowen, Phillip J; Tipper, Steven P

    2014-04-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues.

  12. Gaze Estimation for Off-Angle Iris Recognition Based on the Biometric Eye Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ANONYMIZED biometric eye model. Gaze estimation is an important prerequisite step to correct off-angle iris images. To achieve an accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of the iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up-table generated by using our biologically inspired biometric eye model and find the closest feature point in the look-up-table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
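
    A minimal sketch of the look-up-table matching step described above, assuming the table stores boundary features precomputed from an eye model (here a hypothetical iris ellipse minor/major axis ratio) together with the gaze angles that generated them:

    ```python
    import numpy as np

    def estimate_gaze_from_lut(lut_features, lut_gazes, observed_feature):
        """Return the gaze angles of the look-up-table entry whose precomputed
        boundary features are closest (Euclidean) to the observed features."""
        idx = np.argmin(np.linalg.norm(lut_features - observed_feature, axis=1))
        return lut_gazes[idx]

    # toy LUT: feature = iris ellipse minor/major axis ratio (1.0 means frontal view)
    lut_features = np.array([[1.00], [0.94], [0.87], [0.77], [0.64]])
    lut_gazes = np.array([[0, 0], [20, 0], [30, 0], [40, 0], [50, 0]])  # (azimuth, elevation) deg
    print(estimate_gaze_from_lut(lut_features, lut_gazes, np.array([0.90])))
    ```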

  13. Mental state attribution and the gaze cueing effect.

    PubMed

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  14. Visual perception during mirror gazing at one's own face in schizophrenia.

    PubMed

    Caputo, Giovanni B; Ferrucci, Roberta; Bortolomasi, Marco; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2012-09-01

    In normal observers, gazing at one's own face in the mirror for some minutes at a low illumination level triggers the perception of strange faces, a new perceptual illusion that has been named 'strange-face in the mirror'. Subjects see distortions of their own faces, but often they see monsters, archetypical faces, faces of dead relatives, and of animals. We designed this study primarily to compare strange-face apparitions in response to mirror gazing in patients with schizophrenia and healthy controls. The study included 16 patients with schizophrenia and 21 healthy controls. In this paper we administered a 7-minute mirror gazing test (MGT). Before the mirror gazing session, all subjects underwent assessment with the Cardiff Anomalous Perception Scale (CAPS). When the 7-minute MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face perceptions. Apparitions of strange faces in the mirror were significantly more intense in schizophrenic patients than in controls. All the following variables were higher in patients than in healthy controls: frequency (p<.005) and cumulative duration of apparitions (p<.009), number and types of strange faces (p<.002), self-evaluation scores on Likert-type scales of apparition strength (p<.03) and of reality of apparitions (p<.001). In schizophrenic patients, these Likert-type scales showed correlations (p<.05) with CAPS total scores. These results suggest that the increase of strange-face apparitions in schizophrenia can be produced by ego dysfunction, by body dysmorphic disorder and by misattribution of self-agency. MGT may help in completing the standard assessment of patients with schizophrenia, independently of hallucinatory psychopathology. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. The Disturbance of Gaze in Progressive Supranuclear Palsy: Implications for Pathogenesis

    PubMed Central

    Chen, Athena L.; Riley, David E.; King, Susan A.; Joshi, Anand C.; Serra, Alessandro; Liao, Ke; Cohen, Mark L.; Otero-Millan, Jorge; Martinez-Conde, Susana; Strupp, Michael; Leigh, R. John

    2010-01-01

    Progressive supranuclear palsy (PSP) is a disease of later life that is currently regarded as a form of neurodegenerative tauopathy. Disturbance of gaze is a cardinal clinical feature of PSP that often helps clinicians to establish the diagnosis. Since the neurobiology of gaze control is now well understood, it is possible to use eye movements as investigational tools to understand aspects of the pathogenesis of PSP. In this review, we summarize each disorder of gaze control that occurs in PSP, drawing on our studies of 50 patients, and on reports from other laboratories that have measured the disturbances of eye movements. When these gaze disorders are approached by considering each functional class of eye movements and its neurobiological basis, a distinct pattern of eye movement deficits emerges that provides insight into the pathogenesis of PSP. Although some aspects of all forms of eye movements are affected in PSP, the predominant defects concern vertical saccades (slow and hypometric, both up and down), impaired vergence, and inability to modulate the linear vestibulo-ocular reflex appropriately for viewing distance. These vertical and vergence eye movements habitually work in concert to enable visuomotor skills that are important during locomotion with the hands free. Taken with the prominent early feature of falls, these findings suggest that PSP tauopathy impairs a recently evolved neural system concerned with bipedal locomotion in an erect posture and frequent gaze shifts between the distant environment and proximate hands. This approach provides a conceptual framework that can be used to address the nosological challenge posed by overlapping clinical and neuropathological features of neurodegenerative tauopathies. PMID:21188269

  16. Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.

    PubMed

    Barbosa, Sara; Pires, Gabriel; Nunes, Urbano

    2016-03-01

    Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed out as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural, meaningful spoken words, increasing stimulus discrimination and decreasing the user's mental effort in associating stimuli with symbols. The visual part of the interface is covertly controlled, ensuring gaze-independence. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    PubMed

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
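
    The abstract does not specify how the gaze-based and MEG-based classifiers are fused; one plausible late-fusion sketch, averaging class probabilities from an MEG-based Gaussian process classifier and a simpler gaze-feature classifier (synthetic data, hypothetical feature dimensions, scikit-learn assumed available):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_meg = rng.normal(size=(60, 20))    # e.g. MEG features per viewed image
    X_gaze = rng.normal(size=(60, 4))    # e.g. dwell time, fixation count, ...
    y = np.tile([0, 1], 30)              # 1 = task-relevant image, 0 = irrelevant

    # train on the first 40 images, test on the remaining 20
    meg_clf = GaussianProcessClassifier().fit(X_meg[:40], y[:40])
    gaze_clf = LogisticRegression().fit(X_gaze[:40], y[:40])

    p_meg = meg_clf.predict_proba(X_meg[40:])[:, 1]
    p_gaze = gaze_clf.predict_proba(X_gaze[40:])[:, 1]
    p_fused = (p_meg + p_gaze) / 2.0     # simple probability-level fusion
    print((p_fused > 0.5).astype(int))
    ```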

  18. Gaze characteristics of elite and near-elite athletes in ice hockey defensive tactics.

    PubMed

    Martell, Stephen G; Vickers, Joan N

    2004-04-01

    Traditional visual search experiments, where the researcher pre-selects video-based scenes for the participant to respond to, show that elite players make more efficient decisions than non-elites, but disagree on how they temporally regulate their gaze. Using the vision-in-action [J.N. Vickers, J. Exp. Psychol.: Human Percept. Perform. 22 (1996) 342] approach, we tested whether the significant gaze that differentiates elite and non-elite athletes occurred either: early in the task and was of more rapid duration [A.M. Williams et al., Res. Quart. Exer. Sport 65 (1994) 127; A.M. Williams and K. Davids, Res. Quart. Exer. Sport 69 (1998) 111], or late in the task and was of longer duration [W. Helsen, J.M. Pauwels, A cognitive approach to visual search in sport, in: D. Brogan, K. Carr (Eds.), Visual Search, vol. II, Taylor and Francis, London, 1992], or whether a more complex gaze control strategy was used that consisted of both early and rapid fixations followed by a late fixation of long duration prior to the final execution. We tested this using a live defensive zone task in ice hockey. Results indicated that athletes temporally regulated their gaze using two different gaze control strategies. First, fixation/tracking (F/T) gazes early in the trial were significantly shorter than the final F/T, confirming that the elite group fixated the tactical locations more rapidly than the non-elite group on successful plays. Second, the final F/T prior to critical movement initiation (i.e. F/T-1) was significantly longer for both groups, averaging 30% of the final part of the phase, and occurred as the athletes isolated a single object or location to end the play. The results imply that expertise in defensive tactics is defined by a cascade of F/T, which began with the athletes fixating or tracking specific locations for short durations at the beginning of the play, and concluded with a final gaze of long duration to a relatively stable target at the end. The results are

  19. I Want to Help You, But I Am Not Sure Why: Gaze-Cuing Induces Altruistic Giving

    PubMed Central

    2013-01-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues. PMID:23937180

  20. Gaze direction affects the magnitude of face identity aftereffects.

    PubMed

    Kloth, Nadine; Jeffery, Linda; Rhodes, Gillian

    2015-02-20

    The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly. © 2015 ARVO.

  1. Mirror Neurons of Ventral Premotor Cortex Are Modulated by Social Cues Provided by Others' Gaze.

    PubMed

    Coudé, Gino; Festante, Fabrizia; Cilia, Adriana; Loiacono, Veronica; Bimbi, Marco; Fogassi, Leonardo; Ferrari, Pier Francesco

    2016-03-16

    Mirror neurons (MNs) in the inferior parietal lobule and ventral premotor cortex (PMv) can code the intentions of other individuals using contextual cues. Gaze direction is an important social cue that can be used for understanding the meaning of actions made by other individuals. Here we addressed the issue of whether PMv MNs are influenced by the gaze direction of another individual. We recorded single-unit activity in macaque PMv while the monkey was observing an experimenter performing a grasping action and orienting his gaze either toward (congruent gaze condition) or away (incongruent gaze condition) from a target object. The results showed that one-half of the recorded MNs were modulated by the gaze direction of the human agent. These gaze-modulated neurons were evenly distributed between those preferring a gaze direction congruent with the direction where the grasping action was performed and the others that preferred an incongruent gaze. Whereas the presence of congruent responses is in line with the usual coupling of hand and gaze in both executed and observed actions, the incongruent responses can be explained by the long exposure of the monkeys to this condition. Our results reveal that the representation of observed actions in PMv is influenced by contextual information not only extracted from physical cues, but also from cues endowed with biological or social value. In this study, we present the first evidence showing that social cues modulate MNs in the monkey ventral premotor cortex. These data suggest that there is an integrated representation of other's hand actions and gaze direction at the single neuron level in the ventral premotor cortex, and support the hypothesis of a functional role of MNs in decoding actions and understanding motor intentions. Copyright © 2016 the authors 0270-6474/16/363145-12$15.00/0.

  2. Development of Gaze Following Abilities in Wolves (Canis Lupus)

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2011-01-01

    The ability to coordinate with others' head and eye orientation to look in the same direction is considered a key step towards an understanding of others' mental states like attention and intention. Here, we investigated the ontogeny and habituation patterns of gaze following into distant space and behind barriers in nine hand-raised wolves. We found that these wolves could use conspecific as well as human gaze cues even in the barrier task, which is thought to be more cognitively advanced than gazing into distant space. Moreover, while gaze following into distant space was already present at the age of 14 weeks and subjects did not habituate to repeated cues, gazing around a barrier developed considerably later and animals quickly habituated, supporting the hypothesis that different cognitive mechanisms may underlie the two gaze following modalities. More importantly, this study demonstrated that following another individual's gaze around a barrier is not restricted to primates and corvids but is also present in canines, with remarkable between-group similarities in the ontogeny of this behaviour. This sheds new light on the evolutionary origins of and selective pressures on gaze following abilities as well as on the sensitivity of domestic dogs towards human communicative cues. PMID:21373192

  3. Mobile gaze tracking system for outdoor walking behavioral studies

    PubMed Central

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R.; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511
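
    A yaw-only caricature of the IMU-pair idea described above (names and values hypothetical): head-on-trunk rotation is taken as the difference between the head- and trunk-mounted IMU yaw readings, so drift and interference common to both sensors cancel, and the result is combined with the eye-in-head azimuth to give gaze relative to the heading direction.

    ```python
    def gaze_relative_to_heading(eye_in_head_deg, head_imu_yaw_deg, trunk_imu_yaw_deg):
        """Gaze azimuth relative to the walking (trunk heading) direction:
        eye-in-head angle plus head-on-trunk rotation, where the latter is the
        difference between the head and trunk IMU yaw outputs."""
        head_on_trunk = head_imu_yaw_deg - trunk_imu_yaw_deg
        return eye_in_head_deg + head_on_trunk

    # eye 10 deg right in the head, head turned 15 deg left of the trunk heading
    print(gaze_relative_to_heading(10.0, head_imu_yaw_deg=30.0, trunk_imu_yaw_deg=45.0))  # -> -5.0
    ```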

  4. Group Differences in the Mutual Gaze of Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Bard, Kim A.; Myowa-Yamakoshi, Masako; Tomonaga, Masaki; Tanaka, Masayuki; Costall, Alan; Matsuzawa, Tetsuro

    2005-01-01

    A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic…

  5. Symptoms elicited in persons with vestibular dysfunction while performing gaze movements in optic flow environments

    PubMed Central

    Whitney, Susan L.; Sparto, Patrick J.; Cook, James R.; Redfern, Mark S.; Furman, Joseph M.

    2016-01-01

    Introduction: People with vestibular disorders often experience space and motion discomfort when exposed to moving or highly textured visual scenes. The purpose of this study was to measure the type and severity of symptoms in people with vestibular dysfunction during coordinated head and eye movements in optic flow environments. Methods: Seven subjects with vestibular disorders and 25 controls viewed four different full-field optic flow environments on six different visits. The optic flow environments consisted of textures with various contrasts and spatial frequencies. Subjects performed 8 gaze movement tasks, including eye saccades, gaze saccades, and gaze stabilization tasks. Subjects reported symptoms using Subjective Units of Discomfort (SUD) and the Simulator Sickness Questionnaire (SSQ). Self-reported dizziness handicap and space and motion discomfort were also measured. Results/Conclusion: Subjects with vestibular disorders had greater discomfort and experienced greater oculomotor and disorientation symptoms. The magnitude of the symptoms increased during each visit, but did not depend on the optic flow condition. Subjects who reported greater dizziness handicap and space motion discomfort had greater severity of symptoms during the experiment. Symptoms of fatigue, difficulty focusing, and dizziness during the experiment were evident. Compared with controls, subjects with vestibular disorders had less head movement during the gaze saccade tasks. Overall, performance of gaze pursuit and gaze stabilization tasks in moving visual environments elicited greater symptoms in subjects with vestibular disorders compared with healthy subjects. PMID:23549055

  6. A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus)

    PubMed Central

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  7. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  8. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    PubMed

    Khoramshahi, Mahdi; Shukla, Ashwini; Raffard, Stéphane; Bardy, Benoît G; Billard, Aude

    2016-01-01

    The ability to follow one another's gaze plays an important role in our social cognition, especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or a follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar either does or does not provide explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like from the user's point of view. Forty-three subjects participated in 8 trials of the mirror game, each performing the game in both conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and the avatar's realism was assessed with a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction time (RT), confirming our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. A frequency analysis of the two players' hand movements revealed that the gaze cues improved the overall temporal coordination between the two players. Finally, analysis of the subjective evaluations from the questionnaires revealed that, in the presence of gaze cues, participants found the avatar not only more human-like and realistic but also easier to interact with. This work confirms that people can exploit gaze cues to predict a partner's actions and improve their coordination.
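
    One way to make the temporal-coordination measure concrete is to estimate the lag at which the follower's hand trajectory best matches the leader's. The Python sketch below cross-correlates two velocity traces; it is an illustrative reconstruction under assumed names and sampling rates, not the authors' analysis pipeline.

      import numpy as np

      def coordination_lag(leader_pos, follower_pos, fs=100.0):
          """Estimate how far (in seconds) the follower trails the leader by
          cross-correlating the two velocity traces. A positive value means
          the follower lags behind the leader."""
          # Differentiate position into velocity and remove the mean so the
          # correlation reflects shared motion rather than constant offsets.
          v_lead = np.diff(np.asarray(leader_pos, dtype=float)) * fs
          v_fol = np.diff(np.asarray(follower_pos, dtype=float)) * fs
          v_lead -= v_lead.mean()
          v_fol -= v_fol.mean()
          xcorr = np.correlate(v_fol, v_lead, mode="full")
          lags = np.arange(-len(v_lead) + 1, len(v_fol))
          return lags[np.argmax(xcorr)] / fs

      # Hypothetical usage with two 1-D hand trajectories sampled at 120 Hz:
      # lag_s = coordination_lag(leader_x, follower_x, fs=120.0)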

  9. Kinematics and eye-head coordination of gaze shifts evoked from different sites in the superior colliculus of the cat.

    PubMed

    Guillaume, Alain; Pélisson, Denis

    2006-12-15

    Shifting gaze requires precise coordination of eye and head movements. It is clear that the superior colliculus (SC) is involved with saccadic gaze shifts. Here we investigate its role in controlling both eye and head movements during gaze shifts. Gaze shifts of the same amplitude can be evoked from different SC sites by controlled electrical microstimulation. To describe how the SC coordinates the eye and the head, we compare the characteristics of these amplitude-matched gaze shifts evoked from different SC sites. We show that matched amplitude gaze shifts elicited from progressively more caudal sites are progressively slower and associated with a greater head contribution. Stimulation at more caudal SC sites decreased the peak velocity of the eye but not of the head, suggesting that the lower peak gaze velocity for the caudal sites is due to the increased contribution of the slower-moving head. Eye-head coordination across the SC motor map is also indicated by the relative latencies of the eye and head movements. For some amplitudes of gaze shift, rostral stimulation evoked eye movement before head movement, whereas this reversed with caudal stimulation, which caused the head to move before the eyes. These results show that gaze shifts of similar amplitude evoked from different SC sites are produced with different kinematics and coordination of eye and head movements. In other words, gaze shifts evoked from different SC sites follow different amplitude-velocity curves, with different eye-head contributions. These findings shed light on mechanisms used by the central nervous system to translate a high-level motor representation (a desired gaze displacement on the SC map) into motor commands appropriate for the involved body segments (the eye and the head).

  10. Gaze pursuit responses in nucleus reticularis tegmenti pontis of head-unrestrained macaques.

    PubMed

    Suzuki, David A; Betelak, Kathleen F; Yee, Robert D

    2009-01-01

    Eye-head gaze pursuit-related activity was recorded in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. The head was unrestrained in the horizontal plane, and macaques were trained to pursue a moving target either with their head, with the eyes stationary in the orbits, or with their eyes, with their head voluntarily held stationary in space. Head-pursuit-related modulations in rNRTP activity were observed, with some cells exhibiting increases in firing rate with increases in head-pursuit frequency. For many units, this head-pursuit response appeared to saturate at higher frequencies (>0.6 Hz). The response phase relative to peak head-pursuit velocity formed a continuum, containing cells that could encode head-pursuit velocity and those encoding head-pursuit acceleration. The latter cells did not exhibit head position-related activity. Sensitivities were calculated with respect to peak head-pursuit velocity and averaged 1.8 spikes/s/deg/s. Of the cells that were tested for both head- and eye-pursuit-related activity, 86% exhibited responses to both head- and eye-pursuit and therefore carried a putative gaze-pursuit signal. For these gaze-pursuit units, the ratio of head to eye response sensitivities averaged approximately 1.4. Pursuit eccentricity seemed to affect head-pursuit response amplitude even in the absence of a head position response per se. The results indicated that rNRTP is a strong candidate for the source of an active head-pursuit signal that projects to the cerebellum, specifically to the target-velocity and gaze-velocity Purkinje cells that have been observed in vermal lobules VI and VII.

  11. Gaze Pursuit Responses in Nucleus Reticularis Tegmenti Pontis of Head-Unrestrained Macaques

    PubMed Central

    Suzuki, David A.; Betelak, Kathleen F.; Yee, Robert D.

    2009-01-01

    Eye-head gaze pursuit–related activity was recorded in rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. The head was unrestrained in the horizontal plane, and macaques were trained to pursue a moving target either with their head, with the eyes stationary in the orbits, or with their eyes, with their head voluntarily held stationary in space. Head-pursuit–related modulations in rNRTP activity were observed, with some cells exhibiting increases in firing rate with increases in head-pursuit frequency. For many units, this head-pursuit response appeared to saturate at higher frequencies (>0.6 Hz). The response phase relative to peak head-pursuit velocity formed a continuum, containing cells that could encode head-pursuit velocity and those encoding head-pursuit acceleration. The latter cells did not exhibit head position–related activity. Sensitivities were calculated with respect to peak head-pursuit velocity and averaged 1.8 spikes/s/deg/s. Of the cells that were tested for both head- and eye-pursuit–related activity, 86% exhibited responses to both head- and eye-pursuit and therefore carried a putative gaze-pursuit signal. For these gaze-pursuit units, the ratio of head to eye response sensitivities averaged ∼1.4. Pursuit eccentricity seemed to affect head-pursuit response amplitude even in the absence of a head position response per se. The results indicated that rNRTP is a strong candidate for the source of an active head-pursuit signal that projects to the cerebellum, specifically to the target-velocity and gaze-velocity Purkinje cells that have been observed in vermal lobules VI and VII. PMID:18987125

  12. Is social attention impaired in schizophrenia? Gaze, but not pointing gestures, is associated with spatial attention deficits.

    PubMed

    Dalmaso, Mario; Galfano, Giovanni; Tarqui, Luana; Forti, Bruno; Castelli, Luigi

    2013-09-01

    The nature of possible impairments in orienting attention to social signals in schizophrenia is controversial. The present research was aimed at addressing this issue further by comparing gaze and arrow cues. Unlike previous studies, we also included pointing gestures as social cues, with the goal of addressing whether any eventual impairment in the attentional response was specific to gaze signals or reflected a more general deficit in dealing with social stimuli. Patients with schizophrenia or schizoaffective disorder and matched controls performed a spatial-cuing paradigm in which task-irrelevant centrally displayed gaze, pointing finger, and arrow cues oriented rightward or leftward, preceded a lateralized target requiring a simple detection response. Healthy controls responded faster to spatially congruent targets than to spatially incongruent targets, irrespective of cue type. In contrast, schizophrenic patients responded faster to spatially congruent targets than to spatially incongruent targets only for arrow and pointing finger cues. No cuing effect emerged for gaze cues. The results support the notion that gaze cuing is impaired in schizophrenia, and suggest that this deficit may not extend to all social cues.

  13. Seductive eyes: attractiveness and direct gaze increase desire for associated objects.

    PubMed

    Strick, Madelijn; Holland, Rob W; van Knippenberg, Ad

    2008-03-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel objects were associated with either attractive or unattractive female faces, either displaying direct or averted gaze. An affective priming task showed more positive automatic evaluations of objects that were paired with attractive faces with direct gaze than attractive faces with averted gaze and unattractive faces, irrespective of gaze direction. Participants' self-reported desire for the objects matched the affective priming data. The results are discussed against the background of recent findings on affective consequences of gaze cueing.

  14. The Development of Mentalistic Gaze Understanding

    ERIC Educational Resources Information Center

    Doherty, Martin J.

    2006-01-01

    Very young infants are sensitive to and follow other people's gaze. By 18 months children, like chimpanzees, apparently represent the spatial relationship between viewer and object viewed: they can follow eye-direction alone, and react appropriately if the other's gaze is blocked by occluding barriers. This paper assesses when children represent…

  15. Measurement of ocular aberrations in downward gaze using a modified clinical aberrometer

    PubMed Central

    Ghosh, Atanu; Collins, Michael J; Read, Scott A; Davis, Brett A; Iskander, D. Robert

    2011-01-01

    Changes in corneal optics have been measured after downward gaze. However, ocular aberrations during downward gaze have not been previously measured. A commercial Shack-Hartmann aberrometer (COAS-HD) was modified by adding a relay lens system and a rotatable beam splitter to allow on-axis aberration measurements in primary gaze and downward gaze with binocular fixation. Measurements with the modified aberrometer (COAS-HD relay system) in primary and downward gaze were validated against a conventional aberrometer. In human eyes, there were significant changes (p<0.05) in defocus C(2,0), primary astigmatism C(2,2) and vertical coma C(3,−1) in downward gaze (25 degrees) compared to primary gaze, indicating the potential influence of biomechanical forces on the optics of the eye in downward gaze. To demonstrate a further clinical application of this modified aberrometer, we measured ocular aberrations when wearing a progressive addition lens (PAL) in primary gaze (0 degree), 15 degrees downward gaze and 25 degrees downward gaze. PMID:21412451

  16. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    PubMed

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  17. Horizontal gaze nystagmus: a review of vision science and application issues.

    PubMed

    Rubenzer, Steven J; Stevenson, Scott B

    2010-03-01

    The Horizontal Gaze Nystagmus (HGN) test is one component of the Standardized Field Sobriety Test battery. This article reviews the literature on smooth pursuit eye movement and gaze nystagmus with a focus on normative responses, the influence of alcohol on these behaviors, and stimulus conditions similar to those used in the HGN sobriety test. Factors such as age, stimulus and background conditions, medical conditions, prescription medications, and psychiatric disorders were found to affect the smooth pursuit phase of HGN. Much less literature is available for gaze nystagmus, but onset of nystagmus may occur in some sober subjects at 45 degrees or less. We conclude that HGN is limited by large variability in the underlying normative behavior, by methods and testing environments that are often poorly controlled, and by a lack of rigorous validation in laboratory settings.

  18. Neural Mechanisms Underlying Conscious and Unconscious Gaze-Triggered Attentional Orienting in Autism Spectrum Disorder

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshimura, Sayaka; Toichi, Motomi

    2017-01-01

    Impaired joint attention represents the core clinical feature of autism spectrum disorder (ASD). Behavioral studies have suggested that gaze-triggered attentional orienting is intact in response to supraliminally presented eyes but impaired in response to subliminally presented eyes in individuals with ASD. However, the neural mechanisms underlying conscious and unconscious gaze-triggered attentional orienting remain unclear. We investigated this issue in ASD and typically developing (TD) individuals using event-related functional magnetic resonance imaging. The participants viewed cue stimuli of averted or straight eye gaze direction presented either supraliminally or subliminally and then localized a target. Reaction times were shorter when eye-gaze cues were directionally valid compared with when they were neutral under the supraliminal condition in both groups; the same pattern was found in the TD group but not the ASD group under the subliminal condition. The temporo–parieto–frontal regions showed stronger activation in response to averted eyes than to straight eyes in both groups under the supraliminal condition. The left amygdala was more activated while viewing averted vs. straight eyes in the TD group than in the ASD group under the subliminal condition. These findings provide an explanation for the neural mechanisms underlying the impairment in unconscious but not conscious gaze-triggered attentional orienting in individuals with ASD and suggest possible neurological and behavioral interventions to facilitate their joint attention behaviors. PMID:28701942

  19. Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.

    PubMed

    Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin

    2014-06-01

    Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Gaze-informed, task-situated representation of space in primate hippocampus during virtual navigation

    PubMed Central

    Wirth, Sylvia; Baraduc, Pierre; Planté, Aurélie; Pinède, Serge; Duhamel, Jean-René

    2017-01-01

    To elucidate how gaze informs the construction of mental space during wayfinding in visual species like primates, we jointly examined navigation behavior, visual exploration, and hippocampal activity as macaque monkeys searched a virtual reality maze for a reward. Cells sensitive to place also responded to one or more variables like head direction, point of gaze, or task context. Many cells fired at the sight (and in anticipation) of a single landmark in a viewpoint- or task-dependent manner, simultaneously encoding the animal’s logical situation within a set of actions leading to the goal. Overall, hippocampal activity was best fit by a fine-grained state space comprising current position, view, and action contexts. Our findings indicate that counterparts of rodent place cells in primates embody multidimensional, task-situated knowledge pertaining to the target of gaze, therein supporting self-awareness in the construction of space. PMID:28241007

  1. Incidence and anatomy of gaze-evoked nystagmus in patients with cerebellar lesions.

    PubMed

    Baier, Bernhard; Dieterich, Marianne

    2011-01-25

    Disorders of gaze-holding, which is organized by a neural network located in the brainstem or the cerebellum, may lead to nystagmus. Previous animal studies concluded that one key player in the cerebellar part of this gaze-holding network is the flocculus. To date, however, there have been no systematic studies in humans with cerebellar lesions examining one of the most common forms of nystagmus: gaze-evoked nystagmus (GEN). The aim of the present study was to clarify which cerebellar structures are involved in the generation of GEN. Twenty-one patients with acute unilateral cerebellar stroke were analyzed by means of modern MRI-based voxel-wise lesion-behavior mapping. Our data indicate that cerebellar structures such as the vermal pyramid, the uvula, and the tonsil, but also parts of the biventer lobule and the inferior semilunar lobule, were affected in horizontal GEN. These structures appear to be part of a gaze-holding neural integrator control system. Furthermore, GEN may be a diagnostic sign pointing toward ipsilesional lesions of midline and lower cerebellar structures.

  2. GazeParser: an open-source and multiplatform library for low-cost eye tracking and analysis.

    PubMed

    Sogo, Hiroyuki

    2013-09-01

    Eye movement analysis is an effective method for research on visual perception and cognition. However, recordings of eye movements present practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eyetracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with the PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments testing the performance of GazeParser are reported. These showed that the means and standard deviations for errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on the participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eyetracker. These results show that GazeParser performs adequately for use in psychological experiments.
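
    The saccade detection underlying analyses of this kind can be illustrated with a simple velocity-threshold rule. The Python sketch below is a generic illustration with assumed threshold and sampling-rate values; it is not GazeParser's actual API or detection algorithm.

      import numpy as np

      def detect_saccades(x_deg, y_deg, fs=500.0, vel_thresh=30.0):
          """Return (onset_index, offset_index) pairs for samples whose
          angular gaze velocity exceeds vel_thresh (deg/s). The sampling
          rate and threshold are illustrative assumptions."""
          vx = np.gradient(np.asarray(x_deg, dtype=float)) * fs
          vy = np.gradient(np.asarray(y_deg, dtype=float)) * fs
          speed = np.hypot(vx, vy)                  # angular speed in deg/s
          fast = speed > vel_thresh
          # Rising and falling edges of the boolean mask mark saccade
          # onsets and offsets.
          edges = np.diff(fast.astype(int))
          onsets = np.where(edges == 1)[0] + 1
          offsets = np.where(edges == -1)[0] + 1
          return list(zip(onsets, offsets))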

  3. Manifold decoding for neural representations of face viewpoint and gaze direction using magnetoencephalographic data.

    PubMed

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2018-05-01

    The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
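
    As a rough illustration of the manifold idea, single-trial sensor-level response patterns can be embedded into a low-dimensional space and inspected for an ordering that follows face viewpoint or gaze direction. The Python sketch below uses Isomap from scikit-learn on hypothetical feature matrices; it is a conceptual sketch, not the authors' decoding model.

      import numpy as np
      from sklearn.manifold import Isomap

      # Hypothetical data: one row per trial, one column per MEG sensor
      # feature (e.g., amplitude in a chosen time window); viewpoint_deg
      # holds the presented face viewpoint for each trial.
      rng = np.random.default_rng(0)
      trials = rng.normal(size=(200, 306))          # 200 trials x 306 sensors
      viewpoint_deg = rng.choice([-30, 0, 30], size=200)

      # Embed the trial-by-sensor matrix into a 2-D manifold.
      embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(trials)

      # Simple sanity check: does the first manifold axis co-vary with
      # viewpoint? (It will not for random data; it might for real data.)
      print(np.corrcoef(embedding[:, 0], viewpoint_deg)[0, 1])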

  4. MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions

    PubMed Central

    Dumas, Thibaud; Dubal, Stéphanie; Attal, Yohan; Chupin, Marie; Jouvent, Roland; Morel, Shasha; George, Nathalie

    2013-01-01

    Background The amygdala is a key brain region for face perception. While the role of the amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Methodology/Principal Findings Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Conclusion Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception. PMID:24040190

  5. Face age modulates gaze following in young adults.

    PubMed

    Ciardo, Francesca; Marino, Barbara F M; Actis-Grosso, Rossana; Rossetti, Angela; Ricciardelli, Paola

    2014-04-22

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).

  6. Gaze Stabilization During Locomotion Requires Full Body Coordination

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Miller, C. A.; Houser, J.; Richards, J. T.; Bloomberg, J. J.

    2001-01-01

    Maintaining gaze stabilization during locomotion places substantial demands on multiple sensorimotor subsystems for precise coordination. Gaze stabilization during locomotion requires eye-head-trunk coordination (Bloomberg, et al., 1997) as well as the regulation of energy flow or shock-wave transmission through the body at high impact phases with the support surface (McDonald, et al., 1997). Allowing these excessive transmissions of energy to reach the head may compromise gaze stability. Impairments in these mechanisms may lead to the oscillopsia and decreased dynamic visual acuity seen in crewmembers returning from short and long duration spaceflight, as well as in patients with vestibular disorders (Hillman, et al., 1999). Thus, we hypothesize that stabilized gaze during locomotion results from full-body coordination of the eye-head-trunk system combined with the lower limb apparatus. The goal of this study was to determine how multiple, interdependent full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated, and how they adaptively respond to spaceflight.

  7. The Brainstem Switch for Gaze Shifts in Humans

    DTIC Science & Technology

    Kumar, A. N.; Leigh, R. J.; Ramat, S.

    2001-10-25

    ... omnipause neurons during gaze shifts. Using the scleral search coil technique, eye movements were measured in seven normal subjects, as they made ... voluntary, disjunctive gaze shifts comprising saccades and vergence movements. Conjugate oscillations of small amplitude and high frequency were identified ...

  8. A Non-Verbal Turing Test: Differentiating Mind from Machine in Gaze-Based Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons’ gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character’s gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete. PMID:22096599

  9. Upward gaze-evoked nystagmus with organoarsenic poisoning.

    PubMed

    Nakamagoe, Kiyotaka; Ishii, Kazuhiro; Tamaoka, Akira; Shoji, Shin'ichi

    2006-01-10

    The authors report assessment of abnormal ocular movements in three patients after organoarsenic poisoning from diphenylarsinic acid. The characteristic and principal sign is upward gaze-evoked nystagmus. Moreover, vertical gaze holding impairment was shown by electronystagmography on direct current recording.

  10. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated via processing following eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
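
    A worked sketch of the kind of geometry described above: treating the eye as a sphere whose centre sits behind the midpoint of the two eye corners, the horizontal gaze angle follows from how far the midpupil is displaced from that midpoint. The radius, pixel scale, and simplified projection below are illustrative assumptions, not the published model.

      import math

      def horizontal_gaze_angle(corner_left_x, corner_right_x, midpupil_x,
                                eye_radius_mm=12.0, mm_per_pixel=0.20):
          """Estimate the horizontal gaze angle in degrees from 2-D eye
          features. Positive angles mean gaze toward the image right.
          eye_radius_mm and mm_per_pixel are anthropometric/scale
          assumptions."""
          eye_centre_x = 0.5 * (corner_left_x + corner_right_x)
          # Pupil displacement from the eye centre, converted to millimetres.
          offset_mm = (midpupil_x - eye_centre_x) * mm_per_pixel
          # The pupil lies on the sphere surface, so the displacement
          # projects as radius * sin(gaze angle).
          ratio = max(-1.0, min(1.0, offset_mm / eye_radius_mm))
          return math.degrees(math.asin(ratio))

      # Example: corners at pixels 100 and 160, midpupil at 140
      # print(horizontal_gaze_angle(100, 160, 140))   # about 9.6 degrees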

  11. "Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.

    PubMed

    Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano

    2013-08-01

    We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry, or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with the stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrate that personality differences can shape the processing of interactions between relevant social signals.

  12. Maternal Oxytocin Response Predicts Mother-to-Infant Gaze

    PubMed Central

    Kim, Sohye; Fonagy, Peter; Koos, Orsolya; Dorsett, Kimberly; Strathearn, Lane

    2014-01-01

    The neuropeptide oxytocin is importantly implicated in the emergence and maintenance of maternal behavior that forms the basis of the mother-infant bond. However, no research has yet examined the specific association between maternal oxytocin and maternal gaze, a key modality through which the mother makes social contact and engages with her infant. Furthermore, prior oxytocin studies have assessed maternal engagement primarily during episodes free of infant distress, while maternal engagement during infant distress is considered to be uniquely relevant to the formation of secure mother-infant attachment. Two patterns of maternal gaze, maternal gaze toward and gaze shifts away from the infant, were micro-coded while 50 mothers interacted with their 7-month-old infants during a modified still-face procedure. Maternal oxytocin response was defined as a change in the mother’s plasma oxytocin level following interaction with her infant as compared to baseline. The mother’s oxytocin response was positively associated with the duration of time her gaze was directed toward her infant, while negatively associated with the frequency with which her gaze shifted away from her infant. Importantly, mothers who showed low/average oxytocin response demonstrated a significant decrease in their gaze toward their infants during periods of infant distress, while such change was not observed in mothers with high oxytocin response. The findings underscore the involvement of oxytocin in regulating the mother’s responsive engagement with her infant, particularly in times when the infant’s need for access to the mother is greatest. PMID:24184574

  13. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    PubMed

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  14. Gaze and viewing angle influence visual stabilization of upright posture

    PubMed Central

    Ustinova, KI; Perkins, J

    2011-01-01

    Focusing gaze on a target helps stabilize upright posture. We investigated how this visual stabilization can be affected by observing a target presented under different gaze and viewing angles. In a series of 10-second trials, participants (N = 20, 29.3 ± 9 years of age) stood on a force plate and fixed their gaze on a figure presented on a screen at a distance of 1 m. The figure changed position (gaze angle: eye level (0°), 25° up or down), vertical body orientation (viewing angle: at eye level but rotated 25° as if leaning toward or away from the participant), or both (gaze and viewing angle: 25° up or down with the rotation equivalent of a natural visual perspective). Amplitude of participants’ sagittal displacement, surface area, and angular position of the center of gravity (COG) were compared. Results showed decreased COG velocity and amplitude for up and down gaze angles. Changes in viewing angles resulted in altered body alignment and increased amplitude of COG displacement. No significant changes in postural stability were observed when both gaze and viewing angles were altered. Results suggest that both the gaze angle and viewing perspective may be essential variables of the visuomotor system modulating postural responses. PMID:22398978

  15. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    PubMed

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Just one look: Direct gaze briefly disrupts visual working memory.

    PubMed

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  17. Can upbeat nystagmus increase in downward, but not upward, gaze?

    PubMed

    Kim, Hyun-Ah; Yi, Hyon-Ah; Lee, Hyung

    2012-04-01

    Upbeat nystagmus (UBN) is typically increased with upward gaze and decreased with downward gaze. We describe a patient with acute multiple sclerosis who developed primary position UBN with a linear slow phase waveform, in which the velocity of nystagmus was intensified in downward gaze and decreased during upward gaze. Brain MRI showed high signal lesions in the paramedian dorsal area of the caudal medulla encompassing the most caudal part of the perihypoglossal nuclei. Clinicians should be aware of the possibility of a caudal medullary lesion in a patient with UBN, especially when the velocity of the UBN is increased in downward gaze. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Visual Foraging With Fingers and Eye Gaze

    PubMed Central

    Thornton, Ian M.; Smith, Irene J.; Chetverikov, Andrey; Kristjánsson, Árni

    2016-01-01

    A popular model of the function of selective visual attention involves search where a single target is to be found among distractors. For many scenarios, a more realistic model involves search for multiple targets of various types, since natural tasks typically do not involve a single target. Here we present results from a novel multiple-target foraging paradigm. We compare finger foraging where observers cancel a set of predesignated targets by tapping them, to gaze foraging where observers cancel items by fixating them for 100 ms. During finger foraging, for most observers, there was a large difference between foraging based on a single feature, where observers switch easily between target types, and foraging based on a conjunction of features where observers tended to stick to one target type. The pattern was notably different during gaze foraging where these condition differences were smaller. Two conclusions follow: (a) The fact that a sizeable number of observers (in particular during gaze foraging) had little trouble switching between different target types raises challenges for many prominent theoretical accounts of visual attention and working memory. (b) While caveats must be noted for the comparison of gaze and finger foraging, the results suggest that selection mechanisms for gaze and pointing have different operational constraints. PMID:27433323

  19. Virtual social interactions in social anxiety--the impact of sex, gaze, and interpersonal distance.

    PubMed

    Wieser, Matthias J; Pauli, Paul; Grosseibl, Miriam; Molzow, Ina; Mühlberger, Andreas

    2010-10-01

    In social interactions, interpersonal distance between interaction partners plays an important role in determining the status of the relationship. Interpersonal distance is an important nonverbal behavior, and is used to regulate personal space in a complex interplay with other nonverbal behaviors such as eye gaze. In social anxiety, studies regarding the impact of interpersonal distance on within-situation avoidance behavior are so far rare. Thus the present study aimed to scrutinize the relationship between gaze direction, sex, interpersonal distance, and social anxiety in social interactions. Social interactions were modeled in a virtual-reality (VR) environment, where 20 low and 19 high socially anxious women were confronted with approaching male and female characters, who stopped in front of the participant, either some distance away or close to them, and displayed either a direct or an averted gaze. Gaze and head movements, as well as heart rate, were measured as indices of avoidance behavior and fear reactions. High socially anxious participants showed a complex pattern of avoidance behavior: when the avatar was standing farther away, high socially anxious women avoided gaze contact with male avatars showing a direct gaze. Furthermore, they showed avoidance behavior (backward head movements) in response to male avatars showing a direct gaze, regardless of the interpersonal distance. Overall, the current study proved that VR social interactions might be a very useful tool for investigating avoidance behavior of socially anxious individuals in highly controlled situations. This might also be the first step in using VR social interactions in clinical protocols for the therapy of social anxiety disorder.

  20. Overview of Nonelectronic Eye-Gaze Communication Techniques.

    ERIC Educational Resources Information Center

    Goossens, Carol A.; Crain, Sharon S.

    1987-01-01

    The article discusses currently used eye gaze communication techniques with the severely physically disabled (eye-gaze vest, laptray, transparent display, and mirror/prism communicator), presents information regarding the types of message displays used to depict encoded material, and discusses the advantages of implementing nonelectronic eye-gaze…

  1. Games and Telerehabilitation for Balance Impairments and Gaze Dysfunction: Protocol of a Randomized Controlled Trial.

    PubMed

    Szturm, Tony; Hochman, Jordan; Wu, Christine; Lix, Lisa; Reimer, Karen; Wonneck, Beth; Giacobbo, Andrea

    2015-10-21

    Digital media and gaming have received considerable interest from researchers and clinicians as a model for learning a broad range of complex tasks and facilitating the transfer of skills to daily life. These emerging rehabilitation technologies have the potential to improve clinical outcomes and patient participation because they are engaging, motivating, and accessible. Our research goal is to develop preventative and therapeutic point-of-care eHealth applications that will lead to equivalent or better long-term health outcomes and health care costs than existing programs. We have produced a novel computer-aided tele-rehabilitation platform that combines computer game-based exercises with tele-monitoring. The objective is to compare the therapeutic effectiveness of an in-home, game-based rehabilitation program (GRP) to standard care delivered in an outpatient physical therapy clinic on measures of balance, gaze control, dizziness, and health-related quality of life. A randomized, controlled, single-blind pilot trial will be conducted. Fifty-six participants with a diagnosis of peripheral vestibular disorder will be randomly assigned to either usual physical therapy (comparator group) or to a game-based intervention (experimental group). Measures to be assessed will include gaze control, dynamic balance, and self-reported measures of dizziness. The project was funded and enrollment was started in August 2014. To date, 36 participants have been enrolled. There have been 6 drop-outs. It is expected that the study will be completed in January 2016, and the first results are expected to be submitted for publication in the spring of 2016. A successful application of this rehabilitation program would help streamline rehabilitation services, leverage therapist time spent with clients, and permit regular practice times at the client's convenience. Clinicaltrials.gov: NCT02134444; https://clinicaltrials.gov/ct2/show/NCT02134444 (Archived by WebCite at http://www.webcitation.org/6cE18bqqY).

  2. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  3. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  4. Culture and Listeners' Gaze Responses to Stuttering

    ERIC Educational Resources Information Center

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  5. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

    To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.

  6. Using gaze patterns to predict task intent in collaboration.

    PubMed

    Huang, Chien-Ming; Andrist, Sean; Sauppé, Allison; Mutlu, Bilge

    2015-01-01

    In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
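
    A minimal sketch of the modelling step, assuming per-request feature vectors (for example, the number of glances at each ingredient, total dwell time, and time of the last glance) have already been extracted from the gaze recordings. It uses a scikit-learn support vector classifier and hypothetical file names; it is not the authors' implementation.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # Hypothetical training data: each row summarises the customer's gaze
      # during one request window; y holds the ingredient actually requested.
      X = np.load("gaze_features.npy")       # shape: (n_requests, n_features)
      y = np.load("requested_items.npy")     # shape: (n_requests,)

      # Standardise the gaze features, then fit an RBF-kernel SVM and
      # estimate prediction accuracy with 5-fold cross-validation.
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      scores = cross_val_score(clf, X, y, cv=5)
      print("mean accuracy: %.2f" % scores.mean())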

  7. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements

    PubMed Central

    Kessler, Luise; Schweinberger, Stefan R.

    2016-01-01

    A speaker’s gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., “sniffer dogs cannot smell the difference between identical twins”). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze. PMID:27643789

  8. On the use of hidden Markov models for gaze pattern modeling

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions of the screen are not as important as others. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI-based metrics: firstly, the definition of an AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing the NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and a covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen and of intermittent casual gazing, which is modeled as random gazes on the screen.
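
    As a rough illustration of the modeling idea above (not the authors' estimator), the sketch below fits a continuous HMM whose hidden states stand in for unknown AOI and whose emissions are 2D Gaussians over gaze coordinates. It assumes the hmmlearn package and uses synthetic gaze data.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(1)
    # Synthetic gaze samples: two hypothetical AOI plus scattered casual gazing.
    aoi_a = rng.normal([300, 400], 20, size=(300, 2))
    aoi_b = rng.normal([900, 250], 25, size=(300, 2))
    casual = rng.uniform([0, 0], [1280, 720], size=(100, 2))
    gaze_xy = np.vstack([aoi_a, casual, aoi_b])

    # One hidden state per putative AOI plus one for casual/random gazing.
    model = GaussianHMM(n_components=3, covariance_type="full", n_iter=100,
                        random_state=1)
    model.fit(gaze_xy)

    states = model.predict(gaze_xy)            # AOI assignment of each gaze sample
    print("estimated AOI centers:\n", model.means_)
    ```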

  9. The effects of simulated vision impairments on the cone of gaze.

    PubMed

    Hecht, Heiko; Hörichs, Jenny; Sheldon, Sarah; Quint, Jessilin; Bowers, Alex

    2015-10-01

    Detecting the gaze direction of others is critical for many social interactions. We explored factors that may make the perception of mutual gaze more difficult, including the degradation of the stimulus and simulated vision impairment. To what extent do these factors affect the complex assessment of mutual gaze? Using an interactive virtual head whose eye direction could be manipulated by the subject, we conducted two experiments to assess the effects of simulated vision impairments on mutual gaze. Healthy subjects had to demarcate the center and the edges of the cone of gaze, that is, the range of gaze directions that are accepted for mutual gaze. When vision was impaired by adding a semitransparent white contrast reduction mask to the display (Exp. 1), judgments became more variable and more influenced by the head direction (indicative of a compensation strategy). When refractive blur was added (Exp. 1), the gaze cone shrank from 12.9° (no blur) to 11.3° (3-diopter lens), which cannot be explained by a low-level process but might reflect a tightening of the criterion for mutual gaze as a response to the increased uncertainty. However, the overall effects of the impairments were relatively modest. Elderly subjects (Exp. 2) produced more variability but did not differ qualitatively from the younger subjects. In the face of artificial vision impairments, compensation mechanisms and criterion changes allow us to perform better in mutual gaze perception than would be predicted by a simple extrapolation from the losses in basic visual acuity and contrast sensitivity.

  10. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    PubMed

    Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael

    2015-01-01

    Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform the Box and Block Test task under five conditions, following the procedure described by Liepert et al. Eye movements were analysed with a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade plus fixation duration) and the number of midline crossings (i.e., the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns differ during VI and KI tasks. Our results suggest that gaze metric parameters could be used as an objective, unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
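
    The two gaze-pattern parameters named above are simple ratios and counts; the sketch below spells them out, assuming hypothetical inputs of event durations in milliseconds and horizontal gaze positions in pixels.

    ```python
    def index_of_ocular_mobility(saccade_durations, fixation_durations):
        """Total saccade time divided by total saccade plus fixation time."""
        saccade = sum(saccade_durations)
        fixation = sum(fixation_durations)
        return saccade / (saccade + fixation)

    def midline_crossings(gaze_x, midline_x):
        """Count how often the horizontal gaze position crosses the screen midline."""
        sides = [x > midline_x for x in gaze_x]
        return sum(1 for a, b in zip(sides, sides[1:]) if a != b)

    # Example with made-up values:
    print(index_of_ocular_mobility([30, 42, 25], [310, 280, 455]))
    print(midline_crossings([200, 350, 700, 720, 300], midline_x=512))
    ```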

  11. Physiology and pathology of saccades and gaze holding.

    PubMed

    Shaikh, Aasef G; Ghasia, Fatema F

    2013-01-01

    Foveation is the fundamental requirement for clear vision. Saccades rapidly shift the gaze to the target of interest, while gaze holding ensures foveation of the desired object. We will review the pertinent physiology of saccades and gaze holding and their pathophysiology leading to saccadic oscillations, slow saccades, saccadic dysmetria, and nystagmus. Motor commands for saccades are generated at multiple levels of the neuraxis. The frontal and parietal eye fields send saccadic commands to the superior colliculus, which then projects to the brain-stem saccadic burst generator. The brain-stem burst generators guarantee an optimum signal to ensure rapid saccadic velocity, while the neural integrator, by mathematically integrating the saccadic pulse, facilitates stable gaze holding. The reciprocal innervations that ensure rapid saccadic velocity are prone to inherent instability, leading to saccadic oscillations. In contrast, suboptimal function of the burst generators causes slow saccades. Impaired error correction, either at the cerebellum or the inferior olive, leads to impaired saccade adaptation and ultimately to saccadic dysmetria and oculopalatal tremor. Impairment in the function of the neural integrator causes nystagmus. The neurophysiology of saccades, gaze holding, and their deficits is well recognized, and these principles can be applied to define novel therapeutic and rehabilitation approaches.

  12. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by function and organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons, and edit boxes. Because these locations are stored, the mouse cursor can skip directly to a candidate input object, enabling web browsing at a faster pace.

  13. Estimating the gaze of a virtuality human.

    PubMed

    Roberts, David J; Rae, John; Duckworth, Tobias W; Moore, Carl M; Aspin, Rob

    2013-04-01

    The aim of our experiment is to determine if eye-gaze can be estimated from a virtuality human: to within the accuracies that underpin social interaction; and reliably across gaze poses and camera arrangements likely in everyday settings. The scene is set by explaining why Immersive Virtuality Telepresence has the potential to meet the grand challenge of faithfully communicating both the appearance and the focus of attention of a remote human participant within a shared 3D computer-supported context. Within the experiment, n=22 participants rotated static 3D virtuality humans, reconstructed from surround images, until they felt most looked at. The dependent variable was absolute angular error, which was compared to that underpinning social gaze behaviour in the natural world. Independent variables were 1) the relative orientations of the eye, head and body of the captured subject; and 2) the subset of cameras used to texture the form. Analysis looked for statistical and practical significance and qualitative corroborating evidence. The analysed results tell us much about the importance and detail of the relationship between gaze pose, method of video-based reconstruction, and camera arrangement. They tell us that virtuality can reproduce gaze to an accuracy useful in social interaction, but that with the adopted method of Video Based Reconstruction this is highly dependent on the combination of gaze pose and camera arrangement. This suggests changes in the VBR approach in order to allow more flexible camera arrangements. The work is of interest to those wanting to support expressive meetings that are both socially and spatially situated, and particularly those using or building Immersive Virtuality Telepresence to accomplish this. It is also of relevance to the use of virtuality humans in applications ranging from the study of human interactions to gaming and the crossing of the stage line in films and TV.

  14. Differences in gaze anticipation for locomotion with and without vision

    PubMed Central

    Authié, Colas N.; Hilt, Pauline M.; N'Guyen, Steve; Berthoz, Alain; Bennequin, Daniel

    2015-01-01

    Previous experimental studies have shown a spontaneous anticipation of locomotor trajectory by the head and gaze direction during human locomotion. This anticipatory behavior could serve several functions: an optimal selection of visual information, for instance through landmarks and optic flow, as well as trajectory planning and motor control. This would imply that anticipation remains in darkness but with different characteristics. We asked 10 participants to walk along two predefined complex trajectories (limaçon and figure eight) without any cue on the trajectory to follow. Two visual conditions were used: (i) in light and (ii) in complete darkness with eyes open. The whole body kinematics were recorded by motion capture, along with the participant's right eye movements. We showed that in darkness and in light, horizontal gaze anticipates the orientation of the head which itself anticipates the trajectory direction. However, the horizontal angular anticipation decreases by a half in darkness for both gaze and head. In both visual conditions we observed an eye nystagmus with similar properties (frequency and amplitude). The main difference comes from the fact that in light, there is a shift of the orientations of the eye nystagmus and the head in the direction of the trajectory. These results suggest that a fundamental function of gaze is to represent self motion, stabilize the perception of space during locomotion, and to simulate the future trajectory, regardless of the vision condition. PMID:26106313

  15. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    PubMed

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. © 2011 Royal College of Speech

  16. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    PubMed

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

    Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure, which is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues and this effect is facilitated by increased sclera exposure - but increased sclera exposure per se does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  17. Is improved lane keeping during cognitive load caused by increased physical arousal or gaze concentration toward the road center?

    PubMed

    Li, Penghui; Markkula, Gustav; Li, Yibing; Merat, Natasha

    2018-08-01

    Driver distraction is one of the main causes of motor-vehicle accidents. However, the impact on traffic safety of tasks that impose cognitive (non-visual) distraction remains debated. One particularly intriguing finding is that cognitive load seems to improve lane keeping performance, most often quantified as reduced standard deviation of lateral position (SDLP). The main competing hypotheses, supported by current empirical evidence, suggest that cognitive load improves lane keeping via either increased physical arousal, or higher gaze concentration toward the road center, but views are mixed regarding if, and how, these possible mediators influence lane keeping performance. Hence, a simulator study was conducted, with participants driving on a straight city road section whilst completing a cognitive task at different levels of difficulty. In line with previous studies, cognitive load led to increased physical arousal, higher gaze concentration toward the road center, and higher levels of micro-steering activity, accompanied by improved lane keeping performance. More importantly, during the high cognitive task, both physical arousal and gaze concentration changed earlier in time than micro-steering activity, which in turn changed earlier than lane keeping performance. In addition, our results did not show a significant correlation between gaze concentration and physical arousal on the level of individual task recordings. Based on these findings, various multilevel models for micro-steering activity and lane keeping performance were conducted and compared, and the results suggest that all of the mechanisms proposed by existing hypotheses could be simultaneously involved. In other words, it is suggested that cognitive load leads to: (i) an increase in arousal, causing increased micro-steering activity, which in turn improves lane keeping performance, and (ii) an increase in gaze concentration, causing lane keeping improvement through both (a) further increased micro

  18. Eyes on the Mind: Investigating the Influence of Gaze Dynamics on the Perception of Others in Real-Time Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Schilbach, Leonhard; Jording, Mathis; Timmermans, Bert; Bente, Gary; Vogeley, Kai

    2012-01-01

    Social gaze provides a window into the interests and intentions of others and allows us to actively point out our own. It enables us to engage in triadic interactions involving human actors and physical objects and to build an indispensable basis for coordinated action and collaborative efforts. The object-related aspect of gaze in combination with the fact that any motor act of looking encompasses both input and output of the minds involved makes this non-verbal cue system particularly interesting for research in embodied social cognition. Social gaze comprises several core components, such as gaze-following or gaze aversion. Gaze-following can result in situations of either “joint attention” or “shared attention.” The former describes situations in which the gaze-follower is aware of sharing a joint visual focus with the gazer. The latter refers to a situation in which gazer and gaze-follower focus on the same object and both are aware of their reciprocal awareness of this joint focus. Here, a novel interactive eye-tracking paradigm suited for studying triadic interactions was used to explore two aspects of social gaze. Experiments 1a and 1b assessed how the latency of another person’s gaze reactions (i.e., gaze-following or gaze aversion) affected participants’ sense of agency, which was measured by their experience of relatedness of these reactions. Results demonstrate that both timing and congruency of a gaze reaction as well as the other’s action options influence the sense of agency. Experiment 2 explored differences in gaze dynamics when participants were asked to establish either joint or shared attention. Findings indicate that establishing shared attention takes longer and requires a larger number of gaze shifts as compared to joint attention, which more closely seems to resemble simple visual detection. Taken together, novel insights into the sense of agency and the awareness of others in gaze-based interaction are provided. PMID:23227017

  19. Modeling eye-head gaze shifts in multiple contexts without motor planning

    PubMed Central

    Haji-Abolhassani, Iman; Guitton, Daniel

    2016-01-01

    During gaze shifts, the eyes and head collaborate to rapidly capture a target (saccade) and fixate it. Accordingly, models of gaze shift control should embed both saccadic and fixation modes and a mechanism for switching between them. We demonstrate a model in which the eye and head platforms are driven by a shared gaze error signal. To limit the number of free parameters, we implement a model reduction approach in which steady-state cerebellar effects at each of their projection sites are lumped with the parameter of that site. The model topology is consistent with anatomy and neurophysiology, and can replicate eye-head responses observed in multiple experimental contexts: 1) observed gaze characteristics across species and subjects can emerge from this structure with minor parametric changes; 2) gaze can move to a goal while in the fixation mode; 3) ocular compensation for head perturbations during saccades could rely on vestibular-only cells in the vestibular nuclei with postulated projections to burst neurons; 4) two nonlinearities suffice, i.e., the experimentally-determined mapping of tectoreticular cells onto brain stem targets and the increased recruitment of the head for larger target eccentricities; 5) the effects of initial conditions on eye/head trajectories are due to neural circuit dynamics, not planning; and 6) “compensatory” ocular slow phases exist even after semicircular canal plugging, because of interconnections linking eye-head circuits. Our model structure also simulates classical vestibulo-ocular reflex and pursuit nystagmus, and provides novel neural circuit and behavioral predictions, notably that both eye-head coordination and segmental limb coordination are possible without trajectory planning. PMID:27440248
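
    To make the shared-gaze-error idea concrete, the sketch below simulates a much-simplified linear stand-in (not the authors' circuit model): eye and head commands are both driven by the same gaze error, and a VOR-like term subtracts head velocity from the eye command so that gaze converges on the target. All gains are arbitrary assumptions.

    ```python
    import numpy as np

    def gaze_shift(target=40.0, steps=200, dt=0.005,
                   k_eye=25.0, k_head=8.0, vor_gain=1.0):
        eye, head = 0.0, 0.0              # eye-in-head and head angles (deg)
        trace = []
        for _ in range(steps):
            gaze = eye + head
            error = target - gaze          # shared gaze error signal
            head_vel = k_head * error
            eye_vel = k_eye * error - vor_gain * head_vel  # VOR-like compensation
            eye += eye_vel * dt
            head += head_vel * dt
            trace.append((gaze, eye, head))
        return np.array(trace)

    final_gaze, final_eye, final_head = gaze_shift()[-1]
    print(f"gaze {final_gaze:.1f} deg = eye {final_eye:.1f} + head {final_head:.1f}")
    ```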

  20. Anxiety and sensitivity to gaze direction in emotionally expressive faces.

    PubMed

    Fox, Elaine; Mathews, Andrew; Calder, Andrew J; Yiend, Jenny

    2007-08-01

    This study investigated the role of neutral, happy, fearful, and angry facial expressions in enhancing orienting to the direction of eye gaze. Photographs of faces with either direct or averted gaze were presented. A target letter (T or L) appeared unpredictably to the left or the right of the face, either 300 ms or 700 ms after gaze direction changed. Response times were faster in congruent conditions (i.e., when the eyes gazed toward the target) relative to incongruent conditions (when the eyes gazed away from the target letter). Facial expression did influence reaction times, but these effects were qualified by individual differences in self-reported anxiety. High trait-anxious participants showed an enhanced orienting to the eye gaze of faces with fearful expressions relative to all other expressions. In contrast, when the eyes stared straight ahead, trait anxiety was associated with slower responding when the facial expressions depicted anger. Thus, in anxiety-prone people attention is more likely to be held by an expression of anger, whereas attention is guided more potently by fearful facial expressions. ((c) 2007 APA, all rights reserved).

  1. Rebound upbeat nystagmus after lateral gaze in episodic ataxia type 2.

    PubMed

    Kim, Hyo-Jung; Kim, Ji-Soo; Choi, Jae-Hwan; Shin, Jin-Hong; Choi, Kwang-Dong; Zee, David S

    2014-06-01

    Rebound nystagmus is a transient nystagmus that occurs on resuming the straight-ahead position after prolonged eccentric gaze. Even though rebound nystagmus is commonly associated with gaze-evoked nystagmus (GEN), development of rebound nystagmus in a different plane of gaze has not been described. We report a patient with episodic ataxia type 2 who showed transient upbeat nystagmus on resuming the straight-ahead position after sustained lateral gaze that had induced GEN and downbeat nystagmus. The rebound upbeat nystagmus may be ascribed to a shifting null in the vertical plane as a result of an adaptation to the downbeat nystagmus that developed during lateral gaze.

  2. Effect of narrowing the base of support on the gait, gaze and quiet eye of elite ballet dancers and controls.

    PubMed

    Panchuk, Derek; Vickers, Joan N

    2011-08-01

    We determined the gaze and stepping behaviours of elite ballet dancers and controls as they walked normally and along progressively narrower 3-m lines (10.0, 2.5 cm). The ballet dancers delayed the first step, then stepped more quickly through the approach area and onto the lines, and exited them more slowly; the controls stepped immediately but then slowed their gait to navigate the line, and exited it faster. Contrary to predictions, the ballet group did not step more precisely, perhaps due to the unique anatomical requirements of ballet dance and/or due to releasing the degrees of freedom under their feet as they fixated ahead more than the controls. The ballet group used significantly fewer fixations of longer duration, and their final quiet eye (QE) duration prior to stepping on the line was significantly longer (2,353.39 ms) than the controls' (1,327.64 ms). The control group favoured a proximal gaze strategy, allocating 73.33% of their QE fixations to the line/off the line and 26.66% to the exit/visual straight ahead (VSA), while the ballet group favoured a 'look-ahead' strategy, allocating 55.49% of their QE fixations to the exit/VSA and 44.51% to the line/off the line. The results are discussed in the light of the development of expertise and the enhanced role of fixations and visual attention when tasks become more constrained.

  3. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants' gaze to the one with a disjoint attention behavior, and rated it as more human-like and more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  4. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub following participants’ gaze to the one with a disjoint attention behavior, and rated it as more human-like and more likeable. Taken together, our findings show a preference for robots who follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  5. Visual Representation of Eye Gaze Is Coded by a Nonopponent Multichannel System

    ERIC Educational Resources Information Center

    Calder, Andrew J.; Jenkins, Rob; Cassel, Anneli; Clifford, Colin W. G.

    2008-01-01

    To date, there is no functional account of the visual perception of gaze in humans. Previous work has demonstrated that left gaze and right gaze are represented by separate mechanisms. However, these data are consistent with either a multichannel system comprising separate channels for distinct gaze directions (e.g., left, direct, and right) or an…

  6. Balance, mobility and gaze stability deficits remain following surgical removal of vestibular schwannoma (acoustic neuroma): an observational study.

    PubMed

    Choy, Nancy Low; Johnson, Natalie; Treleaven, Julia; Jull, Gwendolen; Panizza, Benedict; Brown-Rothwell, David

    2006-01-01

    Are there residual deficits in balance, mobility, and gaze stability after surgical removal of vestibular schwannoma? Observational study. Twelve people with a mean age of 52 years who had undergone surgical removal of vestibular schwannoma at least three months previously and had not undergone vestibular rehabilitation. Twelve age- and gender-matched healthy people who acted as controls. Handicap due to dizziness, balance, mobility, and gaze stability were measured. Handicap due to dizziness was moderate for the clinical group. They swayed significantly more than the controls in comfortable stance: firm surface eyes open and visual conflict (p < 0.05); foam surface eyes closed (p < 0.05) and visual conflict (p < 0.05); and feet together: firm surface, eyes closed (p < 0.05), foam surface, eyes open (p < 0.05) and eyes closed (p < 0.01). They displayed a higher rate of failure for timed stance and gaze stability (p < 0.05) than the controls. Step Test (p < 0.01), Tandem Walk Test (p < 0.05) and Dynamic Gait Index (p < 0.01) scores were also significantly reduced compared with controls. There was a significant correlation between handicap due to dizziness and the inability to maintain balance in single limb and tandem stance (r = 0.68, p = 0.02) and the ability to maintain gaze stability during passive head movement (r = 0.78, p = 0.02). A prospective study is required to evaluate vestibular rehabilitation to ameliorate dizziness and to improve balance, mobility, and gaze stability for this clinical group.

  7. E-ducating the Gaze: The Idea of a Poor Pedagogy

    ERIC Educational Resources Information Center

    Masschelein, Jan

    2010-01-01

    Educating the gaze is easily understood as becoming conscious about what is "really" happening in the world and becoming aware of the way our gaze is itself bound to a perspective and particular position. However, the paper explores a different idea. It understands educating the gaze not in the sense of "educare" (teaching) but of "e-ducere" as…

  8. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    PubMed

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Remote gaze tracking system for 3D environments.

    PubMed

    Liu, Congcong; Herrup, Karl; Shi, Bertram E

    2017-07-01

    Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
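
    The record does not give the paper's calibration or mapping procedure; as a generic illustration of projecting an estimated 3D gaze point onto a 2D scene-camera image, the sketch below uses a standard pinhole model with placeholder intrinsics and extrinsics.

    ```python
    import numpy as np

    def project_gaze_to_scene(gaze_xyz, K, R, t):
        """Map a 3D gaze point (world frame) to pixel coordinates in the scene camera."""
        p_cam = R @ np.asarray(gaze_xyz, dtype=float) + t   # world -> camera frame
        u, v, w = K @ p_cam                                  # pinhole projection
        return u / w, v / w

    K = np.array([[800.0, 0.0, 640.0],    # placeholder focal lengths / principal point
                  [0.0, 800.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)          # placeholder extrinsics
    print(project_gaze_to_scene([0.1, -0.05, 1.5], K, R, t))
    ```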

  10. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    PubMed

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history the gaze heuristic was discovered accidentally by Royal Air Force (RAF) fighter command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war in America, German technology was combined with the British heuristic to create the Sidewinder AIM9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take the best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
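
    As an illustration of the decision rule described above (the historical implementations are not reproduced here), the sketch below steers a pursuer so that the bearing angle to a moving target stays roughly constant. The target path, speeds, and gain are made-up values, and interception is only expected when the pursuer is fast enough.

    ```python
    import math

    def pursue(start, target_path, speed=1.2, gain=2.0):
        """Yield pursuer positions; heading is adjusted to null any change in bearing."""
        x, y, heading = start
        prev_bearing = None
        for tx, ty in target_path:
            bearing = math.atan2(ty - y, tx - x) - heading
            if prev_bearing is not None:
                heading += gain * (bearing - prev_bearing)   # keep the angle constant
            prev_bearing = bearing
            x += speed * math.cos(heading)
            y += speed * math.sin(heading)
            yield x, y

    target = [(10 + 0.8 * i, 5 + 0.5 * i) for i in range(60)]  # target on a straight path
    for px, py in pursue((0.0, 0.0, 0.0), target):
        pass
    print(f"final pursuer position: ({px:.1f}, {py:.1f}); final target: {target[-1]}")
    ```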

  11. How Beauty Determines Gaze! Facial Attractiveness and Gaze Duration in Images of Real World Scenes

    PubMed Central

    Mitrovic, Aleksandra; Goller, Jürgen

    2016-01-01

    We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces. PMID:27698984

  12. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  13. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  14. Anxiety symptoms and children's eye gaze during fear learning.

    PubMed

    Michalska, Kalina J; Machlin, Laura; Moroney, Elizabeth; Lowet, Daniel S; Hettema, John M; Roberson-Nay, Roxann; Averbeck, Bruno B; Brotman, Melissa A; Nelson, Eric E; Leibenluft, Ellen; Pine, Daniel S

    2017-11-01

    The eye region of the face is particularly relevant for decoding threat-related signals, such as fear. However, it is unclear if gaze patterns to the eyes can be influenced by fear learning. Previous studies examining gaze patterns in adults find an association between anxiety and eye gaze avoidance, although no studies to date examine how associations between anxiety symptoms and eye-viewing patterns manifest in children. The current study examined the effects of learning and trait anxiety on eye gaze using a face-based fear conditioning task developed for use in children. Participants were 82 youth from a general population sample of twins (aged 9-13 years), exhibiting a range of anxiety symptoms. Participants underwent a fear conditioning paradigm where the conditioned stimuli (CS+) were two neutral faces, one of which was randomly selected to be paired with an aversive scream. Eye tracking, physiological, and subjective data were acquired. Children and parents reported their child's anxiety using the Screen for Child Anxiety Related Emotional Disorders. Conditioning influenced eye gaze patterns in that children looked longer and more frequently to the eye region of the CS+ than CS- face; this effect was present only during fear acquisition, not at baseline or extinction. Furthermore, consistent with past work in adults, anxiety symptoms were associated with eye gaze avoidance. Finally, gaze duration to the eye region mediated the effect of anxious traits on self-reported fear during acquisition. Anxiety symptoms in children relate to face-viewing strategies deployed in the context of a fear learning experiment. This relationship may inform attempts to understand the relationship between pediatric anxiety symptoms and learning. © 2017 Association for Child and Adolescent Mental Health.

  15. Abnormalities of gaze in cerebrovascular disease.

    PubMed

    Pedersen, R A; Troost, B T

    1981-01-01

    Disorders of ocular motility may occur after injury at several levels of the neuraxis. Unilateral supranuclear disorders of gaze tend to be transient; bilateral disorders are more enduring. Nuclear disorders of gaze also tend to be enduring and are frequently present in association with long tract signs and cranial nerve palsies on opposite sides of the body. Nystagmus is a reliable sign of posterior fossa or peripheral eighth nerve pathology. Familiarity with these concepts may help the clinician answer questions regarding localization and prognosis.

  16. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction. Copyright © 2015 Cognitive Science Society, Inc.

  17. Intermediate view synthesis for eye-gazing

    NASA Astrophysics Data System (ADS)

    Baek, Eu-Ttuem; Ho, Yo-Sung

    2015-01-01

    Nonverbal communication, also known as body language, is an important form of communication. Nonverbal behaviors such as posture, eye contact, and gestures send strong messages. In regard to nonverbal communication, eye contact is one of the most important forms that an individual can use. However, eye contact is lost when we use a video conferencing system: the disparity between the locations of the eyes on the display and the camera gets in the way of eye contact, and the lack of eye gaze can give an unapproachable and unpleasant impression. In this paper, we propose an eye-gaze correction method for video conferencing. We use two cameras installed at the top and the bottom of the television. The two captured images are rendered with 2D warping at a virtual position. We apply view morphing to the detected face, and synthesize the face with the warped image. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.

  18. Gaze stability of observers watching Op Art pictures.

    PubMed

    Zanker, Johannes M; Doyle, Melanie; Walker, Robin

    2003-01-01

    It has been a matter of some debate why we can experience vivid dynamic illusions when looking at static pictures composed from simple black and white patterns. The impression of illusory motion is particularly strong when viewing some of the works of Op artists, such as Bridget Riley's painting Fall. Explanations of the illusory motion have ranged from retinal to cortical mechanisms, and an important role has been attributed to eye movements. To assess the possible contribution of eye movements to the illusory-motion percept we studied the strength of the illusion under different viewing conditions, and analysed the gaze stability of observers viewing the Riley painting and control patterns that do not produce the illusion. Whereas the illusion was reduced, but not abolished, when watching the painting through a pinhole, which reduces the effects of accommodation, it was not perceived in flash afterimages, suggesting an important role for eye movements in generating the illusion for this image. Recordings of eye movements revealed an abundance of small involuntary saccades when looking at the Riley pattern, despite the fact that gaze was kept within the dedicated fixation region. The frequency and particular characteristics of these rapid eye movements can vary considerably between different observers, but, although there was a tendency for gaze stability to deteriorate while viewing a Riley painting, there was no significant difference in saccade frequency between the stimulus and control patterns. Theoretical considerations indicate that such small image displacements can generate patterns of motion signals in a motion-detector network, which may serve as a simple and sufficient, but not necessarily exclusive, explanation for the illusion. Why such image displacements lead to perceptual results with a group of Op Art and similar patterns, but remain invisible for other stimuli, is discussed.

  19. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  20. Gaze-Based Assistive Technology - Usefulness in Clinical Assessments.

    PubMed

    Wandin, Helena

    2017-01-01

    Gaze-based assistive technology was used in informal clinical assessments. Excerpts of medical journals were analyzed by directed content analysis using a model of communicative competence. The results of this pilot study indicate that gaze-based assistive technology is a useful tool in communication assessments that can generate clinically relevant information.

  1. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.
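
    The record does not describe the control law used with Aesop; as a hedged sketch of the general idea, the snippet below computes proportional pan and tilt corrections that bring the user's gaze point back toward the center of the video monitor. The frame size, gain, and dead band are hypothetical.

    ```python
    def camera_correction(gaze_px, frame_size=(1280, 720), gain=0.002, deadband_px=80):
        """Return (pan, tilt) commands from the gaze point's offset from frame center."""
        cx, cy = frame_size[0] / 2, frame_size[1] / 2
        dx, dy = gaze_px[0] - cx, gaze_px[1] - cy
        if abs(dx) < deadband_px and abs(dy) < deadband_px:
            return 0.0, 0.0               # gaze already near the center: hold position
        return gain * dx, -gain * dy      # pan toward gaze; tilt with inverted image y

    print(camera_correction((1100, 200)))  # example gaze point in pixels
    ```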

  2. Seductive Eyes: Attractiveness and Direct Gaze Increase Desire for Associated Objects

    ERIC Educational Resources Information Center

    Strick, Madelijn; Holland, Rob W.; van Knippenberg, Ad

    2008-01-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel…

  3. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion.

    PubMed

    Klostermann, André; Küng, Philip

    2017-03-01

    This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a plane surface. The stable gaze at 5 different areas of interest was calculated regarding its relative duration as well as its temporal order. During the approach phase, an interaction between area of interest and obstacle condition, F(3, 24) = 12.91, p < .05, ηp² = .62, was found with longer stable-gaze locations at the takeoff area in attempts with an obstacle (p < .05, ηp² = .47). In contrast, in attempts over a plane surface, longer stable-gaze locations at the skateboard were revealed (p < .05, ηp² = .73). Regarding the trick difficulty factor, the skateboarders descriptively showed longer stable-gaze locations at the skateboard for the "Kickflip" than for the "Ollie" in the no-obstacle condition only (p > .05, d = 0.74). Finally, during the jump phase, neither obstacle condition nor trick difficulty affected gaze behavior differentially. This study underlines the functional adaptability of the visuomotor system to changing demands in highly dynamic situations. As a function of certain constraints, different gaze strategies were observed that can be considered as highly relevant for successfully performing skateboard jumps.

  4. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    PubMed

    2017-02-01

    Reports an error in "Wolves ( Canis lupus ) and dogs ( Canis familiaris ) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range ( Journal of Comparative Psychology , 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/ Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills

  5. Revisiting Patterson's Paradigm: Gaze Behaviors in Deaf Communication.

    ERIC Educational Resources Information Center

    Luciano, Jason M.

    2001-01-01

    This article explains a sequential model of eye gaze and eye contact behaviors researched among hearing populations and explores these behaviors in people with deafness. It is found that characterizations of eye contact and eye gaze behavior applied to hearing populations are not completely applicable to those with deafness. (Contains references.)…

  6. Does the 'P300' speller depend on eye gaze?

    NASA Astrophysics Data System (ADS)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  7. Autistic traits influence gaze-oriented attention to happy but not fearful faces

    PubMed Central

    Lassalle, Amandine; Itier, Roxane J.

    2017-01-01

    The relationship between autistic traits and gaze-oriented attention to fearful and happy faces was investigated at the behavioral and neuronal levels. Upright and inverted dynamic face stimuli were used in a gaze-cueing paradigm while ERPs were recorded. Participants responded faster to gazed-at than to non-gazed-at targets and this Gaze Orienting Effect (GOE) diminished with inversion, suggesting it relies on facial configuration. It was also larger for fearful than happy faces but only in participants with high Autism Quotient (AQ) scores. While the GOE to fearful faces was of similar magnitude regardless of AQ scores, a diminished GOE to happy faces was found in participants with high AQ scores. At the ERP level, a congruency effect on target-elicited P1 component reflected enhanced visual processing of gazed-at targets. In addition, cue-triggered early directing attention negativity and anterior directing attention negativity reflected, respectively, attention orienting and attention holding at gazed-at locations. These neural markers of spatial attention orienting were not modulated by emotion and were not found in participants with high AQ scores. Together these findings suggest that autistic traits influence attention orienting to gaze and its modulation by social emotions such as happiness. PMID:25222883

  8. A testimony to Muzil: Hervé Guibert, Foucault, and the medical gaze.

    PubMed

    Rendell, Joanne

    2004-01-01

    Testimony to Muzil: Hervé Guibert, Michel Foucault, and the "Medical Gaze" examines the fictional/autobiographical AIDS writings of the French writer Hervé Guibert. Locating Guibert's writings alongside the work of his friend Michel Foucault, the article explores how they echo Foucault's evolving notions of the "medical gaze." The article also explores how Guibert's narrators and Guibert himself (as writer) resist and challenge the medical gaze, a gaze which, particularly in the era of AIDS, has subjected, objectified, and even sometimes punished the body of the gay man. It is argued that these resistances to the gaze offer a literary extension to Foucault's later work on power and resistance strategies.

  9. 3D gaze tracking method using Purkinje images on eye optical model and pupil

    NASA Astrophysics Data System (ADS)

    Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung

    2012-05-01

    Gaze tracking detects the position at which a user is looking. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation happens according to the gaze positions in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated by the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.
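
    As a rough illustration of the final step described above, the sketch below regresses the Z gaze position from the kind of features the abstract names (positions of the first and fourth Purkinje reflections relative to the pupil center, their inter-distance, and pupil size) with a small multi-layer perceptron. The synthetic feature values, network size, and use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: regress Z gaze depth from Purkinje-image features with an MLP.
# Feature layout and synthetic data are illustrative assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Assumed features per sample: dx1, dy1 (1st Purkinje rel. to pupil center),
# dx4, dy4 (4th Purkinje rel. to pupil center), inter-Purkinje distance, pupil size.
X = rng.normal(size=(n, 6))
# Synthetic depth target (cm), loosely tied to the features for demonstration only.
z = 30 + 10 * X[:, 4] - 5 * X[:, 5] + rng.normal(scale=1.0, size=n)

X_train, X_test, z_train, z_test = train_test_split(X, z, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, z_train)
print("mean |error| on held-out data (cm):",
      np.mean(np.abs(mlp.predict(X_test) - z_test)))
```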

  10. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  11. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.

  12. Talking heads or talking eyes? Effects of head orientation and sudden onset gaze cues on attention capture.

    PubMed

    van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne

    2018-01-01

    The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shift from averted to direct gaze) relative to static direct gaze (always in direct gaze), sudden averted gaze (abruptly shift from direct to averted gaze), and static averted gaze (always in averted gaze). Because changes in gaze orientation in the previous study co-occurred with changes in head orientation, it was not clear whether the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden onset motion and eye gaze cues were manipulated within each trial. The results indicate a sudden direct gaze advantage that resulted from the additive role of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct gaze effect, suggesting that eye gaze cues, not face orientation cues, are critical for the sudden direct gaze effect.

  13. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    ERIC Educational Resources Information Center

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  14. Fractal fluctuations in gaze speed visual search.

    PubMed

    Stephen, Damian G; Anastas, Jason

    2011-04-01

    Visual search involves a subtle coordination of visual memory and lower-order perceptual mechanisms. Specifically, the fluctuations in gaze may provide support for visual search above and beyond what may be attributed to memory. Prior research indicates that gaze during search exhibits fractal fluctuations, which allow for a wide sampling of the field of view. Fractal fluctuations constitute a case of fast diffusion that may provide an advantage in exploration. We present reanalyses of eye-tracking data collected by Stephen and Mirman (Cognition, 115, 154-165, 2010) for single-feature and conjunction search tasks. Fluctuations in gaze during these search tasks were indeed fractal. Furthermore, the degree of fractality predicted decreases in reaction time on a trial-by-trial basis. We propose that fractality may play a key role in explaining the efficacy of perceptual exploration.
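
    One common way to quantify a "degree of fractality" in a gaze-speed series is detrended fluctuation analysis (DFA), whose scaling exponent can then be related to trial-level reaction times. The sketch below is a generic DFA estimate on synthetic data; it is an illustrative stand-in, not the authors' analysis pipeline, and the window sizes are assumptions.

```python
# Minimal DFA sketch: estimate a scaling exponent for a gaze-speed time series.
# Synthetic data and window choices are illustrative assumptions.
import numpy as np

def dfa_exponent(x, scales):
    """Return the DFA scaling exponent of 1-D series x over the given window sizes."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    flucts = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
gaze_speed = np.abs(rng.normal(size=4096))          # stand-in for a gaze-speed series
scales = np.array([16, 32, 64, 128, 256, 512])
print("estimated DFA exponent:", round(dfa_exponent(gaze_speed, scales), 3))
```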

  15. Surgical planning and innervation in pontine gaze palsy with ipsilateral esotropia.

    PubMed

    Somer, Deniz; Cinar, Fatma Gul; Kaderli, Ahmet; Ornek, Firdevs

    2016-10-01

    To discuss surgical intervention strategies among patients with horizontal gaze palsy with concurrent esotropia. Five consecutive patients with dorsal pontine lesions are presented. Each patient had horizontal gaze palsy with symptomatic diplopia as a consequence of esotropia in primary gaze and an anomalous head turn to attain single binocular vision. Clinical findings in the first 2 patients led us to presume there was complete loss of rectus muscle function from rectus muscle palsy. Based on this assumption, medial rectus recessions with simultaneous partial vertical muscle transposition (VRT) on the ipsilateral eye of the gaze palsy and recession-resection surgery on the contralateral eye were performed, resulting in significant motility limitation. Sequential recession-resection surgery without simultaneous VRT on the 3rd patient created an unexpected motility improvement to the side of gaze palsy, an observation differentiating rectus muscle palsy from paresis. Recession combined with VRT approach in the esotropic eye was abandoned on subsequent patients. Simultaneous recession-resection surgery without VRT in the next 2 patients resulted in alleviation of head postures, resolution of esotropia, and also substantial motility improvements to the ipsilateral hemifield of gaze palsy without limitations in adduction and vertical deviations. Ocular misalignment and abnormal head posture as a result of conjugate gaze palsy can be successfully treated by basic recession-resection surgery, with the advantage of increasing versions to the ipsilateral side of the gaze palsy. Improved motility after surgery presumably represents paresis, not "paralysis," with residual innervation in rectus muscles. Copyright © 2016 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  16. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  17. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
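
    The general idea of merging computer-extracted image features with readers' gaze metrics and opinions to predict diagnostic error can be sketched as an ordinary supervised classifier, as below. The feature names, the random-forest choice, and the synthetic labels are assumptions for illustration; the study's actual models and feature sets are not reproduced here.

```python
# Minimal sketch: predict diagnostic error from merged image, gaze, and opinion features.
# Feature set, classifier choice, and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_cases = 300
image_texture = rng.normal(size=(n_cases, 5))           # e.g., texture descriptors of a region
gaze_metrics = rng.normal(size=(n_cases, 3))            # e.g., dwell time, fixation count, time to hit
reader_opinion = rng.integers(0, 2, size=(n_cases, 1))  # reader's call (0/1)

X = np.hstack([image_texture, gaze_metrics, reader_opinion])
y = rng.integers(0, 2, size=n_cases)                    # 1 = diagnostic error (synthetic labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# With random synthetic data this only demonstrates the pipeline, not real performance.
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```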

  18. Eye, head, and body coordination during large gaze shifts in rhesus monkeys: movement kinematics and the influence of posture.

    PubMed

    McCluskey, Meaghan K; Cullen, Kathleen E

    2007-04-01

    Coordinated movements of the eye, head, and body are used to redirect the axis of gaze between objects of interest. However, previous studies of eye-head gaze shifts in head-unrestrained primates generally assumed the contribution of body movement to be negligible. Here we characterized eye-head-body coordination during horizontal gaze shifts made by trained rhesus monkeys to visual targets while they sat upright in a standard primate chair and assumed a more natural sitting posture in a custom-designed chair. In both postures, gaze shifts were characterized by the sequential onset of eye, head, and body movements, which could be described by predictable relationships. Body motion made a small but significant contribution to gaze shifts that were ≥40 degrees in amplitude. Furthermore, as gaze shift amplitude increased (40-120 degrees), body contribution and velocity increased systematically. In contrast, peak eye and head velocities plateaued at approximately 250-300 degrees/s, and the rotation of the eye-in-orbit and head-on-body remained well within the physical limits of ocular and neck motility during large gaze shifts, saturating at approximately 35 and 60 degrees, respectively. Gaze shifts initiated with the eye more contralateral in the orbit were accompanied by smaller body as well as head movement amplitudes, and velocities were greater when monkeys were seated in the more natural body posture. Taken together, our findings show that body movement makes a predictable contribution to gaze shifts that is systematically influenced by factors such as orbital position and posture. We conclude that body movements are part of a coordinated series of motor events that are used to voluntarily reorient gaze and that these movements can be significant even in a typical laboratory setting. Our results emphasize the need for caution in the interpretation of data from neurophysiological studies of the control of saccadic eye movements and/or eye

  19. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
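
    The core of the described procedure can be phrased as a small optimization: choose the per-eye angular offsets between optical and visual axes that minimize the distance between where the two visual axes intersect the display while the subject looks around naturally. The sketch below sets this up in a deliberately simplified 2-D (horizontal-only) geometry with synthetic optical-axis estimates; the geometry, noise model, and use of scipy.optimize are assumptions for illustration, not the authors' algorithm.

```python
# Minimal sketch (simplified 2-D geometry): recover per-eye offsets between optical and
# visual axes by making the two eyes' gaze points on a display agree across many natural
# fixations. Geometry, noise, and optimizer choice are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
z_screen = 75.0                          # viewing distance (cm)
eye_x = np.array([-3.0, 3.0])            # horizontal eye positions (cm)
true_offset = np.radians([2.0, -2.0])    # unknown optical-to-visual-axis offsets per eye

# Simulated fixation points on the screen and the resulting measured optical-axis angles.
targets = rng.uniform(-30, 30, size=200)
visual_angle = np.arctan2(targets[None, :] - eye_x[:, None], z_screen)
optical_angle = visual_angle - true_offset[:, None] + rng.normal(0, 0.002, (2, 200))

def cost(offsets):
    # Intersections of the (corrected) visual axes with the screen, per eye.
    gaze_x = eye_x[:, None] + z_screen * np.tan(optical_angle + offsets[:, None])
    return np.mean((gaze_x[0] - gaze_x[1]) ** 2)

res = minimize(lambda p: cost(np.asarray(p)), x0=[0.0, 0.0], method="Nelder-Mead")
print("recovered offsets (deg):", np.degrees(res.x).round(2))
```

    Consistent with the abstract's simulation result, the agreement constraint becomes better conditioned as the range of viewing angles grows, since a common bias in both offsets only reveals itself through the vergence-dependent part of the geometry.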

  20. Atypical Gaze Cueing Pattern in a Complex Environment in Individuals with ASD

    ERIC Educational Resources Information Center

    Zhao, Shuo; Uono, Shota; Yoshimura, Sayaka; Kubota, Yasutaka; Toichi, Motomi

    2017-01-01

    Clinically, social interaction, including gaze-triggered attention, has been reported to be impaired in autism spectrum disorder (ASD), but psychological studies have generally shown intact gaze-triggered attention in ASD. These studies typically examined gaze-triggered attention under simple environmental conditions. In real life, however, the…

  1. Surface coverage with single vs. multiple gaze surface topography to fit scleral lenses.

    PubMed

    DeNaeyer, Gregory; Sanders, Donald R; Farajian, Timothy S

    2017-06-01

    To determine surface coverage of measurements using the sMap3D® corneo-scleral topographer in patients presenting for scleral lens fitting. Twenty-five eyes of 23 scleral lens patients were examined. Up-gaze, straight-gaze, and down-gaze positions of each eye were "stitched" into a single map. The percentage surface coverage between 10mm and 20mm diameter circles from corneal center was compared between the straight-gaze and stitched images. Scleral toricity magnitude was calculated at 100% coverage and at the same diameter after 50% of the data was removed. At a 10mm diameter from corneal center, the straight-gaze and stitched images both had 100% coverage. At the 14, 15, 16, 18 and 20mm diameters, the straight-gaze image covered only 68%, 53%, 39%, 18%, and 6% of the ocular surface diameters, while the stitched image covered 98%, 96%, 93%, 75%, and 32%, respectively. In the case showing the most scleral coverage at 16mm, the straight-gaze image had only 75% coverage compared to 100% for the stitched image; in the case with the least coverage, the figures were 7% (straight gaze) and 92% (stitched image). The 95% limits of agreement between the 50% and 100% coverage scleral toricity were between -1.4D (50% coverage value larger) and 1.2D (100% coverage larger), a 2.6D spread. The absolute difference between 50% and 100% coverage scleral toricity was ≥0.50D in 28% and ≥1.0D in 16% of cases. It appears that a single straight-gaze image would introduce significant measurement inaccuracy in fitting scleral lenses using the sMap3D while a 3-gaze stitched image would not. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  2. Conflict Tasks of Different Types Divergently Affect the Attentional Processing of Gaze and Arrow.

    PubMed

    Fan, Lingxia; Yu, Huan; Zhang, Xuemin; Feng, Qing; Sun, Mengdan; Xu, Mengsi

    2018-01-01

    The present study explored the attentional processing mechanisms of gaze and arrow cues in two different types of conflict tasks. In Experiment 1, participants performed a flanker task in which gaze and arrow cues were presented as central targets or bilateral distractors. The congruency between the direction of the target and the distractors was manipulated. Results showed that arrow distractors greatly interfered with the attentional processing of gaze, while the processing of arrow direction was immune to conflict from gaze distractors. Using a spatial compatibility task, Experiment 2 explored the conflict effects exerted on gaze and arrow processing by their relative spatial locations. When the direction of the arrow was in conflict with its spatial layout on screen, response times were slowed; however, the encoding of gaze was unaffected by spatial location. In general, processing to an arrow cue is less influenced by bilateral gaze cues but is affected by irrelevant spatial information, while processing to a gaze cue is greatly disturbed by bilateral arrows but is unaffected by irrelevant spatial information. Different effects on gaze and arrow cues by different types of conflicts may reflect two relatively distinct specific modes of the attentional process.

  3. Gaze failure, drifting eye movements, and centripetal nystagmus in cerebellar disease.

    PubMed Central

    Leech, J; Gresty, M; Hess, K; Rudge, P

    1977-01-01

    Three abnormalities of eye movement in man are described which are indicative of cerebellar system disorder, namely, centripetally beating nystagmus, failure to maintain lateral gaze either in darkness or with eye closure, and slow drifting movements of the eyes in the absence of fixation. Similar eye movement signs follow cerebellectomy in the primate and the cat. These abnormalities of eye movement, together with other signs of cerebellar disease, such as rebound, alternating, and gaze-paretic nystagmus, are explained by the hypothesis that the cerebellum helps to maintain lateral gaze and that brain stem mechanisms which monitor gaze position generate compensatory biases in the absence of normal cerebellar function. PMID:603785

  4. The response of guide dogs and pet dogs (Canis familiaris) to cues of human referential communication (pointing and gaze).

    PubMed

    Ittyerah, Miriam; Gaunet, Florence

    2009-03-01

    The study raises the question of whether guide dogs and pet dogs are expected to differ in response to cues of referential communication given by their owners, especially since guide dogs grow up among sighted humans and, while living with their blind owners, still have interactions with several sighted people. Guide dogs and pet dogs were required to respond to point, point-and-gaze, gaze, and control cues of referential communication given by their owners. Results indicate that the two groups of dogs do not differ from each other, revealing that the visual status of the owner is not a factor in the use of cues of referential communication. Both groups of dogs have higher frequencies of performance and faster latencies for the point and the point-and-gaze cues as compared to the gaze cue only. However, responses to control cues are below chance performance for the guide dogs, whereas the pet dogs perform at chance. The below-chance performance of the guide dogs may be explained by a tendency among them to go and stand by the owner. The study indicates that both groups of dogs respond similarly in normal daily dyadic interaction with their owners, and that the lower comprehension of the human gaze may reflect that gaze is a less salient cue among dogs in comparison to the pointing gesture.

  5. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    PubMed

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eyetracking to determine 2D gaze positions is common practice, and several approaches to the detection of 2D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. To this end, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for the real as well as for the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) that is designed for importing 2D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix.
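
    A dispersion criterion with an ellipsoidal bounding volume can be sketched as follows: consecutive 3-D gaze points are assigned to the current fixation as long as they stay inside an ellipsoid (with a wider tolerance in depth than laterally) centered on the running fixation centroid. This is a simplified, illustrative reading of the approach with made-up axis lengths and grouping rules, not the toolkit's actual implementation.

```python
# Minimal sketch: dispersion-based 3-D fixation grouping with an ellipsoidal bounding volume.
# Axis lengths, the centroid update rule, and the minimum duration are illustrative assumptions.
import numpy as np

def inside_ellipsoid(point, center, semi_axes):
    """True if point lies within the axis-aligned ellipsoid around center."""
    return np.sum(((point - center) / semi_axes) ** 2) <= 1.0

def detect_fixations(gaze_xyz, semi_axes=(15.0, 15.0, 60.0), min_samples=5):
    """Group consecutive 3-D gaze points (mm) into fixations; return fixation centroids."""
    fixations, current = [], [gaze_xyz[0]]
    for p in gaze_xyz[1:]:
        center = np.mean(current, axis=0)
        if inside_ellipsoid(p, center, np.asarray(semi_axes)):
            current.append(p)
        else:
            if len(current) >= min_samples:
                fixations.append(np.mean(current, axis=0))
            current = [p]
    if len(current) >= min_samples:
        fixations.append(np.mean(current, axis=0))
    return fixations

rng = np.random.default_rng(4)
# Two synthetic fixation clusters at 300 mm and 500 mm depth, joined by a gaze jump.
cluster1 = rng.normal([0, 0, 300], [5, 5, 20], size=(30, 3))
cluster2 = rng.normal([100, 0, 500], [5, 5, 20], size=(30, 3))
print(detect_fixations(np.vstack([cluster1, cluster2])))
```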

  6. A sLORETA study for gaze-independent BCI speller.

    PubMed

    Xingwei An; Jinwen Wei; Shuang Liu; Dong Ming

    2017-07-01

    EEG-based BCI (brain-computer interface) spellers, especially gaze-independent BCI spellers, have become a hot topic in recent years. They provide a direct, non-muscular spelling device for people with severe motor impairments and limited gaze movement. The brain needs to conduct both stimulus-driven and stimulus-related attention in rapidly presented BCI paradigms for such speller applications, yet few researchers have studied the mechanism of the brain response to such rapidly presented stimulation. In this study, we compared the distribution of brain activation in visual, auditory, and audio-visual combined stimulus paradigms using sLORETA (standardized low-resolution brain electromagnetic tomography). Between-group comparisons showed the importance of visual and auditory stimuli in the audio-visual combined paradigm. Both contribute to the activation of brain regions, with visual stimuli being the predominant stimuli. Visual stimuli-related brain regions were mainly located in the parietal and occipital lobes, whereas responses in the frontal-temporal lobes might be caused by auditory stimuli. These regions played an important role in audio-visual bimodal paradigms. These new findings are important for future study of the ERP speller as well as the mechanism of rapidly presented stimuli.

  7. Cognitive control modulates attention to food cues: Support for the control readiness model of self-control.

    PubMed

    Kleiman, Tali; Trope, Yaacov; Amodio, David M

    2016-12-01

    Self-control in one's food choices often depends on the regulation of attention toward healthy choices and away from temptations. We tested whether selective attention to food cues can be modulated by a newly developed proactive self-control mechanism, control readiness, whereby control activated in one domain can facilitate control in another domain. In two studies, we elicited the activation of control using a color-naming Stroop task and tested its effect on attention to food cues in a subsequent, unrelated task. We found that control readiness modulates both overt attention, which involves shifts in eye gaze (Study 1), and covert attention, which involves shifts in mental attention without shifts in eye gaze (Study 2). We further demonstrated that individuals for whom tempting food cues signal a self-control problem (operationalized by relatively higher BMI) were especially likely to benefit from control readiness. We discuss the theoretical contributions of the control readiness model and the implications of our findings for enhancing proactive self-control to overcome temptation in food choices. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision

    PubMed Central

    Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.

    2018-01-01

    Purpose: Shifting the region-of-interest within the input image to compensate for gaze shifts ("gaze compensation") may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand–eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods: Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results: Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions: Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance: The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
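
    In camera-based simulated prosthetic vision, gaze compensation essentially amounts to shifting the region-of-interest that is sampled from the camera frame by the current eye position before rendering phosphenes. A minimal sketch of that ROI shift is given below; the pixels-per-degree scaling, ROI size, and clamping behavior are assumptions for illustration, not the study's software.

```python
# Minimal sketch: shift the sampled region-of-interest by the current gaze offset
# ("gaze compensation"). Scaling factor and clamping behavior are illustrative assumptions.
import numpy as np

def sample_roi(frame, gaze_deg, roi_size=(64, 64), px_per_deg=10.0):
    """Return the ROI of `frame` centered on the gaze direction (azimuth, elevation in deg)."""
    h, w = frame.shape[:2]
    cx = w / 2 + gaze_deg[0] * px_per_deg      # shift right for positive azimuth
    cy = h / 2 - gaze_deg[1] * px_per_deg      # shift up for positive elevation
    x0 = int(np.clip(cx - roi_size[1] / 2, 0, w - roi_size[1]))
    y0 = int(np.clip(cy - roi_size[0] / 2, 0, h - roi_size[0]))
    return frame[y0:y0 + roi_size[0], x0:x0 + roi_size[1]]

frame = np.zeros((480, 640), dtype=np.uint8)   # stand-in camera frame
roi = sample_roi(frame, gaze_deg=(5.0, -2.0))  # gaze 5 deg right, 2 deg down
print(roi.shape)
```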

  9. Non-Intrusive Gaze Tracking Using Artificial Neural Networks

    DTIC Science & Technology

    1994-01-05

    We have developed an artificial neural network based gaze-tracking system which can be customized to individual users. A three-layer feed-forward ... empirical analysis of the performance of a large number of artificial neural network architectures for this task. Suggestions for further explorations ... for neurally based gaze trackers are presented, and are related to other similar artificial neural network applications such as autonomous road following.
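
    A gaze tracker of this kind maps a low-resolution image of the eye region directly to on-screen coordinates through a small feed-forward network. The sketch below uses scikit-learn and random stand-in images purely to show the mapping; the input resolution, hidden-layer size, and data are assumptions, not the architecture of the cited report.

```python
# Minimal sketch: a feed-forward mapping from eye-region pixels to screen gaze coordinates.
# Input size, hidden units, and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
n, h, w = 400, 15, 40                       # number of samples, eye-image height/width
eye_images = rng.random((n, h * w))         # flattened grayscale eye images (stand-ins)
screen_xy = rng.uniform(0, 1, size=(n, 2))  # normalized on-screen gaze targets

net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=3000, random_state=0)
net.fit(eye_images[:300], screen_xy[:300])
pred = net.predict(eye_images[300:])
# With random stand-in data this only exercises the pipeline, not real accuracy.
print("mean abs. error (normalized screen units):",
      np.mean(np.abs(pred - screen_xy[300:])))
```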

  10. Anxiety and Sensitivity to Eye Gaze in Emotional Faces

    ERIC Educational Resources Information Center

    Holmes, Amanda; Richards, Anne; Green, Simon

    2006-01-01

    This paper reports three studies in which stronger orienting to perceived eye gaze direction was revealed when observers viewed faces showing fearful or angry, compared with happy or neutral, emotional expressions. Gaze-related spatial cueing effects to laterally presented fearful faces and centrally presented angry faces were also modulated by…

  11. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    PubMed Central

    Torres-Marín, Jorge; Carretero-Dios, Hugo; Acosta, Alberto; Lupiáñez, Juan

    2017-01-01

    Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistently with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery situations. Although research on gelotophobia has significantly progressed over the past two decades, no evidence exists concerning the potential effects of gelotophobia in reaction to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others’ intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two different experiments, participants performed an experimental gaze discrimination task in which they responded, as quickly and accurately as possible, to the eyes’ directions on faces displaying either a happy, angry, fear, neutral, or sad emotional expression. In particular, we expected trait-gelotophobia to modulate the eye contact effect, showing specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes did in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not have any special role in the observed differences between individuals with high vs. low trait-gelotophobia. In Study 2 (N = 40), we replicated the pattern of data concerning gaze discrimination ability, even after controlling for individuals’ scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others’ emotions, or a general incorrect attribution of affective features, such as valence, intensity, or

  12. Spatial transformations between superior colliculus visual and motor response fields during head-unrestrained gaze shifts.

    PubMed

    Sadeh, Morteza; Sajad, Amirsaman; Wang, Hongying; Yan, Xiaogang; Crawford, John Douglas

    2015-12-01

    We previously reported that visuomotor activity in the superior colliculus (SC)--a key midbrain structure for the generation of rapid eye movements--preferentially encodes target position relative to the eye (Te) during low-latency head-unrestrained gaze shifts (DeSouza et al., 2011). Here, we trained two monkeys to perform head-unrestrained gaze shifts after a variable post-stimulus delay (400-700 ms), to test whether temporally separated SC visual and motor responses show different spatial codes. Target positions, final gaze positions and various frames of reference (eye, head, and space) were dissociated through natural (untrained) trial-to-trial variations in behaviour. 3D eye and head orientations were recorded, and 2D response field data were fitted against multiple models by use of a statistical method reported previously (Keith et al., 2009). Of 60 neurons, 17 showed a visual response, 12 showed a motor response, and 31 showed both visual and motor responses. The combined visual response field population (n = 48) showed a significant preference for Te, which was also preferred in each visual subpopulation. In contrast, the motor response field population (n = 43) showed a preference for final (relative to initial) gaze position models, and the Te model was statistically eliminated in the motor-only population. There was also a significant shift of coding from the visual to motor response within visuomotor neurons. These data confirm that SC response fields are gaze-centred, and show a target-to-gaze transformation between visual and motor responses. Thus, visuomotor transformations can occur between, and even within, neurons within a single frame of reference and brain structure. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations.

    PubMed

    Norris, Scott A; Hathaway, Emily N; Taylor, Jordan A; Thach, W Thomas

    2011-05-01

    Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20-30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation.

  14. Cerebellar inactivation impairs memory of learned prism gaze-reach calibrations

    PubMed Central

    Hathaway, Emily N.; Taylor, Jordan A.; Thach, W. Thomas

    2011-01-01

    Three monkeys performed a visually guided reach-touch task with and without laterally displacing prisms. The prisms offset the normally aligned gaze/reach and subsequent touch. Naive monkeys showed adaptation, such that on repeated prism trials the gaze-reach angle widened and touches hit nearer the target. On the first subsequent no-prism trial the monkeys exhibited an aftereffect, such that the widened gaze-reach angle persisted and touches missed the target in the direction opposite that of initial prism-induced error. After 20–30 days of training, monkeys showed long-term learning and storage of the prism gaze-reach calibration: they switched between prism and no-prism and touched the target on the first trials without adaptation or aftereffect. Injections of lidocaine into posterolateral cerebellar cortex or muscimol or lidocaine into dentate nucleus temporarily inactivated these structures. Immediately after injections into cortex or dentate, reaches were displaced in the direction of prism-displaced gaze, but no-prism reaches were relatively unimpaired. There was little or no adaptation on the day of injection. On days after injection, there was no adaptation and both prism and no-prism reaches were horizontally, and often vertically, displaced. A single permanent lesion (kainic acid) in the lateral dentate nucleus of one monkey immediately impaired only the learned prism gaze-reach calibration and in subsequent days disrupted both learning and performance. This effect persisted for the 18 days of observation, with little or no adaptation. PMID:21389311

  15. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    PubMed Central

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-01-01

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG signal has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift from actual pixels to target pixels. This method consists of a sequence of five geometric operations: translation-1, rotation, translation-2, shear, and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at some targets. This result shows that the method is reliable for building communication between humans and machines using EOG. PMID:24919013
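
    The described correction can be viewed as composing elementary 2-D affine transforms, in homogeneous coordinates, that map the pixel locations recovered from the EOG onto the intended target pixels. The sketch below builds such a composite transform; the order of operations mirrors the sequence named in the abstract, but all parameter values are illustrative assumptions, not the fitted values from the paper.

```python
# Minimal sketch: compose translation-1, rotation, translation-2, shear, and dilatation
# as 3x3 homogeneous matrices and apply them to EOG-derived pixel coordinates.
# All parameter values below are illustrative assumptions.
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def shear(kx, ky):
    return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], dtype=float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

# Composite transform; matrices apply right-to-left, so translation-1 acts first.
A = dilatation(1.05, 0.95) @ shear(0.02, 0.0) @ translation(10, -5) \
    @ rotation(np.radians(1.5)) @ translation(-320, -240)

eog_pixels = np.array([[300.0, 250.0, 1.0], [400.0, 180.0, 1.0]]).T  # homogeneous columns
corrected = A @ eog_pixels
print(corrected[:2].T)   # corrected (x, y) pixel coordinates
```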

  16. Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans

    ERIC Educational Resources Information Center

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-01-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…

  17. Live interaction distinctively shapes social gaze dynamics in rhesus macaques

    PubMed Central

    Piva, Matthew; Morris, Jason A.; Chang, Steve W. C.

    2016-01-01

    The dynamic interaction of gaze between individuals is a hallmark of social cognition. However, very few studies have examined social gaze dynamics after mutual eye contact during real-time interactions. We used a highly quantifiable paradigm to assess social gaze dynamics between pairs of monkeys and modeled these dynamics using an exponential decay function to investigate sustained attention after mutual eye contact. When monkeys were interacting with real partners compared with static images and movies of the same monkeys, we found a significant increase in the proportion of fixations to the eyes and a smaller dispersion of fixations around the eyes, indicating enhanced focal attention to the eye region. Notably, dominance and familiarity between the interacting pairs induced separable components of gaze dynamics that were unique to live interactions. Gaze dynamics of dominant monkeys after mutual eye contact were associated with a greater number of fixations to the eyes, whereas those of familiar pairs were associated with a faster rate of decrease in this eye-directed attention. Our findings endorse the notion that certain key aspects of social cognition are only captured during interactive social contexts and dependent on the elapsed time relative to socially meaningful events. PMID:27486105
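
    Sustained attention after mutual eye contact can be summarized by fitting an exponential decay to the proportion of eye-directed fixations as a function of time since the contact event; the fitted decay rate then indexes how quickly eye-directed attention falls off. The curve-fitting sketch below uses synthetic data and scipy; it is an illustrative analogue of such a fit, not the authors' analysis code, and the parameter values are assumptions.

```python
# Minimal sketch: fit an exponential decay to the eye-directed fixation proportion over time
# since mutual eye contact. Synthetic data and parameter guesses are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a, tau, c):
    return a * np.exp(-t / tau) + c

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 50)                         # seconds since mutual eye contact
y = decay(t, a=0.5, tau=2.5, c=0.2) + rng.normal(0, 0.03, t.size)

params, _ = curve_fit(decay, t, y, p0=[0.5, 1.0, 0.1])
print("fitted amplitude, time constant (s), baseline:", np.round(params, 3))
```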

  18. Neural bases of eye and gaze processing: The core of social cognition

    PubMed Central

    Itier, Roxane J.; Batty, Magali

    2014-01-01

    Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition. PMID:19428496

  19. Perception of direct vs. averted gaze in portrait paintings: An fMRI and eye-tracking study.

    PubMed

    Kesner, Ladislav; Grygarová, Dominika; Fajnerová, Iveta; Lukavský, Jiří; Nekovářová, Tereza; Tintěra, Jaroslav; Zaytseva, Yuliya; Horáček, Jiří

    2018-06-15

    In this study, we use separate eye-tracking measurements and functional magnetic resonance imaging to investigate the neuronal and behavioral response to painted portraits with direct versus averted gaze. We further explored modulatory effects of several painting characteristics (premodern vs modern period, influence of style and pictorial context). In the fMRI experiment, we show that the direct versus averted gaze elicited increased activation in lingual and inferior occipital regions and the fusiform face area, as well as in several areas involved in attentional and social cognitive processes, especially the theory of mind: angular gyrus/temporo-parietal junction, inferior frontal gyrus and dorsolateral prefrontal cortex. The additional eye-tracking experiment showed that participants spent more time viewing the portrait's eyes and mouth when the portrait's gaze was directed towards the observer. These results suggest that static and, in some cases, highly stylized depictions of human beings in artistic portraits elicit brain activation commensurate with the experience of being observed by a watchful intelligent being. They thus involve observers in implicit inferences of the painted subject's mental states and emotions. We further confirm the substantial influence of representational medium on brain activity. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Ravens, Corvus corax, follow gaze direction of humans around obstacles.

    PubMed Central

    Bugnyar, Thomas; Stöwe, Mareike; Heinrich, Bernd

    2004-01-01

    The ability to follow gaze (i.e. head and eye direction) has recently been shown for social mammals, particularly primates. In most studies, individuals could use gaze direction as a behavioural cue without understanding that the view of others may be different from their own. Here, we show that hand-raised ravens not only visually co-orient with the look-ups of a human experimenter but also reposition themselves to follow the experimenter's gaze around a visual barrier. Birds were capable of visual co-orientation already as fledglings, but did not consistently track gaze direction behind obstacles before six months of age. These results raise the possibility that sub-adult and adult ravens can project a line of sight for the other person into the distance. To what extent ravens may attribute mental significance to the visual behaviour of others is discussed. PMID:15306330

  1. Disentangling gaze shifts from preparatory ERP effects during spatial attention

    PubMed Central

    Kennett, Steffan; van Velzen, José; Eimer, Martin; Driver, Jon

    2007-01-01

    After a cue directing attention to one side, anterior event-related potentials (ERPs) show contralateral negativity (Anterior Directing Attention Negativity, ADAN). It is unclear whether ADAN effects are contaminated by contralateral negativity arising from residual gaze shifts. Conversely, it is possible that ADAN-related potentials contaminate the horizontal electrooculogram (HEOG), via volume conduction. To evaluate these possibilities, we used high-resolution infrared eye tracking, while recording EEG and HEOG in a cued spatial-attention task. We found that, after conventional ERP and HEOG pre-processing exclusions, small but systematic residual gaze shifts in the cued direction can remain, as revealed by the infrared measure. Nevertheless, by using this measure for more stringent exclusion of small gaze shifts, we confirmed that reliable ADAN components remain for preparatory spatial attention in the absence of any systematic gaze shifts toward the cued side. PMID:17241141

  2. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters

    PubMed Central

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother’s and infant’s nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers’ and infants’ gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother’s depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother’s emotion dysregulation. PMID:26657941
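
    Mother-infant gaze synchrony of this kind is quantified from two coded gaze time series. A simple stand-in for such a time-series measure is the lagged cross-correlation between the two series, as sketched below with synthetic data; this is an illustrative analogue, not the windowed time-series procedure used in the study, and the bin size and lag range are assumptions.

```python
# Minimal sketch: lagged cross-correlation between two binary gaze time series
# (1 = gazing at partner) as a stand-in for a time-series synchrony measure.
# Synthetic data and lag range are illustrative assumptions.
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Pearson correlation of x with y shifted by each lag in [-max_lag, max_lag]."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag < 0:
            a, b = x[:lag], y[-lag:]
        elif lag > 0:
            a, b = x[lag:], y[:-lag]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out

rng = np.random.default_rng(7)
mother = (rng.random(600) > 0.4).astype(float)          # 1-s bins over a 10-min interaction
infant = np.roll(mother, 2) * (rng.random(600) > 0.2)   # infant loosely follows with a 2-bin lag
corrs = lagged_xcorr(mother, infant, max_lag=5)
best = max(corrs, key=corrs.get)
print("peak synchrony at lag", best, "bins, r =", round(corrs[best], 2))
```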

  3. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters.

    PubMed

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother's and infant's nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers' and infants' gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother's depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother's emotion dysregulation.

  4. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents that are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models. In contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users. Experience from multiple users is particularly critical in cropping as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selecting process can be interpreted by a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative for photo aesthetics than conventional saliency maps and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.

  5. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex.

    PubMed

    Constantin, A G; Wang, H; Martinez-Trujillo, J C; Crawford, J D

    2007-08-01

    Previous studies suggest that stimulation of the lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. The average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2, with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), the gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze commands in eye-fixed coordinates but are also subtly modulated by initial gaze position.
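
    The frame-of-reference comparison can be illustrated with a simplified two-dimensional sketch: express each evoked gaze endpoint in body-, head-, and eye-fixed coordinates and ask in which frame the endpoints cluster most tightly. The small-angle subtraction used here is an assumption made for illustration; the study used full three-dimensional rotations derived from search-coil data.

      # Simplified 2D sketch of the frame-of-reference comparison: express each
      # stimulation-evoked gaze endpoint in body-, head-, and eye-fixed frames
      # (approximated here by subtracting initial orientations) and ask in which
      # frame the endpoints are most tightly clustered.  Illustrative only.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 40
      eye_in_head = rng.uniform(-15, 15, size=(n, 2))    # initial eye orientation (deg)
      head_in_body = rng.uniform(-20, 20, size=(n, 2))   # initial head orientation (deg)
      fixed_goal_in_eye = np.array([8.0, 2.0])           # hypothetical eye-fixed goal

      # Endpoint in body coordinates if the evoked movement is eye-fixed:
      end_body = head_in_body + eye_in_head + fixed_goal_in_eye + rng.normal(0, 1, (n, 2))

      frames = {
          "body": end_body,
          "head": end_body - head_in_body,
          "eye":  end_body - head_in_body - eye_in_head,
      }
      for name, pts in frames.items():
          spread = np.mean(np.linalg.norm(pts - pts.mean(axis=0), axis=1))
          print(f"{name:>4}-fixed frame: mean endpoint scatter = {spread:5.2f} deg")
      # The smallest scatter (here, the eye-fixed frame) identifies the best-fitting frame.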

  6. Gaze-independent BCI-spelling using rapid serial visual presentation (RSVP).

    PubMed

    Acqualagna, Laura; Blankertz, Benjamin

    2013-05-01

    A Brain-Computer Interface (BCI) speller is a communication device that can be used by patients suffering from neurodegenerative diseases to select symbols in a computer application. For patients unable to overtly fixate the target symbol, it is crucial to develop a speller that is independent of gaze shifts. In the present online study, we investigated rapid serial visual presentation (RSVP) as a paradigm for mental typewriting. We investigated the RSVP speller in three conditions that varied the Stimulus Onset Asynchrony (SOA) and the use of color features. A vocabulary of 30 symbols was presented one by one in a pseudo-random sequence at the same location of the display. All twelve participants were able to operate the RSVP speller successfully. The results show a mean online spelling rate of 1.43 symbols/min and a mean symbol selection accuracy of 94.8% in the best condition. We conclude that RSVP is a promising paradigm for BCI spelling and that its performance is competitive with the fastest gaze-independent spellers in the literature. The RSVP speller does not require gaze shifts toward different target locations and can be operated by non-spatial visual attention; it can therefore be considered a valid paradigm for applications with patients with impaired oculomotor control. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
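
    A back-of-the-envelope sketch of how the SOA, vocabulary size, and number of stimulus repetitions bound the spelling rate is shown below; only the 30-symbol vocabulary is taken from the abstract, and the remaining parameter values are illustrative assumptions.

      # Back-of-the-envelope spelling-rate estimate for an RSVP speller.
      # The SOA, repetition count, and pause below are illustrative assumptions;
      # only the 30-symbol vocabulary comes from the abstract.
      def rsvp_rate(n_symbols=30, soa_s=0.25, repetitions=10, pause_s=2.0):
          """Symbols per minute if each selection needs `repetitions` full
          sequences of `n_symbols` stimuli plus a fixed pause."""
          time_per_selection = n_symbols * soa_s * repetitions + pause_s
          return 60.0 / time_per_selection

      print(f"{rsvp_rate():.2f} symbols/min")   # ~0.78 symb/min with these assumptions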

  7. The Development of Emotional Face and Eye Gaze Processing

    ERIC Educational Resources Information Center

    Hoehl, Stefanie; Striano, Tricia

    2010-01-01

    Recent research has demonstrated that infants' attention towards novel objects is affected by an adult's emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away…

  8. Figure-ground activity in V1 and guidance of saccadic eye movements.

    PubMed

    Supèr, Hans

    2006-01-01

    Every day we shift our gaze about 150,000 times, mostly without noticing it. The direction of these gaze shifts is not random but directed by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I will discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

  9. Gaze Behavior in a Natural Environment with a Task-Relevant Distractor: How the Presence of a Goalkeeper Distracts the Penalty Taker

    PubMed Central

    Kurz, Johannes; Hegele, Mathias; Munzert, Jörn

    2018-01-01

    Gaze behavior in natural scenes has been shown to be influenced not only by top–down factors such as task demands and action goals but also by bottom–up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on where to direct the gaze involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by attentional demands of the task with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by salience of the stimulus setting or by attentional demands depends on the phase of the actual task. PMID:29434560

  10. The German Version of the Gaze Anxiety Rating Scale (GARS): Reliability and Validity

    PubMed Central

    Domes, Gregor; Marx, Lisa; Spenthof, Ines; Heinrichs, Markus

    2016-01-01

    Objective Fear of eye gaze and avoidance of eye contact are core features of social anxiety disorders (SAD). To measure self-reported fear and avoidance of eye gaze, the Gaze Anxiety Rating Scale (GARS) has been developed and validated in recent years in its English version. The main objectives of the present study were to psychometrically evaluate the German translation of the GARS concerning its reliability, factorial structure, and validity. Methods Three samples of participants were enrolled in the study. (1) A non-patient sample (n = 353) completed the GARS and a set of trait questionnaires to assess internal consistency, test-retest reliability, factorial structure, and concurrent and divergent validity. (2) A sample of patients with SAD (n = 33) was compared to a healthy control group (n = 30) regarding their scores on the GARS and the trait measures. Results The German GARS fear and avoidance scales exhibited excellent internal consistency and high stability over 2 and 4 months, as did the original version. The English version’s factorial structure was replicated, yielding two categories of situations: (1) everyday situations and (2) situations involving high evaluative threat. GARS fear and avoidance displayed convergent validity with trait measures of social anxiety and were markedly higher in patients with GSAD than in healthy controls. Fear and avoidance of eye contact in situations involving high levels of evaluative threat related more closely to social anxiety than to gaze anxiety in everyday situations. Conclusions The German version of the GARS has demonstrated reliability and validity similar to the original version, and is thus well suited to capture fear and avoidance of eye contact in different social situations as a valid self-report measure of social anxiety and related disorders in the social domain for use in both clinical practice and research. PMID:26937638

  11. The German Version of the Gaze Anxiety Rating Scale (GARS): Reliability and Validity.

    PubMed

    Domes, Gregor; Marx, Lisa; Spenthof, Ines; Heinrichs, Markus

    2016-01-01

    Fear of eye gaze and avoidance of eye contact are core features of social anxiety disorders (SAD). To measure self-reported fear and avoidance of eye gaze, the Gaze Anxiety Rating Scale (GARS) has been developed and validated in recent years in its English version. The main objectives of the present study were to psychometrically evaluate the German translation of the GARS concerning its reliability, factorial structure, and validity. Three samples of participants were enrolled in the study. (1) A non-patient sample (n = 353) completed the GARS and a set of trait questionnaires to assess internal consistency, test-retest reliability, factorial structure, and concurrent and divergent validity. (2) A sample of patients with SAD (n = 33) was compared to a healthy control group (n = 30) regarding their scores on the GARS and the trait measures. The German GARS fear and avoidance scales exhibited excellent internal consistency and high stability over 2 and 4 months, as did the original version. The English version's factorial structure was replicated, yielding two categories of situations: (1) everyday situations and (2) situations involving high evaluative threat. GARS fear and avoidance displayed convergent validity with trait measures of social anxiety and were markedly higher in patients with GSAD than in healthy controls. Fear and avoidance of eye contact in situations involving high levels of evaluative threat related more closely to social anxiety than to gaze anxiety in everyday situations. The German version of the GARS has demonstrated reliability and validity similar to the original version, and is thus well suited to capture fear and avoidance of eye contact in different social situations as a valid self-report measure of social anxiety and related disorders in the social domain for use in both clinical practice and research.
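
    The internal-consistency analyses reported for the GARS scales are standard psychometrics; a minimal sketch of a Cronbach's alpha computation on a made-up item-response matrix is shown below (the item count and data are hypothetical, not the GARS data).

      # Minimal sketch of an internal-consistency (Cronbach's alpha) computation
      # of the kind reported for the GARS scales; the data matrix is made up.
      import numpy as np

      def cronbach_alpha(items):
          """items: (n_respondents, n_items) matrix of item scores."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      rng = np.random.default_rng(3)
      latent = rng.normal(size=(353, 1))                       # shared "gaze anxiety" factor
      responses = latent + rng.normal(scale=0.8, size=(353, 17))  # 17 hypothetical items
      print(f"alpha = {cronbach_alpha(responses):.2f}")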

  12. CULTURAL DISPLAY RULES DRIVE EYE GAZE DURING THINKING.

    PubMed

    McCarthy, Anjanie; Lee, Kang; Itakura, Shoji; Muir, Darwin W

    2006-11-01

    The authors measured the eye gaze displays of Canadian, Trinidadian, and Japanese participants as they answered questions for which they either knew, or had to derive, the answers. When they knew the answers, Trinidadians maintained the most eye contact, whereas Japanese maintained the least. When thinking about the answers to questions, Canadians and Trinidadians looked up, whereas Japanese looked down. Thus, for humans, gaze displays while thinking are at least in part culturally determined.

  13. Investigating Gaze of Children with ASD in Naturalistic Settings

    PubMed Central

    Noris, Basilio; Nadel, Jacqueline; Barker, Mandy; Hadjikhani, Nouchine; Billard, Aude

    2012-01-01

    Background Visual behavior is known to be atypical in Autism Spectrum Disorders (ASD). Monitor-based eye-tracking studies have measured several of these atypicalities in individuals with autism. While atypical behaviors are known to be accentuated during natural interactions, few studies have examined gaze behavior in natural interactions. In this study we focused on i) whether findings obtained in laboratory settings are also visible in a naturalistic interaction; ii) whether new atypical elements appear when studying visual behavior across the whole field of view. Methodology/Principal Findings Ten children with ASD and ten typically developing children participated in a dyadic interaction with an experimenter administering items from the Early Social Communication Scale (ESCS). The children wore a novel head-mounted eye-tracker measuring gaze direction and the presence of faces across the child's field of view. The analysis of gaze episodes to faces revealed that children with ASD looked significantly less, and for shorter lapses of time, at the experimenter. The analysis of gaze patterns across the child's field of view revealed that children with ASD looked downwards and made more extensive use of their lateral field of view when exploring the environment. Conclusions/Significance The data gathered in naturalistic settings confirm findings previously obtained only in monitor-based studies. Moreover, the study allowed us to observe a generalized strategy of lateral gaze in children with ASD when they were looking at objects in their environment. PMID:23028494

  14. Live interaction distinctively shapes social gaze dynamics in rhesus macaques.

    PubMed

    Dal Monte, Olga; Piva, Matthew; Morris, Jason A; Chang, Steve W C

    2016-10-01

    The dynamic interaction of gaze between individuals is a hallmark of social cognition. However, very few studies have examined social gaze dynamics after mutual eye contact during real-time interactions. We used a highly quantifiable paradigm to assess social gaze dynamics between pairs of monkeys and modeled these dynamics using an exponential decay function to investigate sustained attention after mutual eye contact. When monkeys were interacting with real partners compared with static images and movies of the same monkeys, we found a significant increase in the proportion of fixations to the eyes and a smaller dispersion of fixations around the eyes, indicating enhanced focal attention to the eye region. Notably, dominance and familiarity between the interacting pairs induced separable components of gaze dynamics that were unique to live interactions. Gaze dynamics of dominant monkeys after mutual eye contact were associated with a greater number of fixations to the eyes, whereas those of familiar pairs were associated with a faster rate of decrease in this eye-directed attention. Our findings endorse the notion that certain key aspects of social cognition are only captured during interactive social contexts and dependent on the elapsed time relative to socially meaningful events. Copyright © 2016 the American Physiological Society.
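
    The abstract describes modeling post-eye-contact gaze dynamics with an exponential decay function; a minimal sketch of fitting such a function to synthetic fixation data follows. The parameter values and data are illustrative, not the study's.

      # Sketch of fitting an exponential decay to eye-directed fixation rate as a
      # function of time since mutual eye contact (synthetic data; the parameter
      # values are not from the study).
      import numpy as np
      from scipy.optimize import curve_fit

      def decay(t, a, tau, b):
          return a * np.exp(-t / tau) + b

      t = np.linspace(0, 5, 50)                      # seconds since mutual eye contact
      rng = np.random.default_rng(4)
      y = decay(t, a=0.6, tau=1.2, b=0.1) + rng.normal(0, 0.03, t.size)

      params, _ = curve_fit(decay, t, y, p0=(0.5, 1.0, 0.1))
      a, tau, b = params
      print(f"fitted decay constant tau = {tau:.2f} s (faster decay = quicker "
            "disengagement from the partner's eyes)")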

  15. Neurons in the human amygdala encode face identity, but not gaze direction.

    PubMed

    Mormann, Florian; Niediek, Johannes; Tudusciuc, Oana; Quesada, Carlos M; Coenen, Volker A; Elger, Christian E; Adolphs, Ralph

    2015-11-01

    The amygdala is important for face processing, and direction of eye gaze is one of the most socially salient facial signals. Recording from over 200 neurons in the amygdala of neurosurgical patients, we found robust encoding of the identity of neutral-expression faces, but not of their direction of gaze. Processing of gaze direction may rely on a predominantly cortical network rather than the amygdala.

  16. Measure and Analysis of a Gaze Position Using Infrared Light Technique

    DTIC Science & Technology

    2001-10-25

    MEASURE AND ANALYSIS OF A GAZE POSITION USING INFRARED LIGHT TECHNIQUE. Z. Ramdane-Cherif, A. Naït-Ali, J. F. Motsch, M. O. Krebs (INSERM E 01-17). ...also proposes a method to correct head movements. Keywords: eye movement, gaze tracking, visual scan path, spatial mapping. Introduction: eye gaze tracking has been used for clinical purposes to detect illnesses, such as nystagmus, unusual eye movements and many others [1][2][3]. It is also used

  17. Comparisons of Neuronal and Excitatory Network Properties between the Rat Brainstem Nuclei that Participate in Vertical and Horizontal Gaze Holding

    PubMed Central

    Sugimura, Taketoshi; Yanagawa, Yuchio

    2017-01-01

    Gaze holding is primarily controlled by neural structures including the prepositus hypoglossi nucleus (PHN) for horizontal gaze and the interstitial nucleus of Cajal (INC) for vertical and torsional gaze. In contrast to the accumulating findings of the PHN, there is no report regarding the membrane properties of INC neurons or the local networks in the INC. In this study, to verify whether the neural structure of the INC is similar to that of the PHN, we investigated the neuronal and network properties of the INC using whole-cell recordings in rat brainstem slices. Three types of afterhyperpolarization (AHP) profiles and five firing patterns observed in PHN neurons were also observed in INC neurons. However, the overall distributions based on the AHP profile and the firing patterns of INC neurons were different from those of PHN neurons. The application of burst stimulation to a nearby site of a recorded INC neuron induced an increase in the frequency of spontaneous EPSCs. The duration of the increased EPSC frequency of INC neurons was not significantly different from that of PHN neurons. The percent of duration reduction induced by a Ca2+-permeable AMPA (CP-AMPA) receptor antagonist was significantly smaller in the INC than in the PHN. These findings suggest that local excitatory networks that activate sustained EPSC responses also exist in the INC, but their activation mechanisms including the contribution of CP-AMPA receptors differ between the INC and the PHN. PMID:28966973

  18. Memory and prediction in natural gaze control

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Hayhoe, Mary

    2013-01-01

    In addition to stimulus properties and task factors, memory is an important determinant of the allocation of attention and gaze in the natural world. One way that the role of memory is revealed is by predictive eye movements. Both smooth pursuit and saccadic eye movements demonstrate predictive effects based on previous experience. We have previously shown that unskilled subjects make highly accurate predictive saccades to the anticipated location of a ball prior to a bounce in a virtual racquetball setting. In this experiment, we examined this predictive behaviour. We asked whether the period after the bounce provides subjects with visual information about the ball trajectory that is used to programme the pursuit movement initiated when the ball passes through the fixation point. We occluded a 100 ms period of the ball's trajectory immediately after the bounce, and found very little effect on the subsequent pursuit movement. Subjects did not appear to modify their strategy to prolong the fixation. Neither were we able to find an effect on interception performance. Thus, it is possible that the occluded trajectory information is not critical for subsequent pursuit, and subjects may use an estimate of the ball's trajectory to programme pursuit. These results provide further support for the role of memory in eye movements. PMID:24018726

  19. A closer look at the size of the gaze-liking effect: a preregistered replication.

    PubMed

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli, and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (d_z = 0.02) than that of the original study (d_z = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects, because the same sample showed a clear (d_z = 1.09) gaze-cuing effect: faster reaction times when eyes looked toward vs. away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
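
    The reported effect sizes are Cohen's d_z, the standardized effect size for within-subject (paired) designs; a minimal sketch of the computation on hypothetical paired ratings follows.

      # Minimal sketch of Cohen's d_z (paired/within-subject effect size), the
      # statistic reported for the gaze-liking and gaze-cuing effects; the data
      # below are hypothetical.
      import numpy as np

      def cohens_dz(cond_a, cond_b):
          """d_z = mean of paired differences / SD of paired differences."""
          diff = np.asarray(cond_a, float) - np.asarray(cond_b, float)
          return diff.mean() / diff.std(ddof=1)

      rng = np.random.default_rng(5)
      liking_toward = rng.normal(5.05, 1.0, 98)   # liking ratings, gazed-at objects
      liking_away = rng.normal(5.00, 1.0, 98)     # liking ratings, gazed-away objects
      print(f"d_z = {cohens_dz(liking_toward, liking_away):.2f}")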

  20. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  1. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces

    PubMed Central

    Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed

    2017-01-01

    The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification and has argued for socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age remain under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was applied as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in greater depth using data-driven analysis (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for “joy” and “disgust,” and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, when confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, consisting of focusing only on the lower part of the face throughout the whole stimulus display time. This consistency may constitute a robust and distinctive “social signature” of emotional

  2. Eye-gaze independent EEG-based brain-computer interfaces for communication.

    PubMed

    Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.

  3. Eye-gaze independent EEG-based brain-computer interfaces for communication

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.

    2012-08-01

    The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.

  4. Specificity of Age-Related Differences in Eye-Gaze Following: Evidence From Social and Nonsocial Stimuli.

    PubMed

    Slessor, Gillian; Venturini, Cristina; Bonny, Emily J; Insch, Pauline M; Rokaszewicz, Anna; Finnerty, Ailbhe N

    2016-01-01

    Eye-gaze following is a fundamental social skill, facilitating communication. The present series of studies explored adult age-related differences in this key social-cognitive ability. In Study 1 younger and older adult participants completed a cueing task in which eye-gaze cues were predictive or non-predictive of target location. Another eye-gaze cueing task, assessing the influence of congruent and incongruent eye-gaze cues relative to trials which provided no cue to target location, was administered in Study 2. Finally, in Study 3 the eye-gaze cue was replaced by an arrow. In Study 1 older adults showed less evidence of gaze following than younger participants when required to strategically follow predictive eye-gaze cues and when making automatic shifts of attention to non-predictive eye-gaze cues. Findings from Study 2 suggested that, unlike younger adults, older participants showed no facilitation effect and thus did not follow congruent eye-gaze cues. They also had significantly weaker attentional costs than their younger counterparts. These age-related differences were not found in the non-social arrow cueing task. Taken together these findings suggest older adults do not use eye-gaze cues to engage in joint attention, and have specific social difficulties decoding critical information from the eye region. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    PubMed

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

    Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In individual differences analyses, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach. (c) 2016 APA, all rights reserved.

  6. Altered attentional and perceptual processes as indexed by N170 during gaze perception in schizophrenia: Relationship with perceived threat and paranoid delusions.

    PubMed

    Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J

    2015-08-01

    Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (c) 2015 APA, all rights reserved.

  7. Gaze-evoked nystagmus: a case report and literature review.

    PubMed

    Rett, Doug

    2007-09-01

    A sustained gaze-evoked nystagmus (GEN) is an important ocular finding that may indicate serious neurologic pathology. It is also a finding that can be missed easily during routine extraocular muscle (EOM) testing. This report presents a case that should familiarize the reader with GEN and presents a novel approach to testing EOM function. The mother of an otherwise healthy 4-year-old girl noted that her daughter's eyes crossed occasionally, the right lid drooped on one occasion, and she had been having strange headaches. An asymmetric, sustained, gaze-evoked nystagmus was detected using a different approach to EOM testing. Magnetic resonance imaging found a large brainstem astrocytoma in the cerebellar-pontine angle. EOM testing is often overlooked or underperformed but is an important part of the battery of clinical tests to rule out neurologic problems. Most forms of EOM testing will check for muscle palsies but little else. If the time is taken to extend the patient's gaze to the extreme ends, to attempt to hold the gaze in all 9 positions, and to maintain an accurate speed, the clinician stands to gain much more information regarding the neurologic system.

  8. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
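
    The design question in the abstract turns on the lens's angular viewing angle and depth of field; the sketch below shows the standard thin-lens approximations for both quantities. All numeric values (sensor size, focal length, f-number, circle of confusion, subject distance) are hypothetical, not the paper's chosen design.

      # Sketch of the standard thin-lens approximations for a camera's angular
      # field of view and depth of field, the two lens properties discussed in
      # the abstract; all numeric values are hypothetical.
      import math

      def field_of_view_deg(sensor_mm, focal_mm):
          return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

      def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm=0.005):
          """Near/far limits of acceptable sharpness (circle of confusion coc_mm)."""
          h = focal_mm**2 / (f_number * coc_mm) + focal_mm      # hyperfocal distance
          near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
          far = (subject_mm * (h - focal_mm) / (h - subject_mm)
                 if subject_mm < h else math.inf)
          return near, far

      print(f"horizontal FOV: {field_of_view_deg(4.8, 8.0):.1f} deg")
      near, far = depth_of_field_mm(focal_mm=8.0, f_number=2.0, subject_mm=700.0)
      print(f"DOF at 70 cm: {near:.0f}-{far:.0f} mm")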

  9. Can Infants Use a Nonhuman Agent's Gaze Direction to Establish Word-Object Relations?

    ERIC Educational Resources Information Center

    O'Connell, Laura; Poulin-Dubois, Diane; Demke, Tamara; Guay, Amanda

    2009-01-01

    Adopting a procedure developed with human speakers, we examined infants' ability to follow a nonhuman agent's gaze direction and subsequently to use its gaze to learn new words. When a programmable robot acted as the speaker (Experiment 1), infants followed its gaze toward the word referent whether or not it coincided with their own focus of…

  10. Is the Theory of Mind deficit observed in visual paradigms in schizophrenia explained by an impaired attention toward gaze orientation?

    PubMed

    Roux, Paul; Forgeot d'Arc, Baudoin; Passerieux, Christine; Ramus, Franck

    2014-08-01

    Schizophrenia is associated with poor Theory of Mind (ToM), particularly in goal and belief attribution to others. It is also associated with abnormal gaze behaviors toward others: individuals with schizophrenia usually look less at others' faces and gaze, which are crucial epistemic cues that contribute to correct mental state inferences. This study tests the hypothesis that impaired ToM in schizophrenia might be related to a deficit in visual attention toward gaze orientation. We adapted a previous non-verbal ToM paradigm consisting of animated cartoons allowing the assessment of goal and belief attribution. In the true and false belief conditions, an object was displaced while an agent was either looking at it or away, respectively. Eye movements were recorded to quantify visual attention to gaze orientation (the proportion of time participants spent looking at the head of the agent while the target object changed locations). 29 patients with schizophrenia and 29 matched controls were tested. Compared to controls, patients looked significantly less at the agent's head and had lower performance in belief and goal attribution. Performance in belief and goal attribution increased significantly with the head-looking percentage. When the head-looking percentage was entered as a covariate, the group effect on belief and goal attribution performance was no longer significant. Patients' deficit on this visual ToM paradigm is thus entirely explained by decreased visual attention toward gaze. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    PubMed

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  12. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients.

    PubMed

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference.

  13. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients

    PubMed Central

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference. PMID:26300815

  14. Gaze Control in One Versus One Defensive Situations in Soccer Players With Various Levels of Expertise.

    PubMed

    Krzepota, Justyna; Stępiński, Miłosz; Zwierko, Teresa

    2016-12-01

    Experienced and less experienced soccer players were compared in terms of their gaze behavior (number of fixations, fixation duration, number of fixation regions, and distribution of fixations across specific regions) during frontal 1 vs. 1 defensive situations. Twenty-four men (eight experienced soccer players, eight less experienced players and eight non-players) watched 20 video clips. Gaze behavior was registered with an eye tracking system. The video scenes were analyzed frame by frame. A significant main effect of group (experience) was observed for the number of fixation regions. Experienced soccer players had a lower number of fixation regions than the non-soccer players. Moreover, the former group showed a significantly larger percentage of fixations in the ball/foot region. These findings suggest that experienced players may use a more efficient search strategy than novices, involving fixation on a smaller number of areas in specific locations. © The Author(s) 2016.

  15. Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.

    PubMed

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-11-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects, measuring reaction time (RT). A face identification task was also given to determine hemispheric dominance in face processing for each subject. RT differences between valid and invalid cues were larger when presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.

  16. A progressive model for teaching children with autism to follow gaze shift.

    PubMed

    Gunby, Kristin V; Rapp, John T; Bottoni, Melissa M

    2018-06-06

    Gunby, Rapp, Bottoni, Marchese and Wu () taught three children with autism spectrum disorder to follow an instructor's gaze shift to select a specific item; however, Gunby et al. used different types of prompts with each participant. To address this limitation, we used a progressive training model for increasing gaze shift for three children with autism spectrum disorder. Results show that each participant learned to follow an adult's shift in gaze to make a correct selection. In addition, two participants displayed the skill in response to a parent's gaze shift and with only social consequences; however, the third participant required verbal instruction and tangible reinforcement to demonstrate the skill outside of training sessions. © 2018 Society for the Experimental Analysis of Behavior.

  17. Joint Attention without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects through Eye-Hand Coordination

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151

  18. Visual perception during mirror-gazing at one's own face in patients with depression.

    PubMed

    Caputo, Giovanni B; Bortolomasi, Marco; Ferrucci, Roberta; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2014-01-01

    In normal observers, gazing at one's own face in the mirror for a few minutes, at a low illumination level, produces the apparition of strange faces. Observers see distortions of their own faces, but they often see hallucinations like monsters, archetypical faces, faces of relatives and deceased, and animals. In this research, patients with depression were compared to healthy controls with respect to strange-face apparitions. The experiment was a 7-minute mirror-gazing test (MGT) under low illumination. When the MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face apparitions. Apparitions of strange faces in the mirror were greatly reduced in depression patients compared to healthy controls. Depression patients, compared to healthy controls, showed shorter duration of apparitions; a smaller number of strange faces; lower self-evaluation ratings of apparition strength; and lower self-evaluation ratings of provoked emotion. These decreases in depression may be produced by deficits of facial expression and facial recognition of emotions, which are involved in the relationship between the patient (or the patient's ego) and his face image (or the patient's bodily self) reflected in the mirror.

  19. Contribution of olivofloccular circuitry developmental defects to atypical gaze in autism

    PubMed Central

    Wegiel, Jerzy; Kuchna, Izabela; Nowicki, Krzysztof; Imaki, Humi; Wegiel, Jarek; Ma, Shuang Yong; Azmitia, Efrain C.; Banerjee, Probal; Flory, Michael; Cohen, Ira L.; London, Eric; Brown, W. Ted; Hare, Carolyn Komich; Wisniewski, Thomas

    2014-01-01

    Individuals with autism demonstrate atypical gaze, impairments in smooth pursuit, altered movement perception and deficits in facial perception. The olivofloccular neuronal circuit is a major contributor to eye movement control. This study of the cerebellum in 12 autistic and 10 control subjects revealed dysplastic changes in the flocculus of eight autistic (67%) and two control (20%) subjects. Defects of the oculomotor system, including avoidance of eye contact and poor or no eye contact, were reported in 88% of autistic subjects with postmortem-detected floccular dysplasia. Focal disorganization of the flocculus cytoarchitecture with deficit, altered morphology, and spatial disorientation of Purkinje cells (PCs); deficit and abnormalities of granule, basket, stellate and unipolar brush cells; and structural defects and abnormal orientation of Bergmann glia are indicators of profound disruption of flocculus circuitry in a dysplastic area. The average volume of PCs was 26% less in the dysplastic region than in the unaffected region of the flocculus (p<0.01) in autistic subjects. Moreover, the average volume of PCs in the entire cerebellum was 25% less in the autistic subjects than in the control subjects (p<0.001). Findings from this study and a parallel study of the inferior olive (IO) suggest that focal floccular dysplasia combined with IO neurons and PC developmental defects may contribute to oculomotor system dysfunction and atypical gaze in autistic subjects. PMID:23558308

  20. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for the human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low-cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen robustness to lighting conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
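
    The paper maps webcam-derived eye features to one of 15 screen zones with neural networks under free head motion; the minimal sketch below uses a small scikit-learn multilayer perceptron on synthetic features as a stand-in. The feature definitions, network size, and data are assumptions, not the paper's architecture.

      # Minimal sketch of a neural mapping from eye/head features to one of 15
      # screen zones, in the spirit of the view-based tracker described above.
      # Feature definitions and network size are illustrative assumptions.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(6)
      n_zones, n_samples, n_features = 15, 3000, 8   # e.g. pupil-corner vectors + head pose
      zone_centres = rng.normal(size=(n_zones, n_features))
      labels = rng.integers(0, n_zones, n_samples)
      features = zone_centres[labels] + rng.normal(scale=0.3, size=(n_samples, n_features))

      X_train, X_test, y_train, y_test = train_test_split(
          features, labels, test_size=0.25, random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
      clf.fit(X_train, y_train)
      print(f"zone classification accuracy: {clf.score(X_test, y_test):.2f}")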

  1. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks and as a game interface, and it can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments, together with tests of usability and on-screen keyboard use of the proposed method, show that it is better than previous methods. PMID:28420114
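
    A toy sketch of fuzzy target selection follows: dwell time and gaze dispersion are combined through made-up membership functions and rules into a selection confidence. The rules, thresholds, and inputs are illustrative assumptions and not the paper's fuzzy system.

      # Toy fuzzy sketch for gaze target selection: combine dwell time and gaze
      # dispersion into a "select" confidence.  Membership functions, rules, and
      # threshold are made up for illustration, not the paper's design.
      def tri(x, a, b, c):
          """Triangular/shoulder membership function."""
          return max(0.0, min((x - a) / (b - a) if b > a else 1.0,
                              (c - x) / (c - b) if c > b else 1.0))

      def select_confidence(dwell_ms, dispersion_deg):
          long_dwell = tri(dwell_ms, 300, 800, 1e9)       # "dwell is long"
          short_dwell = tri(dwell_ms, -1, 0, 400)         # "dwell is short"
          stable = tri(dispersion_deg, -1, 0, 1.5)        # "gaze is stable"
          jittery = tri(dispersion_deg, 1.0, 3.0, 1e9)    # "gaze is jittery"
          # Rules (min = AND, max = OR), combined into a normalised confidence:
          fire_select = min(long_dwell, stable)           # long & stable -> select
          fire_reject = max(short_dwell, jittery)         # short or jittery -> reject
          total = fire_select + fire_reject
          return fire_select / total if total > 0 else 0.0

      for dwell, disp in [(900, 0.4), (900, 2.5), (250, 0.4)]:
          c = select_confidence(dwell, disp)
          print(f"dwell={dwell} ms, dispersion={disp} deg -> "
                f"{'SELECT' if c > 0.6 else 'no selection'} (confidence {c:.2f})")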

  2. EEG Negativity in Fixations Used for Gaze-Based Control: Toward Converting Intentions into Actions with an Eye-Brain-Computer Interface

    PubMed Central

    Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.

    2016-01-01

    We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during the game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on condition and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage Linear Discriminant Analysis (LDA) classifier. The second and third fixations in the triplets were classified at a lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development proves successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device
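
    Classification in the study used amplitude features from 13 EEG channels in the 200-500 ms post-fixation window, with a shrinkage LDA classifier among others; the minimal scikit-learn sketch below illustrates that classifier on synthetic feature vectors. Class means, sample sizes, and the feature layout are made up.

      # Minimal sketch of shrinkage-LDA classification of fixation-related EEG
      # amplitude features (intentional vs. spontaneous fixations).  The synthetic
      # features stand in for amplitudes from 13 channels in the 200-500 ms window.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      n_per_class, n_features = 120, 13 * 6        # e.g. 6 time bins per channel
      spontaneous = rng.normal(0.0, 1.0, (n_per_class, n_features))
      intentional = rng.normal(-0.3, 1.0, (n_per_class, n_features))  # more negative FRP
      X = np.vstack([spontaneous, intentional])
      y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]

      lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
      scores = cross_val_score(lda, X, y, cv=5, scoring="roc_auc")
      print(f"cross-validated AUC: {scores.mean():.2f} +/- {scores.std():.2f}")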

  3. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    PubMed

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information for perceiving and interpreting their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact; in these studies, gaze and body direction were changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle responses, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the reaction of the zygomaticus major muscle revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings of happy expressions. While earlier findings suggested that mutual eye contact is the main source of increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Clinician's gaze behaviour in simulated paediatric emergencies.

    PubMed

    McNaughten, Ben; Hart, Caroline; Gallagher, Stephen; Junk, Carol; Coulter, Patricia; Thompson, Andrew; Bourke, Thomas

    2018-03-07

    Differences in the gaze behaviour of experts and novices are described in aviation and surgery. This study sought to describe the gaze behaviour of clinicians from different training backgrounds during a simulated paediatric emergency. Clinicians from four clinical areas undertook a simulated emergency. Participants wore SMI (SensoMotoric Instruments) eye tracking glasses. We measured the fixation count and dwell time on predefined areas of interest and the time taken to key clinical interventions. Paediatric intensive care unit (PICU) consultants performed best and focused longer on the chest and airway. Paediatric consultants and trainees spent longer looking at the defibrillator and algorithm (51 180 ms and 50 551 ms, respectively) than the PICU and paediatric emergency medicine consultants. This study is the first to describe differences in gaze behaviour between experts and novices during resuscitation; these differences mirror those described in aviation and surgery. Further research is needed to evaluate the potential use of eye tracking as an educational tool. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
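
    The two gaze metrics used in this study, fixation count and dwell time on predefined areas of interest (AOIs), can be computed directly from a fixation list. The sketch below is illustrative only; the fixation tuples and AOI rectangles are assumed formats, not the SMI export used by the authors.

      from collections import defaultdict

      # Each fixation: (x, y, duration_ms); each AOI: name -> (x_min, y_min, x_max, y_max)
      fixations = [(120, 340, 210), (130, 355, 540), (620, 80, 310)]
      aois = {"airway": (100, 300, 200, 400), "defibrillator": (600, 50, 700, 150)}

      dwell_ms = defaultdict(float)
      fix_count = defaultdict(int)
      for x, y, dur in fixations:
          for name, (x0, y0, x1, y1) in aois.items():
              if x0 <= x <= x1 and y0 <= y <= y1:
                  dwell_ms[name] += dur      # accumulate dwell time in this AOI
                  fix_count[name] += 1       # count fixations landing in this AOI

      print(dict(dwell_ms), dict(fix_count))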

  5. Social evolution. Oxytocin-gaze positive loop and the coevolution of human-dog bonds.

    PubMed

    Nagasawa, Miho; Mitsui, Shouhei; En, Shiori; Ohtani, Nobuyo; Ohta, Mitsuaki; Sakuma, Yasuo; Onaka, Tatsushi; Mogi, Kazutaka; Kikusui, Takefumi

    2015-04-17

    Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.

  6. Investigating the Association of Eye Gaze Pattern and Diagnostic Error in Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Pinto, Frank M; Xu, Songhua

    2013-01-01

    The objective of this study was to investigate the association between eye-gaze patterns and the diagnostic accuracy of radiologists for the task of assessing the likelihood of malignancy of mammographic masses. Six radiologists (2 expert breast imagers and 4 Radiology residents of variable training) assessed the likelihood of malignancy of 40 biopsy-proven mammographic masses (20 malignant and 20 benign) on a computer monitor. Eye-gaze data were collected using a commercial remote eye-tracker. Upon reviewing each mass, the radiologists were also asked to provide their assessment regarding the probability of malignancy of the depicted mass as well as a rating regarding the perceived difficulty of the diagnostic task. The collected data were analyzed using established algorithms, and various quantitative metrics were extracted to characterize the recorded gaze patterns. The extracted metrics were correlated with the radiologists' diagnostic decisions and perceived complexity scores. Results showed that the visual gaze pattern of radiologists varies substantially, not only depending on their experience level but also among individuals. However, some eye gaze metrics appear to correlate with diagnostic error and perceived complexity more consistently. These results suggest that although gaze patterns are generally associated with diagnostic error and the human-perceived difficulty of the diagnostic task, there are substantial individual differences that are not explained simply by the experience level of the individual performing the diagnostic task.

  7. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.

  8. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods. PMID:29401681
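
    As a rough illustration of the kind of model involved, the sketch below defines a small convolutional network that maps a single-channel NIR eye/face crop to one of several gaze zones. The input size, number of zones, and layer sizes are assumptions made for illustration; the paper's actual architecture is not specified in this abstract.

      import tensorflow as tf

      n_zones = 9  # assumed number of gaze regions in the car cabin

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(64, 64, 1)),           # NIR crop, single channel (assumed size)
          tf.keras.layers.Conv2D(16, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Conv2D(32, 3, activation="relu"),
          tf.keras.layers.MaxPooling2D(),
          tf.keras.layers.Flatten(),
          tf.keras.layers.Dense(64, activation="relu"),
          tf.keras.layers.Dense(n_zones, activation="softmax"),  # one probability per gaze zone
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                    metrics=["accuracy"])
      model.summary()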

  9. Matching the oculomotor drive during head-restrained and head-unrestrained gaze shifts in monkey.

    PubMed

    Bechara, Bernard P; Gandhi, Neeraj J

    2010-08-01

    High-frequency burst neurons in the pons provide the eye velocity command (equivalently, the primary oculomotor drive) to the abducens nucleus for generation of the horizontal component of both head-restrained (HR) and head-unrestrained (HU) gaze shifts. We sought to characterize how gaze and its eye-in-head component differ when an "identical" oculomotor drive is used to produce HR and HU movements. To address this objective, the activities of pontine burst neurons were recorded during horizontal HR and HU gaze shifts. The burst profile recorded on each HU trial was compared with the burst waveform of every HR trial obtained for the same neuron. The oculomotor drive was assumed to be comparable for the pair yielding the lowest root-mean-squared error. For matched pairs of HR and HU trials, the peak eye-in-head velocity was substantially smaller in the HU condition, and the reduction was usually greater than the peak head velocity of the HU trial. A time-varying attenuation index, defined as the difference in HR and HU eye velocity waveforms divided by head velocity [alpha = (E(hr) - E(hu))/H], was computed. The index was variable at the onset of the gaze shift, but it settled at values several times greater than 1. The index then decreased gradually during the movement and stabilized at 1 around the end of gaze shift. These results imply that substantial attenuation in eye velocity occurs, at least partially, downstream of the burst neurons. We speculate on the potential roles of burst-tonic neurons in the neural integrator and various cell types in the vestibular nuclei in mediating the attenuation in eye velocity in the presence of head movements.
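
    The time-varying attenuation index can be computed sample by sample from the matched velocity traces, as in the sketch below; the synthetic velocity profiles and sampling parameters are illustrative assumptions, not recorded data.

      import numpy as np

      t = np.linspace(0, 0.15, 151)               # 150 ms gaze shift sampled at 1 kHz (assumed)
      e_hr = 400 * np.sin(np.pi * t / 0.15)       # head-restrained eye velocity, deg/s (toy profile)
      h_hu = 150 * np.sin(np.pi * t / 0.15)       # head velocity in the HU trial, deg/s (toy profile)
      e_hu = e_hr - 1.5 * h_hu                    # attenuated HU eye velocity (toy model)

      eps = 1e-6
      alpha = (e_hr - e_hu) / np.maximum(h_hu, eps)   # guard against division by near-zero head velocity
      print(alpha[[10, 75, 140]])                      # early, mid, and late values of the index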

  10. African penguins follow the gaze direction of conspecifics

    PubMed Central

    Trincas, Egle

    2017-01-01

    Gaze following is widespread among animals. However, the corresponding ultimate functions may vary substantially. Thus, it is important to study previously understudied (or less studied) species to develop a better understanding of the ecological contexts that foster certain cognitive traits. Penguins (Family Spheniscidae), despite their wide interspecies ecological variation, have previously not been considered for cross-species comparisons. Penguin behaviour and communication have been investigated over the last decades, but less is known on how groups are structured, social hierarchies are established, and coordination for hunting and predator avoidance may occur. In this article, we investigated how African penguins (Spheniscus demersus) respond to gaze cues of conspecifics using a naturalistic setup in a zoo environment. Our results provide evidence that members of the family Spheniscidae follow gaze of conspecifics into distant space. However, further tests are necessary to examine if the observed behaviour serves solely one specific function (e.g. predator detection) or is displayed in a broader context (e.g. eavesdropping on relevant stimuli in the environment). In addition, our findings can serve as a starting point for future cross-species comparisons with other members of the penguin family, to further explore the role of aerial predation and social structure on gaze following in social species. Overall, we also suggest that zoo-housed animals represent an ideal opportunity to extend species range and to test phylogenetic families that have not been in the focus of animal cognitive research. PMID:28626619

  11. The effect of face eccentricity on the perception of gaze direction.

    PubMed

    Todorović, Dejan

    2009-01-01

    The perception of a looker's gaze direction depends not only on iris eccentricity (the position of the looker's irises within the sclera) but also on the orientation of the lookers' head. One among several potential cues of head orientation is face eccentricity, the position of the inner features of the face (eyes, nose, mouth) within the head contour, as viewed by the observer. For natural faces this cue is confounded with many other head-orientation cues, but in schematic faces it can be studied in isolation. Salient novel illustrations of the effectiveness of face eccentricity are 'Necker faces', which involve equal iris eccentricities but multiple perceived gaze directions. In four experiments, iris and face eccentricity in schematic faces were manipulated, revealing strong and consistent effects of face eccentricity on perceived gaze direction, with different types of tasks. An additional experiment confirmed the 'Mona Lisa' effect with this type of stimuli. Face eccentricity most likely acted as a simple but robust cue of head turn. A simple computational account of combined effects of cues of eye and head turn on perceived gaze direction is presented, including a formal condition for the perception of direct gaze. An account of the 'Mona Lisa' effect is presented.

  12. Optimal eye movement strategies: a comparison of neurosurgeons gaze patterns when using a surgical microscope.

    PubMed

    Eivazi, Shahram; Hafez, Ahmad; Fuhl, Wolfgang; Afkari, Hoorieh; Kasneci, Enkelejda; Lehecka, Martin; Bednarik, Roman

    2017-06-01

    Previous studies have consistently demonstrated gaze behaviour differences related to expertise during various surgical procedures. In micro-neurosurgery, however, there is a lack of evidence of empirically demonstrated individual differences associated with visual attention. It is unknown exactly how neurosurgeons see a stereoscopic magnified view in the context of micro-neurosurgery and what this implies for medical training. We report on an investigation of the eye movement patterns in micro-neurosurgery using a state-of-the-art eye tracker. We studied the eye movements of nine neurosurgeons while performing cutting and suturing tasks under a surgical microscope. Eye-movement characteristics, such as fixation (focus level) and saccade (visual search pattern), were analysed. The results show a strong relationship between the level of microsurgical skill and the gaze pattern, with greater expertise associated with greater eye control, stability, and focus in eye behaviour. For example, in the cutting task, well-trained surgeons had fixation durations on the operating field roughly twice as long as those of the novices (expert, 848 ms; novice, 402 ms). Maintaining steady visual attention on the target (fixation), as well as being able to quickly make eye jumps from one target to another (saccades), are two important elements for the success of neurosurgery. The captured gaze patterns can be used to improve medical education, as part of an assessment system or in a gaze-training application.

  13. Automatic and strategic measures as predictors of mirror gazing among individuals with body dysmorphic disorder symptoms.

    PubMed

    Clerkin, Elise M; Teachman, Bethany A

    2009-08-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n = 32) or low (n = 31) BDD symptoms. Specifically, we examined the extent that (1) explicit interpretations tied to appearance, as well as (2) automatic associations and (3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, whereas strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures.

  14. Automatic and Strategic Measures as Predictors of Mirror Gazing Among Individuals with Body Dysmorphic Disorder Symptoms

    PubMed Central

    Clerkin, Elise M.; Teachman, Bethany A.

    2011-01-01

    The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n=32) or low (n=31) BDD symptoms. Specifically, we examined the extent that 1) explicit interpretations tied to appearance, as well as 2) automatic associations and 3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, while strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures. PMID:19684496

  15. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    ERIC Educational Resources Information Center

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…

  16. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it publicly available. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768
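
    As a back-of-the-envelope illustration of the design question (not the paper's empirical procedure), the horizontal viewing angle a camera needs so that the eye region stays in frame can be estimated from a measured head-movement range and the camera distance; all numbers below are assumptions.

      import math

      head_range_cm = 20.0    # assumed lateral head-movement range
      eye_margin_cm = 4.0     # extra margin around the eye region
      distance_cm = 70.0      # assumed camera-to-user distance

      half_width = (head_range_cm + eye_margin_cm) / 2.0
      fov_deg = 2.0 * math.degrees(math.atan(half_width / distance_cm))
      print(f"required horizontal viewing angle: {fov_deg:.1f} deg")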

  17. Gaze-cueing effect depends on facial expression of emotion in 9- to 12-month-old infants

    PubMed Central

    Niedźwiecka, Alicja; Tomalski, Przemysław

    2015-01-01

    Efficient processing of gaze direction and facial expression of emotion is crucial for early social and emotional development. Toward the end of the first year of life infants begin to pay more attention to negative expressions, but it remains unclear to what extent emotion expression is processed jointly with gaze direction at this age. This study sought to establish the interactions of gaze direction and emotion expression in visual orienting in 9- to 12-month-olds. In particular, we tested whether these interactions can be explained by the negativity bias hypothesis and the shared signal hypothesis. We measured saccadic latencies in response to peripheral targets in a gaze-cueing paradigm with happy, angry, and fearful female faces. In the Pilot Experiment three gaze directions were used (direct, congruent with target location, incongruent with target location). In the Main Experiment we sought to replicate the results of the Pilot experiment using a simpler design without the direct gaze condition. In both experiments we found a robust gaze-cueing effect for happy faces, i.e., facilitation of orienting toward the target in the gaze-cued location, compared with the gaze-incongruent location. We found more rapid orienting to targets cued by happy relative to angry and fearful faces. We did not find any gaze-cueing effect for angry or fearful faces. These results are not consistent with the shared signal hypothesis. While our results show differential processing of positive and negative emotions, they do not support a general negativity bias. On the contrary, they indicate that toward the age of 12 months infants show a positivity bias in gaze-cueing tasks. PMID:25713555

  18. The Role of Gaze and Road Edge Information during High-Speed Locomotion

    ERIC Educational Resources Information Center

    Kountouriotis, Georgios K.; Floyd, Rosalind C.; Gardner, Peter H.; Merat, Natasha; Wilkie, Richard M.

    2012-01-01

    Robust control of skilled actions requires the flexible combination of multiple sources of information. Here we examined the role of gaze during high-speed locomotor steering and in particular the role of feedback from the visible road edges. Participants were required to maintain one of three lateral positions on the road when one or both edges…

  19. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking need a tedious calibration process. In this process, subjects are required to fixate on a specific point or several specific points in space. However, it is hard to obtain such cooperation, especially from children and human infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial mapping of the pupil center-corneal reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM), trained offline using labeled data from subjects with healthy eyes, is employed for gaze behavior classification. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for measuring visual acuity in human infants.
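
    A minimal sketch of the two components named in the abstract, under assumed data: a second-order polynomial mapping from the PCCR vector to gaze coordinates, followed by a Gaussian mixture model over the resulting gaze features. The synthetic data and scikit-learn implementation are illustrative, not the authors' code.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      pccr = rng.uniform(-1, 1, size=(200, 2))                            # pupil-center minus corneal-reflection vectors
      screen = 300 * pccr + 20 * pccr**2 + rng.normal(0, 2, (200, 2))     # synthetic screen positions

      # Polynomial regression from the PCCR vector to the gaze point (calibration-style mapping)
      mapping = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
      mapping.fit(pccr, screen)
      gaze = mapping.predict(pccr)

      # GMM over gaze features, fitted offline, then used to label gaze behaviour
      gmm = GaussianMixture(n_components=2, random_state=0).fit(gaze)
      print(gmm.predict(gaze[:5]))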

  20. Oxytocin enhances gaze-following responses to videos of natural social behavior in adult male rhesus monkeys

    PubMed Central

    Putnam, P.T.; Roman, J.M.; Zimmerman, P.E.; Gothard, K.M.

    2017-01-01

    Gaze following is a basic building block of social behavior that has been observed in multiple species, including primates. The absence of gaze following is associated with abnormal development of social cognition, such as in autism spectrum disorders (ASD). Some social deficits in ASD, including the failure to look at eyes and the inability to recognize facial expressions, are ameliorated by intranasal administration of oxytocin (IN-OT). Here we tested the hypothesis that IN-OT might enhance social processes that require active engagement with a social partner, such as gaze following. Alternatively, IN-OT may only enhance the perceptual salience of the eyes, and may not modify behavioral responses to social signals. To test this hypothesis, we presented four monkeys with videos of conspecifics displaying natural behaviors. Each video was viewed multiple times before and after the monkeys received intranasally either 50 IU of OT or saline. We found that despite a gradual decrease in attention to the repeated viewing of the same videos (habituation), IN-OT consistently increased the frequency of gaze following saccades. Further analysis confirmed that these behaviors did not occur randomly, but rather predictably in response to the same segments of the videos. These findings suggest that in response to more naturalistic social stimuli IN-OT enhances the propensity to interact with a social partner rather than merely elevating the perceptual salience of the eyes. In light of these findings, gaze following may serve as a metric for pro-social effects of oxytocin that target social action more than social perception. PMID:27343726

  1. Autonomic Arousal to Direct Gaze Correlates with Social Impairments among Children with ASD

    ERIC Educational Resources Information Center

    Kaartinen, Miia; Puura, Kaija; Makela, Tiina; Rannisto, Mervi; Lemponen, Riina; Helminen, Mika; Salmelin, Raili; Himanen, Sari-Leena; Hietanen, Jari K.

    2012-01-01

    The present study investigated whether autonomic arousal to direct gaze is related to social impairments among children with autism spectrum disorder (ASD). Arousal was measured through skin conductance responses (SCR) while the participants (15 children with ASD and 16 control children) viewed a live face of another person. Impairments in social…

  2. Gaze Shift as an Interactional Resource for Very Young Children

    ERIC Educational Resources Information Center

    Kidwell, Mardi

    2009-01-01

    This article examines how very young children in a day care center make use of their peers' gaze shifts to differentially locate and prepare for the possibility of a caregiver intervention during situations of their biting, hitting, pushing, and the like. At issue is how the visible character of a gaze shift--that is, the manner in which it is…

  3. Evidence for a link between changes to gaze behaviour and risk of falling in older adults during adaptive locomotion.

    PubMed

    Chapman, G J; Hollands, M A

    2006-11-01

    There is increasing evidence that gaze stabilization with respect to footfall targets plays a crucial role in the control of visually guided stepping and that there are significant changes to gaze behaviour as we age. However, past research has not measured if age-related changes in gaze behaviour are associated with changes to stepping performance. This paper aims to identify differences in gaze behaviour between young (n=8) adults, older adults determined to be at a low-risk of falling (low-risk, n=4) and older adults prone to falling (high-risk, n=4) performing an adaptive locomotor task and attempts to relate observed differences in gaze behaviour to decline in stepping performance. Participants walked at a self-selected pace along a 9m pathway stepping into two footfall target locations en route. Gaze behaviour and lower limb kinematics were recorded using an ASL 500 gaze tracker interfaced with a Vicon motion analysis system. Results showed that older adults looked significantly sooner to targets, and fixated the targets for longer, than younger adults. There were also significant differences in these measures between high and low-risk older adults. On average, high-risk older adults looked away from targets significantly sooner and demonstrated less accurate and more variable foot placements than younger adults and low-risk older adults. These findings suggest that, as we age, we need more time to plan precise stepping movements and clearly demonstrate that there are differences between low-risk and high-risk older adults in both where and when they look at future stepping targets and the precision with which they subsequently step. We propose that high-risk older adults may prioritize the planning of future actions over the accurate execution of ongoing movements and that adoption of this strategy may contribute to an increased likelihood of falls. Copyright 2005 Elsevier B.V.

  4. Combined Effects of Gaze and Orientation of Faces on Person Judgments in Social Situations

    PubMed Central

    Kaisler, Raphaela E.; Leder, Helmut

    2017-01-01

    In social situations, faces of others can vary simultaneously in gaze and orientation. How these variations affect different kinds of social judgments, such as attractiveness or trustworthiness, is only partly understood. Therefore, we studied how different gaze directions and head angles, but also levels of facial attractiveness, affect perceived attractiveness and trustworthiness. We always presented pairs of faces – either two average attractive faces or a highly attractive together with a less attractive face. We also varied gaze and head angles, showing faces in three different orientations: front, three-quarter, and profile view. In Experiment 1 (N = 62), participants rated averted gaze in three-quarter views as more attractive than in front and profile views, and evaluated faces with direct gaze in front views as most trustworthy. Moreover, faces that were being looked at by another face were seen as more attractive. Independent of the head orientation or gaze direction, highly attractive faces were rated as more attractive and more trustworthy. In Experiment 2 (N = 54), we found that the three-quarter advantage vanished when the second face was blurred during judgments, which demonstrates the importance of the presence of another person (as in a triadic social situation) as well as the importance of their visible gaze. The findings emphasize that social evaluations such as trustworthiness are unaffected by the esthetic advantage of three-quarter views of two average attractive faces, and that the effect of a face's attractiveness is more powerful than the more subtle effects of gaze and orientation. PMID:28275364

  5. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    PubMed Central

    Hardiess, Gregor; Hansmann-Roth, Sabrina; Mallot, Hanspeter A.

    2013-01-01

    Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements) the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective of the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) which sensory variables can be identified supporting adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss? In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined operation of

  6. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    PubMed

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.

  7. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    PubMed Central

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565
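
    The core idea, sparsity-inducing regularization so that only a small subset of pixels contributes to the gaze estimate, can be sketched as below with an L1 penalty on the first-layer weights of a small network. The sizes, data layout, and Keras implementation are assumptions; this is not the iShadow firmware or its exact regularizer.

      import tensorflow as tf

      n_pixels = 112 * 112      # assumed low-resolution eye image, flattened

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(n_pixels,)),
          tf.keras.layers.Dense(
              32, activation="relu",
              kernel_regularizer=tf.keras.regularizers.l1(1e-4)),   # drives most pixel weights toward zero
          tf.keras.layers.Dense(2),                                  # horizontal and vertical gaze angle
      ])
      model.compile(optimizer="adam", loss="mse")
      model.summary()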

  8. Design of a Gaze-Sensitive Virtual Social Interactive System for Children With Autism

    PubMed Central

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child’s dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented that examines the acceptability and usefulness of VIGART. The results in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART indicate the potential of this novel technology. PMID:21609889

  9. Design of a gaze-sensitive virtual social interactive system for children with autism.

    PubMed

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2011-08-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child's dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented that examines the acceptability and usefulness of VIGART. The results in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART indicate the potential of this novel technology. © 2011 IEEE

  10. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings

    PubMed Central

    2011-01-01

    Background Rapid discrete goal-directed movements are characterized by a well-known coordination pattern between the gaze and the hand displacements. The gaze always starts prior to the hand movement and reaches the target before the hand velocity peak. Surprisingly, the effect of the target size on the temporal gaze-hand coordination has not been directly investigated. Moreover, goal-directed movements are often produced in a reciprocal rather than in a discrete manner. The objectives of this work were to assess the effect of the target size on temporal gaze-hand coordination during fast 1) discrete and 2) reciprocal pointings. Methods Subjects performed fast discrete (experiment 1) and reciprocal (experiment 2) pointings with an amplitude of 50 cm and four target diameters (7.6, 3.8, 1.9 and 0.95 cm) leading to indexes of difficulty (ID = log2[2A/D]) of 3.7, 4.7, 5.7 and 6.7 bits. Gaze and hand displacements were synchronously recorded. Temporal gaze-hand coordination parameters were compared between experiments (discrete and reciprocal pointings) and IDs using analyses of variance (ANOVAs). Results Data showed that the magnitude of the gaze-hand lead pattern was much higher for discrete than for reciprocal pointings. Moreover, while it was constant for discrete pointings, it decreased systematically with an increasing ID for reciprocal pointings because of the longer duration of gaze anchoring on target. Conclusion Overall, the temporal gaze-hand coordination analysis revealed that even for high IDs, fast reciprocal pointings could not be considered as a concatenation of discrete units. Moreover, our data clearly illustrate the smooth adaptation of temporal gaze-hand coordination to terminal accuracy requirements during fast reciprocal pointings. It will be interesting for further research to investigate whether the methodology used in experiment 2 allows assessing the effect of sensori-motor deficits on gaze-hand coordination. PMID:21320315
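
    The quoted indexes of difficulty follow directly from ID = log2(2A/D) with A = 50 cm and the four target diameters, as the short check below shows.

      import math

      A = 50.0                             # movement amplitude in cm
      for D in (7.6, 3.8, 1.9, 0.95):      # target diameters in cm
          print(f"D = {D:>4} cm  ->  ID = {math.log2(2 * A / D):.1f} bits")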

  11. What is the role of the film viewer? The effects of narrative comprehension and viewing task on gaze control in film.

    PubMed

    Hutson, John P; Smith, Tim J; Magliano, Joseph P; Loschky, Lester C

    2017-01-01

    Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers' belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the

  12. Gaze transfer in remote cooperation: is it always helpful to see what your partner is attending to?

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian; Velichkovsky, Boris M

    2013-01-01

    Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support.

  13. Considerations for the Use of Remote Gaze Tracking to Assess Behavior in Flight Simulators

    NASA Technical Reports Server (NTRS)

    Kalar, Donald J.; Liston, Dorion; Mulligan, Jeffrey B.; Beutter, Brent; Feary, Michael

    2016-01-01

    Complex user interfaces (such as those found in an aircraft cockpit) may be designed from first principles, but inevitably must be evaluated with real users. User gaze data can provide valuable information that can help to interpret other actions that change the state of the system. However, care must be taken to ensure that any conclusions drawn from gaze data are well supported. Through a combination of empirical and simulated data, we identify several considerations and potential pitfalls when measuring gaze behavior in high-fidelity simulators. We show that physical layout, behavioral differences, and noise levels can all substantially alter the quality of fit for algorithms that segment gaze measurements into individual fixations. We provide guidelines to help investigators ensure that conclusions drawn from gaze tracking data are not artifactual consequences of data quality or analysis techniques.

  14. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion

    ERIC Educational Resources Information Center

    Klostermann, André; Küng, Philip

    2017-01-01

    Purpose: This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Method: Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a…

  15. ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.

    PubMed

    Orlov, Pavel A; Bednarik, Roman

    2016-09-01

    The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of visual gaze-contingency studies. This work describes ScreenMasker, an environment that allows creating gaze-contingent textured displays used together with stimulus presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments. It also provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
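
    Conceptually, a gaze-contingent textured display replaces everything outside a window around the current gaze position with a texture on every frame. The NumPy sketch below illustrates this idea only; it is not ScreenMasker's CUDA implementation, and the frame size, window radius, and texture are assumptions.

      import numpy as np

      h, w, radius = 480, 640, 80
      frame = np.full((h, w), 128, dtype=np.uint8)                          # stand-in for the stimulus frame
      texture = np.random.default_rng(0).integers(0, 256, (h, w), dtype=np.uint8)

      def masked_frame(gaze_x, gaze_y):
          yy, xx = np.mgrid[0:h, 0:w]
          outside = (xx - gaze_x) ** 2 + (yy - gaze_y) ** 2 > radius ** 2   # pixels beyond the gaze window
          out = frame.copy()
          out[outside] = texture[outside]                                   # texture everywhere except the window
          return out

      print(masked_frame(320, 240).shape)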

  16. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2017-01-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following into distant space and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs’ following gaze with gaze as well as their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or they alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour. PMID

  17. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed

    Duranton, Charlotte; Range, Friederike; Virányi, Zsófia

    2017-07-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following into distant space and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs' following gaze with gaze as well as their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or they alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour.

  18. Facial Expressions Modulate the Ontogenetic Trajectory of Gaze-Following among Monkeys

    ERIC Educational Resources Information Center

    Teufel, Christoph; Gutmann, Anke; Pirow, Ralph; Fischer, Julia

    2010-01-01

    Gaze-following, the tendency to direct one's attention to locations looked at by others, is a crucial aspect of social cognition in human and nonhuman primates. Whereas the development of gaze-following has been intensely studied in human infants, its early ontogeny in nonhuman primates has received little attention. Combining longitudinal and…

  19. Look over There! Unilateral Gaze Increases Geographical Memory of the 50 United States

    ERIC Educational Resources Information Center

    Propper, Ruth E.; Brunye, Tad T.; Christman, Stephen D.; Januszewskia, Ashley

    2012-01-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the…

  20. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere is dominant, and whether the two hemispheres cooperate with each other, in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields. The subjects localized the target as soon as possible. Reaction times (RTs) were shorter when gaze cues were directed toward rather than away from targets, regardless of the visual field in which they were presented. RTs were shorter for left than for right visual field presentations. RTs in mono-directional bilateral presentations were shorter than those in both left and right presentations. When bi-directional bilateral cues were presented, RTs were faster when valid cues appeared in the left rather than the right visual field. The right hemisphere appears to be dominant, and there is interhemispheric cooperation in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  1. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    PubMed

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions where mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task, in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye-contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone, it did however affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by their altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder, they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Deep Gaze Velocity Analysis During Mammographic Reading for Biometric Identification of Radiologists

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Alamudun, Folami T.; Hudson, Kathy

    Several studies have confirmed that the gaze velocity of the human eye can be utilized as a behavioral biometric or personalized biomarker. In this study, we leverage the local feature representation capacity of convolutional neural networks (CNNs) for eye gaze velocity analysis as the basis for biometric identification of radiologists performing breast cancer screening. Using gaze data collected from 10 radiologists reading 100 mammograms of various diagnoses, we compared the performance of a CNN-based classification algorithm with two deep learning classifiers, a deep neural network and a deep belief network, and a previously presented hidden Markov model classifier. The study showed that the CNN classifier is superior to the alternative classification methods based on macro F1-scores derived from 10-fold cross-validation experiments. Our results further support the efficacy of eye gaze velocity as a biometric identifier of medical imaging experts.
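
    As an illustration of the kind of model described above, the sketch below shows a minimal 1D CNN that classifies fixed-length gaze-velocity windows by reader identity. The architecture, window length, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: 1D CNN over gaze-velocity windows for reader identification.
# Architecture and hyperparameters are illustrative, not those of the cited study.
import torch
import torch.nn as nn

class GazeVelocityCNN(nn.Module):
    def __init__(self, n_readers=10, window_len=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classifier = nn.Linear(32 * (window_len // 16), n_readers)

    def forward(self, x):                    # x: (batch, 1, window_len) velocity samples
        h = self.features(x)
        return self.classifier(h.flatten(1))  # logits over reader identities

model = GazeVelocityCNN()
dummy = torch.randn(8, 1, 256)               # 8 synthetic gaze-velocity windows
print(model(dummy).shape)                    # torch.Size([8, 10])
```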

  3. Deep Gaze Velocity Analysis During Mammographic Reading for Biometric Identification of Radiologists

    DOE PAGES

    Yoon, Hong-Jun; Alamudun, Folami T.; Hudson, Kathy; ...

    2018-01-24

    Several studies have confirmed that the gaze velocity of the human eye can be utilized as a behavioral biometric or personalized biomarker. In this study, we leverage the local feature representation capacity of convolutional neural networks (CNNs) for eye gaze velocity analysis as the basis for biometric identification of radiologists performing breast cancer screening. Using gaze data collected from 10 radiologists reading 100 mammograms of various diagnoses, we compared the performance of a CNN-based classification algorithm with two deep learning classifiers, a deep neural network and a deep belief network, and a previously presented hidden Markov model classifier. The study showed that the CNN classifier is superior to the alternative classification methods based on macro F1-scores derived from 10-fold cross-validation experiments. Our results further support the efficacy of eye gaze velocity as a biometric identifier of medical imaging experts.

  4. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  5. Quantifying the cognitive cost of laparo-endoscopic single-site surgeries: Gaze-based indices.

    PubMed

    Di Stasi, Leandro L; Díaz-Piedra, Carolina; Ruiz-Rabelo, Juan Francisco; Rieiro, Héctor; Sanchez Carrion, Jose M; Catena, Andrés

    2017-11-01

    Despite the growing interest in the laparo-endoscopic single-site surgery (LESS) procedure, LESS presents multiple difficulties and challenges that are likely to increase the surgeon's cognitive cost, in terms of both cognitive load and performance. Nevertheless, there is currently no objective index capable of assessing the surgeon's cognitive cost while performing LESS. We assessed whether gaze-based indices might offer unique and unbiased measures to quantify LESS complexity and its cognitive cost. We expect the assessment of the surgeon's cognitive cost to improve patient safety by measuring fitness-for-duty and reducing surgeon overload. Using a wearable eye-tracker device, we measured gaze entropy and velocity of surgical trainees and attending surgeons during two surgical procedures (LESS vs. multiport laparoscopic surgery [MPS]). None of the participants had previous experience with LESS. They performed two exercises with different complexity levels (low: Pattern Cut vs. high: Peg Transfer). We also collected performance and subjective data. LESS caused higher cognitive demand than MPS, as indicated by increased gaze entropy in both surgical trainees and attending surgeons (the exploration pattern became more random). Furthermore, gaze velocity was higher (the exploration pattern became more rapid) for the LESS procedure, independently of the surgeon's expertise. Perceived task complexity and laparoscopic accuracy confirmed the gaze-based results. Gaze-based indices have great potential as objective and non-intrusive measures of surgeons' cognitive cost and fitness-for-duty. Furthermore, gaze-based indices might play a relevant role in defining future guidelines on surgeons' examinations to mark their achievements during the entire training (e.g., analyzing surgical learning curves). Copyright © 2017 Elsevier Ltd. All rights reserved.
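
    For readers unfamiliar with the indices mentioned above, the sketch below computes a simple first-order gaze-transition entropy over areas of interest and a mean gaze velocity. The definitions are generic assumptions; the study's exact formulas may differ.

```python
# Hypothetical sketch of gaze-based indices: transition entropy over areas of
# interest (AOIs) and mean gaze velocity. Definitions are illustrative.
import numpy as np

def gaze_transition_entropy(aoi_sequence, n_aois):
    """Shannon entropy (bits) of the first-order AOI transition distribution."""
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def mean_gaze_velocity(x, y, sample_rate_hz):
    """Mean point-to-point gaze speed, in screen units per second."""
    dx, dy = np.diff(x), np.diff(y)
    return float(np.mean(np.hypot(dx, dy)) * sample_rate_hz)

seq = [0, 1, 1, 2, 0, 2, 1, 0]          # synthetic AOI fixation sequence
print(gaze_transition_entropy(seq, n_aois=3))
print(mean_gaze_velocity(np.random.rand(100), np.random.rand(100), 60))
```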

  6. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection

    PubMed Central

    Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole

    2016-01-01

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input in order to better estimate the intent of the user. We evaluated its performance against existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search for and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of the 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provided the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048

  7. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye complicates 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye is proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single-camera system was developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. The accuracy of the proposed gaze tracking system and of the pose estimation of the eye was validated in a set of experiments. Overall system errors, including pose estimation and calibration, were 3.12 mm and 4.68 degrees.

  8. Eye gaze tracking based on the shape of pupil image

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is an important instrument for research in psychology, widely used in studies of attention, visual perception, reading, and other fields. Because of its potential for human-computer interaction, eye gaze tracking has been a topic of research in many fields over the last decades, and non-intrusive methods are increasingly preferred. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of human eyes without any intrusive device such as a hat or a pair of glasses. After applying an ellipse-fitting algorithm to the captured pupil image, we determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is that using the shape of the pupil avoids much more complicated algorithms. The proposed approach is helpful for the study of eye gaze tracking: it requires only one camera, without infrared illumination, and infers gaze direction from changes in the shape of the pupil; no additional equipment is required.
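
    A rough sketch of the shape-based idea follows, assuming OpenCV (version 4 or later) and a pre-segmented grayscale eye image; the threshold and the mapping from ellipse shape to gaze direction are illustrative, not the authors' pipeline.

```python
# Hypothetical sketch: fit an ellipse to the pupil and use its shape as a crude
# gaze-direction proxy. Threshold and the angle interpretation are illustrative.
import cv2
import numpy as np

def pupil_gaze_proxy(gray_eye_image):
    # Assume the pupil is the darkest blob: threshold, then take the largest contour.
    _, mask = cv2.threshold(gray_eye_image, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pupil = max(contours, key=cv2.contourArea)
    (cx, cy), axes, angle = cv2.fitEllipse(pupil)   # needs >= 5 contour points
    minor, major = sorted(axes)
    # A frontal pupil projects as a circle; as gaze turns away from the camera the
    # projection becomes more elliptical, so 1 - minor/major grows with gaze
    # eccentricity, and the ellipse orientation indicates its direction.
    return (cx, cy), 1.0 - minor / major, angle

img = np.full((120, 160), 200, np.uint8)
cv2.circle(img, (80, 60), 20, 0, -1)                # synthetic dark "pupil"
print(pupil_gaze_proxy(img))
```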

  9. Examining the durability of incidentally learned trust from gaze cues.

    PubMed

    Strachan, James W A; Tipper, Steven P

    2017-10-01

    In everyday interactions we find our attention follows the eye gaze of faces around us. As this cueing is so powerful and difficult to inhibit, gaze can therefore be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this we used a gaze-cueing paradigm where faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  10. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
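
    The auto-focusing step above depends on a focus score computed on the eye image of the narrow view camera. A common sharpness measure, shown here only as an assumed stand-in for the paper's focus score, is the variance of the Laplacian.

```python
# Hypothetical sketch of an image focus score for auto-focusing on the eye region.
# Variance of the Laplacian is a common sharpness measure; the cited system may
# use a different score.
import cv2
import numpy as np

def focus_score(gray_eye_image):
    """Variance of the Laplacian: higher means a sharper (better focused) image."""
    return cv2.Laplacian(gray_eye_image, cv2.CV_64F).var()

# Usage idea: step the narrow-view camera's focus and keep the setting that
# maximizes focus_score() on the detected eye region.
sharp = np.random.randint(0, 255, (60, 80), np.uint8)
blurred = cv2.GaussianBlur(sharp, (9, 9), 0)
print(focus_score(sharp) > focus_score(blurred))     # True
```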

  11. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  12. Does Seeing One Another's Gaze Affect Group Dialogue? A Computational Approach

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2015-01-01

    In a previous study, we found that real-time mutual gaze perception (i.e., being able to see the gaze of your partner in real time on a computer screen while solving a learning task) had a positive effect on student collaboration and learning (Schneider & Pea, 2013). The goals of this paper are (1) to explore a variety of computational…

  13. Gaze direction differentially affects avoidance tendencies to happy and angry faces in socially anxious individuals.

    PubMed

    Roelofs, Karin; Putman, Peter; Schouten, Sonja; Lange, Wolf-Gero; Volman, Inge; Rinck, Mike

    2010-04-01

    Increasing evidence indicates that eye gaze direction affects the processing of emotional faces in anxious individuals. However, the effects of eye gaze direction on the behavioral responses elicited by emotional faces, such as avoidance behavior, remain largely unexplored. We administered an Approach-Avoidance Task (AAT) in high (HSA) and low socially anxious (LSA) individuals. All participants responded to photographs of angry, happy and neutral faces (presented with direct and averted gaze), by either pushing a joystick away from them (avoidance) or pulling it towards them (approach). Compared to LSA, HSA were faster in avoiding than approaching angry faces. Most crucially, this avoidance tendency was only present when the perceived anger was directed towards the subject (direct gaze) and not when the gaze of the face-stimulus was averted. In contrast, HSA individuals tended to avoid happy faces irrespectively of gaze direction. Neutral faces elicited no approach-avoidance tendencies. Thus avoidance of angry faces in social anxiety as measured by AA-tasks reflects avoidance of subject-directed anger and not of negative stimuli in general. In addition, although both anger and joy are considered to reflect approach-related emotions, gaze direction did not affect HSA's avoidance of happy faces, suggesting differential mechanisms affecting responses to happy and angry faces in social anxiety. 2009 Elsevier Ltd. All rights reserved.

  14. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    PubMed Central

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  15. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    PubMed

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is an atypical pattern of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures into visual scene displays, regardless of the population served. Study design and reporting practices in the eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.

  16. Visuomotor Transformation in the Fly Gaze Stabilization System

    PubMed Central

    Huston, Stephen J; Krapp, Holger G

    2008-01-01

    For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information. PMID:18651791

  17. Neural synchrony examined with magnetoencephalography (MEG) during eye gaze processing in autism spectrum disorders: preliminary findings

    PubMed Central

    2014-01-01

    Background: Gaze processing deficits are a seminal, early, and enduring behavioral deficit in autism spectrum disorder (ASD); however, a comprehensive characterization of the neural processes mediating abnormal gaze processing in ASD has yet to be conducted. Methods: This study investigated whole-brain patterns of neural synchrony during passive viewing of direct and averted eye gaze in ASD adolescents and young adults (mean age = 16.6) compared to neurotypicals (NT) (mean age = 17.5) while undergoing magnetoencephalography. Coherence between each pair of 54 brain regions within each of three frequency bands (low frequency (0 to 15 Hz), beta (15 to 30 Hz), and low gamma (30 to 45 Hz)) was calculated. Results: Significantly higher coherence and synchronization in posterior brain regions (temporo-parietal-occipital) across all frequencies was evident in ASD, particularly within the low 0 to 15 Hz frequency range. Higher coherence in fronto-temporo-parietal regions was noted in NT. A significantly higher number of low frequency cross-hemispheric synchronous connections and a near absence of right intra-hemispheric coherence in the beta frequency band were noted in ASD. Significantly higher low frequency coherent activity in bilateral temporo-parieto-occipital cortical regions and higher gamma band coherence in right temporo-parieto-occipital brain regions during averted gaze was related to more severe symptomatology as reported on the Autism Diagnostic Interview-Revised (ADI-R). Conclusions: The preliminary results suggest a pattern of aberrant connectivity that includes higher low frequency synchronization in posterior cortical regions, lack of long-range right hemispheric beta and gamma coherence, and decreased coherence in fronto-temporo-parietal regions necessary for orienting to shifts in eye gaze in ASD, a critical behavior essential for social communication. PMID:24976870

  18. Use of Speaker’s Gaze and Syntax in Verb Learning

    PubMed Central

    Nappa, Rebecca; Wessel, Allison; McEldoon, Katherine L.; Gleitman, Lila R.; Trueswell, John C.

    2013-01-01

    Speaker eye gaze and gesture are known to help child and adult listeners establish communicative alignment and learn object labels. Here we consider how learners use these cues, along with linguistic information, to acquire abstract relational verbs. Test items were perspective verb pairs (e.g., chase/flee, win/lose), which pose a special problem for observational accounts of word learning because their situational contexts overlap very closely; the learner must infer the speaker’s chosen perspective on the event. Two cues to the speaker’s perspective on a depicted event were compared and combined: (a) the speaker’s eye gaze to an event participant (e.g., looking at the Chaser vs. looking at the Flee-er) and (b) the speaker’s linguistic choice of which event participant occupies Subject position in his utterance. Participants (3-, 4-, and 5-year-olds) were eye-tracked as they watched a series of videos of a man describing drawings of perspective events (e.g., a rabbit chasing an elephant). The speaker looked at one of the two characters and then uttered either an utterance that was referentially uninformative (He’s mooping him) or informative (The rabbit’s mooping the elephant/The elephant’s mooping the rabbit) because of the syntactic positioning of the nouns. Eye-tracking results showed that all participants regardless of age followed the speaker’s gaze in both uninformative and informative contexts. However, verb-meaning choices were responsive to speaker’s gaze direction only in the linguistically uninformative condition. In the presence of a linguistically informative context, effects of speaker gaze on meaning were minimal for the youngest children to nonexistent for the older populations. Thus children, like adults, can use multiple cues to inform verb-meaning choice but rapidly learn that the syntactic positioning of referring expressions is an especially informative source of evidence for these decisions. PMID:24465183

  19. The Development of Joint Visual Attention: A Longitudinal Study of Gaze following during Interactions with Mothers and Strangers

    ERIC Educational Resources Information Center

    Gredeback, Gustaf; Fikke, Linn; Melinder, Annika

    2010-01-01

    Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers,…

  20. "Are You Looking at Me?" How Children's Gaze Judgments Improve with Age

    ERIC Educational Resources Information Center

    Mareschal, Isabelle; Otsuka, Yumiko; Clifford, Colin W. G.; Mareschal, Denis

    2016-01-01

    Adults' judgments of another person's gaze reflect both sensory (e.g., perceptual) and nonsensory (e.g., decisional) processes. We examined how children's performance on a gaze categorization task develops over time by varying uncertainty in the stimulus presented to 6- to 11 year-olds (n = 57). We found that younger children responded…

  1. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-06-01

    To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
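
    The two statistics used above can be illustrated on toy data: a Kaplan-Meier (product-limit) estimate of time-to-gaze-to-partner and an odds ratio comparing verbal events with and without gaze-to-partner between groups. All numbers below are synthetic.

```python
# Hypothetical sketch of the two statistics on synthetic data: a Kaplan-Meier
# estimate of time-to-gaze-to-partner and an odds ratio for gaze during events.
import numpy as np

def kaplan_meier(latencies, gaze_observed):
    """Product-limit survival estimate for right-censored time-to-gaze data."""
    t = np.asarray(latencies, float)
    e = np.asarray(gaze_observed, bool)
    surv, curve = 1.0, []
    for ti in np.unique(t):
        at_risk = np.sum(t >= ti)
        events = np.sum((t == ti) & e)
        surv *= 1.0 - events / at_risk
        curve.append((float(ti), float(surv)))
    return curve

def odds_ratio(with_gaze_a, without_gaze_a, with_gaze_b, without_gaze_b):
    """Odds of gaze-to-partner during a verbal event, group A vs. group B."""
    return (with_gaze_a / without_gaze_a) / (with_gaze_b / without_gaze_b)

# Toy latencies (s) to first gaze-to-partner; second list: 1 = observed, 0 = censored.
print(kaplan_meier([1.2, 0.8, 2.5, 3.0, 1.1], [1, 1, 0, 1, 1]))
print(odds_ratio(30, 20, 25, 30))   # e.g. HI vs. NH odds during statements
```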

  2. Impairment of Unconscious, but Not Conscious, Gaze-Triggered Attention Orienting in Asperger's Disorder

    ERIC Educational Resources Information Center

    Sato, Wataru; Uono, Shota; Okada, Takashi; Toichi, Motomi

    2010-01-01

    Impairment of joint attention represents the core clinical features of pervasive developmental disorders (PDDs), including autism and Asperger's disorder. However, experimental studies reported intact gaze-triggered attentional orienting in PDD. Since all previous studies employed supraliminal presentation of gaze stimuli, we hypothesized that…

  3. Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation.

    PubMed

    Ghasia, Fatema F; Gulati, Deepak; Westbrook, Edward L; Shaikh, Aasef G

    2014-04-15

    Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and a null position where the drifts are absent. The extent of impairment in the neural integration for the two opposite eccentricities might determine the location of the null position. The eye-in-orbit position might also determine the location of the null. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the conflicting eye-in-orbit positions of the two eyes secondary to the strabismus. This could possibly explain the reorganization of the location of the null position. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Can gaze-contingent mirror-feedback from unfamiliar faces alter self-recognition?

    PubMed

    Estudillo, Alejandro J; Bindemann, Markus

    2017-05-01

    This study focuses on learning of the self, by examining how human observers update internal representations of their own face. For this purpose, we present a novel gaze-contingent paradigm, in which an onscreen face mimics observers' own eye-gaze behaviour (in the congruent condition), moves its eyes in different directions to that of the observers (incongruent condition), or remains static and unresponsive (neutral condition). Across three experiments, the mimicry of the onscreen face did not affect observers' perceptual self-representations. However, this paradigm influenced observers' reports of their own face. This effect was such that observers felt the onscreen face to be their own and that, if the onscreen gaze had moved on its own accord, observers expected their own eyes to move too. The theoretical implications of these findings are discussed.

  5. Coordinating Cognition: The Costs and Benefits of Shared Gaze during Collaborative Search

    ERIC Educational Resources Information Center

    Brennan, Susan E.; Chen, Xin; Dickinson, Christopher A.; Neider, Mark B.; Zelinsky, Gregory J.

    2008-01-01

    Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one…

  6. Age differences in conscious versus subconscious social perception: the influence of face age and valence on gaze following.

    PubMed

    Bailey, Phoebe E; Slessor, Gillian; Rendell, Peter G; Bennetts, Rachel J; Campbell, Anna; Ruffman, Ted

    2014-09-01

    Gaze following is the primary means of establishing joint attention with others and is subject to age-related decline. In addition, young but not older adults experience an own-age bias in gaze following. The current research assessed the effects of subconscious processing on these age-related differences. Participants responded to targets that were either congruent or incongruent with the direction of gaze displayed in supraliminal and subliminal images of young and older faces. These faces displayed either neutral (Study 1) or happy and fearful (Study 2) expressions. In Studies 1 and 2, both age groups demonstrated gaze-directed attention by responding faster to targets that were congruent as opposed to incongruent with gaze-cues. In Study 1, subliminal stimuli did not attenuate the age-related decline in gaze-cuing, but did result in an own-age bias among older participants. In Study 2, gaze-cuing was reduced for older relative to young adults in response to supraliminal stimuli, and this could not be attributed to reduced visual acuity or age group differences in the perceived emotional intensity of the gaze-cue faces. Moreover, there were no age differences in gaze-cuing when responding to subliminal faces that were emotionally arousing. In addition, older adults demonstrated an own-age bias for both conscious and subconscious gaze-cuing when faces expressed happiness but not fear. We discuss growing evidence for age-related preservation of subconscious relative to conscious social perception, as well as an interaction between face age and valence in social perception. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  7. Timing of gazes in child dialogues: a time-course analysis of requests and back channelling in referential communication.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2012-01-01

    This study investigates gaze behaviour in child dialogues. In earlier studies the authors have investigated the use of requests for clarification and responses in order to study the co-creation of understanding in a referential communication task. By adding eye tracking, this line of research is now expanded to include non-verbal contributions in conversation. To investigate the timing of gazes in face-to-face interaction and to relate the gaze behaviour to the use of requests for clarification. Eight conversational pairs of typically developing 10-15 year olds participated. The pairs (director and executor) performed a referential communication task requiring the description of faces. During the dialogues both participants wore head-mounted eye trackers. All gazes were recorded and categorized according to the area fixated (Task, Face, Off). The verbal context for all instances of gaze at the partner's face was identified and categorized using time-course analysis. The results showed that the executor spends almost 90% of the time fixating the gaze on the task, 10% on the director's face and less than 0.5% elsewhere. Turn shift, primarily requests for clarification, and back channelling significantly predicted the executors' gaze to the face of the task director. The distribution of types of requests showed that requests for previously unmentioned information were significantly more likely to be associated with gaze at the director. The study shows that the executors' gaze at the director accompanies important dynamic shifts in the dialogue. The association with requests for clarification indicates that gaze at the director can be used to monitor the response with two modalities. Furthermore, the significantly higher association with requests for previously unmentioned information indicates that gaze may be used to emphasize the verbal content. The results will be used as a reference for studies of gaze behaviour in clinical populations with hearing and language

  8. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology.

    PubMed

    Demšar, Urška; Çöltekin, Arzu

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze location on the screen. Despite recent technological developments that enabled more affordable hardware, gaze data are still costly and time consuming to collect, therefore some propose using mouse movements instead. These are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experimental task mimics route-tracing on a map, it is more than a data collection exercise and it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information to eye tracking and therefore be used as a proxy for attention. However
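
    As a simplified illustration of quantifying coupling between two trajectories, the sketch below computes a displacement-based dynamic-interaction index in the spirit of the movement-ecology measures the authors compare against; it is not the space-time-density method proposed in the paper, and the data are synthetic.

```python
# Hypothetical sketch: a simple dynamic-interaction index between time-aligned
# gaze and mouse trajectories, based on per-step similarity of displacement
# direction and magnitude (a movement-ecology-style DI, not the paper's
# space-time-density method).
import numpy as np

def dynamic_interaction(gaze_xy, mouse_xy):
    g = np.diff(np.asarray(gaze_xy, float), axis=0)   # per-step displacement vectors
    m = np.diff(np.asarray(mouse_xy, float), axis=0)
    gn, mn = np.linalg.norm(g, axis=1), np.linalg.norm(m, axis=1)
    cos = np.einsum('ij,ij->i', g, m) / np.where(gn * mn == 0, 1, gn * mn)
    mag = 1.0 - np.abs(gn - mn) / np.where(gn + mn == 0, 1, gn + mn)
    return float(np.mean(cos * mag))                   # ~1: coupled, ~0: independent

t = np.linspace(0, 2 * np.pi, 200)
gaze = np.c_[np.cos(t), np.sin(t)]
mouse = np.c_[np.cos(t + 0.05), np.sin(t + 0.05)]      # mouse trails the gaze slightly
print(dynamic_interaction(gaze, mouse))                # close to 1
```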

  9. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology

    PubMed Central

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze location on the screen. Despite recent technological developments that enabled more affordable hardware, gaze data are still costly and time consuming to collect, therefore some propose using mouse movements instead. These are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experimental task mimics route-tracing on a map, it is more than a data collection exercise and it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information to eye tracking and therefore be used as a proxy for attention. However

  10. Age and motivation predict gaze behavior for facial expressions.

    PubMed

    Nikitin, Jana; Freund, Alexandra M

    2011-09-01

    This study investigated age-related differences between younger (M = 25.52 years) and older (M = 70.51 years) adults in avoidance motivation and the influence of avoidance motivation on gaze preferences for happy, neutral, and angry faces. In line with the hypothesis of reduced negativity effect later in life, older adults avoided angry faces and (to a lesser degree) preferred happy faces more than younger adults did. This effect cannot be explained by age-related changes in dispositional motivation. Irrespective of age, avoidance motivation predicted gaze behavior towards emotional faces. The study demonstrates the importance of interindividual differences beyond young adulthood.

  11. A focus of attention mechanism for gaze control within a framework for intelligent image analysis tools

    NASA Astrophysics Data System (ADS)

    Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.

    2004-05-01

    Focus of attention is often attributed to the biological vision system, where the entire field of view is first monitored and attention is then focused on the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive and concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object based on the user's object description. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate shape descriptor parameters. These, together with the color information, are matched with the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.
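
    A minimal sketch of the described pipeline (color clustering, region labeling, and simple shape descriptors) is shown below; the clustering method, descriptors, and parameters are illustrative assumptions rather than the framework's actual implementation.

```python
# Hypothetical sketch: k-means colour clustering, connected-component labelling,
# and simple per-region shape descriptors. All parameters are illustrative.
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def describe_regions(rgb_image, n_colors=4):
    """Cluster pixel colours, label connected regions, return simple shape features."""
    h, w, _ = rgb_image.shape
    cluster_map = KMeans(n_clusters=n_colors, n_init=10).fit_predict(
        rgb_image.reshape(-1, 3).astype(float)).reshape(h, w)
    regions = []
    for k in range(n_colors):
        labelled, _ = ndimage.label(cluster_map == k)
        for idx, box in enumerate(ndimage.find_objects(labelled), start=1):
            mask = labelled[box] == idx
            area = int(mask.sum())
            extent = area / mask.size            # fill ratio of the bounding box
            regions.append({'cluster': k, 'bbox': box, 'area': area, 'extent': extent})
    return regions

# Usage idea: match each region's colour cluster and shape features against the
# user's object description, then issue pan/tilt commands toward the best match.
toy = np.zeros((40, 40, 3), np.uint8)
toy[10:30, 15:25] = (200, 30, 30)                # a red rectangle on black
print(len(describe_regions(toy, n_colors=2)))
```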

  12. Stabilization of gaze during circular locomotion in darkness. II. Contribution of velocity storage to compensatory eye and head nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. Yaw eye in head (Eh) and head on body velocities (Hb) were measured in two monkeys that ran around the perimeter of a circular platform in darkness. The platform was stationary or could be counterrotated to reduce body velocity in space (Bs) while increasing gait velocity on the platform (Bp). The animals were also rotated while seated in a primate chair at eccentric locations to provide linear and angular accelerations similar to those experienced while running. 2. Both animals had head and eye nystagmus while running in darkness during which slow phase gaze velocity on the body (Gb) partially compensated for body velocity in space (Bs). The eyes, driven by the vestibuloocular reflex (VOR), supplied high-frequency characteristics, bringing Gb up to compensatory levels at the beginning and end of the slow phases. The head provided substantial gaze compensation during the slow phases, probably through the vestibulocollic reflex (VCR). Synchronous eye and head quick phases moved gaze in the direction of running. Head movements occurred consistently only when animals were running. This indicates that active body and limb motion may be essential for inducing the head-eye gaze synergy. 3. Gaze compensation was good when running in both directions in one animal and in one direction in the other animal. The animals had long VOR time constants in these directions. The VOR time constant was short to one side in one animal, and it had poor gaze compensation in this direction. Postlocomotory nystagmus was weaker after running in directions with a long VOR time constant than when the animals were passively rotated in darkness. We infer that velocity storage in the vestibular system had been activated to produce continuous Eh and Hb during running and to counteract postrotatory afterresponses. 4. Continuous compensatory gaze nystagmus was not produced by passive eccentric rotation with the head stabilized or free. This indicates that an aspect of active locomotion, most

  13. Gaze and visual search strategies of children with Asperger syndrome/high functioning autism viewing a magic trick.

    PubMed

    Joosten, Annette; Girdler, Sonya; Albrecht, Matthew A; Horlin, Chiara; Falkmer, Marita; Leung, Denise; Ordqvist, Anna; Fleischer, Håkan; Falkmer, Torbjörn

    2016-01-01

    To examine visual search patterns and strategies used by children with and without Asperger syndrome/high functioning autism (AS/HFA) while watching a magic trick. Limited responsivity to gaze cues is hypothesised to contribute to social deficits in children with AS/HFA. Twenty-one children with AS/HFA and 31 matched peers viewed a video of a gaze-cued magic trick twice. Between the viewings, they were informed about how the trick was performed. Participants' eye movements were recorded using a head-mounted eye-tracker. Children with AS/HFA looked less frequently and had shorter fixation on the magician's direct and averted gazes during both viewings and more frequently at not gaze-cued objects and on areas outside the magician's face. After being informed of how the trick was conducted, both groups made fewer fixations on gaze-cued objects and direct gaze. Information may enhance effective visual strategies in children with and without AS/HFA.

  14. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    PubMed

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
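
    A toy illustration of event-based lag-1 sequential analysis on a coded gaze stream follows; only transition counts and conditional probabilities are shown, the event labels are hypothetical, and the study's significance testing is not reproduced.

```python
# Hypothetical sketch of event-based lag-1 sequential analysis on a coded stream
# of gaze events (labels are invented for illustration).
from collections import Counter

def lag1_transitions(event_sequence):
    """Return lag-1 transition counts and conditional probabilities P(next | current)."""
    pairs = Counter(zip(event_sequence[:-1], event_sequence[1:]))
    totals = Counter(event_sequence[:-1])
    probs = {(a, b): n / totals[a] for (a, b), n in pairs.items()}
    return pairs, probs

events = ['C_gaze_P', 'P_gaze_C', 'C_gaze_chart', 'P_gaze_C', 'C_gaze_P', 'P_gaze_C']
counts, probs = lag1_transitions(events)
print(probs.get(('C_gaze_P', 'P_gaze_C')))   # P(patient gazes back | clinician gazes)
```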

  15. Infants' Developing Understanding of Social Gaze

    ERIC Educational Resources Information Center

    Beier, Jonathan S.; Spelke, Elizabeth S.

    2012-01-01

    Young infants are sensitive to self-directed social actions, but do they appreciate the intentional, target-directed nature of such behaviors? The authors addressed this question by investigating infants' understanding of social gaze in third-party interactions (N = 104). Ten-month-old infants discriminated between 2 people in mutual versus…

  16. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    PubMed

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  17. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.

    PubMed

    Hergeth, Sebastian; Lorenz, Lutz; Vilimek, Roman; Krems, Josef F

    2016-05-01

    The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving was evaluated. The relationship between dispositional, situational, and learned automation trust with gaze behavior was compared. Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving. © 2016, Human Factors and Ergonomics Society.

  18. Spatial updating depends on gaze direction even after loss of vision.

    PubMed

    Reuschel, Johanna; Rösler, Frank; Henriques, Denise Y P; Fiehler, Katja

    2012-02-15

    Direction of gaze (eye angle + head angle) has been shown to be important for representing space for action, implying a crucial role of vision for spatial updating. However, blind people have no access to vision yet are able to perform goal-directed actions successfully. Here, we investigated the role of visual experience for localizing and updating targets as a function of intervening gaze shifts in humans. People who differed in visual experience (late blind, congenitally blind, or sighted) were briefly presented with a proprioceptive reach target while facing it. Before they reached to the target's remembered location, they turned their head toward an eccentric direction that also induced corresponding eye movements in sighted and late blind individuals. We found that reaching errors varied systematically as a function of shift in gaze direction only in participants with early visual experience (sighted and late blind). In the late blind, this effect was solely present in people with moveable eyes but not in people with at least one glass eye. Our results suggest that the effect of gaze shifts on spatial updating develops on the basis of visual experience early in life and remains even after loss of vision as long as feedback from the eyes and head is available.

  19. Radiologically defining horizontal gaze using EOS imaging-a prospective study of healthy subjects and a retrospective audit.

    PubMed

    Hey, Hwee Weng Dennis; Tan, Kimberly-Anne; Ho, Vivienne Chien-Lin; Azhar, Syifa Bte; Lim, Joel-Louis; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2018-06-01

    As sagittal alignment of the cervical spine is important for maintaining horizontal gaze, it is important to determine the former for surgical correction. However, horizontal gaze remains poorly-defined from a radiological point of view. The objective of this study was to establish radiological criteria to define horizontal gaze. This study was conducted at a tertiary health-care institution over a 1-month period. A prospective cohort of healthy patients was used to determine the best radiological criteria for defining horizontal gaze. A retrospective cohort of patients without rigid spinal deformities was used to audit the incidence of horizontal gaze. Two categories of radiological parameters for determining horizontal gaze were tested: (1) the vertical offset distances of key identifiable structures from the horizontal gaze axis and (2) imaginary lines convergent with the horizontal gaze axis. Sixty-seven healthy subjects underwent whole-body EOS radiographs taken in a directed standing posture. Horizontal gaze was radiologically defined using each parameter, as represented by their means, 95% confidence intervals (CIs), and associated 2 standard deviations (SDs). Subsequently, applying the radiological criteria, we conducted a retrospective audit of such radiographs (before the implementation of a strict radioimaging standardization). The mean age of our prospective cohort was 46.8 years, whereas that of our retrospective cohort was 37.2 years. Gender was evenly distributed across both cohorts. The four parameters with the lowest 95% CI and 2 SD were the distance offsets of the midpoint of the hard palate (A) and the base of the sella turcica (B), the horizontal convergents formed by the tangential line to the hard palate (C), and the line joining the center of the orbital orifice with the internal occipital protuberance (D). In the prospective cohort, good sensitivity (>98%) was attained when two or more parameters were used. Audit using Criterion B
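
    As a small illustration of how such normative criteria can be expressed, the sketch below computes a mean, the 95% CI of the mean, and a ±2 SD band for one offset parameter, and checks whether at least two criteria are met. The data and thresholds are toy assumptions, not the study's values.

```python
# Hypothetical sketch: normative band (mean, 95% CI of the mean, +/- 2 SD) for a
# radiological offset parameter, and a simple "meets >= 2 criteria" check.
import numpy as np

def normative_band(values, z=1.96):
    v = np.asarray(values, float)
    mean, sd = v.mean(), v.std(ddof=1)
    sem = sd / np.sqrt(len(v))
    return {'mean': mean, 'ci95': (mean - z * sem, mean + z * sem),
            'band_2sd': (mean - 2 * sd, mean + 2 * sd)}

def within(value, band):
    lo, hi = band['band_2sd']
    return lo <= value <= hi

offsets = np.random.normal(0.0, 5.0, size=67)      # toy offsets (mm) for 67 subjects
band = normative_band(offsets)
criteria_met = sum(within(x, band) for x in [1.0, 12.0, -3.0])
print(band, criteria_met >= 2)                     # horizontal gaze if >= 2 criteria met
```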

  20. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  1. Affective Evaluations of Objects Are Influenced by Observed Gaze Direction and Emotional Expression

    ERIC Educational Resources Information Center

    Bayliss, Andrew P.; Frischen, Alexandra; Fenske, Mark J.; Tipper, Steven P.

    2007-01-01

    Gaze direction signals another person's focus of interest. Facial expressions convey information about their mental state. Appropriate responses to these signals should reflect their combined influence, yet current evidence suggests that gaze-cueing effects for objects near an observed face are not modulated by its emotional expression. Here, we…

  2. Gaze-controlled, computer-assisted communication in Intensive Care Unit: "speaking through the eyes".

    PubMed

    Maringelli, F; Brienza, N; Scorrano, F; Grasso, F; Gregoretti, C

    2013-02-01

    The aim of this study was to test the hypothesis that a gaze-controlled communication system (eye tracker, ET) can improve communication processes between completely dysarthric ICU patients and the hospital staff in three main domains: 1) basic communication processes (i.e., fundamental needs, desires, and wishes); 2) the ability of the medical staff to understand the clinical condition of the patient; and 3) the level of frustration experienced by patients, nurses, and physicians. Fifteen fully conscious medical and surgical patients, 8 physicians, and 15 nurses were included in the study. The experimental procedure comprised three phases: in phase 1 all groups completed the preintervention questionnaire; in phase 2 the ET was introduced and tested as a communication device; in phase 3 all groups completed the postintervention questionnaire. Patients' preintervention questionnaires showed remarkable communication deficits, without any group effect. Answers of physicians and nurses were broadly similar to those of patients. Postintervention questionnaires showed in all groups a remarkable and statistically significant improvement in different communication domains, as well as a remarkable decrease in anxiety and dysphoric thoughts. Improvement was also reported by physicians and nurses in their ability to understand the patient's clinical condition. Our results show an improvement in the quality of the examined parameters. Better communication processes also seem to lead to improvements in several psychological parameters, namely anxiety and depression, as perceived by both patients and medical staff. Further controlled studies are needed to define the ET role in the ICU.

  3. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography.

    PubMed

    Hládek, Ľuboš; Porr, Bernd; Brimijoin, W Owen

    2018-01-01

    The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user's eye gaze.
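    A rough sketch of the saccade-integration idea follows: detect saccades from EOG velocity and accumulate only the saccadic displacement into a running angle estimate, rather than DC-coupling or high-pass filtering the signal. The velocity threshold and microvolt-to-degree gain are assumed placeholder values, and the statistical correction step described in the paper is not reproduced here.

      import numpy as np

      def estimate_gaze_angle(eog_uv, fs, vel_thresh_uv_s=2000.0, uv_per_deg=10.0):
          """Integrate detected saccades in a single-channel EOG trace (microvolts)
          into a running estimate of horizontal gaze angle (degrees).
          vel_thresh_uv_s and uv_per_deg are assumed, subject-specific constants."""
          velocity = np.gradient(eog_uv) * fs            # uV/s
          is_saccade = np.abs(velocity) > vel_thresh_uv_s
          gaze = np.zeros_like(eog_uv)
          angle = 0.0
          for i in range(1, len(eog_uv)):
              if is_saccade[i]:                           # accumulate only saccadic displacement
                  angle += (eog_uv[i] - eog_uv[i - 1]) / uv_per_deg
              gaze[i] = angle
          return gaze

      # Example: 2 s of synthetic EOG at 250 Hz with one 100 uV (~10 deg) saccade.
      fs = 250
      eog = np.zeros(2 * fs)
      eog[fs:] = 100.0
      print(estimate_gaze_angle(eog, fs)[-1])             # ~10 degrees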

  4. Intentional gaze shift to neglected space: a compensatory strategy during recovery after unilateral spatial neglect.

    PubMed

    Takamura, Yusaku; Imanishi, Maho; Osaka, Madoka; Ohmatsu, Satoko; Tominaga, Takanori; Yamanaka, Kentaro; Morioka, Shu; Kawashima, Noritaka

    2016-11-01

    Unilateral spatial neglect is a common neurological syndrome following predominantly right hemispheric stroke. While most patients lack insight into their neglect behaviour and do not initiate compensatory behaviours in the early recovery phase, some patients recognize it and start to pay attention towards the neglected space. We aimed to characterize visual attention capacity in patients with unilateral spatial neglect, with a specific focus on the cortical processes underlying compensatory gaze shift towards the neglected space during the recovery process. Based on the Behavioural Inattention Test score and the presence or absence of experience of neglect in daily life from stroke onset to the enrolment date, participants were divided into USN++ (do not compensate, n = 15), USN+ (compensate, n = 10), and right hemisphere damage (no neglect, n = 24) groups. The patients participated in eye pursuit-based choice reaction tasks and were asked to pursue one of five horizontally located circular objects flashed on a computer display. The task consisted of 25 trials with 4-s intervals, and the order of highlighted objects was randomly determined. From the recorded eye tracking data, eye movement onset and gaze shift were calculated. To elucidate the cortical mechanism underlying the behavioural results, electroencephalogram activity was recorded in three USN++, 13 USN+ and eight patients with right hemisphere damage. We found that while lower Behavioural Inattention Test scoring patients (USN++) showed gaze shift to non-neglected space, some higher scoring patients (USN+) showed clear leftward gaze shift at visual stimuli onset. Moreover, we found a significant correlation between Behavioural Inattention Test score and gaze shift extent in the unilateral spatial neglect group (r = -0.62, P < 0.01). Electroencephalography data clearly demonstrated that the extent of increase in theta power in the frontal cortex strongly correlated with the leftward gaze shift

  5. Attention Orienting by Gaze and Facial Expressions Across Development

    PubMed Central

    Neath, Karly; Nilsen, Elizabeth S.; Gittsovich, Katarzyna; Itier, Roxane J.

    2014-01-01

    Processing of facial expressions has been shown to potentiate orienting of attention toward the direction signaled by gaze in adults, an important social–cognitive function. However, little is known about how this social attention skill develops. This study is the first to examine the developmental trajectory of the gaze orienting effect (GOE), its modulations by facial expressions, and its links with theory of mind (ToM) abilities. Dynamic emotional stimuli were presented to 222 participants (7–25 years old) with normal trait anxiety using a gaze-cuing paradigm. The GOE was found as early as 7 years of age and decreased linearly until 12–13 years, at which point adult levels were reached. Both fearful and surprised expressions enhanced the GOE compared with neutral expressions. The GOE for fearful faces was also larger than for joyful and angry expressions. These effects did not interact with age and were not driven by intertrial variance. Importantly, the GOE did not correlate with ToM abilities as assessed by the “Reading the Mind in the Eyes” test. The implication of these findings for clinical and typically developing populations is discussed. PMID:23356559

  6. Gazing into Thin Air: The Dual-Task Costs of Movement Planning and Execution during Adaptive Gait

    PubMed Central

    Ellmers, Toby J.; Cocks, Adam J.; Doumas, Michail; Williams, A. Mark; Young, William R.

    2016-01-01

    We examined the effect of increased cognitive load on visual search behavior and measures of gait performance during locomotion. Also, we investigated how personality traits, specifically the propensity to consciously control or monitor movements (trait movement ‘reinvestment’), impacted the ability to maintain effective gaze under conditions of cognitive load. Healthy young adults traversed a novel adaptive walking path while performing a secondary serial subtraction task. Performance was assessed using correct responses to the cognitive task, gaze behavior, stepping accuracy, and time to complete the walking task. When walking while simultaneously carrying out the secondary serial subtraction task, participants visually fixated on task-irrelevant areas ‘outside’ the walking path more often and for longer durations of time, and fixated on task-relevant areas ‘inside’ the walkway for shorter durations. These changes were most pronounced in high-trait-reinvesters. We speculate that reinvestment-related processes placed an additional cognitive demand upon working memory. These increased task-irrelevant ‘outside’ fixations were accompanied by slower completion rates on the walking task and greater gross stepping errors. Findings suggest that attention is important for the maintenance of effective gaze behaviors, supporting previous claims that the maladaptive changes in visual search observed in high-risk older adults may be a consequence of inefficiencies in attentional processing. Identifying the underlying attentional processes that disrupt effective gaze behaviour during locomotion is an essential step in the development of rehabilitation, with this information allowing for the emergence of interventions that reduce the risk of falling. PMID:27824937

  7. Gaze maintenance and autism spectrum disorder.

    PubMed

    Kaye, Leah; Kurtz, Marie; Tierney, Cheryl; Soni, Ajay; Augustyn, Marilyn

    2014-01-01

    were equal and reactive without afferent pupillary defect, and normal visual tracking as assessed through pursuit and saccades. There were some head jerking motions observed which were not thought to be part of Chase's attempts to view objects. Gaze impersistence was noted, although it was not clear if this was due to a lack of attention or a true inability to maintain a gaze in the direction instructed. On review of the school's speech and language report, they state that he is >90% intelligible. He has occasional lip trills. Testing with the Clinical Evaluation of Language Fundamentals shows mild delays in receptive language, especially those that require visual attention. Verbal Motor Production Assessment for Children reveals focal oromotor control and sequencing skills that are below average, with groping when asked to imitate single oromotor nonspeech movements and sequenced double oromotor nonspeech movements. At 5½ years, he returns for follow-up, and he is outgoing and imaginative, eager to play and socialize. He makes eye contact but does not always maintain it. He asks and responds to questions appropriately, and he is able to follow verbal directions and verbal redirection. He is very interested in Toy Story characters but willing to share them and plays with other toys. Chase's speech has predictable, easy to decode sound substitutions. On interview with him, you feel that he has borderline cognitive abilities. He also demonstrates good eye contact but lack of visual gaze maintenance; this is the opposite of the pattern you are accustomed to in patients with autism spectrum disorder. What do you do next?

  8. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed Central

    Galvez-Ruiz, Alberto

    2015-01-01

    Purpose: The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). Methods: The clinical characteristics and management of nine patients with a history of CET secondary to motor vehicle accidents and associated vertical gaze paresis are presented. Results: Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region, corresponding to the artery of Percheron territory, in all nine patients; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of upward and downward gaze to a slight limitation of upward gaze. Discussion: Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of CET. PMID:26180479

  9. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed

    Galvez-Ruiz, Alberto

    2015-01-01

    The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). The clinical characteristics and management of nine patients with a history of CET secondary to motor vehicle accidents and associated vertical gaze paresis are presented. Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region, corresponding to the artery of Percheron territory, in all nine patients; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of upward and downward gaze to a slight limitation of upward gaze. Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of CET.

  10. Real-time estimation of horizontal gaze angle by saccade integration using in-ear electrooculography

    PubMed Central

    2018-01-01

    The manuscript proposes and evaluates a real-time algorithm for estimating eye gaze angle based solely on single-channel electrooculography (EOG), which can be obtained directly from the ear canal using conductive ear moulds. In contrast to conventional high-pass filtering, we used an algorithm that calculates absolute eye gaze angle via statistical analysis of detected saccades. The estimated eye positions of the new algorithm were still noisy. However, the performance in terms of Pearson product-moment correlation coefficients was significantly better than the conventional approach in some instances. The results suggest that in-ear EOG signals captured with conductive ear moulds could serve as a basis for light-weight and portable horizontal eye gaze angle estimation suitable for a broad range of applications. For instance, for hearing aids to steer the directivity of microphones in the direction of the user’s eye gaze. PMID:29304120

  11. Dysthyroid Orbitopathy Presenting with Gaze-Evoked Amaurosis: Case Report and Review of the Literature.

    PubMed

    Orlans, Harry O; Bremner, Fion D

    2015-01-01

    Gaze-evoked amaurosis (GEA) describes visual loss associated with eccentric gaze that recovers when the eye is returned to primary position. Here we describe an unusual case of bilateral GEA as the presenting feature of dysthyroid orbitopathy. This is only the third such case to be reported in the literature and the first to feature bilateral GEA in all positions of gaze without accompanying proptosis or ophthalmoplegia. A 50-year-old man who had recently commenced treatment for thyrotoxicosis presented with a 3-week history of typical GEA in both eyes in all positions of gaze. He subsequently developed a bilateral compressive optic neuropathy which was only partially responsive to high dose steroid therapy. Although an uncommon presenting feature of dysthyroid orbitopathy, GEA is an ominous symptom that may precede sight-threatening optic nerve compromise. When present, early immunosuppressive and/or decompressive treatment should be considered.

  12. Intranasal Oxytocin Treatment Increases Eye-Gaze Behavior toward the Owner in Ancient Japanese Dog Breeds

    PubMed Central

    Nagasawa, Miho; Ogawa, Misato; Mogi, Kazutaka; Kikusui, Takefumi

    2017-01-01

    Dogs acquired unique cognitive abilities during domestication, which is thought to have contributed to the formation of the human-dog bond. In European breeds, but not in wolves, a dog’s gazing behavior plays an important role in affiliative interactions with humans and stimulates oxytocin secretion in both humans and dogs, which suggests that this interspecies oxytocin and gaze-mediated bonding was also acquired during domestication. In this study, we investigated whether Japanese breeds, which are classified as ancient breeds and are relatively close to wolves genetically, establish a bond with their owners through gazing behavior. The subject dogs were treated with either oxytocin or saline before the starting of the behavioral testing. We also evaluated physiological changes in the owners during mutual gazing by analyzing their heart rate variability (HRV) and subsequent urinary oxytocin levels in both dogs and their owners. We found that oxytocin treatment enhanced the gazing behavior of Japanese dogs and increased their owners’ urinary oxytocin levels, as was seen with European breeds; however, the measured durations of skin contact and proximity to their owners were relatively low. In the owners’ HRV readings, inter-beat (R-R) intervals (RRI), the standard deviation of normal to normal inter-beat (R-R) intervals (SDNN), and the root mean square of successive heartbeat interval differences (RMSSD) were lower when the dogs were treated with oxytocin compared with saline. Furthermore, the owners of female dogs showed lower SDNN than the owners of male dogs. These results suggest that the owners of female Japanese dogs exhibit more tension during interactions, and apart from gazing behavior, the dogs may show sex differences in their interactions with humans as well. They also suggest that Japanese dogs use eye-gazing as an attachment behavior toward humans similar to European breeds; however, there is a disparity between the dog sexes when it comes to the

  13. Intranasal Oxytocin Treatment Increases Eye-Gaze Behavior toward the Owner in Ancient Japanese Dog Breeds.

    PubMed

    Nagasawa, Miho; Ogawa, Misato; Mogi, Kazutaka; Kikusui, Takefumi

    2017-01-01

    Dogs acquired unique cognitive abilities during domestication, which is thought to have contributed to the formation of the human-dog bond. In European breeds, but not in wolves, a dog's gazing behavior plays an important role in affiliative interactions with humans and stimulates oxytocin secretion in both humans and dogs, which suggests that this interspecies oxytocin and gaze-mediated bonding was also acquired during domestication. In this study, we investigated whether Japanese breeds, which are classified as ancient breeds and are relatively close to wolves genetically, establish a bond with their owners through gazing behavior. The subject dogs were treated with either oxytocin or saline before the starting of the behavioral testing. We also evaluated physiological changes in the owners during mutual gazing by analyzing their heart rate variability (HRV) and subsequent urinary oxytocin levels in both dogs and their owners. We found that oxytocin treatment enhanced the gazing behavior of Japanese dogs and increased their owners' urinary oxytocin levels, as was seen with European breeds; however, the measured durations of skin contact and proximity to their owners were relatively low. In the owners' HRV readings, inter-beat (R-R) intervals (RRI), the standard deviation of normal to normal inter-beat (R-R) intervals (SDNN), and the root mean square of successive heartbeat interval differences (RMSSD) were lower when the dogs were treated with oxytocin compared with saline. Furthermore, the owners of female dogs showed lower SDNN than the owners of male dogs. These results suggest that the owners of female Japanese dogs exhibit more tension during interactions, and apart from gazing behavior, the dogs may show sex differences in their interactions with humans as well. They also suggest that Japanese dogs use eye-gazing as an attachment behavior toward humans similar to European breeds; however, there is a disparity between the dog sexes when it comes to the

  14. Gaze anchoring guides real but not pantomime reach-to-grasp: support for the action-perception theory.

    PubMed

    Kuntz, Jessica R; Karl, Jenni M; Doan, Jon B; Whishaw, Ian Q

    2018-04-01

    Reach-to-grasp movements feature the integration of a reach directed by the extrinsic (location) features of a target and a grasp directed by the intrinsic (size, shape) features of a target. The action-perception theory suggests that integration and scaling of a reach-to-grasp movement, including its trajectory and the concurrent digit shaping, are features that depend upon online action pathways of the dorsal visuomotor stream. Scaling is much less accurate for a pantomime reach-to-grasp movement, a pretend reach with the target object absent. Thus, the action-perception theory proposes that pantomime movement is mediated by perceptual pathways of the ventral visuomotor stream. A distinguishing visual feature of a real reach-to-grasp movement is gaze anchoring, in which a participant visually fixates the target throughout the reach and disengages, often by blinking or looking away/averting the head, at about the time that the target is grasped. The present study examined whether gaze anchoring is associated with pantomime reaching. The eye and hand movements of participants were recorded as they reached for a ball of one of three sizes, located on a pedestal at arms' length, or pantomimed the same reach with the ball and pedestal absent. The kinematic measures for real reach-to-grasp movements were coupled to the location and size of the target, whereas the kinematic measures for pantomime reach-to-grasp, although grossly reflecting target features, were significantly altered. Gaze anchoring was also tightly coupled to the target for real reach-to-grasp movements, but there was no systematic focus for gaze, either in relation with the virtual target, the previous location of the target, or the participant's reaching hand, for pantomime reach-to-grasp. The presence of gaze anchoring during real vs. its absence in pantomime reach-to-grasp supports the action-perception theory that real, but not pantomime, reaches are online visuomotor actions and is discussed in

  15. PyGaze: an open-source, cross-platform toolbox for minimal-effort programming of eyetracking experiments.

    PubMed

    Dalmaijer, Edwin S; Mathôt, Sebastiaan; Van der Stigchel, Stefan

    2014-12-01

    The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments in Python syntax with the least possible effort, and it offers programming ease and script readability without constraining functionality and flexibility. PyGaze can be used for visual and auditory stimulus presentation; for response collection via keyboard, mouse, joystick, and other external hardware; and for the online detection of eye movements using a custom algorithm. A wide range of eyetrackers of different brands (EyeLink, SMI, and Tobii systems) are supported. The novelty of PyGaze lies in providing an easy-to-use layer on top of the many different software libraries that are required for implementing eyetracking experiments. Essentially, PyGaze is a software bridge for eyetracking research.
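    A minimal PyGaze script typically opens a display, draws a stimulus screen, and talks to whatever eye tracker is configured in the project's constants file. The sketch below follows the class and method names as they appear in the PyGaze documentation, but they should be checked against the installed version; a 'dummy' tracker type is convenient for testing without hardware.

      # Minimal PyGaze sketch: show a fixation cross and record gaze for ~2 seconds.
      # Names follow the PyGaze docs and may differ between versions; the tracker
      # type (e.g. 'dummy' for testing) is set in the project's constants.py.
      import time
      from pygaze.display import Display
      from pygaze.screen import Screen
      from pygaze.eyetracker import EyeTracker

      disp = Display()                      # opens a window using the constants.py settings
      scr = Screen()
      scr.draw_fixation(fixtype='cross')    # central fixation cross

      tracker = EyeTracker(disp)
      tracker.calibrate()
      tracker.start_recording()

      disp.fill(scr)
      disp.show()
      time.sleep(2)                         # keep the fixation on screen
      print(tracker.sample())               # most recent (x, y) gaze position

      tracker.stop_recording()
      tracker.close()
      disp.close()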

  16. Upbeat nystagmus changes to downbeat nystagmus with upward gaze in a patient with Wernicke's encephalopathy.

    PubMed

    Shin, Byoung-Soo; Oh, Sun-Young; Kim, Ji Soo; Lee, Hyung; Kim, Eui-Jung; Hwang, Seung-Bae

    2010-11-15

    We describe a patient with Wernicke's encephalopathy who showed spontaneous upbeat nystagmus with decelerating slow phases that changed to downbeat nystagmus during upward gaze and increased during downward gaze. He also showed horizontal gaze-evoked nystagmus and impaired upward smooth pursuit. Magnetic resonance imaging demonstrated symmetric lesions involving the bilateral medial thalami, periaqueductal gray matters and inferior cerebellar peduncles. In this patient, the decelerating slow phases and disobedience to Alexander's law of upbeat nystagmus suggest both deficient (leaky) and unstable neural integrators subserving vertical eye motion. Dysfunction of the interstitial nucleus of Cajal or its descending pathway to the vestibulocerebellum via the paramedian tract cell groups may be responsible for the upbeat nystagmus and its modulation by gazes in our patient with Wernicke's encephalopathy. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Unaddressed participants’ gaze in multi-person interaction: optimizing recipiency

    PubMed Central

    Holler, Judith; Kendrick, Kobin H.

    2015-01-01

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation toward on-line processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question–response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze toward the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns. PMID

  18. It Takes Time and Experience to Learn How to Interpret Gaze in Mentalistic Terms

    ERIC Educational Resources Information Center

    Leavens, David A.

    2006-01-01

    What capabilities are required for an organism to evince an "explicit" understanding of gaze as a mentalistic phenomenon? One possibility is that mentalistic interpretations of gaze, like concepts of unseen, supernatural beings, are culturally-specific concepts, acquired through cultural learning. These abstract concepts may either require a…

  19. Method of Menu Selection by Gaze Movement Using AC EOG Signals

    NASA Astrophysics Data System (ADS)

    Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu

    A method to detect the direction and the distance of voluntary eye gaze movement from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into 8-class directions and 2-class distances of voluntary eye gaze movements. The horizontal and vertical EOGs at each sampling time during an eye gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. With classification using the k-nearest neighbor algorithm, the average correct detection rates for the individual subjects were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of a very small signal. It could be useful for developing robust human-interface systems based on menu selection for severely paralyzed patients.
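    The feature extraction and classification steps described above are straightforward to prototype: each gaze movement becomes one 2-D feature (the centre of gravity of the large-norm EOG samples), which a k-nearest-neighbour classifier assigns to one of the direction/distance classes. The sketch below uses random placeholder data and an assumed k of 3 purely for illustration.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def eog_feature(h_eog, v_eog, frac=0.8):
          """Centre of gravity of (h, v) EOG samples whose norm exceeds frac * max norm."""
          xy = np.column_stack([h_eog, v_eog])
          norms = np.linalg.norm(xy, axis=1)
          return xy[norms >= frac * norms.max()].mean(axis=0)

      # Hypothetical training data: one feature vector per recorded gaze movement,
      # labelled with one of 8 directions x 2 distances (16 classes).
      rng = np.random.default_rng(0)
      train_feats = rng.standard_normal((160, 2))       # placeholder for real features
      train_labels = np.repeat(np.arange(16), 10)

      knn = KNeighborsClassifier(n_neighbors=3).fit(train_feats, train_labels)
      new_move = eog_feature(rng.standard_normal(200), rng.standard_normal(200))
      print(knn.predict([new_move]))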

  20. Absence of Sex-Contingent Gaze Direction Aftereffects Suggests a Limit to Contingencies in Face Aftereffects

    PubMed Central

    Kloth, Nadine; Rhodes, Gillian; Schweinberger, Stefan R.

    2015-01-01

    Face aftereffects (e.g., expression aftereffects) can be simultaneously induced in opposite directions for different face categories (e.g., male and female faces). Such aftereffects are typically interpreted as indicating that distinct neural populations code the categories on which adaptation is contingent, e.g., male and female faces. Moreover, they suggest that these distinct populations selectively respond to variations in the secondary stimulus dimension, e.g., emotional expression. However, contingent aftereffects have now been reported for so many different combinations of face characteristics, that one might question this interpretation. Instead, the selectivity might be generated during the adaptation procedure, for instance as a result of associative learning, and not indicate pre-existing response selectivity in the face perception system. To alleviate this concern, one would need to demonstrate some limit to contingent aftereffects. Here, we report a clear limit, showing that gaze direction aftereffects are not contingent on face sex. We tested 36 young Caucasian adults in a gaze adaptation paradigm. We initially established their ability to discriminate the gaze direction of male and female test faces in a pre-adaptation phase. Afterwards, half of the participants adapted to female faces looking left and male faces looking right, and half adapted to the reverse pairing. We established the effects of this adaptation on the perception of gaze direction in subsequently presented male and female test faces. We found that adaptation induced pronounced gaze direction aftereffects, i.e., participants were biased to perceive small gaze deviations to both the left and right as direct. Importantly, however, aftereffects were identical for male and female test faces, showing that the contingency of face sex and gaze direction participants experienced during the adaptation procedure had no effect. PMID:26648890

  1. Transcranial magnetic stimulation over the cerebellum delays predictive head movements in the coordination of gaze.

    PubMed

    Zangemeister, W H; Nagel, M

    2001-01-01

    We investigated coordinated saccadic eye and head movements following predictive horizontal visual targets at +/- 30 degrees by applying transcranial magnetic stimulation (TMS) over the cerebellum before the start of the gaze movement in 10 young subjects. We found three effects of TMS on eye-head movements: 1. Saccadic latency effect. When stimulation took place shortly before movements commenced (75-25 ms before), significantly shorter latencies were found between predictive target presentation and initiation of saccades. Eye latencies were significantly decreased by 45 ms on average, but head latencies were not. 2. Gaze amplitude effect. Without TMS, for the 60 degrees target amplitudes, head movements usually preceded eye movements, as expected (predictive gaze type 3). With TMS 5-75 ms before the gaze movement, the number of eye movements preceding head movements by 20-50 ms was significantly increased (p < 0.001) and the delay between eye and head movements was reversed (p < 0.001), i.e. we found eye-predictive gaze type 1. 3. Saccadic peak velocity effect. For TMS 5-25 ms before the start of head movement, mean peak velocity of synkinetic eye saccades increased by 20-30% up to 600 degrees/s, compared to 350-400 degrees/s without TMS. We conclude that transient functional cerebellar deficits exerted by means of TMS can change the central synkinesis of eye-head coordination, including the preprogramming of the saccadic pulse and step of a coordinated gaze movement.

  2. Great apes are sensitive to prior reliability of an informant in a gaze following task.

    PubMed

    Schmid, Benjamin; Karg, Katja; Perner, Josef; Tomasello, Michael

    2017-01-01

    Social animals frequently rely on information from other individuals. This can be costly if the other individual is mistaken or even deceptive. Human infants below 4 years of age show proficiency in their reliance on differently reliable informants. They can infer the reliability of an informant from few interactions and use that assessment in later interactions with the same informant in a different context. To explore whether great apes share that ability, we confronted them with a reliable or an unreliable informant in an object choice task, and then examined whether this affected their gaze following behaviour in response to the same informant in a subsequent task. Prior reliability of the informant and habituation during the gaze following task affected both the apes' automatic gaze following response and their more deliberate response of gaze following behind barriers. As habituation is very context specific, it is unlikely that habituation in the reliability task affected the gaze following task. Rather, it seems that apes employ a reliability tracking strategy that results in a general avoidance of additional information from an unreliable informant.

  3. The robustness of the horizontal gaze nystagmus test

    DOT National Transportation Integrated Search

    2007-09-01

    Police officers follow procedures set forth in the NHTSA/IACP curriculum when they administer the Standardized Field Sobriety Tests (SFSTs) to suspected alcohol-impaired drivers. The SFSTs include Horizontal Gaze Nystagmus (HGN) test, Walk-and-Turn (...

  4. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
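    Point-of-gaze quality in studies like this is commonly summarized as accuracy (mean angular offset from a known target), precision (RMS of sample-to-sample dispersion), and trackability (the fraction of valid samples). A small sketch of these standard metrics, assuming gaze and target positions are already expressed in degrees of visual angle:

      import numpy as np

      def gaze_quality(gaze_deg, target_deg):
          """gaze_deg: (N, 2) point-of-gaze samples in degrees; NaN rows mark data loss.
          target_deg: (2,) position of the fixated target in degrees."""
          valid = ~np.isnan(gaze_deg).any(axis=1)
          trackability = valid.mean()                               # fraction of valid samples
          g = gaze_deg[valid]
          accuracy = np.linalg.norm(g - target_deg, axis=1).mean()  # mean offset from target
          precision = np.sqrt(np.mean(np.sum(np.diff(g, axis=0) ** 2, axis=1)))  # RMS sample-to-sample
          return accuracy, precision, trackability

      samples = np.array([[0.1, 0.2], [0.2, 0.1], [np.nan, np.nan], [0.15, 0.25]])
      print(gaze_quality(samples, np.array([0.0, 0.0])))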

  5. Steering by hearing: a bat's acoustic gaze is linked to its flight motor output by a delayed, adaptive linear law.

    PubMed

    Ghose, Kaushik; Moss, Cynthia F

    2006-02-08

    Adaptive behaviors require sensorimotor computations that convert information represented initially in sensory coordinates to commands for action in motor coordinates. Fundamental to these computations is the relationship between the region of the environment sensed by the animal (gaze) and the animal's locomotor plan. Studies of visually guided animals have revealed an anticipatory relationship between gaze direction and the locomotor plan during target-directed locomotion. Here, we study an acoustically guided animal, an echolocating bat, and relate acoustic gaze (direction of the sonar beam) to flight planning as the bat searches for and intercepts insect prey. We show differences in the relationship between gaze and locomotion as the bat progresses through different phases of insect pursuit. We define the acoustic gaze angle, theta_gaze, to be the angle between the sonar beam axis and the bat's flight path. We show that there is a strong linear linkage between the acoustic gaze angle at time t, theta_gaze(t), and the flight turn rate at time t + tau into the future, theta_flight(t + tau), which can be expressed by the formula theta_flight(t + tau) = k * theta_gaze(t). The gain, k, of this linkage depends on the bat's behavioral state, which is indexed by its sonar pulse rate. For high pulse rates, associated with insect attacking behavior, k is twice as high compared with low pulse rates, associated with searching behavior. We suggest that this adjustable linkage between acoustic gaze and motor output in a flying echolocating bat simplifies the transformation of auditory information to flight motor commands.
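    A worked illustration of the reported control law follows: the flight turn rate applied tau seconds later is a gain times the current acoustic gaze angle, with the gain roughly doubling when the pulse rate indicates attack rather than search. The gains, delay, and pulse-rate cutoff below are illustrative placeholders, not the values estimated in the study.

      import numpy as np

      def predicted_turn_rate(gaze_angle_deg, pulse_rate_hz,
                              k_search=1.0, k_attack=2.0, attack_rate_hz=50.0):
          """Predicted turn rate applied tau seconds later:
          theta_flight(t + tau) = k * theta_gaze(t), with k roughly doubling
          when the pulse rate indicates attack rather than search.
          All constants here are illustrative, not the study's estimates."""
          k = k_attack if pulse_rate_hz >= attack_rate_hz else k_search
          return k * np.asarray(gaze_angle_deg)

      # Gaze 15 deg off the flight path while searching vs. attacking:
      print(predicted_turn_rate(15.0, pulse_rate_hz=20))   # searching: lower gain
      print(predicted_turn_rate(15.0, pulse_rate_hz=80))   # attacking: roughly double the turn rate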

  6. Identifying head-trunk and lower limb contributions to gaze stabilization during locomotion

    NASA Technical Reports Server (NTRS)

    Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2002-01-01

    The goal of the present study was to determine how the multiple, interdependent full-body sensorimotor subsystems respond to a change in gaze stabilization task constraints during locomotion. Nine subjects performed two gaze stabilization tasks while walking at 6.4 km/hr on a motorized treadmill: 1) focusing on a central point target; 2) reading numeral characters; both presented at 2 m in front at the level of their eyes. While subjects performed the tasks we measured: temporal parameters of gait, full body sagittal plane segmental kinematics of the head, trunk, thigh, tibia and foot, accelerations along the vertical axis at the head and the tibia, and the vertical forces acting on the support surface. We tested the hypothesis that with the increased demands placed on visual acuity during the number recognition task, subjects would modify full-body segmental kinematics in order to reduce perturbations to the head in order to successfully perform the task. We found that while reading numeral characters as compared to the central point target: 1) compensatory head pitch movement was on average 22% greater despite the fact that the trunk pitch and trunk vertical translation movement control were not significantly changed; 2) coordination patterns between head and trunk as reflected by the peak cross correlation between the head pitch and trunk pitch motion as well as the peak cross correlation between the head pitch and vertical trunk translation motion were not significantly changed; 3) knee joint total movement was on average 11% greater during the period from the heel strike event to the peak knee flexion event in stance phase of the gait cycle; 4) peak acceleration measured at the head was significantly reduced by an average of 13% in four of the six subjects. This was so even when the peak acceleration at the tibia and the transmission of the shock wave at heel strike (measured by the peak acceleration ratio of the head/tibia and the time lag between the tibial
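    The head-trunk coordination measures above are peak cross-correlations between segment kinematics. A brief sketch of how a peak normalized cross-correlation and its lag could be computed for two equally sampled signals; the sampling rate and the 50 ms head-trunk lag in the example are made-up values.

      import numpy as np

      def peak_xcorr(x, y, fs):
          """Peak normalized cross-correlation between two equally sampled signals and
          the lag (seconds) at which it occurs; a positive lag means y lags behind x."""
          x = (x - x.mean()) / x.std()
          y = (y - y.mean()) / y.std()
          corr = np.correlate(y, x, mode='full') / len(x)
          lags = np.arange(-len(x) + 1, len(x))
          i = np.argmax(np.abs(corr))
          return corr[i], lags[i] / fs

      fs = 100  # Hz
      t = np.arange(0, 5, 1 / fs)
      head_pitch = np.sin(2 * np.pi * 1.0 * t)             # ~1 Hz pitch oscillation
      trunk_pitch = np.sin(2 * np.pi * 1.0 * (t - 0.05))   # trunk lags head by 50 ms
      print(peak_xcorr(head_pitch, trunk_pitch, fs))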

  7. Identifying Head-Trunk and Lower Limb Contributions to Gaze Stabilization During Locomotion

    NASA Technical Reports Server (NTRS)

    Mulavara, Ajitkumar P.; Bloomberg, Jacob J.

    2003-01-01

    The goal of the present study was to determine how the multiple, interdependent full-body sensorimotor subsystems respond to a change in gaze stabilization task constraints during locomotion. Nine subjects performed two gaze stabilization tasks while walking at 6.4 km/hr on a motorized treadmill: 1) focusing on a central point target; 2) reading numeral characters; both presented at 2 m in front at the level of their eyes. While subjects performed the tasks we measured: temporal parameters of gait, full body sagittal plane segmental kinematics of the head, trunk, thigh, shank and foot, accelerations along the vertical axis at the head and the shank, and the vertical forces acting on the support surface. We tested the hypothesis that with the increased demands placed on visual acuity during the number recognition task, subjects would modify full-body segmental kinematics in order to reduce perturbations to the head in order to successfully perform the task. We found that while reading numeral characters as compared to the central point target: 1) compensatory head pitch movement was on average 22% greater despite the fact that the trunk pitch and trunk vertical translation movement control were not significantly changed; 2) coordination patterns between head and trunk as reflected by the peak cross correlation between the head pitch and trunk pitch motion as well as the peak cross correlation between the head pitch and vertical trunk translation motion were not significantly changed; 3) knee joint total movement was on average 11% greater during the period from the heel strike event to the peak knee flexion event in stance phase of the gait cycle; 4) peak acceleration measured at the head was significantly reduced by an average of 13% in four of the six subjects. This was so even when the peak acceleration at the shank and the transmissibility of the shock wave at heel strike (measured by the peak acceleration ratio of the head/shank) remained unchanged. Taken

  8. Evaluating gaze-driven power wheelchair with navigation support for persons with disabilities.

    PubMed

    Wästlund, Erik; Sponseller, Kay; Pettersson, Ola; Bared, Anders

    2015-01-01

    This article describes a novel add-on for powered wheelchairs that is composed of a gaze-driven control system and a navigation support system. The add-on was tested by three users. All of the users were individuals with severe disabilities and no possibility of moving independently. The system is an add-on to a standard power wheelchair and can be customized for different levels of support according to the cognitive level, motor control, perceptual skills, and specific needs of the user. The primary aim of this study was to test the functionality and safety of the system in the user's home environment. The secondary aim was to evaluate whether access to a gaze-driven powered wheelchair with navigation support is perceived as meaningful in terms of independence and participation. The results show that the system has the potential to provide safe, independent indoor mobility and that the users perceive doing so as fun, meaningful, and a way to reduce dependency on others. Independent mobility has numerous benefits in addition to psychological and emotional well-being. By observing users' actions, caregivers and healthcare professionals can assess the individual's capabilities, which was not previously possible. Rehabilitation can be better adapted to the individual's specific needs, and driving a wheelchair independently can be a valuable, motivating training tool.

  9. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    PubMed

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. For both results, the outcomes led to trajectories that were significantly smoother and apt at reaching fixed or moving targets with relative ease and within a 5% error margin or deviation from desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
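    A simplified sketch of the idea of learning a user-specific de-jittering map follows: a small multilayer perceptron is trained on a short calibration run to map windows of raw gaze samples to the smooth target trajectory the user was following. The window length, network size, noise level, and use of scikit-learn are assumptions made for illustration, not the configuration used in the study.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      WIN = 5  # number of consecutive raw gaze samples fed to the network (assumed)

      def windows(raw_xy, win=WIN):
          """Stack `win` consecutive (x, y) gaze samples into one input vector each."""
          return np.array([raw_xy[i:i + win].ravel() for i in range(len(raw_xy) - win + 1)])

      # Calibration: the user follows a known target; raw gaze = target + jitter.
      rng = np.random.default_rng(0)
      t = np.linspace(0, 2 * np.pi, 500)
      target = np.column_stack([np.cos(t), np.sin(t)]) * 300 + 500      # pixels
      raw = target + rng.normal(0, 25, target.shape)                    # jittery gaze

      X, y = windows(raw), target[WIN - 1:]
      mlp = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0).fit(X, y)

      smoothed = mlp.predict(windows(raw))
      print("raw jitter :", np.abs(raw[WIN - 1:] - y).mean())
      print("MLP output :", np.abs(smoothed - y).mean())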

  10. A novel approach to training attention and gaze in ASD: A feasibility and efficacy pilot study.

    PubMed

    Chukoskie, Leanne; Westerfield, Marissa; Townsend, Jeanne

    2018-05-01

    In addition to the social, communicative and behavioral symptoms that define the disorder, individuals with ASD have difficulty re-orienting attention quickly and accurately. Similarly, fast re-orienting saccadic eye movements are also inaccurate and more variable in both endpoint and timing. Atypical gaze and attention are among the earliest symptoms observed in ASD. Disruption of these foundation skills critically affects the development of higher level cognitive and social behavior. We propose that interventions aimed at these early deficits that support social and cognitive skills will be broadly effective. We conducted a pilot clinical trial designed to demonstrate the feasibility and preliminary efficacy of using gaze-contingent video games for low-cost in-home training of attention and eye movement. Eight adolescents with ASD participated in an 8-week training, with pre-, mid- and post-testing of eye movement and attention control. Six of the eight adolescents completed the 8 weeks of training, and all six showed improvement in attention (orienting, disengagement), eye movement control, or both. All game systems remained intact for the duration of training and all participants could use the system independently. We delivered a robust, low-cost, gaze-contingent game system for home use that, in our pilot training sample, improved the attention orienting and eye movement performance of adolescent participants in 8 weeks of training. We are currently conducting a clinical trial to replicate these results and to examine what, if any, aspects of training transfer to more real-world tasks. © 2017 Wiley Periodicals, Inc. Develop Neurobiol 78: 546-554, 2018.

  11. Women gaze behaviour in assessing female bodies: the effects of clothing, body size, own body composition and body satisfaction.

    PubMed

    Cundall, Amelia; Guo, Kun

    2017-01-01

    Previous studies, often using minimally clothed figures depicting extreme body sizes, have shown that women tend to gaze at evolutionary determinants of attractiveness when viewing female bodies, possibly for self-evaluation purposes, and that their gaze distribution is modulated by their own level of body dissatisfaction. To explore to what extent women's body-viewing gaze behaviour is affected by clothing type, dress size, subjective measurements of regional body satisfaction, and objective measurements of own body composition (e.g., chest size, body mass index, waist-to-hip ratio), we compared healthy young women's gaze distributions in a self-paced body attractiveness and body size judgement experiment when they viewed female bodies in tight and loose clothing of different dress sizes. In contrast to tight clothing, loose clothing biased gaze away from the waist-hip region to the leg region and subsequently led to enhanced body attractiveness ratings and body size underestimation for larger female bodies, indicating the important role of clothing in mediating women's body perception. When viewing preferred female bodies, women's higher satisfaction with a specific body region was associated with an increased gaze towards neighbouring body areas, implying that satisfaction might reduce the need for comparison of confident body parts; furthermore, undesirable body composition measurements were correlated with a gaze avoidance process when the construct was less changeable (i.e., chest size) but with a gaze comparison process when the region was more changeable (i.e., body mass index, dress size). Clearly, own body satisfaction and body composition measurements had an evident impact on women's body-viewing gaze allocation, possibly through different cognitive processes.

  12. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    PubMed

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for and documenting information. A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors, where patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also followed by doctor gaze patterns to a significant degree, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology interaction were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. Published by Elsevier Ireland Ltd.
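    Lag sequential analysis of coded gaze streams boils down to counting how often one gaze state immediately follows another and comparing those counts with what chance pairing would predict, for instance via adjusted residuals. A compact sketch under assumed state codes; the labels and example stream below are invented for illustration.

      import numpy as np

      def lag1_analysis(codes, states):
          """Lag-1 transition counts and adjusted residuals (z-scores) for a coded gaze stream."""
          idx = {s: i for i, s in enumerate(states)}
          n = len(states)
          counts = np.zeros((n, n))
          for a, b in zip(codes[:-1], codes[1:]):
              counts[idx[a], idx[b]] += 1
          total = counts.sum()
          row_p = counts.sum(axis=1, keepdims=True) / total
          col_p = counts.sum(axis=0, keepdims=True) / total
          expected = total * row_p * col_p
          z = (counts - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))
          return counts, z

      states = ["doctor_gazes_patient", "patient_gazes_doctor", "doctor_gazes_EHR"]
      stream = ["doctor_gazes_patient", "patient_gazes_doctor", "doctor_gazes_EHR",
                "doctor_gazes_patient", "patient_gazes_doctor", "doctor_gazes_patient"]
      counts, z = lag1_analysis(stream, states)
      print(counts)
      print(np.round(z, 2))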

  13. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    PubMed Central

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for and documenting information. Methods A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors, where patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed that several eye gaze patterns were significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also followed by doctor gaze patterns to a significant degree, unlike the findings of previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology interaction were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. PMID:24380671

  14. Eye-Tracking Technology and the Dynamics of Natural Gaze Behavior in Sports: A Systematic Review of 40 Years of Research.

    PubMed

    Kredel, Ralf; Vater, Christian; Klostermann, André; Hossner, Ernst-Joachim

    2017-01-01

    Reviewing 60 studies on natural gaze behavior in sports, it becomes clear that, over the last 40 years, the use of eye-tracking devices has considerably increased. Specifically, this review reveals the large variance of methods applied, analyses performed, and measures derived within the field. The results of sub-sample analyses suggest that sports-related eye-tracking research strives, on the one hand, for ecologically valid test settings (i.e., viewing conditions and response modes), while on the other, for experimental control along with high measurement accuracy (i.e., controlled test conditions with high-frequency eye-trackers linked to algorithmic analyses). To meet both demands, some promising methodological compromises have been proposed, in particular the integration of robust mobile eye-trackers into motion-capture systems. However, as the fundamental trade-off between laboratory and field research cannot be solved by technological means, researchers need to carefully weigh the arguments for one or the other approach by accounting for the respective consequences. Nevertheless, for future research on dynamic gaze behavior in sports, further development of the current mobile eye-tracking methodology seems highly advisable to allow for the acquisition and algorithmic analysis of larger amounts of gaze data and, further, to increase the explanatory power of the derived results.

  15. Fear of Negative Evaluation Influences Eye Gaze in Adolescents with Autism Spectrum Disorder: A Pilot Study

    ERIC Educational Resources Information Center

    White, Susan W.; Maddox, Brenna B.; Panneton, Robin K.

    2015-01-01

    Social anxiety is common among adolescents with Autism Spectrum Disorder (ASD). In this modest-sized pilot study, we examined the relationship between social worries and gaze patterns to static social stimuli in adolescents with ASD (n = 15) and gender-matched adolescents without ASD (control; n = 18). Among cognitively unimpaired adolescents with…

  16. Gazing toward humans: a study on water rescue dogs using the impossible task paradigm.

    PubMed

    D'Aniello, Biagio; Scandurra, Anna; Prato-Previde, Emanuela; Valsecchi, Paola

    2015-01-01

    Various studies have assessed the role of life experiences, including learning opportunities, living conditions and the quality of dog-human relationships, in the use of human cues and problem-solving ability. The current study investigates how and to what extent training affects the behaviour of dogs and their communication with humans by comparing dogs trained for a water rescue service and untrained pet dogs in the impossible task paradigm. Twenty-three certified water rescue dogs (the water rescue group) and 17 dogs with no training experience (the untrained group) were tested using a modified version of the impossible task described by Marshall-Pescini et al. in 2009. The results demonstrated that the water rescue dogs directed their first gaze significantly more often towards the owner and spent more time gazing toward the two people compared to the untrained pet dogs. There was no difference between the dogs of the two groups in the amount of time spent gazing at the owner or the stranger, nor in their interaction with the apparatus while attempting to obtain food. The specific training regime, aimed at promoting cooperation during the performance of water rescue, could account for the longer gazing behaviour shown toward people by the water rescue dogs and for their priority of gazing toward the owner. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Gaze Behavior of Gymnastics Judges: Where Do Experienced Judges and Gymnasts Look While Judging?

    PubMed

    Pizzera, Alexandra; Möller, Carsten; Plessner, Henning

    2018-03-01

    Gymnastics judges and former gymnasts have been shown to be quite accurate in detecting errors and accurately judging performance. The purpose of the current study was to examine if this superior judging performance is reflected in judges' gaze behavior. Thirty-five judges were asked to judge 21 gymnasts who performed a skill on the vault in a video-based test. Classifying 1 sample on 2 different criteria, judging performance and gaze behavior were compared between judges with a higher license level and judges with a lower license level and between judges who were able to perform the skill (specific motor experience [SME]) and those who were not. The results revealed better judging performance among judges with a higher license level compared with judges with a lower license level and more fixations on the gymnast during the whole skill and the landing phase, specifically on the head and arms of the gymnast. Specific motor experience did not result in any differences in judging performance; however, judges with SME showed similar gaze patterns to those of judges with a high license level, with 1 difference in their increased focus on the gymnasts' feet. Superior judging performance seems to be reflected in a specific gaze behavior. This gaze behavior appears to partly stem from judges' own sensorimotor experiences for this skill and reflects the gymnasts' perspective onto the skill.

  18. Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory-Motor Transformation.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2016-01-01

    The frontal eye fields (FEFs) participate in both working memory and sensorimotor transformations for saccades, but their role in integrating these functions through time remains unclear. Here, we tracked FEF spatial codes through time using a novel analytic method applied to the classic memory-delay saccade task. Three-dimensional recordings of head-unrestrained gaze shifts were made in two monkeys trained to make gaze shifts toward briefly flashed targets after a variable delay (450-1500 ms). A preliminary analysis of visual and motor response fields in 74 FEF neurons eliminated most potential models for spatial coding at the neuron population level, as in our previous study (Sajad et al., 2015). We then focused on the spatiotemporal transition from an eye-centered target code (T; preferred in the visual response) to an eye-centered intended gaze position code (G; preferred in the movement response) during the memory delay interval. We treated neural population codes as a continuous spatiotemporal variable by dividing the space spanning T and G into intermediate T-G models and dividing the task into discrete steps through time. We found that FEF delay activity, especially in visuomovement cells, progressively transitions from T through intermediate T-G codes that approach, but do not reach, G. This was followed by a final discrete transition from these intermediate T-G delay codes to a "pure" G code in movement cells without delay activity. These results demonstrate that FEF activity undergoes a series of sensory-memory-motor transformations, including a dynamically evolving spatial memory signal and an imperfect memory-to-motor transformation.
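    A minimal sketch of the intermediate target-to-gaze (T-G) model-fitting idea described in this record, assuming simulated trial data and a Gaussian response field; the variable names, the grid-search fit, and the simulated mixing weight are illustrative assumptions, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200
T = rng.uniform(-20, 20, size=n_trials)      # target position (deg, eye-centered)
G = T + rng.normal(0, 4, size=n_trials)       # gaze endpoint = target + variable error

true_alpha = 0.6                              # simulated intermediate T-G code
pref, width = 5.0, 10.0                       # response-field center and width (deg)
rate = 40 * np.exp(-((true_alpha * G + (1 - true_alpha) * T - pref) ** 2) / (2 * width ** 2))
rate += rng.normal(0, 2, size=n_trials)       # measurement noise

def fit_error(alpha):
    """Residual of a Gaussian response-field fit using the alpha-mixed predictor."""
    x = alpha * G + (1 - alpha) * T
    centers = np.linspace(-20, 20, 81)        # coarse grid search over the field center
    errs = [np.sum((rate - rate.max() * np.exp(-((x - c) ** 2) / (2 * width ** 2))) ** 2)
            for c in centers]
    return min(errs)

alphas = np.linspace(0, 1, 21)
best_alpha = alphas[np.argmin([fit_error(a) for a in alphas])]
print(f"best-fitting spatial code: alpha = {best_alpha:.2f} (0 = target, 1 = gaze)")
```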

  19. Effect of 3,4-diaminopyridine on the postural control in patients with downbeat nystagmus.

    PubMed

    Sprenger, Andreas; Zils, Elisabeth; Rambold, Holger; Sander, Thurid; Helmchen, Christoph

    2005-04-01

    Downbeat nystagmus (DBN) is a common, usually persistent ocular motor sign of vestibulocerebellar midline lesions. Postural imbalance in DBN may increase on lateral gaze when the downbeat nystagmus increases. 3,4-Diaminopyridine (3,4-DAP) has been shown to suppress the slow-phase velocity component of downbeat nystagmus and its gravity-dependent component, with concomitant improvement of oscillopsia. Because this pharmacological effect is thought to be caused by improvement of vestibulocerebellar Purkinje cell activity, the effect of 3,4-DAP on postural control was examined in patients with downbeat nystagmus syndrome. Eye movements were recorded with the video-based EyeLink II system. Postural sway and sway path were assessed by posturography during lateral gaze in the light and on eye closure. Two of four patients showed an improvement in the area of postural sway by 57% of the control (baseline) value on eye closure. In contrast, downbeat nystagmus in straight-ahead and lateral gaze did not improve in these two patients, implying a specific influence of 3,4-DAP on the vestibulocerebellar control of posture. It was concluded that 3,4-DAP may particularly influence postural performance in patients with downbeat nystagmus.

  20. Gaze patterns reveal how situation models and text representations contribute to episodic text memory.

    PubMed

    Johansson, Roger; Oren, Franziska; Holmqvist, Kenneth

    2018-06-01

    When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate on which representational level texts are remembered and the findings provide novel insight into the underlying dynamics in play. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. Focusing the Gaze: Teacher Interrogation of Practice

    ERIC Educational Resources Information Center

    Nayler, Jennifer M.; Keddie, Amanda

    2007-01-01

    Within an Australian context of diminishing opportunities for equitable educational outcomes, this paper calls for teacher engagement in a "politics of resistance" through their focused gaze in relation to the ways in which they are positioned in their everyday practice. Our belief is that the resultant knowledge might equip teachers to…

  2. Optimal Eye-Gaze Fixation Position for Face-Related Neural Responses

    PubMed Central

    Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina

    2013-01-01

    It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or on other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, which indexes the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper visual field advantage) coupled with the alignment of a face stimulus to a stored face template. PMID:23762224

  3. Optimal eye-gaze fixation position for face-related neural responses.

    PubMed

    Zerouali, Younes; Lina, Jean-Marc; Jemel, Boutheina

    2013-01-01

    It is generally agreed that some features of a face, namely the eyes, are more salient than others, as indexed by behavioral diagnosticity, gaze-fixation patterns and evoked neural responses. However, because previous studies used unnatural stimuli, there is no evidence so far that the early encoding of a whole face in the human brain is based on the eyes or on other facial features. To address this issue, scalp electroencephalogram (EEG) and eye gaze fixations were recorded simultaneously in a gaze-contingent paradigm while observers viewed faces. We found that the N170, which indexes the earliest face-sensitive response in the human brain, was largest when the fixation position was located around the nasion. Interestingly, for inverted faces, this optimal fixation position was more variable but mainly clustered in the upper part of the visual field (around the mouth). These observations extend the findings of recent behavioral studies, suggesting that the early encoding of a face, as indexed by the N170, is not driven by the eyes per se, but rather arises from a general perceptual setting (upper visual field advantage) coupled with the alignment of a face stimulus to a stored face template.

  4. Task-induced Changes in Idiopathic Infantile Nystagmus Vary with Gaze.

    PubMed

    Salehi Fadardi, Marzieh; Bathke, Arne C; Harrar, Solomon W; Abel, Larry Allen

    2017-05-01

    Investigations of infantile nystagmus syndrome (INS) at center or at the null position have reported that INS worsens when visual demand is combined with internal states, e.g. stress. Visual function and INS parameters such as foveation time, frequency, amplitude, and intensity can also be influenced by gaze position. We hypothesized that increases from baseline in visual demand and mental load would affect INS parameters at the null position differently than at other gaze positions. Eleven participants with idiopathic INS were asked to determine the direction of Tumbling-E targets, whose visual demand was varied through changes in size and contrast, using a staircase procedure. Targets appeared between ±25° in 5° steps. The task was repeated with both mental arithmetic and time restriction to impose higher mental load, confirmed through subjective ratings and concurrent physiological measurements. Within-subject comparisons were limited to the null and 15° away from it. No significant main effects of task on any INS parameters were found. At both locations, high mental load worsened task performance metrics, i.e. lowest contrast (P = .001) and smallest optotype size reached (P = .012). There was a significant interaction between mental load and gaze position for foveation time (P = .02) and for the smallest optotype reached (P = .028). The increase in threshold optotype size from the low to high mental load was greater at the null than away from it. During high visual demand, foveation time significantly decreased from baseline at the null as compared to away from it (mean difference ± SE: 14.19 ± 0.7 msec; P = .010). Under high visual demand, the effects of increased mental load on foveation time and visual task performance differed at the null as compared to 15° away from it. Assessment of these effects could be valuable when evaluating INS clinically and when considering its impact on patients' daily activities.

  5. The Microstructure of Infants' Gaze as They View Adult Shifts in Overt Attention

    ERIC Educational Resources Information Center

    Gredeback, Gustaf; Theuring, Carolin; Hauf, Petra; Kenward, Ben

    2008-01-01

    We presented infants (5, 6, 9, and 12 months old) with movies in which a female model turned toward and fixated 1 of 2 toys placed on a table. Infants' gaze was measured using a Tobii 1750 eye tracker. Six-, 9-, and 12-month-olds' first gaze shift from the model's face (after the model started turning) was directed to the attended toy. The…

  6. Stabilization of gaze during circular locomotion in light. I. Compensatory head and eye nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. A rhesus and cynomolgus monkey were trained to run around the perimeter of a circular platform in light. We call this "circular locomotion" because forward motion had an angular component. Head and body velocity in space were recorded with angular rate sensors and eye movements with electrooculography (EOG). From these measurements we derived signals related to the angular velocity of the eyes in the head (Eh), of the head on the body (Hb), of gaze on the body (Gb), of the body in space (Bs), of gaze in space (Gs), and of the gain of gaze (Gb/Bs). 2. The monkeys had continuous compensatory nystagmus of the head and eyes while running, which stabilized Gs during the slow phases. The eyes established and maintained compensatory gaze velocities at the beginning and end of the slow phases. The head contributed to gaze velocity during the middle of the slow phases. Slow phase Gb was as high as 250 degrees/s, and targets were fixed for gaze angles as large as 90-140 degrees. 3. Properties of the visual surround affected both the gain and strategy of gaze compensation in the one monkey tested. Gains of Eh ranged from 0.3 to 1.1 during compensatory gaze nystagmus. Gains of Hb varied around 0.3 (0.2-0.7), building to a maximum as Eh dropped while running past sectors of interest. Consistent with predictions, gaze gains varied from below to above unity, when translational and angular body movements with regard to the target were in opposite or the same directions, respectively. 4. Gaze moved in saccadic shifts in the direction of running during quick phases. Most head quick phases were small, and at times the head only paused during an eye quick phase. Eye quick phases were larger, ranging up to 60 degrees. This is larger than quick phases during passive rotation or saccades made with the head fixed. 5. These data indicate that head and eye nystagmus are natural phenomena that support gaze compensation during locomotion. Despite differential utilization of the head and

  7. Gaze and Feet as Additional Input Modalities for Interacting with Geospatial Interfaces

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Hempel, J.; Brychtova, A.; Giannopoulos, I.; Stellmach, S.; Dachselt, R.

    2016-06-01

    Geographic Information Systems (GIS) are complex software environments, and working with GIS often involves multiple tasks and multiple displays. However, user input is still limited to mouse and keyboard in most workplace settings. In this project, we demonstrate how using gaze and feet as additional input modalities can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementation, we evaluated the two concepts comparatively in another user study to identify the strengths and shortcomings of each. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
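    A minimal sketch of one way gaze and continuous foot input could drive zooming and panning of a map view, in the spirit of this record; the class, gains, and input conventions below are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class MapView:
    center_x: float = 0.0   # world coordinate of the view center
    center_y: float = 0.0
    scale: float = 1.0      # world units per screen pixel

    def zoom_at_gaze(self, gaze_x: float, gaze_y: float, foot_input: float, dt: float) -> None:
        """Zoom toward the gazed-at point; gaze_* are screen offsets (px) from the
        view center, foot_input is a continuous pedal value in [-1, 1]."""
        world_x = self.center_x + gaze_x * self.scale
        world_y = self.center_y + gaze_y * self.scale
        self.scale *= 1.0 - 0.8 * foot_input * dt      # press forward (+1) to zoom in
        # keep the gazed-at world point under the same screen position
        self.center_x = world_x - gaze_x * self.scale
        self.center_y = world_y - gaze_y * self.scale

    def pan_by_foot(self, foot_dx: float, foot_dy: float, dt: float) -> None:
        """Continuous panning from lateral foot displacement (each component in [-1, 1])."""
        speed_px_per_s = 300.0
        self.center_x += foot_dx * speed_px_per_s * self.scale * dt
        self.center_y += foot_dy * speed_px_per_s * self.scale * dt

view = MapView()
view.zoom_at_gaze(gaze_x=120, gaze_y=-40, foot_input=0.5, dt=0.016)
view.pan_by_foot(foot_dx=0.3, foot_dy=0.0, dt=0.016)
print(view)
```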

  8. Multimodal Language Learner Interactions via Desktop Videoconferencing within a Framework of Social Presence: Gaze

    ERIC Educational Resources Information Center

    Satar, H. Muge

    2013-01-01

    Desktop videoconferencing (DVC) offers many opportunities for language learning through its multimodal features. However, it also brings some challenges such as gaze and mutual gaze, that is, eye-contact. This paper reports some of the findings of a PhD study investigating social presence in DVC interactions of English as a Foreign Language (EFL)…

  9. Children's Knowledge of Deceptive Gaze Cues and Its Relation to Their Actual Lying Behavior

    ERIC Educational Resources Information Center

    McCarthy, Anjanie; Lee, Kang

    2009-01-01

    Eye gaze plays a pivotal role during communication. When interacting deceptively, it is commonly believed that the deceiver will break eye contact and look downward. We examined whether children's gaze behavior when lying is consistent with this belief. In our study, 7- to 15-year-olds and adults answered questions truthfully ("Truth" questions)…

  10. Novel Eye Movement Disorders in Whipple's Disease-Staircase Horizontal Saccades, Gaze-Evoked Nystagmus, and Esotropia.

    PubMed

    Shaikh, Aasef G; Ghasia, Fatema F

    2017-01-01

    Whipple's disease, a rare systemic infectious disorder, is complicated by involvement of the central nervous system in about 5% of cases. Oscillations of the eyes and the jaw, called oculo-masticatory myorhythmia, are pathognomonic of central nervous system involvement but are often absent. Typical manifestations of central nervous system Whipple's disease are cognitive impairment, parkinsonism mimicking progressive supranuclear palsy with vertical saccade slowing, and up-gaze range limitation. We describe a unique patient with central nervous system Whipple's disease who had typical features, including parkinsonism, cognitive impairment, and up-gaze limitation, but who also had diplopia, esotropia with mild horizontal (abduction more than adduction) limitation, and vertigo. The patient also had gaze-evoked nystagmus and staircase horizontal saccades. The latter were thought to be due to mal-programmed small saccades followed by a series of corrective saccades. The saccades were disconjugate because of the concurrent strabismus. We also noted disconjugacy in the slow phase of the gaze-evoked nystagmus, which was larger during the monocular viewing condition. We propose that interaction of the strabismic drifts of the covered eyes and the nystagmus drift, putatively at the level of the final common pathway, might lead to such disconjugacy.

  11. Watch the hands: infants can learn to follow gaze by seeing adults manipulate objects.

    PubMed

    Deák, Gedeon O; Krasno, Anna M; Triesch, Jochen; Lewis, Joshua; Sepeta, Leigh

    2014-03-01

    Infants gradually learn to share attention, but it is unknown how they acquire skills such as gaze-following. Deák and Triesch (2006) suggest that gaze-following could be acquired if infants learn that adults' gaze direction is likely to be aligned with interesting sights. This hypothesis stipulates that adults tend to look at things that infants find interesting, and that infants could learn by noticing this tendency. We tested the plausibility of this hypothesis through video-based micro-behavioral analysis of naturalistic parent-infant play. The results revealed that 3- to 11-month-old infants strongly preferred watching caregivers handle objects. In addition, when caregivers looked away from their infant they tended to look at their own object-handling. Finally, when infants looked toward the caregiver while she was looking at her own hands, the infant's next eye movement was often toward the caregiver's object-handling. In this way infants receive adequate naturalistic input to learn associations between their parent's gaze direction and the locations of interesting sights. © 2014 John Wiley & Sons Ltd.

  12. Does social presence or the potential for interaction reduce social gaze in online social scenarios? Introducing the "live lab" paradigm.

    PubMed

    Gregory, Nicola J; Antolin, Jastine V

    2018-05-01

    Research has shown that people's gaze is biased away from faces in the real world but towards them when they are viewed onscreen. Non-equivalent stimulus conditions may have confounded this research, however: participants viewed onscreen stimuli as pre-recordings, where interaction was not possible, whereas real-world stimuli were viewed in real time, where interaction was possible. We assessed the independent contributions of online social presence and the ability to interact on social gaze by developing the "live lab" paradigm. Participants in three groups (N = 132) viewed a confederate as (1) a live webcam stream where interaction was not possible (one-way), (2) a live webcam stream where interaction was possible (two-way), or (3) a pre-recording. Potential for interaction, rather than online social presence, was the primary influence on gaze behaviour: participants in the pre-recorded and one-way conditions looked more at the face than those in the two-way condition, particularly when the confederate made "eye contact." Fixation durations on the face were shorter when the scene was viewed live, particularly during a bid for eye contact. Our findings support the dual function of gaze but suggest that online social presence alone is not sufficient to activate social norms of civil inattention. Implications for the reinterpretation of previous research are discussed.

  13. Pointing control using a moving base of support.

    PubMed

    Hondzinski, Jan M; Kwon, Taegyong

    2009-07-01

    The purposes of this study were to determine whether gaze direction provides a control signal for movement direction in a pointing task requiring a step, and to gain insight into why previously reported discrepancies in endpoint accuracy exist when gaze is directed eccentrically. Straight-arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or in darkness while subjects sat, stood still, or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientation when it was not constrained. Error differences across target locations explained the discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

  14. Controlling Attention to Gaze and Arrows in Childhood: An fMRI Study of Typical Development and Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Vaidya, Chandan J.; Foss-Feig, Jennifer; Shook, Devon; Kaplan, Lauren; Kenworthy, Lauren; Gaillard, William D.

    2011-01-01

    Functional magnetic resonance imaging was used to examine functional anatomy of attention to social (eye gaze) and nonsocial (arrow) communicative stimuli in late childhood and in a disorder defined by atypical processing of social stimuli, Autism Spectrum Disorders (ASD). Children responded to a target word ("LEFT"/"RIGHT") in the context of a…

  15. Effect of direct eye contact in PTSD related to interpersonal trauma: an fMRI study of activation of an innate alarm system.

    PubMed

    Steuwe, Carolin; Daniels, Judith K; Frewen, Paul A; Densmore, Maria; Pannasch, Sebastian; Beblo, Thomas; Reiss, Jeffrey; Lanius, Ruth A

    2014-01-01

    In healthy individuals, direct eye contact initially leads to activation of a fast subcortical pathway, which then modulates a cortical route eliciting social cognitive processes. The aim of this study was to gain insight into the neurobiological effects of direct eye-to-eye contact using a virtual reality paradigm in individuals with posttraumatic stress disorder (PTSD) related to prolonged childhood abuse. We examined 16 healthy comparison subjects and 16 patients with a primary diagnosis of PTSD using a virtual reality functional magnetic resonance imaging paradigm involving direct vs averted gaze (happy, sad, neutral) as developed by Schrammel et al. in 2009. Irrespective of the displayed emotion, controls exhibited an increased blood oxygenation level-dependent response during direct vs averted gaze within the dorsomedial prefrontal cortex, left temporoparietal junction and right temporal pole. Under the same conditions, individuals with PTSD showed increased activation within the superior colliculus (SC)/periaqueductal gray (PAG) and locus coeruleus. Our findings suggest that healthy controls react to the exposure of direct gaze with an activation of a cortical route that enhances evaluative 'top-down' processes underlying social interactions. In individuals with PTSD, however, direct gaze leads to sustained activation of a subcortical route of eye-contact processing, an innate alarm system involving the SC and the underlying circuits of the PAG.

  16. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  17. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  18. Mentalizing eye contact with a face on a video: Gaze direction does not influence autonomic arousal.

    PubMed

    Lyyra, Pessi; Myllyneva, Aki; Hietanen, Jari K

    2018-04-26

    Recent research has revealed enhanced autonomic and subjective responses to eye contact only when perceiving another live person. However, these enhanced responses are abolished if the viewer believes that the other person is not able to look back at them. We aimed to investigate whether this "genuine" eye contact effect can be reproduced with pre-recorded videos of stimulus persons. Autonomic responses, gaze behavior, and subjective self-assessments were measured while participants viewed pre-recorded video persons with direct or averted gaze, imagined that the video person was real, and mentalized that the person could or could not see them. Pre-recorded videos did not evoke the physiological or subjective eye contact effects previously observed with live persons, not even when participants were mentalizing being seen by the person. Gaze tracking results showed, however, increased attention allocation to faces with direct gaze compared to averted gaze. The results suggest that eliciting physiological arousal in response to genuine eye contact requires the spontaneous experience of seeing, and of being seen by, another individual. © 2018 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  19. Motor and Gaze Behaviors of Youth Basketball Players Taking Contested and Uncontested Jump Shots

    PubMed Central

    van Maarseveen, Mariëtte J. J.; Oudejans, Raôul R. D.

    2018-01-01

    In this study, we examined the effects of a defender contesting jump shots on performance and gaze behaviors of basketball players taking jump shots. Thirteen skilled youth basketball players performed 48 shots from about 5 m from the basket; 24 uncontested and 24 contested. The participants wore mobile eye tracking glasses to measure their gaze behavior. As expected, an approaching defender trying to contest the shot led to significant changes in movement execution and gaze behavior including shorter shot execution time, longer jump time, longer ball flight time, later final fixation onset, and longer fixation on the defender. Overall, no effects were found for shooting accuracy. However, the effects on shot accuracy were not similar for all participants: six participants showed worse performance and six participants showed better performance in the contested compared to the uncontested condition. These changes in performance were accompanied by differences in gaze behavior. The participants with worse performance showed shorter absolute and relative final fixation duration and a tendency for an earlier final fixation offset in the contested condition compared to the uncontested condition, whereas gaze behavior of the participants with better performance for contested shots was relatively unaffected. The results confirm that a defender contesting the shot is a relevant constraint for basketball shooting suggesting that representative training designs should also include contested shots, and more generally other constraints that are representative of the actual performance setting such as time or mental pressure. PMID:29867671

  20. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems as well as for autonomous mobile robots. An algorithm is presented that computes the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene or objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., objects moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave stably under real-time conditions in complex natural environments and handles general object motion. In addition, it exhibits several abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
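    A minimal sketch of the two-pathway control idea described in this record: a smooth (velocity) pathway that nulls retinal slip and a saccadic (position) pathway that corrects large position errors. The gains, threshold, and target trajectory below are illustrative assumptions, not the published model.

```python
import numpy as np

dt = 0.01                                     # 10-ms control step
t = np.arange(0, 3, dt)
target = 10.0 * np.sin(2 * np.pi * 0.4 * t)   # target azimuth (deg), toy trajectory

camera = 0.0                                  # camera (gaze) angle, deg
camera_vel = 0.0                              # camera velocity, deg/s
velocity_gain = 0.1                           # fraction of retinal slip fed back per step
saccade_threshold = 2.0                       # position error (deg) that triggers a saccade

prev_target = target[0]
trace = []
for tgt in target:
    target_vel = (tgt - prev_target) / dt
    slip = target_vel - camera_vel            # retinal slip: velocity error
    camera_vel += velocity_gain * slip        # smooth (pursuit) pathway
    position_error = tgt - camera
    if abs(position_error) > saccade_threshold:
        camera += position_error              # saccadic pathway: discrete catch-up step
        camera_vel = target_vel               # re-seed the velocity estimate
    camera += camera_vel * dt
    trace.append(camera)
    prev_target = tgt

print(f"final position error: {abs(target[-1] - trace[-1]):.2f} deg")
```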

  1. The Effect of Gaze Angle on Visual Acuity in Infantile Nystagmus.

    PubMed

    Dunn, Matt J; Wiggins, Debbie; Woodhouse, J Margaret; Margrain, Tom H; Harris, Christopher M; Erichsen, Jonathan T

    2017-01-01

    Most individuals with infantile nystagmus (IN) have an idiosyncratic gaze angle at which their nystagmus intensity is minimized. Some adopt an abnormal head posture to use this "null zone," and it has therefore long been assumed that this provides people with nystagmus with improved visual acuity (VA). However, recent studies suggest that improving the nystagmus waveform could have little, if any, influence on VA; that is, VA is fundamentally limited in IN. Here, we examined the impact of the null zone on VA. Visual acuity was measured in eight adults with IN using a psychophysical staircase procedure with reversals at three horizontal gaze angles, including the null zone. As expected, changes in gaze angle affected nystagmus amplitude, frequency, foveation duration, and variability of intercycle foveation position. Across participants, each parameter (except frequency) was significantly correlated with VA. Within any given individual, there was a small but significant improvement in VA (0.08 logMAR) at the null zone as compared with the other gaze angles tested. Despite this, no change in any of the nystagmus waveform parameters was significantly associated with changes in VA within individuals. A strong relationship between VA and nystagmus characteristics exists between individuals with IN. Although significant, the improvement in VA observed within individuals at the null zone is much smaller than might be expected from the occasionally large variations in intensity and foveation dynamics (and anecdotal patient reports of improved vision), suggesting that improvement of other aspects of visual performance may also encourage use of the null zone.

  2. Cultural differences in gaze and emotion recognition: Americans contrast more than Chinese.

    PubMed

    Stanley, Jennifer Tehan; Zhang, Xin; Fung, Helene H; Isaacowitz, Derek M

    2013-02-01

    We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye-tracking data suggest that, for some emotions, Americans attended more to the target faces, and they made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. Cultural Differences in Gaze and Emotion Recognition: Americans Contrast More than Chinese

    PubMed Central

    Tehan Stanley, Jennifer; Zhang, Xin; Fung, Helene H.; Isaacowitz, Derek M.

    2014-01-01

    We investigated the influence of contextual expressions on emotion recognition accuracy and gaze patterns among American and Chinese participants. We expected Chinese participants would be more influenced by, and attend more to, contextual information than Americans. Consistent with our hypothesis, Americans were more accurate than Chinese participants at recognizing emotions embedded in the context of other emotional expressions. Eye tracking data suggest that, for some emotions, Americans attended more to the target faces and made more gaze transitions to the target face than Chinese. For all emotions except anger and disgust, Americans appeared to use more of a contrasting strategy where each face was individually contrasted with the target face, compared with Chinese who used less of a contrasting strategy. Both cultures were influenced by contextual information, although the benefit of contextual information depended upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern employed during the recognition task. PMID:22889414

  4. Perceptual and Gaze Biases during Face Processing: Related or Not?

    PubMed Central

    Samson, Hélène; Fiori-Duharcourt, Nicole; Doré-Mazars, Karine; Lemoine, Christelle; Vergilino-Perez, Dorine

    2014-01-01

    Previous studies have demonstrated a left perceptual bias while looking at faces: observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right-hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e., more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two experiments during a gender judgment task, using normal and chimeric faces presented above, below, to the right or to the left of the central fixation point, or on it (central position). Participants performed the judgment task either while remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found, as it depended on the number of allowed saccades and on face position. Moreover, the gaze bias clearly depended on face position: the initial fixation was guided by face position and landed on the closest half-face, toward the face's center of gravity. Analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here, implying that we do not necessarily look toward the side of the face that we use to make the gender judgment. Although these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasize the inter-individual differences observed in perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias. PMID:24454927

  5. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  6. Observing Third-Party Attentional Relationships Affects Infants' Gaze Following: An Eye-Tracking Study

    PubMed Central

    Meng, Xianwei; Uto, Yusuke; Hashiya, Kazuhide

    2017-01-01

    Not only responding to direct social actions toward themselves, infants also pay attention to relevant information from third-party interactions. However, it is unclear whether and how infants recognize the structure of these interactions. The current study aimed to investigate how infants' observation of third-party attentional relationships influence their subsequent gaze following. Nine-month-old, 1-year-old, and 1.5-year-old infants (N = 72, 37 girls) observed video clips in which a female actor gazed at one of two toys after she and her partner either silently faced each other (face-to-face condition) or looked in opposite directions (back-to-back condition). An eye tracker was used to record the infants' looking behavior (e.g., looking time, looking frequency). The analyses revealed that younger infants followed the actor's gaze toward the target object in both conditions, but this was not the case for the 1.5-year-old infants in the back-to-back condition. Furthermore, we found that infants' gaze following could be negatively predicted by their expectation of the partner's response to the actor's head turn (i.e., they shift their gaze toward the partner immediately after they realize that the actor's head will turn). These findings suggested that the sensitivity to the difference in knowledge and attentional states in the second year of human life could be extended to third-party interactions, even without any direct involvement in the situation. Additionally, a spontaneous concern with the epistemic gap between self and other, as well as between others, develops by this age. These processes might be considered part of the fundamental basis for human communication. PMID:28149284

  7. Maternal oxytocin response predicts mother-to-infant gaze

    USDA-ARS?s Scientific Manuscript database

    The neuropeptide oxytocin is importantly implicated in the emergence and maintenance of maternal behavior that forms the basis of the mother–infant bond. However, no research has yet examined the specific association between maternal oxytocin and maternal gaze, a key modality through which the mothe...

  8. Neurocognitive mechanisms behind emotional attention: Inverse effects of anodal tDCS over the left and right DLPFC on gaze disengagement from emotional faces.

    PubMed

    Sanchez-Lopez, Alvaro; Vanderhasselt, Marie-Anne; Allaert, Jens; Baeken, Chris; De Raedt, Rudi

    2018-06-01

    Attention to relevant emotional information in the environment is an important process related to vulnerability and resilience for mood and anxiety disorders. In the present study, the effects of left and right dorsolateral prefrontal cortex (DLPFC) stimulation on attentional mechanisms of emotional processing were tested and contrasted. A sample of 54 healthy participants received 20 min of active and sham anodal transcranial direct current stimulation (tDCS) of either the left (n = 27) or the right DLPFC (n = 27) on two separate days. The anode electrode was placed over the left or the right DLPFC, and the cathode over the corresponding contralateral supraorbital area. After each neurostimulation session, participants completed an eye-tracking task assessing direct processes of attentional engagement towards, and attentional disengagement away from, emotional faces (happy, disgusted, and sad expressions). Compared to sham, active tDCS over the left DLPFC led to faster gaze disengagement, whereas active tDCS over the right DLPFC led to slower gaze disengagement from emotional faces. Between-group comparisons showed that these inverse change patterns were significantly different and generalized across all types of emotion. Our findings support a lateralized role of left and right DLPFC activity in enhancing or worsening the top-down regulation of emotional attention. These results support the rationale for new therapies for affective disorders that aim to increase activation of the left over the right DLPFC in combination with attentional control training, and they identify specific attention mechanisms to be targeted in training.

  9. Improved remote gaze estimation using corneal reflection-adaptive geometric transforms

    NASA Astrophysics Data System (ADS)

    Ma, Chunfei; Baek, Seung-Jin; Choi, Kang-A.; Ko, Sung-Jea

    2014-05-01

    Recently, the remote gaze estimation (RGE) technique has been widely applied to consumer devices as a more natural interface. In general, the conventional RGE method estimates a user's point of gaze using a geometric transform, which represents the relationship between several infrared (IR) light sources and their corresponding corneal reflections (CRs) in the eye image. Among various methods, the homography normalization (HN) method achieves state-of-the-art performance. However, the geometric transform of the HN method requiring four CRs is infeasible for the case when fewer than four CRs are available. To solve this problem, this paper proposes a new RGE method based on three alternative geometric transforms, which are adaptive to the number of CRs. Unlike the HN method, the proposed method not only can operate with two or three CRs, but can also provide superior accuracy. To further enhance the performance, an effective error correction method is also proposed. By combining the introduced transforms with the error-correction method, the proposed method not only provides high accuracy and robustness for gaze estimation, but also allows for a more flexible system setup with a different number of IR light sources. Experimental results demonstrate the effectiveness of the proposed method.
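    A minimal sketch of a homography-normalization-style mapping of the kind this record builds on, assuming four visible corneal reflections (CRs): the CR quadrilateral is mapped to a unit square, the pupil center is passed through that homography, and a calibration homography maps the normalized pupil position to screen coordinates. The coordinates and the plain DLT solver below are illustrative, not the proposed adaptive method.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: 3x3 H such that dst ~ H @ src, from 4+ point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, p):
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# image-plane corneal reflections and pupil center (pixels), hypothetical values
crs = [(310, 242), (338, 240), (340, 262), (312, 264)]
unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
H_norm = homography(crs, unit_square)              # CR quad -> normalized space
pupil_norm = apply_h(H_norm, (322, 255))

# calibration: normalized pupil positions observed while fixating known screen points
calib_pupil = [(0.2, 0.3), (0.8, 0.3), (0.8, 0.7), (0.2, 0.7)]
calib_screen = [(100, 100), (1820, 100), (1820, 980), (100, 980)]
H_screen = homography(calib_pupil, calib_screen)   # normalized space -> screen

print("estimated point of gaze (px):", apply_h(H_screen, pupil_norm))
```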

  10. The Role of Gaze Direction and Mutual Exclusivity in Guiding 24-Month-Olds' Word Mappings

    ERIC Educational Resources Information Center

    Graham, Susan A.; Nilsen, Elizabeth S.; Collins, Sarah; Olineck, Kara

    2010-01-01

    In these studies, we examined how a default assumption about word meaning, the mutual exclusivity assumption and an intentional cue, gaze direction, interacted to guide 24-month-olds' object-word mappings. In Expt 1, when the experimenter's gaze was consistent with the mutual exclusivity assumption, novel word mappings were facilitated. When the…

  11. Sociability and gazing toward humans in dogs and wolves: Simple behaviors with broad implications.

    PubMed

    Bentosela, Mariana; Wynne, C D L; D'Orazio, M; Elgier, A; Udell, M A R

    2016-01-01

    Sociability, defined as the tendency to approach and interact with unfamiliar people, has been found to modulate some communicative responses in domestic dogs, including gaze behavior toward the human face. The objective of this study was to compare sociability and gaze behavior in pet domestic dogs and in human-socialized captive wolves in order to identify the relative influence of domestication and learning in the development of the dog-human bond. In Experiment 1, we assessed the approach behavior and social tendencies of dogs and wolves to a familiar and an unfamiliar person. In Experiment 2, we compared the animal's duration of gaze toward a person's face in the presence of food, which the animals could see but not access. Dogs showed higher levels of interspecific sociability than wolves in all conditions, including those where attention was unavailable. In addition, dogs gazed longer at the person's face than wolves in the presence of out-of-reach food. The potential contributions of domestication, associative learning, and experiences during ontogeny to prosocial behavior toward humans are discussed. © 2016 Society for the Experimental Analysis of Behavior.

  12. Eye gaze correction with stereovision for video-teleconferencing.

    PubMed

    Yang, Ruigang; Zhang, Zhengyou

    2004-07-01

    The lack of eye contact in desktop video teleconferencing substantially reduces the effectiveness of video contents. While expensive and bulky hardware is available on the market to correct eye gaze, researchers have been trying to provide a practical software-based solution to bring video-teleconferencing one step closer to the mass market. This paper presents a novel approach: Based on stereo analysis combined with rich domain knowledge (a personalized face model), we synthesize, using graphics hardware, a virtual video that maintains eye contact. A 3D stereo head tracker with a personalized face model is used to compute initial correspondences across two views. More correspondences are then added through template and feature matching. Finally, all the correspondence information is fused together for view synthesis using view morphing techniques. The combined methods greatly enhance the accuracy and robustness of the synthesized views. Our current system is able to generate an eye-gaze corrected video stream at five frames per second on a commodity 1 GHz PC.
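    A minimal sketch of the view-interpolation step at the heart of view morphing, assuming the two camera views are already rectified and point correspondences have been established, as in the pipeline described above; a real system would warp and blend whole images, not just a handful of points.

```python
import numpy as np

# matched feature positions in the left and right (rectified) views, pixels (toy data)
pts_left = np.array([[100.0, 200.0], [150.0, 210.0], [130.0, 260.0]])
pts_right = np.array([[80.0, 200.0], [128.0, 210.0], [112.0, 260.0]])

def interpolate_view(p_left, p_right, s):
    """Positions of the same scene points as seen from a virtual camera at fraction s
    along the baseline (s = 0 -> left view, s = 1 -> right view); valid for rectified views."""
    return (1.0 - s) * p_left + s * p_right

# a virtual camera halfway between the two real ones -- roughly where the remote
# participant appears to be looking, which is what restores apparent eye contact
pts_virtual = interpolate_view(pts_left, pts_right, s=0.5)
print(pts_virtual)
```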

  13. Modeling Eye Gaze Patterns in Clinician-Patient Interaction with Lag Sequential Analysis

    PubMed Central

    Montague, E; Xu, J; Asan, O; Chen, P; Chewning, B; Barrett, B

    2011-01-01

    Objective: The aim of this study was to examine whether lag-sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multi-user health care settings where trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Background: Nonverbal communication patterns are important aspects of clinician-patient interactions and may impact patient outcomes. Method: Eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag-sequential method to identify significant behavior sequences. The lag-sequential analysis included both event-based and time-based lags. Results: Event-based lag analysis showed that the patients' gaze followed that of the clinicians, while the clinicians' gaze did not follow the patients'. Time-based sequential analysis showed that responses from the patient usually occurred within two seconds of the initial behavior of the clinician. Conclusion: Our data suggest that the clinician's gaze significantly affects the medical encounter but not the converse. Application: Findings from this research have implications for the design of clinical work systems and for modeling interactions. Similar methods could be used to identify behavior patterns under different clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs. PMID:22046723
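    A minimal sketch of event-based lag-1 sequential analysis on a coded gaze stream, in the spirit of the method described in this record. The event codes, the sequence, and the simplified z-like statistic are invented for illustration; a full analysis would also cover time-based lags and use properly adjusted residuals.

```python
from collections import Counter
import math

# coded event stream: C_* = clinician gaze events, P_* = patient gaze events (toy data)
events = ["C_patient", "P_doctor", "C_chart", "P_doctor", "C_patient",
          "P_doctor", "C_chart", "P_chart", "C_patient", "P_doctor"]

pairs = Counter(zip(events, events[1:]))        # observed lag-1 transitions
totals = Counter(events)

def lag1_z(given, target):
    """Rough z-like score: is `target` more frequent than chance right after `given`?"""
    observed = pairs[(given, target)]
    p_target = totals[target] / len(events)
    expected = totals[given] * p_target         # chance expectation for this antecedent
    var = expected * (1 - p_target)
    return (observed - expected) / math.sqrt(var) if var > 0 else float("nan")

print("clinician looks at patient -> patient looks at doctor:",
      round(lag1_z("C_patient", "P_doctor"), 2))
```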

  14. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. GAFFE: a gaze-attentive fixation finding engine.

    PubMed

    Rajashekar, U; van der Linde, I; Bovik, A C; Cormack, L K

    2008-04-01

    The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
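    A minimal sketch of the fixation-statistics idea behind this engine, assuming a synthetic grayscale image and a hand-written fixation list: it compares simple low-level features (mean luminance and RMS contrast) of patches centered on fixations against patches at random locations. The published engine additionally uses foveated, bandpass versions of both features.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((480, 640))                     # stand-in grayscale image
fixations = [(120, 300), (240, 320), (400, 500)]   # (row, col) fixation points, toy data
half = 32                                          # patch half-size in pixels

def patch_features(img, r, c, h):
    """Mean luminance and RMS contrast of the patch centered at (r, c)."""
    p = img[max(r - h, 0):r + h, max(c - h, 0):c + h]
    return p.mean(), p.std() / (p.mean() + 1e-9)

fix_stats = np.array([patch_features(image, r, c, half) for r, c in fixations])
rand_pts = zip(rng.integers(half, 480 - half, 50), rng.integers(half, 640 - half, 50))
rand_stats = np.array([patch_features(image, r, c, half) for r, c in rand_pts])

print("fixated (luminance, RMS contrast):", fix_stats.mean(axis=0).round(3))
print("random  (luminance, RMS contrast):", rand_stats.mean(axis=0).round(3))
```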

  16. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes

    PubMed Central

    Higuchi, Takahiro; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari

    2017-01-01

    Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher’s face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher’s instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school. PMID:28472111
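    A minimal sketch of the group-by-ROI comparison described in this record, assuming a fabricated long-format table of total gaze times; the column names, factor levels, and effect sizes are illustrative only, and pandas/statsmodels are assumed to be available.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for group, wall_bias in [("ASD", 8.0), ("TD", 2.0)]:          # fabricated group difference
    for child in range(20):
        for roi, base in [("face", 10.0), ("pointed_object", 12.0), ("wall", wall_bias)]:
            rows.append({"group": group, "roi": roi,
                         "gaze_time": max(0.0, rng.normal(base, 2.0))})
df = pd.DataFrame(rows)

# two-way ANOVA: main effects of group and ROI plus their interaction
model = smf.ols("gaze_time ~ C(group) * C(roi)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```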

  17. Spatiotemporal characteristics of gaze of children with autism spectrum disorders while looking at classroom scenes.

    PubMed

    Higuchi, Takahiro; Ishizaki, Yuko; Noritake, Atsushi; Yanagimoto, Yoshitoki; Kobayashi, Hodaka; Nakamura, Kae; Kaneko, Kazunari

    2017-01-01

    Children with autism spectrum disorders (ASD) who have neurodevelopmental impairments in social communication often refuse to go to school because of difficulties in learning in class. The exact cause of maladaptation to school in such children is unknown. We hypothesized that these children have difficulty in paying attention to objects at which teachers are pointing. We performed gaze behavior analysis of children with ASD to understand their difficulties in the classroom. The subjects were 26 children with ASD (19 boys and 7 girls; mean age, 8.6 years) and 27 age-matched children with typical development (TD) (14 boys and 13 girls; mean age, 8.2 years). We measured eye movements of the children while they performed free viewing of two movies depicting actual classes: a Japanese class in which a teacher pointed at cartoon characters and an arithmetic class in which the teacher pointed at geometric figures. In the analysis, we defined the regions of interest (ROIs) as the teacher's face and finger, the cartoon characters and geometric figures at which the teacher pointed, and the classroom wall that contained no objects. We then compared total gaze time for each ROI between the children with ASD and TD by two-way ANOVA. Children with ASD spent less gaze time on the cartoon characters pointed at by the teacher; they spent more gaze time on the wall in both classroom scenes. We could differentiate children with ASD from those with TD almost perfectly by the proportion of total gaze time that children with ASD spent looking at the wall. These results suggest that children with ASD do not follow the teacher's instructions in class and persist in gazing at inappropriate visual areas such as walls. Thus, they may have difficulties in understanding content in class, leading to maladaptation to school.

  18. Nursing gaze of the Eastern Front in World War II: a feminist narrative analysis.

    PubMed

    Georges, Jane M; Benedict, Susan

    2008-01-01

    Grounded in a feminist perspective, a narrative analysis of letters written by Martha Lohmann, a nurse who served with the German Army on the Eastern Front in World War II, is undertaken. Utilizing "gaze" as a focus, an exploration of the narrative and the multiple gazes embedded within it is performed. Implications for future analysis of nurses' textual accounts of violence, armed conflict, and war are presented.

  19. Anticipating Intentional Actions: The Effect of Eye Gaze Direction on the Judgment of Head Rotation

    ERIC Educational Resources Information Center

    Hudson, Matthew; Liu, Chang Hong; Jellema, Tjeerd

    2009-01-01

    Using a representational momentum paradigm, this study investigated the hypothesis that judgments of how far another agent's head has rotated are influenced by the perceived gaze direction of the head. Participants observed a video-clip of a face rotating 60[degrees] towards them starting from the left or right profile view. The gaze direction of…

  20. Communicative interactions between visually impaired mothers and their sighted children: analysis of gaze, facial expressions, voice and physical contacts.

    PubMed

    Chiesa, S; Galati, D; Schmidt, S

    2015-11-01

    The social and emotional development of infants and young children is largely based on communicative interaction with their mother, or principal caretaker (Trevarthen ). The main modalities involved in this early communication are voice, facial expressions and gaze (Stern ). This study aims to analyse early mother-child interactions in the case of visually impaired mothers, who do not have access to their children's gaze and facial expressions. Spontaneous play interactions between seven visually impaired mothers and their sighted children, aged between 6 months and 3 years, were filmed. These dyads were compared with a control group of sighted mothers and children on four modalities of communication and interaction regulation: gaze, physical contact, verbal productions and facial expressions. The visually impaired mothers' facial expressions differed from those of the sighted mothers mainly with respect to forehead movements, leading to an impoverishment of the conveyed meaning. Regarding the other communicative modalities, the results suggest that visually impaired mothers and their children use compensatory strategies to guarantee harmonious interaction despite the mother's impairment: whereas gaze is the main factor of interaction regulation in sighted dyads, physical contact and verbal productions assume a prevalent role in dyads with visually impaired mothers. Moreover, the children of visually impaired mothers seem able to differentiate between their mother and sighted interaction partners, adapting their mode of communication accordingly. The results of this study show that, in spite of the obvious differences in the modes of communication, visual impairment does not prevent a harmonious interaction with the child. © 2015 John Wiley & Sons Ltd.