Sample records for processing underlying gaze

  1. Emotion Unchained: Facial Expression Modulates Gaze Cueing under Cognitive Load.

    PubMed

    Pecchinenda, Anna; Petrucci, Manuel

    2016-01-01

    Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.
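Across these records, the gaze-cueing effect is scored the same basic way: the reaction-time cost of incongruent cues relative to congruent ones. A minimal sketch of that computation on hypothetical trial data (the function name and data layout are illustrative, not taken from the study):

```python
# Hypothetical illustration of how a gaze-cueing effect is typically scored:
# mean RT on incongruent trials minus mean RT on congruent trials.
# A positive value indicates attention was drawn toward the gazed-at location.

def cueing_effect_ms(trials):
    """trials: iterable of (rt_ms, congruent: bool), correct responses only."""
    cong = [rt for rt, c in trials if c]
    incong = [rt for rt, c in trials if not c]
    return sum(incong) / len(incong) - sum(cong) / len(cong)

# Toy data: two congruent and two incongruent trials.
trials = [(310, True), (322, True), (348, False), (356, False)]
effect = cueing_effect_ms(trials)  # 352.0 - 316.0 = 36.0 ms
```

In designs like the one above, this difference would be computed separately per expression (happy, angry, neutral) and load condition before comparison.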

  3. Human cortical activity evoked by contextual processing in attentional orienting.

    PubMed

    Zhao, Shuo; Li, Chunlin; Uono, Shota; Yoshimura, Sayaka; Toichi, Motomi

    2017-06-07

    The ability to assess another person's direction of attention is paramount in social communication, and many studies have reported a similar pattern of attentional orienting by gaze and arrow cues. Neuroimaging research has also demonstrated no qualitative differences in attention to gaze and arrow cues. However, these studies were implemented under simple experimental conditions. Researchers have highlighted the importance of contextual processing (i.e., the semantic congruence between cue and target) in attentional orienting, showing that attentional orienting by social gaze or arrow cues could be modulated through contextual processing. Here, we examine the neural activity of attentional orienting by gaze and arrow cues in response to contextual processing using functional magnetic resonance imaging. The results demonstrated that the influence of contextual processing on the neural activity of attentional orienting occurred under invalid conditions (when the cue and target were incongruent versus congruent) in the ventral frontoparietal network, although we did not identify any differences in the neural substrates of attentional orienting in contextual processing between gaze and arrow cues. These results support behavioural data on attentional orienting modulated by contextual processing based on the neurocognitive architecture.

  4. Neural bases of eye and gaze processing: The core of social cognition

    PubMed Central

    Itier, Roxane J.; Batty, Magali

    2014-01-01

    Eyes and gaze are very important stimuli for human social interactions. Recent studies suggest that impairments in recognizing face identity, facial emotions or in inferring attention and intentions of others could be linked to difficulties in extracting the relevant information from the eye region including gaze direction. In this review, we address the central role of eyes and gaze in social cognition. We start with behavioral data demonstrating the importance of the eye region and the impact of gaze on the most significant aspects of face processing. We review neuropsychological cases and data from various imaging techniques such as fMRI/PET and ERP/MEG, in an attempt to best describe the spatio-temporal networks underlying these processes. The existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism. We suggest impairments in processing eyes and gaze may represent a core deficiency in several other brain pathologies and may be central to abnormal social cognition. PMID:19428496

  5. Social attention in children with epilepsy.

    PubMed

    Lunn, Judith; Donovan, Tim; Litchfield, Damien; Lewis, Charlie; Davies, Robert; Crawford, Trevor

    2017-04-01

    Children with epilepsy may be vulnerable to impaired social attention given the increased risk of neurobehavioural comorbidities. Social attentional orienting and the potential modulatory role of attentional control on the perceptual processing of gaze and emotion cues have not been examined in childhood-onset epilepsies. Social attention mechanisms were investigated in patients with epilepsy (n=25) aged 8-18 years, and performance was compared to healthy controls (n=30). Dynamic gaze and emotion facial stimuli were integrated into an antisaccade eye-tracking paradigm. The time to orient attention and execute a horizontal saccade toward (prosaccade) or away from (antisaccade) a peripheral target measured processing speed of social signals under conditions of low or high attentional control. Patients with epilepsy had impaired processing speed compared to healthy controls under conditions of high attentional control only when gaze and emotions were combined meaningfully to signal motivational intent of approach (happy or angry with a direct gaze) or avoidance (fearful or sad with an averted gaze). Group differences were larger in older adolescent patients. Analyses of the discrete gaze-emotion combinations found independent effects of epilepsy-related, cognitive, and behavioural problems. A delayed disengagement from fearful gaze was also found under low attentional control that was linked to epilepsy developmental factors and was similarly observed in patients with higher reported anxiety problems. Overall, findings indicate increased perceptual processing of developmentally relevant social motivations during increased cognitive control, and the possibility of a persistent fear-related attentional bias. This was not limited to patients with chronic epilepsy, lower IQ, or reported behavioural problems and has implications for social and emotional development in individuals with childhood-onset epilepsies beyond remission. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Brain Mechanisms for Processing Direct and Averted Gaze in Individuals with Autism

    ERIC Educational Resources Information Center

    Pitskel, Naomi B.; Bolling, Danielle Z.; Hudac, Caitlin M.; Lantz, Stephen D.; Minshew, Nancy J.; Vander Wyk, Brent C.; Pelphrey, Kevin A.

    2011-01-01

    Prior studies have indicated brain abnormalities underlying social processing in autism, but no fMRI study has specifically addressed the differential processing of direct and averted gaze, a critical social cue. Fifteen adolescents and adults with autism and 14 typically developing comparison participants viewed dynamic virtual-reality videos…

  7. A Support System for Mouse Operations Using Eye-Gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Nakayama, Yasuhiro; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. Our conventional eye-gaze input system can detect horizontal eye-gaze with a high degree of accuracy. However, it can only classify vertical eye-gaze into 3 directions (up, middle, and down). In this paper, we propose a new method for vertical eye-gaze detection, which utilizes the limbus tracking method. Our new eye-gaze input system can therefore detect the two-dimensional coordinates of the user's gazing point. Using this method, we develop a new support system for mouse operations that can move the mouse cursor to the user's gazing point.
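The final step such a system needs, turning a gaze estimate into a cursor position, can be sketched as follows. This is a hypothetical illustration assuming a linear calibration from normalized eye-position estimates to screen pixels; it does not reproduce the paper's limbus-tracking pipeline:

```python
# Hypothetical sketch of the gaze-to-cursor mapping step in a gaze-driven
# mouse: horizontal and vertical gaze estimates, normalized to [0, 1]
# (e.g., derived from limbus tracking), are scaled linearly to pixel
# coordinates and clamped so the cursor stays on screen.

def gaze_to_screen(gx, gy, screen_w=1920, screen_h=1080):
    """gx, gy: normalized gaze estimates in [0, 1]; returns (x, y) pixels."""
    gx = min(max(gx, 0.0), 1.0)  # clamp noisy estimates to the screen
    gy = min(max(gy, 0.0), 1.0)
    return round(gx * (screen_w - 1)), round(gy * (screen_h - 1))
```

A real system would also smooth successive estimates (e.g., a moving average) and require a dwell time before issuing a click, so that tracking jitter does not translate into cursor jitter.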

  8. Learning under your gaze: the mediating role of affective arousal between perceived direct gaze and memory performance.

    PubMed

    Helminen, Terhi M; Pasanen, Tytti P; Hietanen, Jari K

    2016-03-01

    Previous studies have shown that cognitive performance can be affected by the presence of an observer and by self-directed gaze. We investigated whether the effect of gaze direction (direct vs. downcast) on verbal memory is mediated by autonomic arousal. Male participants responded with enhanced affective arousal to both male and female storytellers' direct gaze, which, according to a path analysis, was negatively associated with performance. In parallel to this arousal-mediated effect, males' performance was affected by another process that impacted performance positively, suggested to be related to effort allocation to the task. The effect of this process was observed only when the storyteller was male: the participants remembered more details from a story told by a male with a direct vs. downcast gaze. The effect of gaze direction on performance was the opposite for female storytellers, which was explained by the arousal-mediated process. Surprisingly, these results were restricted to male participants, and no effects of gaze were observed among female participants. We also investigated whether the participants' belief of being seen or not (through an electronic window) by the storyteller influenced memory and arousal, but this manipulation had no effect on the results.

  9. Spatiotemporal commonalities of fronto-parietal activation in attentional orienting triggered by supraliminal and subliminal gaze cues: An event-related potential study.

    PubMed

    Uono, Shota; Sato, Wataru; Sawada, Reiko; Kochiyama, Takanori; Toichi, Motomi

    2018-05-04

    Eye gaze triggers attentional shifts with and without conscious awareness. It remains unclear whether the spatiotemporal patterns of electric neural activity are the same for conscious and unconscious attentional shifts. Thus, the present study recorded event-related potentials (ERPs) and evaluated the neural activation involved in attentional orienting induced by subliminal and supraliminal gaze cues. Nonpredictive gaze cues were presented in the central field of vision, and participants were asked to detect a subsequent peripheral target. The mean reaction time was shorter for congruent gaze cues than for incongruent gaze cues under both presentation conditions, indicating that both types of cues reliably trigger attentional orienting. The ERP analysis revealed that averted versus straight gaze induced greater negative deflection in the bilateral fronto-central and temporal regions between 278 and 344 ms under both supraliminal and subliminal presentation conditions. Supraliminal cues, irrespective of gaze direction, induced a greater negative amplitude than did subliminal cues at the right posterior cortices at a peak of approximately 170 ms and in the 200-300 ms range. These results suggest that similar spatial and temporal fronto-parietal activity is involved in attentional orienting triggered by both supraliminal and subliminal gaze cues, although inputs from different visual processing routes (cortical and subcortical regions) may trigger activity in the attentional network. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Sustained neural activity to gaze and emotion perception in dynamic social scenes

    PubMed Central

    Ulloa, José Luis; Puce, Aina; Hugueville, Laurent; George, Nathalie

    2014-01-01

    To understand social interactions, we must decode dynamic social cues from seen faces. Here, we used magnetoencephalography (MEG) to study the neural responses underlying the perception of emotional expressions and gaze direction changes as depicted in an interaction between two agents. Subjects viewed displays of paired faces that first established a social scenario of gazing at each other (mutual attention) or gazing laterally together (deviated group attention) and then dynamically displayed either an angry or happy facial expression. The initial gaze change elicited a significantly larger M170 under the deviated than the mutual attention scenario. At around 400 ms after the dynamic emotion onset, responses at posterior MEG sensors differentiated between emotions, and between 1000 and 2200 ms, left posterior sensors were additionally modulated by social scenario. Moreover, activity on right anterior sensors showed both an early and prolonged interaction between emotion and social scenario. These results suggest that activity in right anterior sensors reflects an early integration of emotion and social attention, while posterior activity first differentiated between emotions only, supporting the view of a dual route for emotion processing. Altogether, our data demonstrate that both transient and sustained neurophysiological responses underlie social processing when observing interactions between others. PMID:23202662

  11. Assessing the precision of gaze following using a stereoscopic 3D virtual reality setting.

    PubMed

    Atabaki, Artin; Marciniak, Karolina; Dicke, Peter W; Thier, Peter

    2015-07-01

    Despite the ecological importance of gaze following, little is known about the underlying neuronal processes that allow us to extract gaze direction from the geometric features of the eye and head of a conspecific. In order to understand the neuronal mechanisms underlying this ability, a careful description of the capacity and the limitations of gaze following at the behavioral level is needed. Previous studies of gaze following, which relied on naturalistic settings, have the disadvantage of allowing only very limited control of potentially relevant visual features guiding gaze following, such as the contrast of the iris and sclera or the shape of the eyelids; in the case of photographs, they also lack depth. Hence, in order to gain full control of potentially relevant features, we decided to study gaze following of human observers guided by the gaze of a human avatar seen stereoscopically. To this end we established a stereoscopic 3D virtual reality setup, in which we tested human subjects' ability to detect which target a human avatar was looking at. Following the gaze of the avatar showed all the features of following the gaze of a natural person, namely a substantial degree of precision associated with a consistent pattern of systematic deviations from the target. Poor stereo vision affected performance surprisingly little (only in certain experimental conditions). Only gaze following guided by targets at larger downward eccentricities exhibited a differential effect of the presence or absence of accompanying movements of the avatar's eyelids and eyebrows. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Comparison of dogs and humans in visual scanning of social interaction.

    PubMed

    Törnqvist, Heini; Somppi, Sanni; Koskela, Aija; Krause, Christina M; Vainio, Outi; Kujala, Miiamaaria V

    2015-09-01

    Previous studies have demonstrated similarities in the gazing behaviour of dogs and humans, but comparisons under similar conditions are rare, and little is known about dogs' visual attention to social scenes. Here, we recorded the eye gaze of dogs while they viewed images containing two humans or dogs either interacting socially or facing away; the results were compared with equivalent data measured from humans. Furthermore, we compared the gazing behaviour of two dog and two human populations with different social experiences: family and kennel dogs; dog experts and non-experts. Dogs' gazing behaviour was similar to that of humans: both species gazed longer at the actors in social interaction than in non-social images. However, humans gazed longer at the actors in dog than in human social interaction images, whereas dogs gazed longer at the actors in human than in dog social interaction images. Both species also made more saccades between actors in images representing non-conspecifics, which could indicate that processing the social interaction of non-conspecifics is more demanding. Dog experts and non-experts viewed the images very similarly. Kennel dogs viewed the images less than family dogs, but otherwise their gazing behaviour did not differ, indicating that the basic processing of social stimuli remains similar regardless of social experience.

  13. The impact of visual gaze direction on auditory object tracking.

    PubMed

    Pomper, Ulrich; Chait, Maria

    2017-07-05

    Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
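The occipital alpha-band measure referenced in this record is, at its core, spectral power summed over roughly 8-12 Hz. A toy illustration using a naive DFT on synthetic signals (real EEG pipelines use windowed or multitaper estimators; the sampling rate and signals below are made up for the example):

```python
# Toy sketch of band-power estimation: sum squared DFT magnitudes for
# frequency bins inside [lo, hi] Hz. Purely illustrative; real EEG
# analyses use proper spectral estimators and artifact handling.
import math

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Naive DFT power of `signal` (sampled at `fs` Hz) in [lo, hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        f = k * fs / n  # frequency of bin k
        if lo <= f <= hi:
            re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs = 100  # hypothetical sampling rate, Hz
sig_alpha = [math.sin(2 * math.pi * 10 * i / fs) for i in range(200)]  # 10 Hz tone
sig_theta = [math.sin(2 * math.pi * 5 * i / fs) for i in range(200)]   # 5 Hz tone
```

On these signals, `band_power(sig_alpha, fs)` is large while `band_power(sig_theta, fs)` is near zero, since only the 10 Hz tone falls inside the alpha band.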

  14. Look over there! Unilateral gaze increases geographical memory of the 50 United States.

    PubMed

    Propper, Ruth E; Brunyé, Tad T; Christman, Stephen D; Januszewskia, Ashley

    2012-02-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the contralateral hemisphere. Compared to a no-unilateral-gaze control, right gaze/left hemisphere activation resulted in better recall, demonstrating left hemisphere superiority in recall of general world knowledge and offering equivocal support for the hemispheric encoding asymmetry model of memory. Unilateral gaze, regardless of direction, improved recall of spatial, but not verbal, information. Future research could investigate the conditions under which unilateral gaze increases recall. Sustained unilateral gaze can be used as a simple, inexpensive means of testing theories of hemispheric specialization of cognitive functions. Results support an overall deficit in US geographical knowledge in undergraduate college students. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Evolution of Biological Image Stabilization.

    PubMed

    Hardcastle, Ben J; Krapp, Holger G

    2016-10-24

    The use of vision to coordinate behavior requires an efficient control design that stabilizes the world on the retina or directs the gaze towards salient features in the surroundings. With a level gaze, visual processing tasks are simplified and behaviorally relevant features from the visual environment can be extracted. No matter how simple or sophisticated the eye design, mechanisms have evolved across phyla to stabilize gaze. In this review, we describe functional similarities in eyes and gaze stabilization reflexes, emphasizing their fundamental role in transforming sensory information into motor commands that support postural and locomotor control. We then focus on gaze stabilization design in flying insects and detail some of the underlying principles. Systems analysis reveals that gaze stabilization often involves several sensory modalities, including vision itself, and makes use of feedback as well as feedforward signals. Independent of phylogenetic distance, the physical interaction between an animal and its natural environment (its available senses and how it moves) appears to shape the adaptation of all aspects of gaze stabilization. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Effect of direct eye contact in PTSD related to interpersonal trauma: an fMRI study of activation of an innate alarm system.

    PubMed

    Steuwe, Carolin; Daniels, Judith K; Frewen, Paul A; Densmore, Maria; Pannasch, Sebastian; Beblo, Thomas; Reiss, Jeffrey; Lanius, Ruth A

    2014-01-01

    In healthy individuals, direct eye contact initially leads to activation of a fast subcortical pathway, which then modulates a cortical route eliciting social cognitive processes. The aim of this study was to gain insight into the neurobiological effects of direct eye-to-eye contact using a virtual reality paradigm in individuals with posttraumatic stress disorder (PTSD) related to prolonged childhood abuse. We examined 16 healthy comparison subjects and 16 patients with a primary diagnosis of PTSD using a virtual reality functional magnetic resonance imaging paradigm involving direct vs averted gaze (happy, sad, neutral) as developed by Schrammel et al. in 2009. Irrespective of the displayed emotion, controls exhibited an increased blood oxygenation level-dependent response during direct vs averted gaze within the dorsomedial prefrontal cortex, left temporoparietal junction and right temporal pole. Under the same conditions, individuals with PTSD showed increased activation within the superior colliculus (SC)/periaqueductal gray (PAG) and locus coeruleus. Our findings suggest that healthy controls react to the exposure of direct gaze with an activation of a cortical route that enhances evaluative 'top-down' processes underlying social interactions. In individuals with PTSD, however, direct gaze leads to sustained activation of a subcortical route of eye-contact processing, an innate alarm system involving the SC and the underlying circuits of the PAG.

  18. Talking heads or talking eyes? Effects of head orientation and sudden onset gaze cues on attention capture.

    PubMed

    van der Wel, Robrecht P; Welsh, Timothy; Böckler, Anne

    2018-01-01

    The direction of gaze towards or away from an observer has immediate effects on attentional processing in the observer. Previous research indicates that faces with direct gaze are processed more efficiently than faces with averted gaze. We recently reported additional processing advantages for faces that suddenly adopt direct gaze (abruptly shift from averted to direct gaze) relative to static direct gaze (always in direct gaze), sudden averted gaze (abruptly shift from direct to averted gaze), and static averted gaze (always in averted gaze). Because changes in gaze orientation in the previous study co-occurred with changes in head orientation, it was not clear whether the effect is contingent on face or eye processing, or whether it requires both the eyes and the face to provide consistent information. The present study delineates the impact of head orientation, sudden onset motion cues, and gaze cues. Participants completed a target-detection task in which head position remained in a static averted or direct orientation while sudden onset motion and eye gaze cues were manipulated within each trial. The results indicate a sudden direct gaze advantage that resulted from the additive roles of motion and gaze cues. Interestingly, the orientation of the face towards or away from the observer did not influence the sudden direct gaze effect, suggesting that eye gaze cues, not face orientation cues, are critical for the sudden direct gaze effect.

  19. Conflict Tasks of Different Types Divergently Affect the Attentional Processing of Gaze and Arrow.

    PubMed

    Fan, Lingxia; Yu, Huan; Zhang, Xuemin; Feng, Qing; Sun, Mengdan; Xu, Mengsi

    2018-01-01

    The present study explored the attentional processing mechanisms of gaze and arrow cues in two different types of conflict tasks. In Experiment 1, participants performed a flanker task in which gaze and arrow cues were presented as central targets or bilateral distractors. The congruency between the direction of the target and the distractors was manipulated. Results showed that arrow distractors greatly interfered with the attentional processing of gaze, while the processing of arrow direction was immune to conflict from gaze distractors. Using a spatial compatibility task, Experiment 2 explored the conflict effects exerted on gaze and arrow processing by their relative spatial locations. When the direction of the arrow was in conflict with its spatial layout on screen, response times were slowed; however, the encoding of gaze was unaffected by spatial location. In general, the processing of an arrow cue is less influenced by bilateral gaze cues but is affected by irrelevant spatial information, while the processing of a gaze cue is greatly disturbed by bilateral arrows but is unaffected by irrelevant spatial information. The different effects of different types of conflict on gaze and arrow cues may reflect two relatively distinct modes of attentional processing.

  20. The Gaze-Cueing Effect in the United States and Japan: Influence of Cultural Differences in Cognitive Strategies on Control of Attention

    PubMed Central

    Takao, Saki; Yamani, Yusuke; Ariga, Atsunori

    2018-01-01

    The direction of gaze automatically and exogenously guides visual spatial attention, a phenomenon termed the gaze-cueing effect. Although this effect arises when the duration of stimulus onset asynchrony (SOA) between a non-predictive gaze cue and the target is relatively long, no empirical research has examined the factors underlying this extended cueing effect. Two experiments compared the gaze-cueing effect at longer SOAs (700 ms) in Japanese and American participants. Cross-cultural studies on cognition suggest that Westerners tend to use a context-independent analytical strategy to process visual environments, whereas Asians use a context-dependent holistic approach. We hypothesized that Japanese participants would not demonstrate the gaze-cueing effect at longer SOAs because they are more sensitive to contextual information, such as the knowledge that the direction of a gaze is not predictive. Furthermore, we hypothesized that American participants would demonstrate the gaze-cueing effect at the long SOAs because they tend to follow gaze direction whether it is predictive or not. In Experiment 1, American participants demonstrated the gaze-cueing effect at the long SOA, indicating that their attention was driven by the central non-predictive gaze direction regardless of the SOAs. In Experiment 2, Japanese participants demonstrated no gaze-cueing effect at the long SOA, suggesting that the Japanese participants exercised voluntary control of their attention, which inhibited the gaze-cueing effect with the long SOA. Our findings suggest that the control of visual spatial attention elicited by social stimuli systematically differs between American and Japanese individuals. PMID:29379457

  2. Gazing into Thin Air: The Dual-Task Costs of Movement Planning and Execution during Adaptive Gait

    PubMed Central

    Ellmers, Toby J.; Cocks, Adam J.; Doumas, Michail; Williams, A. Mark; Young, William R.

    2016-01-01

    We examined the effect of increased cognitive load on visual search behavior and measures of gait performance during locomotion. Also, we investigated how personality traits, specifically the propensity to consciously control or monitor movements (trait movement ‘reinvestment’), impacted the ability to maintain effective gaze under conditions of cognitive load. Healthy young adults traversed a novel adaptive walking path while performing a secondary serial subtraction task. Performance was assessed using correct responses to the cognitive task, gaze behavior, stepping accuracy, and time to complete the walking task. When walking while simultaneously carrying out the secondary serial subtraction task, participants visually fixated on task-irrelevant areas ‘outside’ the walking path more often and for longer durations, and fixated on task-relevant areas ‘inside’ the walkway for shorter durations. These changes were most pronounced in high-trait-reinvesters. We speculate that reinvestment-related processes placed an additional cognitive demand upon working memory. These increased task-irrelevant ‘outside’ fixations were accompanied by slower completion rates on the walking task and greater gross stepping errors. Findings suggest that attention is important for the maintenance of effective gaze behaviors, supporting previous claims that the maladaptive changes in visual search observed in high-risk older adults may be a consequence of inefficiencies in attentional processing. Identifying the underlying attentional processes that disrupt effective gaze behavior during locomotion is an essential step in the development of rehabilitation, as this information allows for interventions that reduce the risk of falling. PMID:27824937

  3. Intact unconscious processing of eye contact in schizophrenia.

    PubMed

    Seymour, Kiley; Rhodes, Gillian; Stein, Timo; Langdon, Robyn

    2016-03-01

    The perception of eye gaze is crucial for social interaction, providing essential information about another person's goals, intentions, and focus of attention. People with schizophrenia suffer a wide range of social cognitive deficits, including abnormalities in eye gaze perception. For instance, patients have shown an increased bias to misjudge averted gaze as being directed toward them. In this study we probed early unconscious mechanisms of gaze processing in schizophrenia using a technique known as continuous flash suppression. Previous research using this technique to render faces with direct and averted gaze initially invisible reveals that direct eye contact gains privileged access to conscious awareness in healthy adults. We found that patients, as with healthy control subjects, showed the same effect: faces with direct eye gaze became visible significantly faster than faces with averted gaze. This suggests that early unconscious processing of eye gaze is intact in schizophrenia and implies that any misjudgments of gaze direction must manifest at a later conscious stage of gaze processing where deficits and/or biases in attributing mental states to gaze and/or beliefs about being watched may play a role.

  4. Altered activity of the primary visual area during gaze processing in individuals with high-functioning autistic spectrum disorder: a magnetoencephalography study.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Tamura, Ryu; Endo, Taro; Someya, Toshiyuki

    2013-01-01

    Individuals with autistic spectrum disorder (ASD) demonstrate an impaired ability to infer the mental states of others from their gaze. Thus, investigating the relationship between ASD and eye gaze processing is crucial for understanding the neural basis of social impairments seen in individuals with ASD. In addition, characteristics of ASD are observed in more comprehensive visual perception tasks. These visual characteristics of ASD have been well-explained in terms of the atypical relationship between high- and low-level gaze processing in ASD. We studied neural activity during gaze processing in individuals with ASD using magnetoencephalography, with a focus on the relationship between high- and low-level gaze processing both temporally and spatially. Minimum Current Estimate analysis was applied to perform source analysis of magnetic responses to gaze stimuli. The source analysis showed that later activity in the primary visual area (V1) was affected by gaze direction only in the ASD group. Conversely, the right posterior superior temporal sulcus, which is a brain region that processes gaze as a social signal, in the typically developed group showed a tendency toward greater activation during direct compared with averted gaze processing. These results suggest that later activity in V1 relating to gaze processing is altered or possibly enhanced in high-functioning individuals with ASD, which may underpin the social cognitive impairments in these individuals. © 2013 S. Karger AG, Basel.

  5. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of the eye-gaze input platform. The proposed web browsing system uses direct indicator selection, with indicators categorized by their function and organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on the web page, such as hyperlinks, radio buttons, and edit boxes. It stores these locations so that the mouse cursor can skip directly to a candidate input object, enabling web browsing at a faster pace.
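    The cursor-skipping behaviour described above amounts to snapping the pointer to the selectable page object nearest the current gaze estimate. A minimal sketch of that idea, with hypothetical object coordinates (not the authors' implementation):

    ```python
    import math

    def snap_cursor(gaze_xy, objects):
        """Return the label of the selectable object nearest the gaze estimate.
        `objects` maps a label to the centre (x, y) of a hyperlink, radio
        button, edit box, etc. collected from the page layout."""
        gx, gy = gaze_xy
        return min(objects, key=lambda k: math.hypot(objects[k][0] - gx,
                                                     objects[k][1] - gy))

    # Hypothetical link positions (pixels) on a rendered page
    links = {"home": (40, 20), "search": (400, 20), "login": (760, 20)}
    target = snap_cursor((420, 35), links)  # -> "search"
    ```

    Snapping to the nearest stored object means a coarse, noisy gaze estimate is still sufficient to select small targets such as hyperlinks.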

  6. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    ERIC Educational Resources Information Center

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  7. An eye model for uncalibrated eye gaze estimation under variable head pose

    NASA Astrophysics Data System (ADS)

    Hnatow, Justin; Savakis, Andreas

    2007-04-01

    Gaze estimation is an important component of computer vision systems that monitor human activity for surveillance, human-computer interaction, and various other applications including iris recognition. Gaze estimation methods are particularly valuable when they are non-intrusive, do not require calibration, and generalize well across users. This paper presents a novel eye model that is employed for efficiently performing uncalibrated eye gaze estimation. The proposed eye model was constructed from a geometric simplification of the eye and anthropometric data about eye feature sizes in order to circumvent the requirement of calibration procedures for each individual user. The positions of the two eye corners and the midpupil, the distance between the two eye corners, and the radius of the eye sphere are required for gaze angle calculation. The locations of the eye corners and midpupil are estimated by image processing after eye detection, and the remaining parameters are obtained from anthropometric data. This eye model is easily extended to estimating eye gaze under variable head pose. The eye model was tested on still images of subjects at frontal pose (0°) and side pose (34°). An upper bound of the model's performance was obtained by manually selecting the eye feature locations. The resulting average absolute error was 2.98° for frontal pose and 2.87° for side pose. The error was consistent across subjects, which indicates that good generalization was obtained. This level of performance compares well with other gaze estimation systems that utilize a calibration procedure to measure eye features.
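    The model described above can be read as a simple trigonometric relation: the midpupil's offset from the centre of the eye sphere, scaled by the anthropometric eye radius, gives the sine of the horizontal gaze angle. A minimal sketch under that reading (all values hypothetical; the paper's exact formulation may differ):

    ```python
    import math

    def gaze_angle_deg(corner_left_x, corner_right_x, midpupil_x, eye_radius):
        """Horizontal gaze angle from a simplified spherical eye model.
        The eye centre is taken as the midpoint of the two eye corners;
        the pupil's horizontal offset from that centre, divided by the
        eye radius (from anthropometric data), is the sine of the angle."""
        eye_center_x = (corner_left_x + corner_right_x) / 2.0
        offset = midpupil_x - eye_center_x
        return math.degrees(math.asin(offset / eye_radius))

    # Hypothetical pixel measurements: corners at x=100 and x=140,
    # pupil 6 px right of centre, eye-sphere radius ~12 px in the image.
    angle = gaze_angle_deg(100, 140, 126, 12.0)  # ≈ 30 degrees rightward
    ```

    Because the only per-user inputs are detected image features, no per-user calibration step is needed, which is the property the paper emphasizes.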

  8. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2014-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.

  9. Orienting to Eye Gaze and Face Processing

    ERIC Educational Resources Information Center

    Tipples, Jason

    2005-01-01

    The author conducted 7 experiments to examine possible interactions between orienting to eye gaze and specific forms of face processing. Participants classified a letter following either an upright or inverted face with averted, uninformative eye gaze. Eye gaze orienting effects were recorded for upright and inverted faces, irrespective of whether…

  10. Eye’m talking to you: speakers’ gaze direction modulates co-speech gesture processing in the right MTG

    PubMed Central

    Toni, Ivan; Hagoort, Peter; Kelly, Spencer D.; Özyürek, Aslı

    2015-01-01

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture. Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts. PMID:24652857

  11. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. PMID:28344501

  12. State-dependent sensorimotor processing: gaze and posture stability during simulated flight in birds.

    PubMed

    McArthur, Kimberly L; Dickman, J David

    2011-04-01

    Vestibular responses play an important role in maintaining gaze and posture stability during rotational motion. Previous studies suggest that these responses are state dependent, their expression varying with the environmental and locomotor conditions of the animal. In this study, we simulated an ethologically relevant state in the laboratory to study state-dependent vestibular responses in birds. We used frontal airflow to simulate gliding flight and measured pigeons' eye, head, and tail responses to rotational motion in darkness, under both head-fixed and head-free conditions. We show that both eye and head response gains are significantly higher during flight, thus enhancing gaze and head-in-space stability. We also characterize state-specific tail responses to pitch and roll rotation that would help to maintain body-in-space orientation during flight. These results demonstrate that vestibular sensorimotor processing is not fixed but depends instead on the animal's behavioral state.

  13. State-dependent sensorimotor processing: gaze and posture stability during simulated flight in birds

    PubMed Central

    McArthur, Kimberly L.

    2011-01-01

    Vestibular responses play an important role in maintaining gaze and posture stability during rotational motion. Previous studies suggest that these responses are state dependent, their expression varying with the environmental and locomotor conditions of the animal. In this study, we simulated an ethologically relevant state in the laboratory to study state-dependent vestibular responses in birds. We used frontal airflow to simulate gliding flight and measured pigeons' eye, head, and tail responses to rotational motion in darkness, under both head-fixed and head-free conditions. We show that both eye and head response gains are significantly higher during flight, thus enhancing gaze and head-in-space stability. We also characterize state-specific tail responses to pitch and roll rotation that would help to maintain body-in-space orientation during flight. These results demonstrate that vestibular sensorimotor processing is not fixed but depends instead on the animal's behavioral state. PMID:21307332

  14. Atypical Gaze Cueing Pattern in a Complex Environment in Individuals with ASD

    ERIC Educational Resources Information Center

    Zhao, Shuo; Uono, Shota; Yoshimura, Sayaka; Kubota, Yasutaka; Toichi, Motomi

    2017-01-01

    Clinically, social interaction, including gaze-triggered attention, has been reported to be impaired in autism spectrum disorder (ASD), but psychological studies have generally shown intact gaze-triggered attention in ASD. These studies typically examined gaze-triggered attention under simple environmental conditions. In real life, however, the…

  15. Gaze Behavior Consistency among Older and Younger Adults When Looking at Emotional Faces

    PubMed Central

    Chaby, Laurence; Hupont, Isabelle; Avril, Marie; Luherne-du Boullay, Viviane; Chetouani, Mohamed

    2017-01-01

    The identification of non-verbal emotional signals, and especially of facial expressions, is essential for successful social communication among humans. Previous research has reported an age-related decline in facial emotion identification, and argued for socio-emotional or aging-brain model explanations. However, perceptual differences in the gaze strategies that accompany facial emotion processing with advancing age remain under-explored. In this study, 22 young (22.2 years) and 22 older (70.4 years) adults were instructed to look at basic facial expressions while their gaze movements were recorded by an eye-tracker. Participants were then asked to identify each emotion, and the unbiased hit rate was applied as the performance measure. Gaze data were first analyzed using traditional measures of fixations over two preferential regions of the face (upper and lower areas) for each emotion. Then, to better capture core gaze changes with advancing age, spatio-temporal gaze behaviors were examined in more depth using data-driven analysis (dimension reduction, clustering). Results first confirmed that older adults performed worse than younger adults at identifying facial expressions, except for “joy” and “disgust,” and this was accompanied by a gaze preference toward the lower face. Interestingly, this phenomenon was maintained during the whole time course of stimulus presentation. More importantly, trials corresponding to older adults were more tightly clustered, suggesting that the gaze behavior patterns of older adults are more consistent than those of younger adults. This study demonstrates that, confronted with emotional faces, younger and older adults do not prioritize or ignore the same facial areas. Older adults mainly adopted a focused-gaze strategy, consisting of focusing only on the lower part of the face throughout the whole stimulus display time. This consistency may constitute a robust and distinctive “social signature” of emotional identification in aging. Younger adults, however, were more dispersed in terms of gaze behavior and used a more exploratory-gaze strategy, consisting of repeatedly visiting both facial areas. PMID:28450841
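    The unbiased hit rate used as the performance measure here is Wagner's (1993) statistic: squared hits divided by the product of the row total (stimuli of that emotion presented) and the column total (times that response was given), which corrects raw accuracy for response bias. A small illustration with a hypothetical confusion matrix:

    ```python
    def unbiased_hit_rate(confusion, emotion):
        """Wagner's (1993) unbiased hit rate for one emotion category:
        hits squared, divided by (stimuli presented x responses used)."""
        hits = confusion[emotion][emotion]
        presented = sum(confusion[emotion].values())                  # row total
        responded = sum(row[emotion] for row in confusion.values())   # column total
        return hits ** 2 / (presented * responded)

    # Hypothetical 2-emotion confusion matrix (rows = shown, cols = answered)
    conf = {"joy":     {"joy": 9, "disgust": 1},
            "disgust": {"joy": 3, "disgust": 7}}
    hu_joy = unbiased_hit_rate(conf, "joy")  # 81 / (10 * 12) = 0.675
    ```

    Note how the raw hit rate for "joy" (0.9) is pulled down because the "joy" response was over-used across the board.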

  16. Neural Mechanisms Underlying Conscious and Unconscious Gaze-Triggered Attentional Orienting in Autism Spectrum Disorder

    PubMed Central

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshimura, Sayaka; Toichi, Motomi

    2017-01-01

    Impaired joint attention represents the core clinical feature of autism spectrum disorder (ASD). Behavioral studies have suggested that gaze-triggered attentional orienting is intact in response to supraliminally presented eyes but impaired in response to subliminally presented eyes in individuals with ASD. However, the neural mechanisms underlying conscious and unconscious gaze-triggered attentional orienting remain unclear. We investigated this issue in ASD and typically developing (TD) individuals using event-related functional magnetic resonance imaging. The participants viewed cue stimuli of averted or straight eye gaze direction presented either supraliminally or subliminally and then localized a target. Reaction times were shorter when eye-gaze cues were directionally valid compared with when they were neutral under the supraliminal condition in both groups; the same pattern was found in the TD group but not the ASD group under the subliminal condition. The temporo–parieto–frontal regions showed stronger activation in response to averted eyes than to straight eyes in both groups under the supraliminal condition. The left amygdala was more activated while viewing averted vs. straight eyes in the TD group than in the ASD group under the subliminal condition. These findings provide an explanation for the neural mechanisms underlying the impairment in unconscious but not conscious gaze-triggered attentional orienting in individuals with ASD and suggest possible neurological and behavioral interventions to facilitate their joint attention behaviors. PMID:28701942

  17. "Avoiding or approaching eyes"? Introversion/extraversion affects the gaze-cueing effect.

    PubMed

    Ponari, Marta; Trojano, Luigi; Grossi, Dario; Conson, Massimiliano

    2013-08-01

    We investigated whether the extra-/introversion personality dimension can influence processing of others' eye gaze direction and emotional facial expression during a target detection task. On the basis of previous evidence showing that self-reported trait anxiety can affect gaze-cueing with emotional faces, we also verified whether trait anxiety can modulate the influence of intro-/extraversion on behavioral performance. Fearful, happy, angry or neutral faces, with either direct or averted gaze, were presented before the target appeared in spatial locations congruent or incongruent with stimuli's eye gaze direction. Results showed a significant influence of the intro-/extraversion dimension on the gaze-cueing effect for angry, happy, and neutral faces with averted gaze. Introverts did not show the gaze congruency effect when viewing angry expressions, but did so with happy and neutral faces; extraverts showed the opposite pattern. Importantly, the influence of intro-/extraversion on gaze-cueing was not mediated by trait anxiety. These findings demonstrated that personality differences can shape processing of interactions between relevant social signals.

  18. Culture, gaze and the neural processing of fear expressions

    PubMed Central

    Franklin, Robert G.; Rule, Nicholas O.; Freeman, Jonathan B.; Kveraga, Kestutis; Hadjikhani, Nouchine; Yoshikawa, Sakiko; Ambady, Nalini

    2010-01-01

    The direction of others’ eye gaze has important influences on how we perceive their emotional expressions. Here, we examined differences in neural activation to direct- versus averted-gaze fear faces as a function of culture of the participant (Japanese versus US Caucasian), culture of the stimulus face (Japanese versus US Caucasian), and the relation between the two. We employed a previously validated paradigm to examine differences in neural activation in response to rapidly presented direct- versus averted-fear expressions, finding clear evidence for a culturally determined role of gaze in the processing of fear. Greater neural responsivity was apparent to averted- versus direct-gaze fear in several regions related to face and emotion processing, including bilateral amygdalae, when posed on same-culture faces, whereas greater response to direct- versus averted-gaze fear was apparent in these same regions when posed on other-culture faces. We also found preliminary evidence for intercultural variation including differential responses across participants to Japanese versus US Caucasian stimuli, and to a lesser degree differences in how Japanese and US Caucasian participants responded to these stimuli. These findings reveal a meaningful role of culture in the processing of eye gaze and emotion, and highlight their interactive influences in neural processing. PMID:20019073

  19. Anxiety dissociates the adaptive functions of sensory and motor response enhancements to social threats

    PubMed Central

    El Zein, Marwa; Wyart, Valentin; Grèzes, Julie

    2015-01-01

    Efficient detection and reaction to negative signals in the environment is essential for survival. In social situations, these signals are often ambiguous and can imply different levels of threat for the observer, thereby making their recognition susceptible to contextual cues – such as gaze direction when judging facial displays of emotion. However, the mechanisms underlying such contextual effects remain poorly understood. By computational modeling of human behavior and electrical brain activity, we demonstrate that gaze direction enhances the perceptual sensitivity to threat-signaling emotions – anger paired with direct gaze, and fear paired with averted gaze. This effect arises simultaneously in ventral face-selective and dorsal motor cortices at 200 ms following face presentation, dissociates across individuals as a function of anxiety, and does not reflect increased attention to threat-signaling emotions. These findings reveal that threat tunes neural processing in fast, selective, yet attention-independent fashion in sensory and motor systems, for different adaptive purposes. DOI: http://dx.doi.org/10.7554/eLife.10274.001 PMID:26712157

  20. Neurons in the human amygdala encode face identity, but not gaze direction.

    PubMed

    Mormann, Florian; Niediek, Johannes; Tudusciuc, Oana; Quesada, Carlos M; Coenen, Volker A; Elger, Christian E; Adolphs, Ralph

    2015-11-01

    The amygdala is important for face processing, and direction of eye gaze is one of the most socially salient facial signals. Recording from over 200 neurons in the amygdala of neurosurgical patients, we found robust encoding of the identity of neutral-expression faces, but not of their direction of gaze. Processing of gaze direction may rely on a predominantly cortical network rather than the amygdala.

  1. ScreenMasker: An Open-source Gaze-contingent Screen Masking Environment.

    PubMed

    Orlov, Pavel A; Bednarik, Roman

    2016-09-01

    The moving-window paradigm, based on the gaze-contingent technique, is traditionally used in studies of the visual perceptual span. There is a strong demand for new environments that can be employed by non-technical researchers. We have developed an easy-to-use tool with a graphical user interface (GUI) allowing both execution and control of gaze-contingency studies. This work describes ScreenMasker, an environment for creating gaze-contingent textured displays used together with stimulus presentation software. ScreenMasker has an architecture that meets the requirements of low-latency real-time eye-movement experiments, and it provides a variety of settings and functions. Effective rendering times and performance are ensured by means of GPU processing under CUDA technology. Performance tests show ScreenMasker's latency to be 67-74 ms on a typical office computer, and about 25-28 ms on a high-end 144-Hz screen. ScreenMasker is an open-source system distributed under the GNU Lesser General Public License and is available at https://github.com/PaulOrlov/ScreenMasker .
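    ScreenMasker itself renders on the GPU via CUDA, but the moving-window idea it implements is easy to sketch: keep the pixels inside a window centred on the current gaze sample and mask everything else. A minimal NumPy illustration of the paradigm (not ScreenMasker's API):

    ```python
    import numpy as np

    def apply_moving_window(image, gaze_xy, radius):
        """Gaze-contingent moving window: keep the image inside a circular
        window centred on the current gaze sample, mask the rest to black.
        `image` is an (H, W) grayscale array; returns a masked copy."""
        h, w = image.shape
        ys, xs = np.ogrid[:h, :w]          # broadcastable pixel coordinates
        gx, gy = gaze_xy
        inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
        masked = np.zeros_like(image)
        masked[inside] = image[inside]
        return masked

    frame = np.full((480, 640), 200, dtype=np.uint8)   # uniform test frame
    out = apply_moving_window(frame, gaze_xy=(320, 240), radius=60)
    ```

    In a real experiment this update would have to run once per gaze sample, which is why the paper's latency figures (25-74 ms end to end) matter.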

  2. Electrocortical Reflections of Face and Gaze Processing in Children with Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Kemner, C.; Schuller, A-M.; Van Engeland, H.

    2006-01-01

    Background: Children with pervasive developmental disorder (PDD) show behavioral abnormalities in gaze and face processing, but recent studies have indicated that normal activation of face-specific brain areas in response to faces is possible in this group. It is not clear whether the brain activity related to gaze processing is also normal in…

  3. Frequency analysis of gaze points with CT colonography interpretation using eye gaze tracking system

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Shoko; Tamashiro, Wataru; Sato, Mitsuru; Okajima, Mika; Ogura, Toshihiro; Doi, Kunio

    2017-03-01

    It is important to investigate the eye-tracking gaze points of experts in order to assist trainees in understanding the image interpretation process. We investigated gaze points during CT colonography (CTC) interpretation and analyzed the difference in gaze points between experts and trainees. In this study, we attempted to understand how trainees can be improved to a level achieved by experts in viewing of CTC. We used an eye gaze point sensing system, Gazefinder (JVCKENWOOD Corporation, Tokyo, Japan), which detects the pupil point and corneal reflection point by dark-pupil eye tracking. This system provides gaze-point images and Excel file data. The subjects were radiological technologists either experienced or inexperienced in reading CTC. We performed observer studies in reading virtual pathology images and examined each observer's image interpretation process using gaze-point data. Furthermore, we examined eye-tracking frequency by using the Fast Fourier Transform (FFT). We were able to understand the difference in gaze points between experts and trainees by use of the frequency analysis. The results for the trainee showed a large amount of both high-frequency and low-frequency components; in contrast, both components for the expert were relatively low. Regarding the amount of eye movement in every 0.02 s, we found that the expert tended to interpret images slowly and calmly, whereas the trainee moved the eyes quickly and searched over wide areas. We can assess the difference in gaze points on CTC between experts and trainees by use of the eye gaze point sensing system and frequency analysis. The potential improvement in CTC interpretation for trainees can be evaluated by using gaze-point data.
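    The frequency analysis described here can be approximated by taking the amplitude spectrum of the frame-to-frame gaze displacement, sampled at the 0.02-s interval the abstract mentions. A sketch with synthetic data (the paper's exact pipeline is not specified; function and variable names are illustrative):

    ```python
    import numpy as np

    def gaze_frequency_spectrum(x, y, dt=0.02):
        """One-sided amplitude spectrum of frame-to-frame gaze displacement.
        x, y: gaze coordinates sampled every `dt` seconds (50 Hz here,
        matching the 0.02-s interval in the abstract)."""
        disp = np.hypot(np.diff(x), np.diff(y))          # movement per sample
        spectrum = np.abs(np.fft.rfft(disp - disp.mean()))  # drop DC offset
        freqs = np.fft.rfftfreq(disp.size, d=dt)
        return freqs, spectrum

    # Synthetic 2-s scan path oscillating horizontally at 3 Hz
    t = np.arange(0, 2, 0.02)
    x = 100 + 5 * np.sin(2 * np.pi * 3 * t)
    y = np.full_like(t, 50.0)
    freqs, spec = gaze_frequency_spectrum(x, y)
    ```

    The rectified displacement of a 3-Hz oscillation peaks near 6 Hz, so the spectrum's dominant bin localizes the tempo of eye movement; large high-frequency content would correspond to the trainee's rapid, wide-ranging scanning.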

  4. Gaze-cueing effect depends on facial expression of emotion in 9- to 12-month-old infants

    PubMed Central

    Niedźwiecka, Alicja; Tomalski, Przemysław

    2015-01-01

    Efficient processing of gaze direction and facial expression of emotion is crucial for early social and emotional development. Toward the end of the first year of life, infants begin to pay more attention to negative expressions, but it remains unclear to what extent emotion expression is processed jointly with gaze direction at this age. This study sought to establish the interactions of gaze direction and emotion expression in visual orienting in 9- to 12-month-olds. In particular, we tested whether these interactions can be explained by the negativity bias hypothesis and the shared signal hypothesis. We measured saccadic latencies in response to peripheral targets in a gaze-cueing paradigm with happy, angry, and fearful female faces. In the Pilot Experiment, three gaze directions were used (direct, congruent with target location, incongruent with target location). In the Main Experiment we sought to replicate the results of the Pilot Experiment using a simpler design without the direct gaze condition. In both experiments we found a robust gaze-cueing effect for happy faces, i.e., facilitation of orienting toward the target in the gaze-cued location, compared with the gaze-incongruent location. We found more rapid orienting to targets cued by happy relative to angry and fearful faces. We did not find any gaze-cueing effect for angry or fearful faces. These results are not consistent with the shared signal hypothesis. While our results show differential processing of positive and negative emotions, they do not support a general negativity bias. On the contrary, they indicate that toward the age of 12 months infants show a positivity bias in gaze-cueing tasks. PMID:25713555

  5. Intentional gaze shift to neglected space: a compensatory strategy during recovery after unilateral spatial neglect.

    PubMed

    Takamura, Yusaku; Imanishi, Maho; Osaka, Madoka; Ohmatsu, Satoko; Tominaga, Takanori; Yamanaka, Kentaro; Morioka, Shu; Kawashima, Noritaka

    2016-11-01

    Unilateral spatial neglect is a common neurological syndrome following predominantly right hemispheric stroke. While most patients lack insight into their neglect behaviour and do not initiate compensatory behaviours in the early recovery phase, some patients recognize it and start to pay attention towards the neglected space. We aimed to characterize visual attention capacity in patients with unilateral spatial neglect with specific focus on cortical processes underlying compensatory gaze shift towards the neglected space during the recovery process. Based on the Behavioural Inattention Test score and the presence or absence of experience of neglect in daily life from stroke onset to the enrolment date, participants were divided into USN++ (do not compensate, n = 15), USN+ (compensate, n = 10), and right hemisphere damage (no neglect, n = 24) groups. The patients participated in eye pursuit-based choice reaction tasks and were asked to pursue one of five horizontally located circular objects flashed on a computer display. The task consisted of 25 trials with 4-s intervals, and the order of highlighted objects was randomly determined. From the recorded eye tracking data, eye movement onset and gaze shift were calculated. To elucidate the cortical mechanism underlying the behavioural results, electroencephalogram activity was recorded in three USN++, 13 USN+, and eight right hemisphere damage patients. We found that while lower Behavioural Inattention Test scoring patients (USN++) showed gaze shift to non-neglected space, some higher scoring patients (USN+) showed clear leftward gaze shift at visual stimuli onset. Moreover, we found a significant correlation between Behavioural Inattention Test score and gaze shift extent in the unilateral spatial neglect group (r = -0.62, P < 0.01). Electroencephalography data clearly demonstrated that the extent of increase in theta power in the frontal cortex strongly correlated with the leftward gaze shift extent in the USN++ and USN+ groups. Our results revealed a compensatory strategy (continuous attention to the neglected space) and its neural correlates in patients with unilateral spatial neglect. In conclusion, patients with unilateral spatial neglect who recognized their own neglect behaviour intentionally focused on the neglected space as a compensatory strategy to avoid careless oversight. © The Author (2016). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Right hemispheric dominance in gaze-triggered reflexive shift of attention in humans.

    PubMed

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-11-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects, measuring reaction time (RT). A face identification task was also given to determine hemispheric dominance in face processing for each subject. RT differences between valid and invalid cues were larger when presented in the left rather than the right visual field. This held true regardless of individual hemispheric dominance in face processing. Together, these results indicate right hemispheric dominance in gaze-triggered reflexive shifts of attention in normal healthy subjects.
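
    The dependent measure in this paradigm, the cueing effect, is simply the difference in mean reaction time between invalidly and validly cued trials. A minimal sketch, with made-up RT values purely for illustration (not data from this study):

```python
from statistics import mean

def cueing_effect(rt_valid, rt_invalid):
    """Gaze-cueing effect in ms: mean RT on invalid-cue trials minus valid-cue trials."""
    return mean(rt_invalid) - mean(rt_valid)

# hypothetical per-visual-field RTs (ms); a larger left-field effect would be
# consistent with right hemispheric dominance
left_effect = cueing_effect(rt_valid=[310, 305, 298], rt_invalid=[340, 352, 336])
right_effect = cueing_effect(rt_valid=[312, 307, 301], rt_invalid=[322, 318, 325])
```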

  7. Sustained attention in language production: an individual differences investigation.

    PubMed

    Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S

    2015-01-01

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.

  8. Is gaze following purely reflexive or goal-directed instead? Revisiting the automaticity of orienting attention by gaze cues.

    PubMed

    Ricciardelli, Paola; Carcagno, Samuele; Vallar, Giuseppe; Bricolo, Emanuela

    2013-01-01

    Distracting gaze has been shown to elicit automatic gaze following. However, it is still debated whether the effects of perceived gaze are a simple automatic spatial orienting response or are instead sensitive to the context (i.e. goals and task demands). In three experiments, we investigated the conditions under which gaze following occurs. Participants were instructed to saccade towards one of two lateral targets. A face distracter, always present in the background, could gaze towards: (a) a task-relevant target--("matching" goal-directed gaze shift)--congruent or incongruent with the instructed direction, (b) a task-irrelevant target, orthogonal to the one instructed ("non-matching" goal-directed gaze shift), or (c) an empty spatial location (no-goal-directed gaze shift). Eye movement recordings showed faster saccadic latencies in correct trials in congruent conditions especially when the distracting gaze shift occurred before the instruction to make a saccade. Interestingly, while participants made a higher proportion of gaze-following errors (i.e. errors in the direction of the distracting gaze) in the incongruent conditions when the distracter's gaze shift preceded the instruction onset indicating an automatic gaze following, they never followed the distracting gaze when it was directed towards an empty location or a stimulus that was never the target. Taken together, these findings suggest that gaze following is likely to be a product of both automatic and goal-driven orienting mechanisms.

  9. Influence of Eye Gaze on Spoken Word Processing: An ERP Study with Infants

    ERIC Educational Resources Information Center

    Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Friederici, Angela D.

    2011-01-01

    Eye gaze is an important communicative signal, both as mutual eye contact and as referential gaze to objects. To examine whether attention to speech versus nonspeech stimuli in 4- to 5-month-olds (n = 15) varies as a function of eye gaze, event-related brain potentials were used. Faces with mutual or averted gaze were presented in combination with…

  10. Gaze Direction Detection in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Forgeot d'Arc, Baudouin; Delorme, Richard; Zalla, Tiziana; Lefebvre, Aline; Amsellem, Frédérique; Moukawane, Sanaa; Letellier, Laurence; Leboyer, Marion; Mouren, Marie-Christine; Ramus, Franck

    2017-01-01

    Detecting where our partners direct their gaze is an important aspect of social interaction. An atypical gaze processing has been reported in autism. However, it remains controversial whether children and adults with autism spectrum disorder interpret indirect gaze direction with typical accuracy. This study investigated whether the detection of…

  11. Altered attentional and perceptual processes as indexed by N170 during gaze perception in schizophrenia: Relationship with perceived threat and paranoid delusions.

    PubMed

    Tso, Ivy F; Calwas, Anita M; Chun, Jinsoo; Mueller, Savanna A; Taylor, Stephan F; Deldin, Patricia J

    2015-08-01

    Using gaze information to orient attention and guide behavior is critical to social adaptation. Previous studies have suggested that abnormal gaze perception in schizophrenia (SCZ) may originate in abnormal early attentional and perceptual processes and may be related to paranoid symptoms. Using event-related brain potentials (ERPs), this study investigated altered early attentional and perceptual processes during gaze perception and their relationship to paranoid delusions in SCZ. Twenty-eight individuals with SCZ or schizoaffective disorder and 32 demographically matched healthy controls (HCs) completed a gaze-discrimination task with face stimuli varying in gaze direction (direct, averted), head orientation (forward, deviated), and emotion (neutral, fearful). ERPs were recorded during the task. Participants rated experienced threat from each face after the task. Participants with SCZ were as accurate as, though slower than, HCs on the task. Participants with SCZ displayed enlarged N170 responses over the left hemisphere to averted gaze presented in fearful relative to neutral faces, indicating a heightened encoding sensitivity to faces signaling external threat. This abnormality was correlated with increased perceived threat and paranoid delusions. Participants with SCZ also showed a reduction of N170 modulation by head orientation (normally increased amplitude to deviated faces relative to forward faces), suggesting less integration of contextual cues of head orientation in gaze perception. The psychophysiological deviations observed during gaze discrimination in SCZ underscore the role of early attentional and perceptual abnormalities in social information processing and paranoid symptoms of SCZ. (c) 2015 APA, all rights reserved.

  12. New perspectives in gaze sensitivity research.

    PubMed

    Davidson, Gabrielle L; Clayton, Nicola S

    2016-03-01

    Attending to where others are looking is thought to be of great adaptive benefit for animals when avoiding predators and interacting with group members. Many animals have been reported to respond to the gaze of others, by co-orienting their gaze with group members (gaze following) and/or responding fearfully to the gaze of predators or competitors (i.e., gaze aversion). Much of the literature has focused on the cognitive underpinnings of gaze sensitivity, namely whether animals have an understanding of the attention and visual perspectives in others. Yet there remain several unanswered questions regarding how animals learn to follow or avoid gaze and how experience may influence their behavioral responses. Many studies on the ontogeny of gaze sensitivity have shed light on how and when gaze abilities emerge and change across development, indicating the necessity to explore gaze sensitivity when animals are exposed to additional information from their environment as adults. Gaze aversion may be dependent upon experience and proximity to different predator types, other cues of predation risk, and the salience of gaze cues. Gaze following in the context of information transfer within social groups may also be dependent upon experience with group-members; therefore we propose novel means to explore the degree to which animals respond to gaze in a flexible manner, namely by inhibiting or enhancing gaze following responses. We hope this review will stimulate gaze sensitivity research to expand beyond the narrow scope of investigating underlying cognitive mechanisms, and to explore how gaze cues may function to communicate information other than attention.

  13. The Development of Emotional Face and Eye Gaze Processing

    ERIC Educational Resources Information Center

    Hoehl, Stefanie; Striano, Tricia

    2010-01-01

    Recent research has demonstrated that infants' attention towards novel objects is affected by an adult's emotional expression and eye gaze toward the object. The current event-related potential (ERP) study investigated how infants at 3, 6, and 9 months of age process fearful compared to neutral faces looking toward objects or averting gaze away…

  14. Women gaze behaviour in assessing female bodies: the effects of clothing, body size, own body composition and body satisfaction.

    PubMed

    Cundall, Amelia; Guo, Kun

    2017-01-01

    Previous studies, often using minimally clothed figures depicting extreme body sizes, have shown that women tend to gaze at evolutionary determinants of attractiveness when viewing female bodies, possibly for self-evaluation purposes, and that their gaze distribution is modulated by their own level of body dissatisfaction. To explore to what extent women's body-viewing gaze behaviour is affected by clothing type, dress size, subjective measurements of regional body satisfaction and objective measurements of own body composition (e.g., chest size, body mass index, waist-to-hip ratio), in this self-paced body attractiveness and body size judgement experiment, we compared healthy, young women's gaze distributions when viewing female bodies in tight and loose clothing of different dress sizes. In contrast to tight clothing, loose clothing biased gaze away from the waist-hip to the leg region, and subsequently led to enhanced body attractiveness ratings and body size underestimation for larger female bodies, indicating the important role of clothing in mediating women's body perception. When viewing preferred female bodies, women's higher satisfaction of a specific body region was associated with an increased gaze towards neighbouring body areas, implying satisfaction might reduce the need for comparison of confident body parts; furthermore, undesirable body composition measurements were correlated with a gaze avoidance process if the construct was less changeable (i.e. chest size) but with a gaze comparison process if the region was more changeable (i.e. body mass index, dress size). Clearly, own body satisfaction and body composition measurements had an evident impact on women's body-viewing gaze allocation, possibly through different cognitive processes.

  15. Perceiving crowd attention: Gaze following in human crowds with conflicting cues.

    PubMed

    Sun, Zhongqiang; Yu, Wenjun; Zhou, Jifan; Shen, Mowei

    2017-05-01

    People automatically redirect their visual attention by following others' gaze orientation, a phenomenon called "gaze following." This is an evolutionarily generated socio-cognitive process that provides people with information about their environments. Often, however, people in crowds can have rather different gaze orientations. This study investigated how gaze following occurs in situations with many conflicting gazes. In two experiments, we modified the gaze cueing paradigm to use a crowd rather than a single individual. Specifically, participants were presented with a group of human avatars with differing gaze orientations, and the target appeared randomly on the left or right side of a display. We found that (a) when a marked difference existed in the number of avatars with divergent gaze orientations, participants automatically followed the majority's gaze orientation, and (b) the strongest gaze cue effect occurred when all gazes shared the same orientation, with the response superiority of the majority's oriented location monotonically diminishing with the number of gazes with divergent orientations. These findings suggested that the majority rule plays a role in gaze following behavior when individuals are confronted with conflicting multigaze scenes, and that an increasing subgroup size appears to enlarge the strength of the gaze cueing effect.

  16. Watching Eyes effects: When others meet the self.

    PubMed

    Conty, Laurence; George, Nathalie; Hietanen, Jari K

    2016-10-01

    The perception of direct gaze-that is, of another individual's gaze directed at the observer-is known to influence a wide range of cognitive processes and behaviors. We present a new theoretical proposal to provide a unified account of these effects. We argue that direct gaze first captures the beholder's attention and then triggers self-referential processing, i.e., a heightened processing of stimuli in relation with the self. Self-referential processing modulates incoming information processing and leads to the Watching Eyes effects, which we classify into four main categories: the enhancement of self-awareness, memory effects, the activation of pro-social behavior, and positive appraisals of others. We advance that the belief to be the object of another's attention is embedded in direct gaze perception and gives direct gaze its self-referential power. Finally, we stress that the Watching Eyes effects reflect a positive impact on human cognition; therefore, they may have a therapeutic potential, which future research should delineate. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Gaze Compensation as a Technique for Improving Hand–Eye Coordination in Prosthetic Vision

    PubMed Central

    Titchener, Samuel A.; Shivdasani, Mohit N.; Fallon, James B.; Petoe, Matthew A.

    2018-01-01

    Purpose Shifting the region-of-interest within the input image to compensate for gaze shifts (“gaze compensation”) may improve hand–eye coordination in visual prostheses that incorporate an external camera. The present study investigated the effects of eye movement on hand-eye coordination under simulated prosthetic vision (SPV), and measured the coordination benefits of gaze compensation. Methods Seven healthy-sighted subjects performed a target localization-pointing task under SPV. Three conditions were tested, modeling: retinally stabilized phosphenes (uncompensated); gaze compensation; and no phosphene movement (center-fixed). The error in pointing was quantified for each condition. Results Gaze compensation yielded a significantly smaller pointing error than the uncompensated condition for six of seven subjects, and a similar or smaller pointing error than the center-fixed condition for all subjects (two-way ANOVA, P < 0.05). Pointing error eccentricity and gaze eccentricity were moderately correlated in the uncompensated condition (azimuth: R2 = 0.47; elevation: R2 = 0.51) but not in the gaze-compensated condition (azimuth: R2 = 0.01; elevation: R2 = 0.00). Increased variability in gaze at the time of pointing was correlated with greater reduction in pointing error in the center-fixed condition compared with the uncompensated condition (R2 = 0.64). Conclusions Eccentric eye position impedes hand–eye coordination in SPV. While limiting eye eccentricity in uncompensated viewing can reduce errors, gaze compensation is effective in improving coordination for subjects unable to maintain fixation. Translational Relevance The results highlight the present necessity for suppressing eye movement and support the use of gaze compensation to improve hand–eye coordination and localization performance in prosthetic vision. PMID:29321945
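
    The idea of gaze compensation, shifting the camera's region-of-interest by the current gaze eccentricity so that the presented image stays aligned with where the eye points, can be sketched as follows. The function name, ROI size, and pixels-per-degree scale are illustrative assumptions, not details of the authors' system.

```python
import numpy as np

def compensated_roi(frame, gaze_az_deg, gaze_el_deg, roi_size=64, px_per_deg=10.0):
    """Crop the region-of-interest from a camera frame, shifted by gaze eccentricity.

    frame             : 2-D array, the external camera image
    gaze_az/el_deg    : gaze direction relative to straight ahead, in degrees
    px_per_deg        : assumed camera-to-gaze scale factor (illustrative value)
    """
    h, w = frame.shape
    cy = h // 2 + int(round(gaze_el_deg * px_per_deg))
    cx = w // 2 + int(round(gaze_az_deg * px_per_deg))
    half = roi_size // 2
    # clamp so the ROI stays inside the frame
    cy = min(max(cy, half), h - half)
    cx = min(max(cx, half), w - half)
    return frame[cy - half:cy + half, cx - half:cx + half]
```

    In the uncompensated (retinally stabilized) condition, the crop would instead always be taken at the frame centre regardless of gaze.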

  18. Gazing at me: the importance of social meaning in understanding direct-gaze cues

    PubMed Central

    Hamilton, Antonia F. de C.

    2016-01-01

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. PMID:26644598

  19. Right Hemispheric Dominance in Gaze-Triggered Reflexive Shift of Attention in Humans

    ERIC Educational Resources Information Center

    Okada, Takashi; Sato, Wataru; Toichi, Motomi

    2006-01-01

    Recent findings suggest a right hemispheric dominance in gaze-triggered shifts of attention. The aim of this study was to clarify the dominant hemisphere in the gaze processing that mediates attentional shift. A target localization task, with preceding non-predictive gaze cues presented to each visual field, was undertaken by 44 healthy subjects,…

  20. Gaze Step Distributions Reflect Fixations and Saccades: A Comment on Stephen and Mirman (2010)

    ERIC Educational Resources Information Center

    Bogartz, Richard S.; Staub, Adrian

    2012-01-01

    In three experimental tasks Stephen and Mirman (2010) measured gaze steps, the distance in pixels between gaze positions on successive samples from an eyetracker. They argued that the distribution of gaze steps is best fit by the lognormal distribution, and based on this analysis they concluded that interactive cognitive processes underlie eye…
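
    The analysis at issue, fitting candidate distributions to gaze-step sizes and comparing their fit, can be sketched with scipy. The data below are synthetic and the procedure is a generic maximum-likelihood comparison, not a reconstruction of either paper's analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic "gaze steps": pixel distances between successive eyetracker samples,
# drawn from a lognormal purely for illustration
steps = rng.lognormal(mean=1.0, sigma=0.5, size=5000)

# maximum-likelihood lognormal fit with location fixed at zero;
# the fitted shape parameter estimates sigma
shape, loc, scale = stats.lognorm.fit(steps, floc=0)

# compare against a gamma fit by total log-likelihood (higher = better fit)
ll_lognorm = stats.lognorm.logpdf(steps, shape, loc, scale).sum()
g = stats.gamma.fit(steps, floc=0)
ll_gamma = stats.gamma.logpdf(steps, *g).sum()
```

    On real data, the point of contention is interpretation rather than mechanics: a mixture of two processes (fixational drift and saccades) can still yield a good single-distribution fit, which is essentially the comment's argument.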

  1. Loneliness and Hypervigilance to Social Cues in Females: An Eye-Tracking Study

    PubMed Central

    Lodder, Gerine M. A.; Scholte, Ron H. J.; Clemens, Ivar A. H.; Engels, Rutger C. M. E.; Goossens, Luc; Verhagen, Maaike

    2015-01-01

    The goal of the present study was to examine whether lonely individuals differ from nonlonely individuals in their overt visual attention to social cues. Previous studies showed that loneliness was related to biased post-attentive processing of social cues (e.g., negative interpretation bias), but research on whether lonely and nonlonely individuals also show differences in an earlier information processing stage (gazing behavior) is very limited. A sample of 25 lonely and 25 nonlonely students took part in an eye-tracking study consisting of four tasks. We measured gazing (duration, number of fixations and first fixation) at the eyes, nose and mouth region of faces expressing emotions (Task 1), at emotion quadrants (anger, fear, happiness and neutral expression) (Task 2), at quadrants with positive and negative social and nonsocial images (Task 3), and at the facial area of actors in video clips with positive and negative content (Task 4). In general, participants tended to gaze most often and longest at areas that conveyed most social information, such as the eye region of the face (T1), and social images (T3). Participants gazed most often and longest at happy faces (T2) in still images, and more often and longer at the facial area in negative than in positive video clips (T4). No differences occurred between lonely and nonlonely participants in their gazing times and frequencies, nor in their first fixations on social cues in the four different tasks. Based on this study, we found no evidence that overt visual attention to social cues differs between lonely and nonlonely individuals. This implies that biases in social information processing of lonely individuals may be limited to other phases of social information processing. Alternatively, biased overt attention to social cues may only occur under specific conditions, for specific stimuli or for specific lonely individuals. PMID:25915656

  2. Finding the Right Fit: A Comparison of Process Assumptions Underlying Popular Drift-Diffusion Models

    ERIC Educational Resources Information Center

    Ashby, Nathaniel J. S.; Jekel, Marc; Dickert, Stephan; Glöckner, Andreas

    2016-01-01

    Recent research makes increasing use of eye-tracking methodologies to generate and test process models. Overall, such research suggests that attention, generally indexed by fixations (gaze duration), plays a critical role in the construction of preference, although the methods used to support this supposition differ substantially. In two studies…
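
    A drift-diffusion model of the kind compared in this literature can be illustrated with a minimal simulation: noisy evidence accumulates at a constant drift rate until it reaches an upper or lower decision boundary. Parameter names and values below are illustrative only, not any specific model variant from the record.

```python
import numpy as np

def simulate_ddm(drift, boundary, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial.

    Evidence starts at zero and accumulates with rate `drift` plus Gaussian
    noise until it crosses +boundary (choice 1) or -boundary (choice 0).
    Returns (choice, reaction_time_in_seconds); choice is None on timeout.
    """
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    sqrt_dt = np.sqrt(dt)
    while t < max_t:
        x += drift * dt + noise * sqrt_dt * rng.standard_normal()
        t += dt
        if x >= boundary:
            return 1, t
        if x <= -boundary:
            return 0, t
    return None, max_t
```

    Attention-weighted variants, e.g. letting `drift` depend on which option is currently fixated, are one way eye-tracking data enter such models.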

  3. Beliefs about human agency influence the neural processing of gaze during joint attention.

    PubMed

    Caruana, Nathan; de Lissa, Peter; McArthur, Genevieve

    2017-04-01

    The current study measured adults' P350 and N170 ERPs while they interacted with a character in a virtual reality paradigm. Some participants believed the character was controlled by a human ("avatar" condition, n = 19); others believed it was controlled by a computer program ("agent" condition, n = 19). In each trial, participants initiated joint attention in order to direct the character's gaze toward a target. In 50% of trials, the character gazed toward the target (congruent responses), and in 50% of trials the character gazed to a different location (incongruent response). In the avatar condition, the character's incongruent gaze responses generated significantly larger P350 peaks at centro-parietal sites than congruent gaze responses. In the agent condition, the P350 effect was strikingly absent. Left occipitotemporal N170 responses were significantly smaller in the agent condition compared to the avatar condition for both congruent and incongruent gaze shifts. These data suggest that beliefs about human agency may recruit mechanisms that discriminate the social outcome of a gaze shift after approximately 350 ms, and that these mechanisms may modulate the early perceptual processing of gaze. These findings also suggest that the ecologically valid measurement of social cognition may depend upon paradigms that simulate genuine social interactions.

  4. Perceived Gaze Direction Modulates Neural Processing of Prosocial Decision Making

    PubMed Central

    Sun, Delin; Shao, Robin; Wang, Zhaoxin; Lee, Tatia M. C.

    2018-01-01

    Gaze direction is a common social cue implying potential interpersonal interaction. However, little is known about the neural processing of social decision making influenced by perceived gaze direction. Here, we employed functional magnetic resonance imaging (fMRI) to investigate 27 females while they engaged in an economic exchange game task during which photos of direct or averted eye gaze were shown. We found that, when averted but not direct gaze was presented, prosocial vs. selfish choices were associated with stronger activations in the right superior temporal gyrus (STG) as well as larger functional couplings between right STG and the posterior cingulate cortex (PCC). Moreover, stronger activation in the right STG was associated with quicker actions for making a prosocial choice accompanied by averted gaze. The findings suggest that, when the cue implying social contact is absent, the processing of understanding others’ intention and the relationship between self and others is more involved for making prosocial than selfish decisions. These findings could advance our understanding of the roles of subtle cues in influencing prosocial decision making, and shed light on deficient social cue processing and functioning among individuals with autism spectrum disorder (ASD). PMID:29487516

  5. A neural-based remote eye gaze tracker under natural head motion.

    PubMed

    Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso

    2008-10-01

    A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
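
    The kind of non-linear gaze mapping such a system learns can be sketched with a tiny network: a one-hidden-layer regressor trained by gradient descent to map eye features to screen coordinates. Everything below (feature dimensionality, layer size, the synthetic calibration data) is an illustrative assumption, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic calibration set: 2-D "eye features" -> 2-D screen coordinates,
# with a mild non-linearity standing in for free-head gaze mapping
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = np.column_stack([np.tanh(1.5 * X[:, 0]), X[:, 1] + 0.3 * X[:, 0] ** 2])

# one hidden layer of 16 tanh units, trained by full-batch gradient descent
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # hidden activations
    P = H @ W2 + b2                    # predicted screen coordinates
    E = P - Y                          # prediction error
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)   # backpropagate through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float((E ** 2).mean())           # training error from the last forward pass
```

    In a real tracker, calibration points shown on the 15-zone interface would supply (feature, target) pairs in place of the synthetic data.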

  6. Gazing at me: the importance of social meaning in understanding direct-gaze cues.

    PubMed

    de C Hamilton, Antonia F

    2016-01-19

    Direct gaze is an engaging and important social cue, but the meaning of direct gaze depends heavily on the surrounding context. This paper reviews some recent studies of direct gaze, to understand more about what neural and cognitive systems are engaged by this social cue and why. The data show that gaze can act as an arousal cue and can modulate actions, and can activate brain regions linked to theory of mind and self-related processing. However, all these results are strongly modulated by the social meaning of a gaze cue and by whether participants believe that another person is really watching them. The implications of these contextual effects and audience effects for our theories of gaze are considered. © 2015 The Author(s).

  7. Temporal dynamics underlying the modulation of social status on social attention.

    PubMed

    Dalmaso, Mario; Galfano, Giovanni; Coricelli, Carol; Castelli, Luigi

    2014-01-01

    Fixating someone who suddenly shifts their eyes is known to trigger a corresponding shift of attention in the observer. This phenomenon, known as the gaze-cueing effect, can be modulated as a function of the social status of the individual depicted in the cueing face. Here, in two experiments, we investigated the temporal dynamics underlying this modulation. To this end, a gaze-cueing paradigm was implemented in which centrally-placed faces depicting high- and low-status individuals suddenly shifted the eyes towards a location either spatially congruent or incongruent with that occupied by a subsequent target stimulus. Social status was manipulated by presenting fictitious curricula vitae before the experimental phase. In Experiment 1, in which two temporal intervals (50 ms vs. 900 ms) occurred between the direct-gaze face and the averted-gaze face onsets, a stronger gaze-cueing effect in response to high-status faces than low-status faces was observed, irrespective of the time participants were allowed for extracting social information. In Experiment 2, in which two temporal intervals (200 ms vs. 1000 ms) occurred between the averted-gaze face and target onset, a stronger gaze cueing for high-status faces was observed at the shorter interval only. Taken together, these results suggest that information regarding social status is extracted from faces rapidly (Experiment 1), and that the tendency to selectively attend to the locations gazed at by high-status individuals may decay with time (Experiment 2).

  8. Shared mechanism for emotion processing in adolescents with and without autism

    PubMed Central

    Ioannou, Christina; Zein, Marwa El; Wyart, Valentin; Scheid, Isabelle; Amsellem, Frédérique; Delorme, Richard; Chevallier, Coralie; Grèzes, Julie

    2017-01-01

    Although the quest to understand emotional processing in individuals with Autism Spectrum Disorders (ASD) has led to an impressive number of studies, the picture that emerges from this research remains inconsistent. Some studies find that Typically Developing (TD) individuals outperform those with ASD in emotion recognition tasks; others find no such difference. In this paper, we move beyond focusing on potential group differences in behaviour to answer what we believe is a more pressing question: do individuals with ASD use the same mechanisms to process emotional cues? To this end, we rely on model-based analyses of participants’ accuracy during an emotion categorisation task in which displays of anger and fear are paired with direct vs. averted gaze. Behavioural data of 20 ASD and 20 TD adolescents revealed that the ASD group displayed lower overall performance. Yet, gaze direction had a similar impact on emotion categorisation in both groups, i.e. improved accuracy for salient combinations (anger-direct, fear-averted). Critically, computational modelling of participants’ behaviour reveals that the same mechanism, i.e. increased perceptual sensitivity, underlies the contextual impact of gaze in both groups. We discuss the specific experimental conditions that may favour emotion processing and the automatic integration of contextual information in ASD. PMID:28218248

  9. Attention to gaze and emotion in schizophrenia.

    PubMed

    Schwartz, Barbara L; Vaidya, Chandan J; Howard, James H; Deutsch, Stephen I

    2010-11-01

    Individuals with schizophrenia have difficulty interpreting social and emotional cues such as facial expression, gaze direction, body position, and voice intonation. Nonverbal cues are powerful social signals but are often processed implicitly, outside the focus of attention. The aim of this research was to assess implicit processing of social cues in individuals with schizophrenia. Patients with schizophrenia or schizoaffective disorder and matched controls performed a primary task of word classification with social cues in the background. Participants were asked to classify target words (LEFT/RIGHT) by pressing a key that corresponded to the word, in the context of facial expressions with eye gaze averted to the left or right. Although facial expression and gaze direction were irrelevant to the task, these facial cues influenced word classification performance. Participants were slower to classify target words (e.g., LEFT) that were incongruent to gaze direction (e.g., eyes averted to the right) compared to target words (e.g., LEFT) that were congruent to gaze direction (e.g., eyes averted to the left), but this only occurred for expressions of fear. This pattern did not differ for patients and controls. The results showed that threat-related signals capture the attention of individuals with schizophrenia. These data suggest that implicit processing of eye gaze and fearful expressions is intact in schizophrenia. (c) 2010 APA, all rights reserved

  10. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    PubMed Central

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  11. When art moves the eyes: a behavioral and eye-tracking study.

    PubMed

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.

  12. Looking at Eye Gaze Processing and Its Neural Correlates in Infancy--Implications for Social Development and Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Hoehl, Stefanie; Reid, Vincent M.; Parise, Eugenio; Handl, Andrea; Palumbo, Letizia; Striano, Tricia

    2009-01-01

    The importance of eye gaze as a means of communication is indisputable. However, there is debate about whether there is a dedicated neural module, which functions as an eye gaze detector and when infants are able to use eye gaze cues in a referential way. The application of neuroscience methodologies to developmental psychology has provided new…

  13. The mere exposure effect and recognition depend on the way you look!

    PubMed

    Willems, Sylvie; Dedonder, Jonathan; Van der Linden, Martial

    2010-01-01

    In line with Whittlesea and Price (2001), we investigated whether the memory effect measured with an implicit memory paradigm (the mere exposure effect) and with an explicit recognition task depended on perceptual processing strategies, regardless of whether the task required intentional retrieval. We found that a manipulation intended to prompt a functional implicit-explicit dissociation no longer had a differential effect when we induced similar perceptual strategies in both tasks. Indeed, the results showed that prompting a nonanalytic strategy ensured performance above chance on both tasks. Conversely, inducing an analytic strategy drastically decreased both explicit and implicit performance. Furthermore, we noted that the nonanalytic strategy involved less extensive gaze scanning than the analytic strategy and that memory effects under this processing strategy were largely independent of gaze movement.

  14. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or more specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-corneal reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
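The pipeline in this record (a gaze feature vector classified by Gaussian mixture models trained offline on labelled data) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the feature values are synthetic stand-ins for PCCR vectors, and scikit-learn's `GaussianMixture` is assumed as the GMM implementation, with one model fit per labelled gaze behaviour.

```python
# Hypothetical sketch: per-class GMMs for gaze-behaviour classification.
# Synthetic 2-D features stand in for real PCCR vectors extracted from
# eye images; class 0 = fixating the acuity target, class 1 = elsewhere.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X_fix = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(200, 2))
X_away = rng.normal(loc=[1.0, 0.5], scale=0.3, size=(200, 2))

# Offline training step: fit one GMM to each labelled class.
gmm_fix = GaussianMixture(n_components=2, random_state=0).fit(X_fix)
gmm_away = GaussianMixture(n_components=2, random_state=0).fit(X_away)

def classify(x):
    """Assign a feature vector to the class whose GMM gives it
    the higher log-likelihood."""
    x = np.atleast_2d(x)
    return int(gmm_away.score_samples(x) > gmm_fix.score_samples(x))

print(classify([0.02, -0.01]))  # near the fixation cluster
print(classify([1.10, 0.40]))   # near the look-away cluster
```

In the acuity application, repeated classifications of this kind over a trial would indicate whether the infant resolved (and therefore fixated) the presented grating target.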

  15. Sex-related differences in behavioral and amygdalar responses to compound facial threat cues.

    PubMed

    Im, Hee Yeon; Adams, Reginald B; Cushing, Cody A; Boshyan, Jasmine; Ward, Noreen; Kveraga, Kestutis

    2018-03-08

    During face perception, we integrate facial expression and eye gaze to take advantage of their shared signals. For example, fear with averted gaze provides a congruent avoidance cue, signaling both threat presence and its location, whereas fear with direct gaze sends an incongruent cue, leaving threat location ambiguous. It has been proposed that the processing of different combinations of threat cues is mediated by dual processing routes: reflexive processing via magnocellular (M) pathway and reflective processing via parvocellular (P) pathway. Because growing evidence has identified a variety of sex differences in emotional perception, here we also investigated how M and P processing of fear and eye gaze might be modulated by observer's sex, focusing on the amygdala, a structure important to threat perception and affective appraisal. We adjusted luminance and color of face stimuli to selectively engage M or P processing and asked observers to identify emotion of the face. Female observers showed more accurate behavioral responses to faces with averted gaze and greater left amygdala reactivity both to fearful and neutral faces. Conversely, males showed greater right amygdala activation only for M-biased averted-gaze fear faces. In addition to functional reactivity differences, females had proportionately greater bilateral amygdala volumes, which positively correlated with behavioral accuracy for M-biased fear. Conversely, in males only the right amygdala volume was positively correlated with accuracy for M-biased fear faces. Our findings suggest that M and P processing of facial threat cues is modulated by functional and structural differences in the amygdalae associated with observer's sex. © 2018 Wiley Periodicals, Inc.

  16. Responding to Other People's Direct Gaze: Alterations in Gaze Behavior in Infants at Risk for Autism Occur on Very Short Timescales

    ERIC Educational Resources Information Center

    Nyström, Pär; Bölte, Sven; Falck-Ytter, Terje; Achermann, Sheila; Andersson Konke, Linn; Brocki, Karin; Cauvet, Elodie; Gredebäck, Gustaf; Lundin Kleberg, Johan; Nilsson Jobs, Elisabeth; Thorup, Emilia; Zander, Eric

    2017-01-01

    Atypical gaze processing has been reported in children with autism spectrum disorders (ASD). Here we explored how infants at risk for ASD respond behaviorally to others' direct gaze. We assessed 10-month-olds with a sibling with ASD (high risk group; n = 61) and a control group (n = 18) during interaction with an adult. Eye-tracking revealed less…

  17. Neural activity in the posterior superior temporal region during eye contact perception correlates with autistic traits.

    PubMed

    Hasegawa, Naoya; Kitamura, Hideaki; Murakami, Hiroatsu; Kameyama, Shigeki; Sasagawa, Mutsuo; Egawa, Jun; Endo, Taro; Someya, Toshiyuki

    2013-08-09

    The present study investigated the relationship between neural activity associated with gaze processing and autistic traits in typically developed subjects using magnetoencephalography. Autistic traits in 24 typically developed college students with normal intelligence were assessed using the Autism Spectrum Quotient (AQ). The Minimum Current Estimates method was applied to estimate the cortical sources of magnetic responses to gaze stimuli. These stimuli consisted of apparent motion of the eyes, displaying direct or averted gaze motion. Results revealed gaze-related brain activations in the 150-250 ms time window in the right posterior superior temporal sulcus (pSTS), and in the 150-450 ms time window in medial prefrontal regions. In addition, the mean amplitude in the 150-250 ms time window in the right pSTS region was modulated by gaze direction, and its activity in response to direct gaze stimuli correlated with AQ score. pSTS activation in response to direct gaze is thought to be related to higher-order social processes. Thus, these results suggest that brain activity linking eye contact and social signals is associated with autistic traits in a typical population. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Control over the processing of the opponent's gaze direction in basketball experts.

    PubMed

    Weigelt, Matthias; Güldenpenning, Iris; Steggemann-Weinrich, Yvonne; Alhaj Ahmad Alaboud, Mustafa; Kunde, Wilfried

    2017-06-01

    Basketball players' responses to an opposing player's pass direction are typically delayed when the opposing player gazes in a direction other than the pass direction. Here, we studied the role of basketball expertise in this so-called head-fake effect in three groups of participants (basketball experts, soccer players, and non-athletes). The specific focus was on the dependency of the head-fake effect on previous fake experience, as an index of control over the processing of task-irrelevant gaze information. Whereas the head-fake effect was of similar size in all expertise groups overall, preceding fake experience removed the head-fake effect in basketball players, but not in non-experts. Accordingly, basketball expertise allows for higher levels of control over the processing of task-irrelevant gaze information.

  19. What We Observe Is Biased by What Other People Tell Us: Beliefs about the Reliability of Gaze Behavior Modulate Attentional Orienting to Gaze Cues

    PubMed Central

    Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann J.

    2014-01-01

    For effective social interactions with other people, information about the physical environment must be integrated with information about the interaction partner. In order to achieve this, processing of social information is guided by two components: a bottom-up mechanism reflexively triggered by stimulus-related information in the social scene and a top-down mechanism activated by task-related context information. In the present study, we investigated whether these components interact during attentional orienting to gaze direction. In particular, we examined whether the spatial specificity of gaze cueing is modulated by expectations about the reliability of gaze behavior. Expectations were either induced by instruction or could be derived from experience with displayed gaze behavior. Spatially specific cueing effects were observed with highly predictive gaze cues, but also when participants merely believed that actually non-predictive cues were highly predictive. Conversely, cueing effects for the whole gazed-at hemifield were observed with non-predictive gaze cues, and spatially specific cueing effects were attenuated when actually predictive gaze cues were believed to be non-predictive. This pattern indicates that (i) information about cue predictivity gained from sampling gaze behavior across social episodes can be incorporated in the attentional orienting to social cues, and that (ii) beliefs about gaze behavior modulate attentional orienting to gaze direction even when they contradict information available from social episodes. PMID:24722348

  20. Gaze transfer in remote cooperation: is it always helpful to see what your partner is attending to?

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian; Velichkovsky, Boris M

    2013-01-01

    Establishing common ground in remote cooperation is challenging because nonverbal means of ambiguity resolution are limited. In such settings, information about a partner's gaze can support cooperative performance, but it is not yet clear whether and to what extent the abundance of information reflected in gaze comes at a cost. Specifically, in tasks that mainly rely on spatial referencing, gaze transfer might be distracting and leave the partner uncertain about the meaning of the gaze cursor. To examine this question, we let pairs of participants perform a joint puzzle task. One partner knew the solution and instructed the other partner's actions by (1) gaze, (2) speech, (3) gaze and speech, or (4) mouse and speech. Based on these instructions, the acting partner moved the pieces under conditions of high or low autonomy. Performance was better when using either gaze or mouse transfer compared to speech alone. However, in contrast to the mouse, gaze transfer induced uncertainty, evidenced in delayed responses to the cursor. Also, participants tried to resolve ambiguities by engaging in more verbal effort, formulating more explicit object descriptions and fewer deictic references. Thus, gaze transfer seems to increase uncertainty and ambiguity, thereby complicating grounding in this spatial referencing task. The results highlight the importance of closely examining task characteristics when considering gaze transfer as a means of support.

  1. I want to help you, but I am not sure why: gaze-cuing induces altruistic giving.

    PubMed

    Rogers, Robert D; Bayliss, Andrew P; Szepietowska, Anna; Dale, Laura; Reeder, Lydia; Pizzamiglio, Gloria; Czarna, Karolina; Wakeley, Judi; Cowen, Phillip J; Tipper, Steven P

    2014-04-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues.

  2. Where You Look Matters for Body Perception: Preferred Gaze Location Contributes to the Body Inversion Effect

    PubMed Central

    McKean, Danielle L.; Tsao, Jack W.; Chan, Annie W.-Y.

    2017-01-01

    The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) to the BIE appears greater than that of differential preferred gaze locations. PMID:28085894

  3. Perceptual-cognitive skill and the in situ performance of soccer players.

    PubMed

    van Maarseveen, Mariëtte J J; Oudejans, Raôul R D; Mann, David L; Savelsbergh, Geert J P

    2018-02-01

    Many studies have shown that experts possess better perceptual-cognitive skills than novices (e.g., in anticipation, decision making, pattern recall), but it remains unclear whether a relationship exists between performance on those tests of perceptual-cognitive skill and actual on-field performance. In this study, we assessed the in situ performance of skilled soccer players and related the outcomes to measures of anticipation, decision making, and pattern recall. In addition, we examined gaze behaviour when performing the perceptual-cognitive tests to better understand whether the underlying processes were related when those perceptual-cognitive tasks were performed. The results revealed that on-field performance could not be predicted on the basis of performance on the perceptual-cognitive tests. Moreover, there were no strong correlations between the level of performance on the different tests. The analysis of gaze behaviour revealed differences in search rate, fixation duration, fixation order, gaze entropy, and percentage viewing time when performing the test of pattern recall, suggesting that it is driven by different processes to those used for anticipation and decision making. Altogether, the results suggest that the perceptual-cognitive tests may not be as strong determinants of actual performance as may have previously been assumed.

  4. Effects of Optical Pitch on Oculomotor Control and the Perception of Target Elevation

    NASA Technical Reports Server (NTRS)

    Cohen, Malcom M.; Ebenholtz, Sheldon M.; Linder, Barry J.

    1995-01-01

    In two experiments, we used an ISCAN infrared video system to examine the influence of a pitched visual array on gaze elevation and on judgments of visually perceived eye level. In Experiment 1, subjects attempted to direct their gaze to a relaxed or to a horizontal orientation while they were seated in a room whose walls were pitched at various angles with respect to gravity. Gaze elevation was biased in the direction in which the room was pitched. In Experiment 2, subjects looked into a small box that was pitched at various angles while they attempted simply to direct their gaze alone, or to direct their gaze and place a visual target at their apparent horizon. Both gaze elevation and target settings varied systematically with the pitch orientation of the box. Our results suggest that under these conditions, an optostatic response, of which the subject is unaware, is responsible for the changes in both gaze elevation and judgments of target elevation.

  5. Direct gaze elicits atypical activation of the theory-of-mind network in autism spectrum conditions.

    PubMed

    von dem Hagen, Elisabeth A H; Stoyanova, Raliza S; Rowe, James B; Baron-Cohen, Simon; Calder, Andrew J

    2014-06-01

    Eye contact plays a key role in social interaction and is frequently reported to be atypical in individuals with autism spectrum conditions (ASCs). Despite the importance of direct gaze, previous functional magnetic resonance imaging in ASC has generally focused on paradigms using averted gaze. The current study sought to determine the neural processing of faces displaying direct and averted gaze in 18 males with ASC and 23 matched controls. Controls showed an increased response to direct gaze in brain areas implicated in theory-of-mind and gaze perception, including medial prefrontal cortex, temporoparietal junction, posterior superior temporal sulcus region, and amygdala. In contrast, the same regions showed an increased response to averted gaze in individuals with an ASC. This difference was confirmed by a significant gaze direction × group interaction. Relative to controls, participants with ASC also showed reduced functional connectivity between these regions. We suggest that, in the typical brain, perceiving another person gazing directly at you triggers spontaneous attributions of mental states (e.g. he is "interested" in me), and that such mental state attributions to direct gaze may be reduced or absent in the autistic brain.

  6. Holistic integration of gaze cues in visual face and body perception: Evidence from the composite design.

    PubMed

    Vrancken, Leia; Germeys, Filip; Verfaillie, Karl

    2017-01-01

    A considerable amount of research on identity recognition and emotion identification with the composite design points to the holistic processing of these aspects in faces and bodies. In this paradigm, the interference from a nonattended face half on the perception of the attended half is taken as evidence for holistic processing (i.e., a composite effect). Far less research, however, has been dedicated to the concept of gaze. Nonetheless, gaze perception is a substantial component of face and body perception, and holds critical information for everyday communicative interactions. Furthermore, the ability of human observers to detect direct versus averted eye gaze is effortless, perhaps similar to identity perception and emotion recognition. However, the hypothesis of holistic perception of eye gaze has never been tested directly. Research on gaze perception with the composite design could facilitate further systematic comparison with other aspects of face and body perception that have been investigated using the composite design (i.e., identity and emotion). In the present research, a composite design was administered to assess holistic processing of gaze cues in faces (Experiment 1) and bodies (Experiment 2). Results confirmed that eye and head orientation (Experiment 1A) and head and body orientation (Experiment 2A) are integrated in a holistic manner. However, the composite effect was not completely disrupted by inversion (Experiments 1B and 2B), a finding that will be discussed together with implications for future research.

  7. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    ERIC Educational Resources Information Center

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  8. Right hemispheric dominance and interhemispheric cooperation in gaze-triggered reflexive shift of attention.

    PubMed

    Okada, Takashi; Sato, Wataru; Kubota, Yasutaka; Toichi, Motomi; Murai, Toshiya

    2012-03-01

    The neural substrate for the processing of gaze remains unknown. The aim of the present study was to clarify which hemisphere dominantly processes gaze cues, and whether the two hemispheres cooperate with each other, in the gaze-triggered reflexive shift of attention. Twenty-eight normal subjects were tested. Non-predictive gaze cues were presented in either unilateral or bilateral visual fields, and subjects localized the target as quickly as possible. Reaction times (RT) were shorter when gaze cues were directed toward rather than away from targets, whichever visual field they were presented in. RT were shorter for left than for right visual field presentations. RT in mono-directional bilateral presentations were shorter than those in both left and right unilateral presentations. When bi-directional bilateral cues were presented, RT were faster when valid cues appeared in the left rather than the right visual field. The right hemisphere thus appears to be dominant, and there is interhemispheric cooperation, in the gaze-triggered reflexive shift of attention. © 2012 The Authors. Psychiatry and Clinical Neurosciences © 2012 Japanese Society of Psychiatry and Neurology.

  9. Wolves (Canis lupus) and Dogs (Canis familiaris) Differ in Following Human Gaze Into Distant Space But Respond Similar to Their Packmates’ Gaze

    PubMed Central

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2017-01-01

    Gaze following into distant space is defined as visual co-orientation with another individual’s head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. PMID:27244538

  10. Children's Bricolage under the Gaze of Teachers in Sociodramatic Play

    ERIC Educational Resources Information Center

    Tam, Po Chi

    2013-01-01

    Drawing on the theory of dialogism and the literature on children's culture and cultural resistance, this article investigates the contextual and textual features of the cultural making of a group of children in sociodramatic play in a Hong Kong kindergarten. Different from other, similar studies, this study reports that under the gaze of the…

  11. Gaze and viewing angle influence visual stabilization of upright posture

    PubMed Central

    Ustinova, KI; Perkins, J

    2011-01-01

    Focusing gaze on a target helps stabilize upright posture. We investigated how this visual stabilization can be affected by observing a target presented under different gaze and viewing angles. In a series of 10-second trials, participants (N = 20, 29.3 ± 9 years of age) stood on a force plate and fixed their gaze on a figure presented on a screen at a distance of 1 m. The figure changed position (gaze angle: eye level (0°), 25° up or down), vertical body orientation (viewing angle: at eye level but rotated 25° as if leaning toward or away from the participant), or both (gaze and viewing angle: 25° up or down with the rotation equivalent of a natural visual perspective). Amplitude of participants’ sagittal displacement, surface area, and angular position of the center of gravity (COG) were compared. Results showed decreased COG velocity and amplitude for up and down gaze angles. Changes in viewing angles resulted in altered body alignment and increased amplitude of COG displacement. No significant changes in postural stability were observed when both gaze and viewing angles were altered. Results suggest that both the gaze angle and viewing perspective may be essential variables of the visuomotor system modulating postural responses. PMID:22398978

  12. I Want to Help You, But I Am Not Sure Why: Gaze-Cuing Induces Altruistic Giving

    PubMed Central

    2013-01-01

    Detecting subtle indicators of trustworthiness is highly adaptive for moving effectively amongst social partners. One powerful signal is gaze direction, which individuals can use to inform (or deceive) by looking toward (or away from) important objects or events in the environment. Here, across 5 experiments, we investigate whether implicit learning about gaze cues can influence subsequent economic transactions; we also examine some of the underlying mechanisms. In the 1st experiment, we demonstrate that people invest more money with individuals whose gaze information has previously been helpful, possibly reflecting enhanced trust appraisals. However, in 2 further experiments, we show that other mechanisms driving this behavior include obligations to fairness or (painful) altruism, since people also make more generous offers and allocations of money to individuals with reliable gaze cues in adapted 1-shot ultimatum games and 1-shot dictator games. In 2 final experiments, we show that the introduction of perceptual noise while following gaze can disrupt these effects, but only when the social partners are unfamiliar. Nonconscious detection of reliable gaze cues can prompt altruism toward others, probably reflecting the interplay of systems that encode identity and control gaze-evoked attention, integrating the reinforcement value of gaze cues. PMID:23937180

  13. "Are You Looking at Me?" How Children's Gaze Judgments Improve with Age

    ERIC Educational Resources Information Center

    Mareschal, Isabelle; Otsuka, Yumiko; Clifford, Colin W. G.; Mareschal, Denis

    2016-01-01

    Adults' judgments of another person's gaze reflect both sensory (e.g., perceptual) and nonsensory (e.g., decisional) processes. We examined how children's performance on a gaze categorization task develops over time by varying uncertainty in the stimulus presented to 6- to 11 year-olds (n = 57). We found that younger children responded…

  14. Direct Gaze Modulates Face Recognition in Young Infants

    ERIC Educational Resources Information Center

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  15. Early Left Parietal Activity Elicited by Direct Gaze: A High-Density EEG Study

    PubMed Central

    Burra, Nicolas; Kerzel, Dirk; George, Nathalie

    2016-01-01

    Gaze is one of the most important cues for human communication and social interaction. In particular, gaze contact is the most basic form of social contact and is thought to capture attention. A very early, differentiated brain response to direct versus averted gaze has been hypothesized. Here, we used high-density electroencephalography to test this hypothesis. Topographical analysis allowed us to uncover a very early topographic modulation (40–80 ms) of event-related responses to faces with direct as compared to averted gaze. This modulation was obtained only in the condition where intact broadband faces (as opposed to high-pass or low-pass filtered faces) were presented. Source estimation indicated that this early modulation involved the posterior parietal region, encompassing the left precuneus and inferior parietal lobule. This supports the idea that it reflected an early orienting response to direct versus averted gaze. Accordingly, in a follow-up behavioural experiment, we found faster response times to the direct-gaze than to the averted-gaze broadband faces. In addition, classical evoked potential analysis showed that the N170 peak amplitude was larger for averted gaze than for direct gaze. Taken together, these results suggest that direct gaze may be detected at a very early processing stage, involving a parallel route to the ventral occipito-temporal route of face perceptual analysis. PMID:27880776

  16. Autistic Traits Influence Gaze-Oriented Attention to Happy but Not Fearful Faces

    PubMed Central

    Lassalle, Amandine; Itier, Roxane J.

    2017-01-01

    The relationship between autistic traits and gaze-oriented attention to fearful and happy faces was investigated at the behavioral and neuronal levels. Upright and inverted dynamic face stimuli were used in a gaze-cueing paradigm while ERPs were recorded. Participants responded faster to gazed-at than to non-gazed-at targets and this Gaze Orienting Effect (GOE) diminished with inversion, suggesting it relies on facial configuration. It was also larger for fearful than happy faces but only in participants with high Autism Quotient (AQ) scores. While the GOE to fearful faces was of similar magnitude regardless of AQ scores, a diminished GOE to happy faces was found in participants with high AQ scores. At the ERP level, a congruency effect on target-elicited P1 component reflected enhanced visual processing of gazed-at targets. In addition, cue-triggered early directing attention negativity and anterior directing attention negativity reflected, respectively, attention orienting and attention holding at gazed-at locations. These neural markers of spatial attention orienting were not modulated by emotion and were not found in participants with high AQ scores. Together these findings suggest that autistic traits influence attention orienting to gaze and its modulation by social emotions such as happiness. PMID:25222883
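    The behavioral Gaze Orienting Effect reported in studies like this one is simply the reaction-time difference between invalidly and validly cued trials. A minimal sketch, assuming trials are stored as (RT, validity) pairs (a hypothetical format, not the authors' data structure):

```python
def gaze_orienting_effect(trials):
    """Gaze Orienting Effect (GOE): mean reaction time on invalid
    (non-gazed-at) trials minus mean reaction time on valid (gazed-at)
    trials. A positive GOE means responses to gazed-at targets were
    faster, i.e., attention followed the gaze cue.

    trials: iterable of (rt_ms, valid) pairs, where valid is True when
    the target appeared at the gazed-at location.
    """
    valid = [rt for rt, v in trials if v]
    invalid = [rt for rt, v in trials if not v]
    return sum(invalid) / len(invalid) - sum(valid) / len(valid)
```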

  17. Just one look: Direct gaze briefly disrupts visual working memory.

    PubMed

    Wang, J Jessica; Apperly, Ian A

    2017-04-01

    Direct gaze is a salient social cue that affords rapid detection. A body of research suggests that direct gaze enhances performance on memory tasks (e.g., Hood, Macrae, Cole-Davies, & Dias, Developmental Science, 1, 67-71, 2003). Nonetheless, other studies highlight the disruptive effect direct gaze has on concurrent cognitive processes (e.g., Conty, Gimmig, Belletier, George, & Huguet, Cognition, 115(1), 133-139, 2010). This discrepancy raises questions about the effects direct gaze may have on concurrent memory tasks. We addressed this topic by employing a change detection paradigm, where participants retained information about the color of small sets of agents. Experiment 1 revealed that, despite the irrelevance of the agents' eye gaze to the memory task at hand, participants were worse at detecting changes when the agents looked directly at them compared to when the agents looked away. Experiment 2 showed that the disruptive effect was relatively short-lived. Prolonged presentation of direct gaze led to recovery from the initial disruption, rather than a sustained disruption on change detection performance. The present study provides the first evidence that direct gaze impairs visual working memory with a rapidly-developing yet short-lived effect even when there is no need to attend to agents' gaze.

  18. Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze.

    PubMed

    Werhahn, Geraldine; Virányi, Zsófia; Barrera, Gabriela; Sommese, Andrea; Range, Friederike

    2016-08-01

    Gaze following into distant space is defined as visual co-orientation with another individual's head direction allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  19. Gaze training enhances laparoscopic technical skill acquisition and multi-tasking performance: a randomized, controlled study.

    PubMed

    Wilson, Mark R; Vine, Samuel J; Bright, Elizabeth; Masters, Rich S W; Defriend, David; McGrath, John S

    2011-12-01

    The operating room environment is replete with stressors and distractions that increase the attention demands of what are already complex psychomotor procedures. Contemporary research in other fields (e.g., sport) has revealed that gaze training interventions may support the development of robust movement skills. The current study was designed to examine the utility of gaze training for technical laparoscopic skills and to test performance under multitasking conditions. Thirty medical trainees with no laparoscopic experience were divided randomly into one of three treatment groups: gaze trained (GAZE), movement trained (MOVE), and discovery learning/control (DISCOVERY). Participants were fitted with a Mobile Eye gaze registration system, which measures eye-line of gaze at 25 Hz. Training consisted of ten repetitions of the "eye-hand coordination" task from the LAP Mentor VR laparoscopic surgical simulator while receiving instruction and video feedback (specific to each treatment condition). After training, all participants completed a control test (designed to assess learning) and a multitasking transfer test, in which they completed the procedure while performing a concurrent tone-counting task. Not only did the GAZE group learn more quickly than the MOVE and DISCOVERY groups (faster completion times in the control test), but the performance difference was even more pronounced when multitasking. Differences in gaze control (target-locking fixations), rather than tool movement measures (tool path length), underpinned this performance advantage for GAZE training. These results suggest that although the GAZE intervention focused on training gaze behavior only, there were indirect benefits for movement behaviors and performance efficiency. Additionally, focusing on a single external target when learning, rather than on complex movement patterns, may have freed up attentional resources that could be applied to concurrent cognitive tasks.

  20. Fast noninvasive eye-tracking and eye-gaze determination for biomedical and remote monitoring applications

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.

    2004-04-01

    Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current noninvasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new noninvasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field-programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low-contrast conditions.
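    The core per-frame operation in a pipeline like this is isolating the dark pupil and taking its centroid, which is simple enough to run at kHz rates in hardware. A software sketch of that step only (the threshold value and list-of-lists image format are illustrative assumptions; the actual system implements its algorithm on an FPGA):

```python
def pupil_centroid(frame, threshold=40):
    """Locate the pupil as the centroid of dark pixels.

    frame: 2D list of 8-bit grayscale values. Under IR illumination the
    pupil is typically the darkest region, so pixels below `threshold`
    are taken as pupil candidates; the threshold here is illustrative.
    Returns (x, y) in pixel coordinates, or None if no dark region.
    """
    sx = sy = n = 0
    for y, row in enumerate(frame):
        for x, v in enumerate(row):
            if v < threshold:
                sx += x
                sy += y
                n += 1
    if n == 0:
        return None  # no pupil-like region found in this frame
    return (sx / n, sy / n)
```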

  1. Figure-ground activity in V1 and guidance of saccadic eye movements.

    PubMed

    Supèr, Hans

    2006-01-01

    Every day we shift our gaze about 150,000 times, mostly without noticing it. The direction of these gaze shifts is not random but is guided by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

  2. Effects of objectifying gaze on female cognitive performance: The role of flow experience and internalization of beauty ideals.

    PubMed

    Guizzo, Francesca; Cadinu, Mara

    2017-06-01

    Although previous research has demonstrated that objectification impairs female cognitive performance, no research to date has investigated the mechanisms underlying such decrement. Therefore, we tested the role of flow experience as one mechanism leading to performance decrement under sexual objectification. Gaze gender was manipulated by having male versus female experimenters take body pictures of female participants (N = 107) who then performed a Sustained Attention to Response Task. As predicted, a moderated mediation model showed that under male versus female gaze, higher internalization of beauty ideals was associated with lower flow, which in turn decreased performance. The implications of these results are discussed in relation to objectification theory and strategies to prevent sexually objectifying experiences. © 2016 The British Psychological Society.

  3. Implicit prosody mining based on the human eye image capture technology

    NASA Astrophysics Data System (ADS)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    Eye-tracker technology has become one of the main methods for analyzing recognition issues in human-computer interaction, and capturing images of the human eye is the key problem in eye tracking. Building on this work, we introduce a new human-computer interaction method to enrich speech synthesis. We propose an implicit prosody mining method based on eye-image capture: parameters extracted from images of the eyes during reading are used to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For it, we put forward a new way of obtaining eye gaze durations during reading from captured eye images and of synchronously controlling these durations and pronunciation durations in speech synthesis. Eye movement during reading is a multi-factor interactive process involving fixations, saccades, and regressions, so the appropriate information must be extracted from the eye images and the regularities of eye gaze obtained as references for modeling. Based on an analysis of three current eye-movement control models and the characteristics of implicit prosody in reading, we discuss the relative independence of the text speech-processing system and the eye-movement control system, and show that, at the same level of text familiarity, gaze durations during reading and the durations of inner-voice pronunciation are synchronous. We present an eye gaze duration model based on the prosodic structure of Chinese that replaces earlier machine-learning and probability-forecasting methods, captures readers' real internal reading rhythm, and synthesizes speech with personalized rhythm. This research enriches human-computer interaction and has practical significance and application prospects for assisted speech interaction for disabled users. Experiments show that implicit prosody mining based on eye-image capture gives the synthesized speech more flexible expression.
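    Obtaining gaze durations from raw eye-position samples requires segmenting fixations from saccades and regressions. A common dispersion-threshold (I-DT) sketch of that step, with illustrative parameter values that are not taken from the paper:

```python
def gaze_durations(samples, fs=60.0, max_disp=25.0, min_dur=0.1):
    """Segment raw gaze samples into fixations and return their
    durations in seconds (dispersion-threshold / I-DT algorithm).

    samples  : list of (x, y) gaze positions in pixels
    fs       : sampling rate in Hz (illustrative default)
    max_disp : max (x-range + y-range) within one fixation, pixels
    min_dur  : minimum fixation duration in seconds
    """
    min_n = int(round(min_dur * fs))
    durations = []
    i, n = 0, len(samples)
    while i < n:
        xs, ys = [samples[i][0]], [samples[i][1]]
        j = i + 1
        # Grow the window while the points stay within the dispersion limit.
        while j < n:
            xs.append(samples[j][0])
            ys.append(samples[j][1])
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                xs.pop()
                ys.pop()
                break
            j += 1
        if j - i >= min_n:
            durations.append((j - i) / fs)  # window long enough: a fixation
            i = j
        else:
            i += 1  # too short: advance one sample and try again
    return durations
```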

  4. Eye Contact Facilitates Awareness of Faces during Interocular Suppression

    ERIC Educational Resources Information Center

    Stein, Timo; Senju, Atsushi; Peelen, Marius V.; Sterzer, Philipp

    2011-01-01

    Eye contact captures attention and receives prioritized visual processing. Here we asked whether eye contact might be processed outside conscious awareness. Faces with direct and averted gaze were rendered invisible using interocular suppression. In two experiments we found that faces with direct gaze overcame such suppression more rapidly than…

  5. Eye gazing direction inspection based on image processing technique

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Song, Yong

    2005-02-01

    According to findings in neurobiology, the human eye achieves high resolution only at the center of the visual field. In our research on a virtual reality (VR) helmet, we aim to detect the gazing direction of the eyes in real time and feed it back to the control system to improve the resolution of the graphics at the center of the field of view. With current display instruments, this method balances the field of view of the virtual scene against resolution and greatly improves the immersion of the virtual system. Detecting the gazing direction of the eyes rapidly and accurately is therefore the basis for realizing this novel VR helmet design. In this paper, the conventional method of gaze detection based on the Purkinje spot is introduced first. To overcome the disadvantages of that method, we propose a method based on image processing to detect and determine the gazing direction. The locations of the pupils and the shapes of the eye sockets change with gazing direction; by analyzing images of the eyes captured by cameras, the gazing direction can be determined. Experiments have been done to validate the efficiency of this method by analyzing such images. The algorithm detects gazing direction directly from ordinary eye images, eliminating the need for special hardware. Experimental results show that the method is easy to implement and has high precision.
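    The paper's idea of inferring gaze direction from pupil location within the eye socket can be sketched as a normalized-offset classifier. The bounding-box input and the central-zone margin below are illustrative assumptions, not the authors' algorithm:

```python
def classify_gaze(pupil, bbox, margin=0.15):
    """Coarse gazing direction from the pupil's position within the
    eye-socket bounding box.

    pupil  : (x, y) pupil center in image coordinates
    bbox   : (x0, y0, x1, y1) eye-socket bounding box
    margin : half-width of the central "looking straight" zone, as a
             fraction of the box size (illustrative value)
    Returns a (horizontal, vertical) pair of labels.
    """
    x0, y0, x1, y1 = bbox
    # Normalize the pupil position to [-0.5, 0.5] on each axis.
    nx = (pupil[0] - x0) / (x1 - x0) - 0.5
    ny = (pupil[1] - y0) / (y1 - y0) - 0.5
    horiz = "left" if nx < -margin else "right" if nx > margin else "center"
    vert = "up" if ny < -margin else "down" if ny > margin else "center"
    return horiz, vert
```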

  6. Atypical development of spontaneous social cognition in autism spectrum disorders.

    PubMed

    Senju, Atsushi

    2013-02-01

    Individuals with autism spectrum disorders (ASD) have profound impairment in the development of social interaction and communication. However, it is also known that some 'high-functioning' individuals with ASD show an apparently typical capacity to process social information in controlled experimental settings, despite their difficulties in daily life. The current paper reviews spontaneous social cognition, the spontaneous processing of social information in the absence of explicit instruction or task demand, in individuals with ASD. Three areas of research are covered: false-belief attribution, imitation/mimicry, and eye-gaze processing. The literature suggests that high-functioning individuals with ASD (a) do not spontaneously attribute false beliefs to others, even though they can easily do so when explicitly instructed; (b) can imitate others' goal-directed actions under explicit instruction and show spontaneous mimicry of others' actions when they attend to the action, but are less likely to show spontaneous mimicry without a task structure that directs attention to others' actions; and (c) can process others' gaze direction and shift attention in the direction of others' gaze, but fail to spontaneously attend to another person's eyes in social and communicative contexts and are less likely to respond to perceived eye contact. These results are consistent with the claim that individuals with ASD do not spontaneously attend to socially relevant information, even though they can easily process the same information when their attention is directed toward it. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  7. The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention.

    PubMed

    Munsters, Nicolette M; van den Boomen, Carlijn; Hooge, Ignace T C; Kemner, Chantal

    2016-01-01

    Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (validly or invalidly) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) in response to the cue and target, and behavioral responses (manual and saccadic reaction times) to the target. The faces contained global (i.e., lower spatial frequencies), local (i.e., higher spatial frequencies), or a selection of both global and local (i.e., mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e., valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses to the target were slower in all cue conditions when lower spatial frequencies were absent from the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses to other entities appearing in the surroundings of gaze-cue stimuli.

  8. Face Age and Eye Gaze Influence Older Adults' Emotion Recognition.

    PubMed

    Campbell, Anna; Murray, Janice E; Atkinson, Lianne; Ruffman, Ted

    2017-07-01

    Eye gaze has been shown to influence emotion recognition. In addition, older adults (over 65 years) are not as influenced by gaze direction cues as young adults (18-30 years). Nevertheless, these differences might stem from the use of young to middle-aged faces in emotion recognition research because older adults have an attention bias toward old-age faces. Therefore, using older face stimuli might allow older adults to process gaze direction cues to influence emotion recognition. To investigate this idea, young and older adults completed an emotion recognition task with young and older face stimuli displaying direct and averted gaze, assessing labeling accuracy for angry, disgusted, fearful, happy, and sad faces. Direct gaze rather than averted gaze improved young adults' recognition of emotions in young and older faces, but for older adults this was true only for older faces. The current study highlights the impact of stimulus face age and gaze direction on emotion recognition in young and older adults. The use of young face stimuli with direct gaze in most research might contribute to age-related emotion recognition differences. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. The Organization of Exploratory Behaviors in Infant Locomotor Planning

    ERIC Educational Resources Information Center

    Kretch, Kari S.; Adolph, Karen E.

    2017-01-01

    How do infants plan and guide locomotion under challenging conditions? This experiment investigated the real-time process of visual and haptic exploration in 14-month-old infants as they decided whether and how to walk over challenging terrain--a series of bridges varying in width. Infants' direction of gaze was recorded with a head-mounted eye…

  10. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients.

    PubMed

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference.

  11. Space-based and object-centered gaze cuing of attention in right hemisphere-damaged patients

    PubMed Central

    Dalmaso, Mario; Castelli, Luigi; Priftis, Konstantinos; Buccheri, Marta; Primon, Daniela; Tronco, Silvia; Galfano, Giovanni

    2015-01-01

    Gaze cuing of attention is a well established phenomenon consisting of the tendency to shift attention to the location signaled by the averted gaze of other individuals. Evidence suggests that such phenomenon might follow intrinsic object-centered features of the head containing the gaze cue. In the present exploratory study, we aimed to investigate whether such object-centered component is present in neuropsychological patients with a lesion involving the right hemisphere, which is known to play a critical role both in orienting of attention and in face processing. To this purpose, we used a modified gaze-cuing paradigm in which a centrally placed head with averted gaze was presented either in the standard upright position or rotated 90° clockwise or anti-clockwise. Afterward, a to-be-detected target was presented either in the right or in the left hemifield. The results showed that gaze cuing of attention was present only when the target appeared in the left visual hemifield and was not modulated by head orientation. This suggests that gaze cuing of attention in right hemisphere-damaged patients can operate within different frames of reference. PMID:26300815

  12. Virtual wayfinding using simulated prosthetic vision in gaze-locked viewing.

    PubMed

    Wang, Lin; Yang, Liancheng; Dagnelie, Gislin

    2008-11-01

    To assess virtual maze navigation performance with simulated prosthetic vision in gaze-locked viewing, under the conditions of varying luminance contrast, background noise, and phosphene dropout. Four normally sighted subjects performed virtual maze navigation using simulated prosthetic vision in gaze-locked viewing, under five conditions of luminance contrast, background noise, and phosphene dropout. Navigation performance was measured as the time required to traverse a 10-room maze using a game controller, and the number of errors made during the trip. Navigation performance time (1) became stable after 6 to 10 trials, (2) remained similar on average at luminance contrast of 68% and 16% but had greater variation at 16%, (3) was not significantly affected by background noise, and (4) increased by 40% when 30% of phosphenes were removed. Navigation performance time and number of errors were significantly and positively correlated. Assuming that the simulated gaze-locked viewing conditions are extended to implant wearers, such prosthetic vision can be helpful for wayfinding in simple mobility tasks, though phosphene dropout may interfere with performance.
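    The phosphene-dropout manipulation described above can be sketched as follows, assuming a regular phosphene grid from which a fraction of sites is randomly removed to model failed electrodes. The 6×10 layout and random-removal scheme are illustrative assumptions; only the 30% level matches the abstract's dropout condition:

```python
import random

def phosphene_map(grid=(6, 10), dropout=0.3, seed=0):
    """Build a simulated phosphene array for prosthetic-vision
    simulation: a regular grid of phosphene positions with a fraction
    randomly removed to model electrode dropout.

    Returns the set of surviving (row, col) phosphene positions.
    """
    rng = random.Random(seed)  # seeded for reproducible layouts
    cells = [(r, c) for r in range(grid[0]) for c in range(grid[1])]
    keep = round(len(cells) * (1 - dropout))
    return set(rng.sample(cells, keep))
```

    A rendered frame would then show the scene only through the surviving positions, which is what degrades wayfinding performance when dropout rises.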

  13. The role of effort in moderating the anxiety-performance relationship: Testing the prediction of processing efficiency theory in simulated rally driving.

    PubMed

    Wilson, Mark; Smith, Nickolas C; Chattington, Mark; Ford, Mike; Marple-Horvat, Dilwyn E

    2006-11-01

    We tested some of the key predictions of processing efficiency theory using a simulated rally driving task. Two groups of participants were classified as either dispositionally high or low anxious based on trait anxiety scores and trained on a simulated driving task. Participants then raced individually on two similar courses under counterbalanced experimental conditions designed to manipulate the level of anxiety experienced. The effort exerted on the driving tasks was assessed through self-report (RSME), psychophysiological measures (pupil dilation) and visual gaze data. Efficiency was measured in terms of efficiency of visual processing (search rate) and driving control (variability of wheel and accelerator pedal) indices. Driving performance was measured as the time taken to complete the course. As predicted, increased anxiety had a negative effect on processing efficiency as indexed by the self-report, pupillary response and variability of gaze data. Predicted differences due to dispositional levels of anxiety were also found in the driving control and effort data. Although both groups of drivers performed worse under the threatening condition, the performance of the high trait anxious individuals was affected to a greater extent by the anxiety manipulation than the performance of the low trait anxious drivers. The findings suggest that processing efficiency theory holds promise as a theoretical framework for examining the relationship between anxiety and performance in sport.

  14. Gaze direction affects the magnitude of face identity aftereffects.

    PubMed

    Kloth, Nadine; Jeffery, Linda; Rhodes, Gillian

    2015-02-20

    The face perception system partly owes its efficiency to adaptive mechanisms that constantly recalibrate face coding to our current diet of faces. Moreover, faces that are better attended produce more adaptation. Here, we investigated whether the social cues conveyed by a face can influence the amount of adaptation that face induces. We compared the magnitude of face identity aftereffects induced by adaptors with direct and averted gazes. We reasoned that faces conveying direct gaze may be more engaging and better attended and thus produce larger aftereffects than those with averted gaze. Using an adaptation duration of 5 s, we found that aftereffects for adaptors with direct and averted gazes did not differ (Experiment 1). However, when processing demands were increased by reducing adaptation duration to 1 s, we found that gaze direction did affect the magnitude of the aftereffect, but in an unexpected direction: Aftereffects were larger for adaptors with averted rather than direct gaze (Experiment 2). Eye tracking revealed that differences in looking time to the faces between the two gaze directions could not account for these findings. Subsequent ratings of the stimuli (Experiment 3) showed that adaptors with averted gaze were actually perceived as more expressive and interesting than adaptors with direct gaze. Therefore it appears that the averted-gaze faces were more engaging and better attended, leading to larger aftereffects. Overall, our results suggest that naturally occurring facial signals can modulate the adaptive impact a face exerts on our perceptual system. Specifically, the faces that we perceive as most interesting also appear to calibrate the organization of our perceptual system most strongly. © 2015 ARVO.

  15. Aberrant face and gaze habituation in fragile x syndrome.

    PubMed

    Bruno, Jennifer Lynn; Garrett, Amy S; Quintin, Eve-Marie; Mazaika, Paul K; Reiss, Allan L

    2014-10-01

    The authors sought to investigate neural system habituation to face and eye gaze in fragile X syndrome, a disorder characterized by eye-gaze aversion, among other social and cognitive deficits. Participants (ages 15-25 years) were 30 individuals with fragile X syndrome (females, N=14) and a comparison group of 25 individuals without fragile X syndrome (females, N=12) matched for general cognitive ability and autism symptoms. Functional MRI (fMRI) was used to assess brain activation during a gaze habituation task. Participants viewed repeated presentations of four unique faces with either direct or averted eye gaze and judged the direction of eye gaze. Four participants (males, N=4/4; fragile X syndrome, N=3) were excluded because of excessive head motion during fMRI scanning. Behavioral performance did not differ between the groups. Less neural habituation (and significant sensitization) in the fragile X syndrome group was found in the cingulate gyrus, fusiform gyrus, and frontal cortex in response to all faces (direct and averted gaze). Left fusiform habituation in female participants was directly correlated with higher, more typical levels of the fragile X mental retardation protein and inversely correlated with autism symptoms. There was no evidence for differential habituation to direct gaze compared with averted gaze within or between groups. Impaired habituation and accentuated sensitization in response to face/eye gaze was distributed across multiple levels of neural processing. These results could help inform interventions, such as desensitization therapy, which may help patients with fragile X syndrome modulate anxiety and arousal associated with eye gaze, thereby improving social functioning.

  16. The effects of simulated vision impairments on the cone of gaze.

    PubMed

    Hecht, Heiko; Hörichs, Jenny; Sheldon, Sarah; Quint, Jessilin; Bowers, Alex

    2015-10-01

    Detecting the gaze direction of others is critical for many social interactions. We explored factors that may make the perception of mutual gaze more difficult, including the degradation of the stimulus and simulated vision impairment. To what extent do these factors affect the complex assessment of mutual gaze? Using an interactive virtual head whose eye direction could be manipulated by the subject, we conducted two experiments to assess the effects of simulated vision impairments on mutual gaze. Healthy subjects had to demarcate the center and the edges of the cone of gaze, that is, the range of gaze directions accepted for mutual gaze. When vision was impaired by adding a semitransparent white contrast reduction mask to the display (Exp. 1), judgments became more variable and more influenced by the head direction (indicative of a compensation strategy). When refractive blur was added (Exp. 1), the gaze cone shrank from 12.9° (no blur) to 11.3° (3-diopter lens), which cannot be explained by a low-level process but might reflect a tightening of the criterion for mutual gaze as a response to the increased uncertainty. However, the overall effects of the impairments were relatively modest. Elderly subjects (Exp. 2) produced more variability but did not differ qualitatively from the younger subjects. In the face of artificial vision impairments, compensation mechanisms and criterion changes allow us to perform better in mutual gaze perception than would be predicted by a simple extrapolation from the losses in basic visual acuity and contrast sensitivity.

  17. Beliefs about the Minds of Others Influence How We Process Sensory Information

    PubMed Central

    Prosser, Aaron; Müller, Hermann J.

    2014-01-01

    Attending where others gaze is one of the most fundamental mechanisms of social cognition. The present study is the first to examine the impact of the attribution of mind to others on gaze-guided attentional orienting and its ERP correlates. Using a paradigm in which attention was guided to a location by the gaze of a centrally presented face, we manipulated participants' beliefs about the gazer: gaze behavior was believed to result either from operations of a mind or from a machine. In Experiment 1, beliefs were manipulated by cue identity (human or robot), while in Experiment 2, cue identity (robot) remained identical across conditions and beliefs were manipulated solely via instruction, which was irrelevant to the task. ERP results and behavior showed that participants' attention was guided by gaze only when gaze was believed to be controlled by a human. Specifically, the P1 was more enhanced for validly, relative to invalidly, cued targets only when participants believed the gaze behavior was the result of a mind, rather than of a machine. This shows that sensory gain control can be influenced by higher-order (task-irrelevant) beliefs about the observed scene. We propose a new interdisciplinary model of social attention, which integrates ideas from cognitive and social neuroscience, as well as philosophy in order to provide a framework for understanding a crucial aspect of how humans' beliefs about the observed scene influence sensory processing. PMID:24714419

  18. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements.

    PubMed

    Kreysa, Helene; Kessler, Luise; Schweinberger, Stefan R

    2016-01-01

    A speaker's gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., "sniffer dogs cannot smell the difference between identical twins"). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze.

  19. Direct Speaker Gaze Promotes Trust in Truth-Ambiguous Statements

    PubMed Central

    Kessler, Luise; Schweinberger, Stefan R.

    2016-01-01

    A speaker’s gaze behaviour can provide perceivers with a multitude of cues which are relevant for communication, thus constituting an important non-verbal interaction channel. The present study investigated whether direct eye gaze of a speaker affects the likelihood of listeners believing truth-ambiguous statements. Participants were presented with videos in which a speaker produced such statements with either direct or averted gaze. The statements were selected through a rating study to ensure that participants were unlikely to know a-priori whether they were true or not (e.g., “sniffer dogs cannot smell the difference between identical twins”). Participants indicated in a forced-choice task whether or not they believed each statement. We found that participants were more likely to believe statements by a speaker looking at them directly, compared to a speaker with averted gaze. Moreover, when participants disagreed with a statement, they were slower to do so when the statement was uttered with direct (compared to averted) gaze, suggesting that the process of rejecting a statement as untrue may be inhibited when that statement is accompanied by direct gaze. PMID:27643789

  20. Automatic attentional orienting to other people's gaze in schizophrenia.

    PubMed

    Langdon, Robyn; Seymour, Kiley; Williams, Tracey; Ward, Philip B

    2017-08-01

    Explicit tests of social cognition have revealed pervasive deficits in schizophrenia. Less is known of automatic social cognition in schizophrenia. We used a spatial orienting task to investigate automatic shifts of attention cued by another person's eye gaze in 29 patients and 28 controls. Central photographic images of a face with eyes shifted left or right, or looking straight ahead, preceded targets that appeared left or right of the cue. To examine automatic effects, cue direction was non-predictive of target location. Cue-target intervals were 100, 300, and 800 ms. In non-social control trials, arrows replaced eye-gaze cues. Both groups showed automatic attentional orienting indexed by faster reaction times (RTs) when arrows were congruent with target location across all cue-target intervals. Similar congruency effects were seen for eye-shift cues at 300 and 800 ms intervals, but patients showed significantly larger congruency effects at 800 ms, which were driven by delayed responses to incongruent target locations. At short 100-ms cue-target intervals, neither group showed faster RTs for congruent than for incongruent eye-shift cues, but patients were significantly slower to detect targets after direct-gaze cues. These findings conflict with previous studies using schematic line drawings of eye-shifts that have found automatic attentional orienting to be reduced in schizophrenia. Instead, our data indicate that patients display abnormalities in responding to gaze direction at various stages of gaze processing, reflected by a stronger preferential capture of attention by another person's direct eye contact at initial stages of gaze processing and by difficulties disengaging from a gazed-at location once shared attention is established.

  1. Time in the eye of the beholder: Gaze position reveals spatial-temporal associations during encoding and memory retrieval of future and past.

    PubMed

    Martarelli, Corinna S; Mast, Fred W; Hartmann, Matthias

    2017-01-01

    Time is grounded in various ways, and previous studies point to a "mental time line" with past associated with the left, and future with the right side. In this study, we investigated whether spontaneous eye movements on a blank screen would follow a mental time line during encoding, free recall, and recognition of past and future items. In all three stages of processing, gaze position was more rightward during future items compared to past items. Moreover, horizontal gaze position during encoding predicted horizontal gaze position during free recall and recognition. We conclude that the mental time line and the gaze position stored during encoding assist memory retrieval of past versus future items. Our findings highlight the spatial nature of temporal representations.

  2. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention

    PubMed Central

    Graham, Reiko; LaBar, Kevin S.

    2012-01-01

    The face conveys a rich source of non-verbal information used during social communication. While research has revealed how specific facial channels such as emotional expression are processed, little is known about the prioritization and integration of multiple cues in the face during dyadic exchanges. Classic models of face perception have emphasized the segregation of dynamic versus static facial features along independent information processing pathways. Here we review recent behavioral and neuroscientific evidence suggesting that within the dynamic stream, concurrent changes in eye gaze and emotional expression can yield early independent effects on face judgments and covert shifts of visuospatial attention. These effects are partially segregated within initial visual afferent processing volleys, but are subsequently integrated in limbic regions such as the amygdala or via reentrant visual processing volleys. This spatiotemporal pattern may help to resolve otherwise perplexing discrepancies across behavioral studies of emotional influences on gaze-directed attentional cueing. Theoretical explanations of gaze-expression interactions are discussed, with special consideration of speed-of-processing (discriminability) and contextual (ambiguity) accounts. Future research in this area promises to reveal the mental chronometry of face processing and interpersonal attention, with implications for understanding how social referencing develops in infancy and is impaired in autism and other disorders of social cognition. PMID:22285906

  3. Neural synchrony examined with magnetoencephalography (MEG) during eye gaze processing in autism spectrum disorders: preliminary findings

    PubMed Central

    2014-01-01

    Background Gaze processing deficits are a seminal, early, and enduring behavioral deficit in autism spectrum disorder (ASD); however, a comprehensive characterization of the neural processes mediating abnormal gaze processing in ASD has yet to be conducted. Methods This study investigated whole-brain patterns of neural synchrony during passive viewing of direct and averted eye gaze in ASD adolescents and young adults (mean age = 16.6) compared to neurotypicals (NT) (mean age = 17.5) while undergoing magnetoencephalography. Coherence between each pair of 54 brain regions within each of three frequency bands (low frequency (0 to 15 Hz), beta (15 to 30 Hz), and low gamma (30 to 45 Hz)) was calculated. Results Significantly higher coherence and synchronization in posterior brain regions (temporo-parietal-occipital) across all frequencies was evident in ASD, particularly within the low 0 to 15 Hz frequency range. Higher coherence in fronto-temporo-parietal regions was noted in NT. A significantly higher number of low frequency cross-hemispheric synchronous connections and a near absence of right intra-hemispheric coherence in the beta frequency band were noted in ASD. Significantly higher low frequency coherent activity in bilateral temporo-parieto-occipital cortical regions and higher gamma band coherence in right temporo-parieto-occipital brain regions during averted gaze was related to more severe symptomology as reported on the Autism Diagnostic Interview-Revised (ADI-R). Conclusions The preliminary results suggest a pattern of aberrant connectivity that includes higher low frequency synchronization in posterior cortical regions, lack of long-range right hemispheric beta and gamma coherence, and decreased coherence in fronto-temporo-parietal regions necessary for orienting to shifts in eye gaze in ASD; a critical behavior essential for social communication. PMID:24976870
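
    The band-limited coherence measure used above can be sketched in a few lines. The following is an illustrative Welch-style estimate of magnitude-squared coherence between two sensor time series, averaged within the study's three frequency bands; the function names, window choice, and parameters are ours, not the authors' analysis pipeline.

```python
import numpy as np

# Frequency bands from the study (Hz); band edges follow the abstract.
BANDS = {"low": (0, 15), "beta": (15, 30), "gamma": (30, 45)}

def band_coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via non-overlapping Welch segments."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = (len(x) // nperseg) * nperseg
    win = np.hanning(nperseg)
    X = np.fft.rfft(x[:n].reshape(-1, nperseg) * win, axis=1)
    Y = np.fft.rfft(y[:n].reshape(-1, nperseg) * win, axis=1)
    Sxx = np.mean(np.abs(X) ** 2, axis=0)   # auto-spectra, segment-averaged
    Syy = np.mean(np.abs(Y) ** 2, axis=0)
    Sxy = np.mean(X * np.conj(Y), axis=0)   # cross-spectrum
    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)    # bounded in [0, 1]
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, coh

def band_means(freqs, coh):
    """Average coherence within each named frequency band."""
    return {name: float(coh[(freqs >= lo) & (freqs < hi)].mean())
            for name, (lo, hi) in BANDS.items()}
```

Applied to every pair of the 54 regional signals, such band-averaged values would populate the region-by-region coherence matrices that the groups were compared on.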

  4. Statistical modelling of gaze behaviour as categorical time series: what you should watch to save soccer penalties.

    PubMed

    Button, C; Dicks, M; Haines, R; Barker, R; Davids, K

    2011-08-01

    Previous research on gaze behaviour in sport has typically reported summary fixation statistics, thereby largely ignoring the temporal sequencing of gaze. In the present study on penalty kicking in soccer, our aim was to apply a Markov chain modelling method to eye movement data obtained from goalkeepers. Building on the discrete analysis of gaze employed by Dicks et al. (Atten Percept Psychophys 72(3):706-720, 2010b), we wanted to statistically model the relative probabilities of the goalkeeper's gaze being directed to different locations throughout the penalty taker's approach. Examination of gaze behaviours under in situ and video-simulation task constraints revealed differences in information pickup for perception and action. The probabilities of fixating anatomical locations of the penalty taker were high under simulated movement response conditions. In contrast, when actually required to intercept kicks, the goalkeepers initially favoured watching the penalty taker's head but then rapidly shifted focus directly to the ball for approximately the final second prior to foot-ball contact. The increased spatio-temporal demands of in situ interceptive actions over laboratory-based simulated actions led to different visual search strategies being used. When eye movement data are modelled as time series, it is possible to discern subtle but important behavioural characteristics that are less apparent with discrete summary statistics alone.
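
    The modelling idea can be sketched minimally: treat the gaze record as a categorical time series and estimate first-order Markov transition probabilities between fixation locations. The location labels and helper below are illustrative assumptions, not the authors' code or their actual category set.

```python
from collections import Counter

# Hypothetical gaze locations on the penalty taker (illustrative labels).
LOCATIONS = ["head", "torso", "kicking_leg", "ball"]

def transition_matrix(sequence):
    """Estimate P(next location | current location) from one gaze series."""
    pair_counts = Counter(zip(sequence, sequence[1:]))  # consecutive pairs
    row_totals = Counter(sequence[:-1])                 # visits per source state
    return {
        a: {b: pair_counts[(a, b)] / row_totals[a] if row_totals[a] else 0.0
            for b in LOCATIONS}
        for a in LOCATIONS
    }

# Toy gaze series sampled at fixed intervals during the run-up.
gaze = ["head", "head", "torso", "head", "ball", "ball", "ball"]
probs = transition_matrix(gaze)
# probs["head"]["ball"] is the estimated probability of shifting gaze
# from the head to the ball on the next sample.
```

Each row of such a matrix is the conditional distribution over the next fixation location; comparing rows estimated under in situ versus video-simulation conditions would expose exactly the sequencing differences the summary statistics miss.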

  5. Fuzzy Integral-Based Gaze Control of a Robotic Head for Human Robot Interaction.

    PubMed

    Yoo, Bum-Soo; Kim, Jong-Hwan

    2015-09-01

    During the last few decades, as a part of effort to enhance natural human robot interaction (HRI), considerable research has been carried out to develop human-like gaze control. However, most studies did not consider hardware implementation, real-time processing, and the real environment, factors that should be taken into account to achieve natural HRI. This paper proposes a fuzzy integral-based gaze control algorithm, operating in real-time and the real environment, for a robotic head. We formulate the gaze control as a multicriteria decision making problem and devise seven human gaze-inspired criteria. Partial evaluations of all candidate gaze directions are carried out with respect to the seven criteria defined from perceived visual, auditory, and internal inputs, and fuzzy measures are assigned to a power set of the criteria to reflect the user defined preference. A fuzzy integral of the partial evaluations with respect to the fuzzy measures is employed to make global evaluations of all candidate gaze directions. The global evaluation values are adjusted by applying inhibition of return and are compared with the global evaluation values of the previous gaze directions to decide the final gaze direction. The effectiveness of the proposed algorithm is demonstrated with a robotic head, developed in the Robot Intelligence Technology Laboratory at Korea Advanced Institute of Science and Technology, through three interaction scenarios and three comparison scenarios with another algorithm.
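
    The fusion step described above can be illustrated with a discrete Choquet integral, a standard form of fuzzy integral for aggregating partial evaluations under a fuzzy measure. The criterion names and measure values below are invented for illustration (using only two criteria rather than the paper's seven), not the authors' actual measures.

```python
def choquet_integral(scores, measure):
    """Discrete Choquet integral of partial evaluations.

    scores:  {criterion: partial evaluation in [0, 1]}
    measure: maps frozenset-of-criteria coalitions to fuzzy measures in [0, 1]
    """
    ranked = sorted(scores, key=scores.get, reverse=True)  # best criterion first
    values = [scores[c] for c in ranked] + [0.0]
    coalition, total = set(), 0.0
    for i, c in enumerate(ranked):
        coalition.add(c)
        # Weight each drop in evaluation by the measure of the coalition
        # of all criteria scoring at least that much.
        total += (values[i] - values[i + 1]) * measure[frozenset(coalition)]
    return total

# Illustrative fuzzy measure over two criteria (values are assumptions).
measure = {
    frozenset({"visual"}): 0.6,
    frozenset({"auditory"}): 0.5,
    frozenset({"visual", "auditory"}): 1.0,
}
score = choquet_integral({"visual": 0.8, "auditory": 0.4}, measure)
# (0.8 - 0.4) * 0.6 + (0.4 - 0.0) * 1.0 = 0.64
```

In the paper's scheme, one such global evaluation would be computed per candidate gaze direction, adjusted by inhibition of return, and the maximum would select the robot's next gaze.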

  6. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. In a double-blind, within-subjects, randomized controlled experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  7. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    PubMed

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  8. Attentional synchrony and the influence of viewing task on gaze behavior in static and dynamic scenes.

    PubMed

    Smith, Tim J; Mital, Parag K

    2013-07-17

    Does viewing task influence gaze during dynamic scene viewing? Research into the factors influencing gaze allocation during free viewing of dynamic scenes has reported that the gaze of multiple viewers clusters around points of high motion (attentional synchrony), suggesting that gaze may be primarily under exogenous control. However, the influence of viewing task on gaze behavior in static scenes and during real-world interaction has been widely demonstrated. To dissociate exogenous from endogenous factors during dynamic scene viewing we tracked participants' eye movements while they (a) freely watched unedited videos of real-world scenes (free viewing) or (b) quickly identified where the video was filmed (spot-the-location). Static scenes were also presented as controls for scene dynamics. Free viewing of dynamic scenes showed greater attentional synchrony, longer fixations, and more gaze to people and areas of high flicker compared with static scenes. These differences were minimized by the viewing task. In comparison with the free viewing of dynamic scenes, during the spot-the-location task fixation durations were shorter, saccade amplitudes were longer, and gaze exhibited less attentional synchrony and was biased away from areas of flicker and people. These results suggest that the viewing task can have a significant influence on gaze during a dynamic scene but that endogenous control is slow to kick in as initial saccades default toward the screen center, areas of high motion and people before shifting to task-relevant features. This default-like viewing behavior returns after the viewing task is completed, confirming that gaze behavior is more predictable during free viewing of dynamic than static scenes but that this may be due to natural correlation between regions of interest (e.g., people) and motion.

  9. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    ERIC Educational Resources Information Center

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  10. Atypical Processing of Gaze Cues and Faces Explains Comorbidity between Autism Spectrum Disorder (ASD) and Attention Deficit/Hyperactivity Disorder (ADHD)

    ERIC Educational Resources Information Center

    Groom, Madeleine J.; Kochhar, Puja; Hamilton, Antonia; Liddle, Elizabeth B.; Simeou, Marina; Hollis, Chris

    2017-01-01

    This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured effects…

  11. Social decisions affect neural activity to perceived dynamic gaze

    PubMed Central

    Latinus, Marianne; Love, Scott A.; Rossi, Alejandra; Parada, Francisco J.; Huang, Lisa; Conty, Laurence; George, Nathalie; James, Karin

    2015-01-01

    Gaze direction, a cue of both social and spatial attention, is known to modulate early neural responses to faces (e.g., the N170). However, findings in the literature have been inconsistent, likely reflecting differences in stimulus characteristics and task requirements. Here, we investigated the effect of task on neural responses to dynamic gaze changes: away and toward transitions (resulting or not in eye contact). Subjects performed, in random order, social (away/toward them) and non-social (left/right) judgment tasks on these stimuli. Overall, in the non-social task, results showed a larger N170 to gaze aversion than gaze motion toward the observer. In the social task, however, this difference was no longer present in the right hemisphere, likely reflecting an enhanced N170 to gaze motion toward the observer. Our behavioral and event-related potential data indicate that performing social judgments enhances the saliency of gaze motion toward the observer, even for motions that did not result in eye contact. These data and those of previous studies suggest two modes of processing visual information: a ‘default mode’ that may focus on spatial information; a ‘socially aware mode’ that might be activated when subjects are required to make social judgments. The exact mechanism that allows switching from one mode to the other remains to be clarified. PMID:25925272

  12. Age differences in conscious versus subconscious social perception: the influence of face age and valence on gaze following.

    PubMed

    Bailey, Phoebe E; Slessor, Gillian; Rendell, Peter G; Bennetts, Rachel J; Campbell, Anna; Ruffman, Ted

    2014-09-01

    Gaze following is the primary means of establishing joint attention with others and is subject to age-related decline. In addition, young but not older adults experience an own-age bias in gaze following. The current research assessed the effects of subconscious processing on these age-related differences. Participants responded to targets that were either congruent or incongruent with the direction of gaze displayed in supraliminal and subliminal images of young and older faces. These faces displayed either neutral (Study 1) or happy and fearful (Study 2) expressions. In Studies 1 and 2, both age groups demonstrated gaze-directed attention by responding faster to targets that were congruent as opposed to incongruent with gaze-cues. In Study 1, subliminal stimuli did not attenuate the age-related decline in gaze-cuing, but did result in an own-age bias among older participants. In Study 2, gaze-cuing was reduced for older relative to young adults in response to supraliminal stimuli, and this could not be attributed to reduced visual acuity or age group differences in the perceived emotional intensity of the gaze-cue faces. Moreover, there were no age differences in gaze-cuing when responding to subliminal faces that were emotionally arousing. In addition, older adults demonstrated an own-age bias for both conscious and subconscious gaze-cuing when faces expressed happiness but not fear. We discuss growing evidence for age-related preservation of subconscious relative to conscious social perception, as well as an interaction between face age and valence in social perception. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  13. Viewing condition dependence of the gaze-evoked nystagmus in Arnold Chiari type 1 malformation.

    PubMed

    Ghasia, Fatema F; Gulati, Deepak; Westbrook, Edward L; Shaikh, Aasef G

    2014-04-15

    Saccadic eye movements rapidly shift gaze to the target of interest. Once the eyes reach a given target, the brainstem ocular motor integrator utilizes feedback from various sources to assure steady gaze. One such source is the cerebellum, whose lesions can impair neural integration, leading to gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by drifts moving the eyes away from the target and by a null position where the drifts are absent. The extent of impairment in neural integration at the two opposite eccentricities might determine the location of the null position; the eye-in-orbit position might also determine it. We report this phenomenon in a patient with Arnold Chiari type 1 malformation who had intermittent esotropia and horizontal gaze-evoked nystagmus with a shift in the null position. During binocular viewing, the null was shifted to the right. During monocular viewing, when the eye under cover drifted nasally (secondary to the esotropia), the null of the gaze-evoked nystagmus reorganized toward the center. We speculate that the output of the neural integrator is altered by the conflicting eye-in-orbit positions of the two eyes secondary to the strabismus, which could explain the reorganization of the null position. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Atypical Processing of Gaze Cues and Faces Explains Comorbidity between Autism Spectrum Disorder (ASD) and Attention Deficit/Hyperactivity Disorder (ADHD).

    PubMed

    Groom, Madeleine J; Kochhar, Puja; Hamilton, Antonia; Liddle, Elizabeth B; Simeou, Marina; Hollis, Chris

    2017-05-01

    This study investigated the neurobiological basis of comorbidity between autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD). We compared children with ASD, ADHD or ADHD+ASD and typically developing controls (CTRL) on behavioural and electrophysiological correlates of gaze cue and face processing. We measured effects of ASD, ADHD and their interaction on the EDAN, an ERP marker of orienting visual attention towards a spatially cued location and the N170, a right-hemisphere lateralised ERP linked to face processing. We identified atypical gaze cue and face processing in children with ASD and ADHD+ASD compared with the ADHD and CTRL groups. The findings indicate a neurobiological basis for the presence of comorbid ASD symptoms in ADHD. Further research using larger samples is needed.

  15. "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze": Correction to Werhahn et al. (2016).

    PubMed

    2017-02-01

    Reports an error in "Wolves (Canis lupus) and dogs (Canis familiaris) differ in following human gaze into distant space but respond similar to their packmates' gaze" by Geraldine Werhahn, Zsófia Virányi, Gabriela Barrera, Andrea Sommese and Friederike Range (Journal of Comparative Psychology, 2016[Aug], Vol 130[3], 288-298). In the article, the affiliations for the second and fifth authors should be Wolf Science Center, Ernstbrunn, Austria, and Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine Vienna/Medical University of Vienna/University of Vienna. The online version of this article has been corrected. (The following abstract of the original article appeared in record 2016-26311-001.) Gaze following into distant space is defined as visual co-orientation with another individual's head direction, allowing the gaze follower to gain information on its environment. Human and nonhuman animals share this basic gaze following behavior, suggested to rely on a simple reflexive mechanism and believed to be an important prerequisite for complex forms of social cognition. Pet dogs differ from other species in that they follow only communicative human gaze clearly addressed to them. However, in an earlier experiment we showed that wolves follow human gaze into distant space. Here we set out to investigate whether domestication has affected gaze following in dogs by comparing pack-living dogs and wolves raised and kept under the same conditions. In Study 1 we found that, in contrast to the wolves, these dogs did not follow minimally communicative human gaze into distant space in the same test paradigm. In the observational Study 2 we found that pack-living dogs and wolves, similarly vigilant to environmental stimuli, follow the spontaneous gaze of their conspecifics similarly often. Our findings suggest that domestication did not affect the gaze following ability of dogs itself. The results raise hypotheses about which other dog skills might have been altered through domestication that may have influenced their performance in Study 1. Because following human gaze in dogs might be influenced by special evolutionary as well as developmental adaptations to interactions with humans, we suggest that comparing dogs to other animal species might be more informative when done in intraspecific social contexts. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    PubMed

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eyetracking to determine 2D gaze positions is common practice, and several approaches to the detection of 2D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. To this end, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for both the real and the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic-link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) designed for importing 2D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix.
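    The dispersion criterion with an ellipsoidal bounding volume can be sketched compactly. The following is an illustrative reimplementation of the general idea, not the Gaze3DFix code itself; the semi-axis lengths, the minimum sample count, and the synthetic gaze data are assumptions chosen for the example (the longer depth semi-axis reflects the poorer depth resolution of vergence-based 3D gaze estimates):

    ```python
    import numpy as np

    def detect_3d_fixations(gaze, rx=15.0, ry=15.0, rz=45.0, min_samples=5):
        """Group consecutive 3D gaze points (mm) into fixations using an
        ellipsoidal bounding volume with semi-axes rx, ry, rz.

        A point joins the current fixation if its normalized distance from
        the running centroid, (dx/rx)^2 + (dy/ry)^2 + (dz/rz)^2, is <= 1.
        Parameter values here are illustrative, not those of Gaze3DFix.
        """
        fixations = []
        current = [gaze[0]]
        for p in gaze[1:]:
            c = np.mean(current, axis=0)              # running centroid
            d = (p - c) / np.array([rx, ry, rz])      # normalized offsets
            if np.dot(d, d) <= 1.0:                   # inside the ellipsoid?
                current.append(p)
            else:
                if len(current) >= min_samples:
                    fixations.append(np.mean(current, axis=0))
                current = [p]
        if len(current) >= min_samples:
            fixations.append(np.mean(current, axis=0))
        return fixations

    # Two clusters of synthetic gaze points at different depths
    rng = np.random.default_rng(0)
    a = rng.normal([0, 0, 300], [2, 2, 6], size=(20, 3))
    b = rng.normal([100, 0, 500], [2, 2, 6], size=(20, 3))
    fix = detect_3d_fixations(np.vstack([a, b]))
    print(len(fix))  # one fixation per cluster
    ```
    
    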

  17. Eye Contact and Fear of Being Laughed at in a Gaze Discrimination Task

    PubMed Central

    Torres-Marín, Jorge; Carretero-Dios, Hugo; Acosta, Alberto; Lupiáñez, Juan

    2017-01-01

    Current approaches conceptualize gelotophobia as a personality trait characterized by a disproportionate fear of being laughed at by others. Consistent with this perspective, gelotophobes are also described as neurotic and introverted and as having a paranoid tendency to anticipate derision and mockery. Although research on gelotophobia has progressed significantly over the past two decades, no evidence exists concerning the potential effects of gelotophobia on reactions to eye contact. Previous research has pointed to difficulties in discriminating gaze direction as the basis of possible misinterpretations of others' intentions or mental states. The aim of the present research was to examine whether gelotophobia predisposition modulates the effects of eye contact (i.e., gaze discrimination) when processing faces portraying several emotional expressions. In two experiments, participants performed a gaze discrimination task in which they responded, as quickly and accurately as possible, to the eye direction of faces displaying a happy, angry, fearful, neutral, or sad emotional expression. In particular, we expected trait gelotophobia to modulate the eye contact effect, with specific group differences in the happiness condition. The results of Study 1 (N = 40) indicated that gelotophobes made more errors than non-gelotophobes in the gaze discrimination task. In contrast to our initial hypothesis, the happiness expression did not play any special role in the observed differences between individuals with high vs. low trait gelotophobia. In Study 2 (N = 40), we replicated this pattern of gaze discrimination ability, even after controlling for individuals' scores on social anxiety. Furthermore, in our second experiment, we found that gelotophobes did not exhibit any problem with identifying others' emotions, or a general incorrect attribution of affective features such as valence, intensity, or arousal. This bias in gaze processing might therefore be related to more global processes of social cognition. Further research is needed to explore how eye contact relates to the fear of being laughed at. PMID:29167652

  18. Exploring associations between gaze patterns and putative human mirror neuron system activity.

    PubMed

    Donaldson, Peter H; Gurvich, Caroline; Fielding, Joanne; Enticott, Peter G

    2015-01-01

    The human mirror neuron system (MNS) is hypothesized to be crucial to social cognition. Given that key MNS-input regions such as the superior temporal sulcus are involved in biological motion processing, and mirror neuron activity in monkeys has been shown to vary with visual attention, aberrant MNS function may be partly attributable to atypical visual input. To examine the relationship between gaze pattern and interpersonal motor resonance (IMR; an index of putative MNS activity), healthy right-handed participants aged 18-40 (n = 26) viewed videos of transitive grasping actions or static hands, whilst the left primary motor cortex received transcranial magnetic stimulation. Motor-evoked potentials recorded in contralateral hand muscles were used to determine IMR. Participants also underwent eyetracking analysis to assess gaze patterns whilst viewing the same videos. No relationship was observed between predictive gaze and IMR. However, IMR was positively associated with fixation counts in areas of biological motion in the videos, and negatively associated with object areas. These findings are discussed with reference to visual influences on the MNS, and the possibility that MNS atypicalities might be influenced by visual processes such as aberrant gaze pattern.

  19. Association of predeployment gaze bias for emotion stimuli with later symptoms of PTSD and depression in soldiers deployed in Iraq.

    PubMed

    Beevers, Christopher G; Lee, Han-Joo; Wells, Tony T; Ellis, Alissa J; Telch, Michael J

    2011-07-01

    Biased processing of emotion stimuli is thought to confer vulnerability to psychopathology, but few longitudinal studies of this link have been conducted. The authors examined the relationship between predeployment gaze bias for emotion stimuli and later symptoms of posttraumatic stress disorder (PTSD) and depression in soldiers deployed to Iraq. An eye-tracking paradigm was used to assess line of gaze in 139 soldiers while they viewed a two-by-two matrix of fearful, sad, happy, and neutral facial expressions before they were deployed to Iraq. Once they were deployed, the soldiers periodically reported on their levels of war zone stress exposure and symptoms of PTSD and depression. War zone stress exposure predicted higher scores on PTSD and depression symptom measures; however, eye gaze bias moderated this relationship. In soldiers with war zone stress exposure, shorter mean fixation time when viewing fearful faces predicted higher PTSD symptom scores, and greater total fixation time and longer mean fixation time for sad faces predicted higher depressive symptom scores. Biased processing of emotion stimuli, as measured by gaze bias, appears to confer vulnerability to symptoms of PTSD and depression in soldiers who experience war zone stress.

  20. Biasing moral decisions by exploiting the dynamics of eye gaze.

    PubMed

    Pärnamets, Philip; Johansson, Petter; Hall, Lars; Balkenius, Christian; Spivey, Michael J; Richardson, Daniel C

    2015-03-31

    Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level moral issues. Previous studies have shown that when an experimenter actively controls what an individual sees, the experimenter can affect simple decisions with alternatives of almost equal valence. Here we show that if an experimenter passively knows when individuals move their eyes, the experimenter can change complex moral decisions. This causal effect is achieved simply by adjusting the timing of the decisions. We monitored participants' eye movements during a two-alternative forced-choice task with moral questions. One option was randomly predetermined as a target. At the moment participants had fixated the target option for a set amount of time, we terminated their deliberation and prompted them to choose between the two alternatives. Although participants were unaware of this gaze-contingent manipulation, their choices were systematically biased toward the target option. We conclude that even abstract moral cognition is partly constituted by interactions with the immediate environment and is likely supported by gaze-dependent decision processes. By tracking the interplay between individuals, their sensorimotor systems, and the environment, we can influence the outcome of a decision without directly manipulating the content of the information available to them.
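    The gaze-contingent trigger used in this paradigm amounts to a simple dwell accumulator over a stream of fixations. The sketch below is an illustrative reconstruction, not the authors' code; the (option, duration) stream representation and the 750 ms dwell threshold are assumptions for the example:

    ```python
    def gaze_contingent_prompt(fixations, target, dwell_needed=750):
        """Scan a stream of (option, duration_ms) fixations and return the
        elapsed time (ms) at which to interrupt deliberation: the moment
        cumulative dwell on the predetermined target reaches the threshold.
        Returns None if the threshold is never reached."""
        elapsed = 0        # total time into the trial
        dwell = 0          # cumulative dwell on the target so far
        for option, duration in fixations:
            if option == target:
                if dwell + duration >= dwell_needed:
                    # threshold is crossed partway through this fixation
                    return elapsed + (dwell_needed - dwell)
                dwell += duration
            elapsed += duration
        return None

    # Alternating looks between options 'A' and 'B'; 'A' is the target.
    stream = [('A', 400), ('B', 300), ('A', 200), ('B', 250), ('A', 300)]
    print(gaze_contingent_prompt(stream, 'A'))  # 1300
    ```

    Here the prompt fires at 1300 ms: 400 + 200 ms of dwell accumulate on 'A', and the remaining 150 ms are reached partway through the final 'A' fixation.
    
    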

  1. The Effect of Eye Contact Is Contingent on Visual Awareness

    PubMed Central

    Xu, Shan; Zhang, Shen; Geng, Haiyan

    2018-01-01

    The present study explored how eye contact at different levels of visual awareness influences gaze-induced joint attention. We adopted a spatial-cueing paradigm, in which an averted gaze was used as an uninformative central cue for a joint-attention task. Prior to the onset of the averted-gaze cue, either supraliminal (Experiment 1) or subliminal (Experiment 2) eye contact was presented. The results revealed a larger subsequent gaze-cueing effect following supraliminal eye contact compared to a no-contact condition. In contrast, the gaze-cueing effect was smaller in the subliminal eye-contact condition than in the no-contact condition. These findings suggest that the facilitation effect of eye contact on coordinating social attention depends on visual awareness. Furthermore, subliminal eye contact might have an impact on subsequent social attention processes that differ from supraliminal eye contact. This study highlights the need to further investigate the role of eye contact in implicit social cognition. PMID:29467703

  2. Disentangling gaze shifts from preparatory ERP effects during spatial attention

    PubMed Central

    Kennett, Steffan; van Velzen, José; Eimer, Martin; Driver, Jon

    2007-01-01

    After a cue directing attention to one side, anterior event-related potentials (ERPs) show contralateral negativity (Anterior Directing Attention Negativity, ADAN). It is unclear whether ADAN effects are contaminated by contralateral negativity arising from residual gaze shifts. Conversely, it is possible that ADAN-related potentials contaminate the horizontal electrooculogram (HEOG), via volume conduction. To evaluate these possibilities, we used high-resolution infrared eye tracking, while recording EEG and HEOG in a cued spatial-attention task. We found that, after conventional ERP and HEOG pre-processing exclusions, small but systematic residual gaze shifts in the cued direction can remain, as revealed by the infrared measure. Nevertheless, by using this measure for more stringent exclusion of small gaze shifts, we confirmed that reliable ADAN components remain for preparatory spatial attention in the absence of any systematic gaze shifts toward the cued side. PMID:17241141

  3. Gaze direction differentially affects avoidance tendencies to happy and angry faces in socially anxious individuals.

    PubMed

    Roelofs, Karin; Putman, Peter; Schouten, Sonja; Lange, Wolf-Gero; Volman, Inge; Rinck, Mike

    2010-04-01

    Increasing evidence indicates that eye gaze direction affects the processing of emotional faces in anxious individuals. However, the effects of eye gaze direction on the behavioral responses elicited by emotional faces, such as avoidance behavior, remain largely unexplored. We administered an Approach-Avoidance Task (AAT) in high (HSA) and low socially anxious (LSA) individuals. All participants responded to photographs of angry, happy, and neutral faces (presented with direct and averted gaze) by either pushing a joystick away from them (avoidance) or pulling it towards them (approach). Compared to LSA, HSA individuals were faster in avoiding than in approaching angry faces. Most crucially, this avoidance tendency was only present when the perceived anger was directed towards the subject (direct gaze) and not when the gaze of the face-stimulus was averted. In contrast, HSA individuals tended to avoid happy faces irrespective of gaze direction. Neutral faces elicited no approach-avoidance tendencies. Thus, avoidance of angry faces in social anxiety, as measured by AA tasks, reflects avoidance of subject-directed anger and not of negative stimuli in general. In addition, although both anger and joy are considered to reflect approach-related emotions, gaze direction did not affect HSA's avoidance of happy faces, suggesting differential mechanisms affecting responses to happy and angry faces in social anxiety. 2009 Elsevier Ltd. All rights reserved.

  4. "The Gaze Heuristic:" Biography of an Adaptively Rational Decision Process.

    PubMed

    Hamlin, Robert P

    2017-04-01

    This article is a case study that describes the natural and human history of the gaze heuristic. The gaze heuristic is an interception heuristic that utilizes a single input (deviation from a constant angle of approach) repeatedly as a task is performed. Its architecture, advantages, and limitations are described in detail. A history of the gaze heuristic is then presented. In natural history, the gaze heuristic is the only known technique used by predators to intercept prey. In human history, the gaze heuristic was discovered accidentally by Royal Air Force (RAF) Fighter Command just prior to World War II. As it was never discovered by the Luftwaffe, the technique conferred a decisive advantage upon the RAF throughout the war. After the end of the war, German technology was combined in America with the British heuristic to create the Sidewinder AIM-9 missile, the most successful autonomous weapon ever built. There are no plans to withdraw it or replace its guiding gaze heuristic. The case study demonstrates that the gaze heuristic is a specific heuristic type that takes a single best input at the best time (take-the-best). Its use is an adaptively rational response to specific, rapidly evolving decision environments that has allowed those animals/humans/machines who use it to survive, prosper, and multiply relative to those who do not. Copyright © 2017 Cognitive Science Society, Inc.
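    The single-input architecture described above lends itself to a compact simulation: the pursuer steers only to null any drift in the bearing angle to the target, and interception falls out. The sketch below is a generic constant-bearing pursuit loop, not a model taken from the article; the feedback gain, speeds, and geometry are illustrative assumptions:

    ```python
    import math

    def gaze_heuristic_pursuit(pursuer, p_speed, target, t_vel, dt=0.1, steps=400):
        """Steer the pursuer so the bearing angle to the target stays
        constant (constant-bearing interception). Only the change in
        bearing is fed back -- the single input the heuristic uses.
        Returns the closest approach distance achieved."""
        px, py = pursuer
        tx, ty = target
        heading = math.atan2(ty - py, tx - px)   # start by facing the target
        prev_bearing = heading
        closest = math.hypot(tx - px, ty - py)
        gain = 4.0                               # illustrative feedback gain
        for _ in range(steps):
            tx += t_vel[0] * dt
            ty += t_vel[1] * dt
            bearing = math.atan2(ty - py, tx - px)
            # turn in proportion to the drift in bearing, to null it
            heading += gain * (bearing - prev_bearing)
            prev_bearing = bearing
            px += p_speed * math.cos(heading) * dt
            py += p_speed * math.sin(heading) * dt
            closest = min(closest, math.hypot(tx - px, ty - py))
        return closest

    # Faster pursuer chasing a target crossing from left to right
    miss = gaze_heuristic_pursuit((0.0, 0.0), 2.0, (10.0, 10.0), (1.0, 0.0))
    print(round(miss, 2))  # closest approach near zero, i.e. interception
    ```

    Note what is absent: no estimate of the target's speed, distance, or trajectory is ever computed, which is exactly what makes the heuristic so cheap.
    
    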

  5. A Comparison of Some Processing Time Measures Based on Eye Movements. Technical Report No. 285.

    ERIC Educational Resources Information Center

    Blanchard, Harry E.

    A study was conducted to provide a replication of the gaze duration algorithm proposed by M. A. Just and P. A. Carpenter using a different kind of passage, to compare the three gaze duration algorithms that have been proposed by other researchers, and to measure processing time in reading. Fifty-one college students read a passage while their eye…

  6. Predicting Aggressive Tendencies by Visual Attention Bias Associated with Hostile Emotions

    PubMed Central

    Lin, Ping-I; Hsieh, Cheng-Da; Juan, Chi-Hung; Hossain, Md Monir; Erickson, Craig A.; Lee, Yang-Han; Su, Mu-Chun

    2016-01-01

    The goal of the current study was to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited for an eye-tracking study that measured various components of social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured from ratings of the emotions of characters in the images. The results show that eye gaze duration on hostile characters was significantly inversely correlated with the AQ score, as was eye contact with angry faces. Eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies. PMID:26901770

  8. Manifold decoding for neural representations of face viewpoint and gaze direction using magnetoencephalographic data.

    PubMed

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2018-05-01

    The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.

  9. Learner Engagement under the "Regulatory Gaze": Possibilities for Re-Positioning Early Childhood Pre-Service Teachers as Autonomous Professionals

    ERIC Educational Resources Information Center

    Jovanovic, Jessie; Fane, Jennifer

    2016-01-01

    In a climate of increasing regulation within early childhood education and care services, and the greater re-positioning of professionals within public sectors, this article seeks to extend the literature surrounding risk and regulation in early childhood. In efforts to "push back" against the "regulatory gaze" in the early…

  10. Horizontal gaze nystagmus: a review of vision science and application issues.

    PubMed

    Rubenzer, Steven J; Stevenson, Scott B

    2010-03-01

    The Horizontal Gaze Nystagmus (HGN) test is one component of the Standardized Field Sobriety Test battery. This article reviews the literature on smooth pursuit eye movement and gaze nystagmus with a focus on normative responses, the influence of alcohol on these behaviors, and stimulus conditions similar to those used in the HGN sobriety test. Factors such as age, stimulus and background conditions, medical conditions, prescription medications, and psychiatric disorder were found to affect the smooth pursuit phase of HGN. Much less literature is available for gaze nystagmus, but onset of nystagmus may occur in some sober subjects at 45 degrees or less. We conclude that HGN is limited by large variability in the underlying normative behavior, by methods and testing environments that are often poorly controlled, and by a lack of rigorous validation in laboratory settings.

  11. The "Social Gaze Space": A Taxonomy for Gaze-Based Communication in Triadic Interactions.

    PubMed

    Jording, Mathis; Hartz, Arne; Bente, Gary; Schulte-Rüther, Martin; Vogeley, Kai

    2018-01-01

    Humans substantially rely on non-verbal cues in their communication and interaction with others. The eyes represent a "simultaneous input-output device": While we observe others and obtain information about their mental states (including feelings, thoughts, and intentions-to-act), our gaze simultaneously provides information about our own attention and inner experiences. This substantiates its pivotal role for the coordination of communication. The communicative and coordinative capacities - and their phylogenetic and ontogenetic impacts - become fully apparent in triadic interactions, constituted in their simplest form by two persons and an object. Technological advances have sparked renewed interest in social gaze and provide new methodological approaches. Here we introduce the 'Social Gaze Space' as a new conceptual framework for the systematic study of gaze behavior during social information processing. It covers all possible categorical states, namely 'partner-oriented,' 'object-oriented,' 'introspective,' 'initiating joint attention,' and 'responding joint attention.' Different combinations of these states explain several interpersonal phenomena. We argue that this taxonomy distinguishes the most relevant interactional states along their distinctive features, and we showcase the implications for prominent social gaze phenomena. The taxonomy allows us to identify research desiderata that have been neglected so far. We argue for a systematic investigation of these phenomena and discuss some related methodological issues.

  12. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.
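    The feature-merging approach generalizes readily: concatenate gaze-behavior features with image features, train a classifier to score error likelihood, and evaluate with AUC. The sketch below uses synthetic data and plain logistic regression; the feature names, weights, and data are invented for illustration and are unrelated to the study's actual features or results:

    ```python
    import numpy as np

    def auc(scores, labels):
        """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
        order = np.argsort(scores)
        ranks = np.empty(len(scores))
        ranks[order] = np.arange(1, len(scores) + 1)
        pos = labels == 1
        n_pos, n_neg = pos.sum(), (~pos).sum()
        return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

    rng = np.random.default_rng(1)
    n = 200
    # Invented features: gaze behavior (e.g. dwell on the mass, fixation
    # count, time to first hit) and image texture (e.g. contrast, entropy).
    gaze = rng.normal(size=(n, 3))
    image = rng.normal(size=(n, 2))
    X = np.hstack([gaze, image])                 # merged feature vector
    w_true = np.array([1.5, -1.0, 0.5, 0.8, -0.6])
    y = (X @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = error

    # Plain logistic regression fit by gradient descent
    w = np.zeros(X.shape[1])
    for _ in range(2000):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= 0.1 * X.T @ (p - y) / n

    print(round(auc(X @ w, y), 2))  # well above chance on this synthetic data
    ```

    In practice the study compared group-based models (pooling all readers) against personalized ones (one model per reader), which in this sketch would simply mean fitting `w` on different subsets of rows.
    
    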

  13. Does the 'P300' speller depend on eye gaze?

    NASA Astrophysics Data System (ADS)

    Brunner, P.; Joshi, S.; Briskin, S.; Wolpaw, J. R.; Bischof, H.; Schalk, G.

    2010-10-01

    Many people affected by debilitating neuromuscular disorders such as amyotrophic lateral sclerosis, brainstem stroke or spinal cord injury are impaired in their ability to, or are even unable to, communicate. A brain-computer interface (BCI) uses brain signals, rather than muscles, to re-establish communication with the outside world. One particular BCI approach is the so-called 'P300 matrix speller' that was first described by Farwell and Donchin (1988 Electroencephalogr. Clin. Neurophysiol. 70 510-23). It has been widely assumed that this method does not depend on the ability to focus on the desired character, because it was thought that it relies primarily on the P300-evoked potential and minimally, if at all, on other EEG features such as the visual-evoked potential (VEP). This issue is highly relevant for the clinical application of this BCI method, because eye movements may be impaired or lost in the relevant user population. This study investigated the extent to which the performance in a 'P300' speller BCI depends on eye gaze. We evaluated the performance of 17 healthy subjects using a 'P300' matrix speller under two conditions. Under one condition ('letter'), the subjects focused their eye gaze on the intended letter, while under the second condition ('center'), the subjects focused their eye gaze on a fixation cross that was located in the center of the matrix. The results show that the performance of the 'P300' matrix speller in normal subjects depends in considerable measure on gaze direction. They thereby disprove a widespread assumption in BCI research, and suggest that this BCI might function more effectively for people who retain some eye-movement control. The applicability of these findings to people with severe neuromuscular disabilities (particularly in eye-movements) remains to be determined.

  14. Countermanding eye-head gaze shifts in humans: marching orders are delivered to the head first.

    PubMed

    Corneil, Brian D; Elsley, James K

    2005-07-01

    The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged approximately 120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged approximately 190 ms or approximately 70-75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. 
Using the second model, estimates for gaze and head ballistic intervals were approximately 25 and 90 ms, respectively, consistent with the known physiology of the final motor paths. Further, the threshold of the head movement branch was estimated to be 85% of that required to activate gaze shifts. From these results, we conclude that a commitment to a head movement is made in advance of gaze shifts and that the comparative SSRT differences result primarily from biomechanical differences inherent to eye and head motion.
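    The race architecture described here can be illustrated with a small Monte Carlo simulation: a GO process races a STOP process, and a terminal ballistic interval is exempt from inhibition. This is a generic independent-race sketch, not the authors' model; all parameter values (GO distribution, SSRT, ballistic interval) are illustrative:

    ```python
    import random

    def race_trial(ssd, go_mu=300, go_sd=50, ssrt=120, ballistic=25):
        """One stop trial. The GO process finishes at a random time; its
        last `ballistic` ms can no longer be inhibited. The STOP process
        finishes at ssd + ssrt. A movement is produced unless STOP finishes
        before GO enters its ballistic interval."""
        go_finish = random.gauss(go_mu, go_sd)
        stop_finish = ssd + ssrt
        return stop_finish > go_finish - ballistic  # True -> movement occurs

    random.seed(7)
    for ssd in (50, 150, 250):
        p = sum(race_trial(ssd) for _ in range(10000)) / 10000
        print(ssd, round(p, 2))
    # P(respond) rises with stop-signal delay, mirroring empirical
    # inhibition functions. A longer ballistic interval or a lower
    # activation threshold for one branch (here, the head) would release
    # that movement earlier, yielding trials where the head moves while
    # gaze holds still.
    ```
    
    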

  15. Automatic Mechanisms for Social Attention Are Culturally Penetrable.

    PubMed

    Cohen, Adam S; Sasaki, Joni Y; German, Tamsin C; Kim, Heejung S

    2017-01-01

    Are mechanisms for social attention influenced by culture? Evidence that social attention is triggered automatically by bottom-up gaze cues and is uninfluenced by top-down verbal instructions may suggest it operates in the same way everywhere. Yet considerations from evolutionary and cultural psychology suggest that specific aspects of one's cultural background may have consequences for the way mechanisms for social attention develop and operate. In more interdependent cultures, the scope of social attention may be broader, focusing on more individuals and relations between those individuals. We administered a multi-gaze cueing task requiring participants to fixate a foreground face flanked by background faces and measured shifts in attention using eye tracking. For European Americans, gaze cueing did not depend on the direction of background gaze cues, suggesting foreground gaze alone drives automatic attention shifting; for East Asians, cueing patterns differed depending on whether the foreground cue matched or mismatched background cues, suggesting foreground and background gaze information were integrated. These results demonstrate that cultural background influences the social attention system by shifting it into a narrow or broad mode of operation and, importantly, provide evidence challenging the assumption that mechanisms underlying automatic social attention are necessarily rigid and impenetrable to culture. Copyright © 2015 Cognitive Science Society, Inc.

  16. Abnormal social reward processing in autism as indexed by pupillary responses to happy faces

    PubMed Central

    2012-01-01

    Background: Individuals with Autism Spectrum Disorders (ASD) typically show impaired eye contact during social interactions. From a young age, they look less at faces than typically developing (TD) children and tend to avoid direct gaze. However, the reason for this behavior remains controversial; ASD children might avoid eye contact because they perceive the eyes as aversive or because they do not find social engagement through mutual gaze rewarding. Methods: We monitored pupillary diameter as a measure of autonomic response in children with ASD (n = 20, mean age = 12.4) and TD controls (n = 18, mean age = 13.7) while they looked at faces displaying different emotions. Each face displayed happy, fearful, angry or neutral emotions with the gaze either directed to or averted from the subjects. Results: Overall, children with ASD and TD controls showed similar pupillary responses; however, they differed significantly in their sensitivity to gaze direction for happy faces. Specifically, pupillary diameter increased among TD children when viewing happy faces with direct gaze as compared to those with averted gaze, whereas children with ASD did not show such sensitivity to gaze direction. We found no group differences in fixation that could explain the differential pupillary responses. There was no effect of gaze direction on pupil diameter for negative affect or neutral faces among either the TD or ASD group. Conclusions: We interpret the increased pupillary diameter to happy faces with direct gaze in TD children to reflect the intrinsic reward value of a smiling face looking directly at an individual. The lack of this effect in children with ASD is consistent with the hypothesis that individuals with ASD may have reduced sensitivity to the reward value of social stimuli. PMID:22958650

  17. Eye gaze during comprehension of American Sign Language by native and beginning signers.

    PubMed

    Emmorey, Karen; Thompson, Robin; Colvin, Rachael

    2009-01-01

    An eye-tracking experiment investigated where deaf native signers (N = 9) and hearing beginning signers (N = 10) look while comprehending a short narrative and a spatial description in American Sign Language produced live by a fluent signer. Both groups fixated primarily on the signer's face (more than 80% of the time) but differed with respect to fixation location. Beginning signers fixated on or near the signer's mouth, perhaps to better perceive English mouthing, whereas native signers tended to fixate on or near the eyes. Beginning signers shifted gaze away from the signer's face more frequently than native signers, but the pattern of gaze shifts was similar for both groups. When a shift in gaze occurred, the sign narrator was almost always looking at his or her hands and was most often producing a classifier construction. We conclude that joint visual attention and attention to mouthing (for beginning signers), rather than linguistic complexity or processing load, affect gaze fixation patterns during sign language comprehension.

  18. The importance of the eyes: communication skills in infants of blind parents.

    PubMed

    Senju, Atsushi; Tucker, Leslie; Pasco, Greg; Hudry, Kristelle; Elsabbagh, Mayada; Charman, Tony; Johnson, Mark H

    2013-06-07

    The effects of selectively different experience of eye contact and gaze behaviour on the early development of five sighted infants of blind parents were investigated. Infants were assessed longitudinally at 6-10, 12-15 and 24-47 months. Face scanning and gaze following were assessed using eye tracking. In addition, established measures of autistic-like behaviours and standardized tests of cognitive, motor and linguistic development, as well as observations of naturalistic parent-child interaction were collected. These data were compared with those obtained from a larger group of sighted infants of sighted parents. Infants with blind parents did not show an overall decrease in eye contact or gaze following when they observed sighted adults on video or in live interactions, nor did they show any autistic-like behaviours. However, they directed their own eye gaze somewhat less frequently towards their blind mothers and also showed improved performance in visual memory and attention at younger ages. Being reared with significantly reduced experience of eye contact and gaze behaviour does not preclude sighted infants from developing typical gaze processing and other social-communication skills. Indeed, the need to switch between different types of communication strategy may actually enhance other skills during development.

  19. Reliability and Validity of Gaze-Dependent Functional Vision Space: A Novel Metric Quantifying Visual Function in Infantile Nystagmus Syndrome.

    PubMed

    Roberts, Tawna L; Kester, Kristi N; Hertle, Richard W

    2018-04-01

    This study presents the test-retest reliability of optotype visual acuity (OVA) across 60° of horizontal gaze position in patients with infantile nystagmus syndrome (INS). It also demonstrates the validity of the metric gaze-dependent functional vision space (GDFVS) in patients with INS. In experiment 1, OVA was measured twice in seven horizontal gaze positions, from 30° left to 30° right in 10° steps, in 20 subjects with INS and 14 without INS. Test-retest reliability was assessed using the intraclass correlation coefficient (ICC) in each gaze position. The OVA area under the curve (AUC) was calculated with horizontal eye position on the x-axis and logMAR visual acuity on the y-axis, and then converted to GDFVS. In experiment 2, the validity of GDFVS was determined over 40° of horizontal gaze by applying the 95% limits of agreement from experiment 1 to pre- and post-treatment GDFVS values from 85 patients with INS. In experiment 1, test-retest reliability for OVA was high (ICC ≥ 0.88), as the test-retest difference averaged less than 0.1 logMAR in each gaze position. In experiment 2, as a group, INS subjects had a significant increase (P < 0.001) in the size of their GDFVS that exceeded the 95% limits of agreement found during test-retest. OVA is a reliable measure in INS patients across 60° of horizontal gaze position. GDFVS is a valid clinical method for quantifying OVA as a function of eye position in INS patients. This method captures the dynamic nature of OVA in INS patients and may be a valuable measure for quantifying visual function in patients with INS, particularly for quantifying change as part of clinical studies.
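    The AUC computation described above can be sketched numerically. In this hypothetical example, the trapezoidal rule integrates acuity across the seven gaze positions; since the abstract does not specify how the AUC is converted to GDFVS, the sketch integrates "acuity headroom" (an assumed logMAR ceiling minus the measured logMAR), so that a larger area corresponds to better functional vision. All acuity values are illustrative.

    ```python
    def acuity_auc(gaze_deg, logmar, ceiling=1.0):
        """Trapezoidal area of (ceiling - logMAR) over gaze position (deg)."""
        area = 0.0
        for i in range(len(gaze_deg) - 1):
            w = gaze_deg[i + 1] - gaze_deg[i]        # step width in degrees
            h1 = ceiling - logmar[i]                 # headroom at left edge
            h2 = ceiling - logmar[i + 1]             # headroom at right edge
            area += w * (h1 + h2) / 2.0
        return area

    # Seven gaze positions from 30 deg left to 30 deg right in 10 deg steps.
    gaze = [-30, -20, -10, 0, 10, 20, 30]
    pre  = [0.8, 0.7, 0.5, 0.4, 0.5, 0.7, 0.8]   # hypothetical pre-treatment logMAR
    post = [0.6, 0.5, 0.3, 0.2, 0.3, 0.5, 0.6]   # hypothetical post-treatment logMAR

    print(acuity_auc(gaze, pre))    # smaller area: poorer functional vision
    print(acuity_auc(gaze, post))   # treatment enlarges the area
    ```

    A treatment effect would then be assessed by checking whether the pre/post change in this area exceeds the test-retest limits of agreement.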

  20. Electrophysiological quantification of underlying mechanism of decision making from auto dealers advertisement - A neuromarketing research

    NASA Astrophysics Data System (ADS)

    Samsuri, Norlyiana; Reza, Faruque; Begum, Tahamina; Yusoff, Nasir; Idris, Badrisyah; Omar, Hazim; Isa, Salmi Mohd

    2016-10-01

    This study investigated which advertisement display design attracts the most attention, as measured by cognitive responses and gaze behavior. A total of 15 subjects were recruited from USM undergraduate medical students. Event-related potentials (ERPs), as a cognitive response during viewing of the different display designs, were recorded from 17 electrode sites using a 128-electrode sensor net applied to the subject's scalp according to the 10-20 international electrode placement system. The amplitudes of the evoked N100 and P300 ERP components were identified. To determine statistical significance, amplitude data were analyzed with a one-way ANOVA and reaction times with an independent t-test. Two of the 15 subjects who participated in the ERP recording also underwent eye tracking to measure fixation duration, pupil size and attention maps of eye movements as gaze-behavioral responses to the different display designs. The ERP and gaze behavior results were consistent. Higher amplitudes of the N100 and P300 ERP components during the RLG view indicate that participants engaged greater visual selective attention and visual cognitive processing during visual presentation of the RLG view. Visual interpretation of the attention maps, together with the fixation duration and pupil size data from the two case studies, revealed that the RLG view attracted more attention than its counterpart. Regarding color as a confounder, the gaze data from the two cases yielded an interesting finding: both subjects showed a common interest in the red color during both the LLG and the RLG views, indicating that color may play a distinct role in display design. These findings offer marketers useful guidance for designing advertisements that are cost-effective within limited advertising space; on that basis, the RLG view is the display design to prioritize.
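    Peak amplitudes of ERP components like the N100 and P300 analyzed above are typically picked by searching a latency window of the averaged epoch for the extreme value. The sketch below illustrates that idea under assumed window bounds and sampling rate; it is not the study's actual pipeline, and the epoch values are synthetic.

    ```python
    def peak_amplitude(samples, srate_hz, t0_s, t1_s, polarity):
        """Most negative (polarity=-1) or positive (+1) value in [t0_s, t1_s]."""
        i0 = int(t0_s * srate_hz)
        i1 = int(t1_s * srate_hz)
        window = samples[i0:i1 + 1]
        return min(window) if polarity < 0 else max(window)

    # Hypothetical 1 s averaged epoch at 250 Hz, starting at stimulus onset.
    srate = 250
    epoch = [0.0] * srate
    epoch[25] = -4.2   # negative deflection at 100 ms (N100-like)
    epoch[75] = 6.8    # positive deflection at 300 ms (P300-like)

    n100 = peak_amplitude(epoch, srate, 0.08, 0.14, polarity=-1)
    p300 = peak_amplitude(epoch, srate, 0.25, 0.50, polarity=+1)
    print(n100, p300)
    ```

    Per-condition peak amplitudes extracted this way would then feed the ANOVA comparing display designs.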

  1. Gaze Tracking System for User Wearing Glasses

    PubMed Central

    Gwon, Su Yeong; Cho, Chul Woo; Lee, Hyeon Chang; Lee, Won Oh; Park, Kang Ryoung

    2014-01-01

    Conventional gaze tracking systems are limited in cases where the user is wearing glasses because the glasses usually produce noise due to reflections caused by the gaze tracker's lights. This makes it difficult to locate the pupil and the specular reflections (SRs) from the cornea of the user's eye. These difficulties increase the likelihood of gaze detection errors because the gaze position is estimated based on the location of the pupil center and the positions of the corneal SRs. In order to overcome these problems, we propose a new gaze tracking method that can be used by subjects who are wearing glasses. Our research is novel in the following four ways: First, we construct a new control device for the illuminator, which includes four illuminators that are positioned at the four corners of a monitor. Second, our system automatically determines whether a user is wearing glasses or not in the initial stage by counting the number of white pixels in an image that is captured using the low exposure setting on the camera. Third, if it is determined that the user is wearing glasses, the four illuminators are turned on and off sequentially in order to obtain an image that has a minimal amount of noise due to reflections from the glasses. As a result, it is possible to avoid the reflections and accurately locate the pupil center and the positions of the four corneal SRs. Fourth, by turning off one of the four illuminators, only three corneal SRs exist in the captured image. Since the proposed gaze detection method requires four corneal SRs for calculating the gaze position, the unseen SR position is estimated based on the parallelogram shape that is defined by the three SR positions, and the gaze position is calculated. Experimental results showed that the average gaze detection error with 20 persons was about 0.70° and the processing time was 63.72 ms per frame. PMID:24473283
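    The parallelogram-based estimation of the unseen specular reflection can be sketched as a simple vector completion: given three corners, the fourth is the sum of the two corners adjacent to it minus the corner diagonal to it. The point labels and pixel coordinates below are illustrative assumptions, not values from the paper.

    ```python
    def fourth_corner(a, b, c):
        """Estimate point d so that a, b, c, d form a parallelogram.
        b is the corner diagonal to the missing point d, so d = a + c - b."""
        return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])

    # Hypothetical SR pixel coordinates from the three lit illuminators.
    top_left, top_right, bottom_right = (100, 50), (180, 52), (182, 120)
    bottom_left = fourth_corner(top_left, top_right, bottom_right)
    print(bottom_left)   # completes the roughly rectangular SR pattern
    ```

    With all four SR positions available (three observed, one estimated), the standard four-SR gaze calculation can proceed unchanged.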

  2. Learning to Interact with a Computer by Gaze

    ERIC Educational Resources Information Center

    Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji

    2008-01-01

    The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced…

  3. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    ERIC Educational Resources Information Center

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  4. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  5. The effect of Ramadan fasting on spatial attention through emotional stimuli

    PubMed Central

    Molavi, Maziyar; Yunus, Jasmy; Utama, Nugraha P

    2016-01-01

    Fasting can influence psychological and mental states. In the current study, the effect of periodic fasting on the processing of emotion through gazed facial expressions, a realistic multisource form of social information, was investigated for the first time. The dynamic cue-target task was applied, with behavioral and event-related potential measurements, to 40 participants to reveal temporal and spatial brain activities before, during, and after the fasting period. Fasting had several significant effects. The amplitude of the N1 component decreased over the centroparietal scalp during fasting. Furthermore, reaction time decreased during the fasting period. Self-reported measures of arousal deficit as well as mood increased during the fasting period. There was a significant contralateral alteration of P1 over the occipital area for happy facial expression stimuli. A significant effect of gazed expression and its interaction with the emotional stimuli was indicated by the amplitude of N1. Furthermore, the findings confirmed the validity effect, i.e., congruency between gaze and target position, as indicated by an increment of P3 amplitude over the centroparietal area and by slower reaction times in the behavioral data during the incongruent (invalid) condition compared with the valid condition. The results show that attention to facial expression stimuli, a kind of communicative social signal, was affected by fasting, and that fasting improved the mood of practitioners. Moreover, findings from the behavioral and event-related potential analyses indicated that the neural dynamics of facial emotion are processed faster than those of gaze: participants tended to react faster and to rely on the type of facial emotion rather than on gaze direction while performing the task. For happy facial expression stimuli, right-hemisphere activation was greater than left-hemisphere activation, which is consistent with the lateralization account rather than the valence account of emotional processing. PMID:27307772

  6. How social exclusion modulates social information processing: A behavioural dissociation between facial expressions and gaze direction

    PubMed Central

    Gallucci, Marcello; Ricciardelli, Paola

    2018-01-01

    Social exclusion is a painful experience that is felt as a threat to the human need to belong; it can lead to increased aggressive and anti-social behaviours and result in emotional and cognitive numbness. Excluded individuals also seem to show an automatic tuning to positivity: they tend to increase their selective attention towards social acceptance signals. Although these effects are known in the literature, the consequences of social exclusion for social information processing still need to be explored in depth. The aim of this study was to investigate the effects of social exclusion on the processing of two features that are strictly bound in the appraisal of the meaning of facial expressions: gaze direction and emotional expression. In two experiments (N = 60, N = 45), participants were asked to identify gaze direction or emotional expressions from facial stimuli in which both of these features were manipulated. They performed these tasks in a four-block crossed design after being socially included or excluded using the Cyberball game. Participants’ empathy and self-reported emotions were recorded using the Empathy Quotient (EQ) and PANAS questionnaires. The Need Threat Scale and three additional questions were also used as manipulation checks in the second experiment. In both experiments, excluded participants were less accurate than included participants in gaze direction discrimination. Modulatory effects of direct gaze (Experiment 1) and sad expression (Experiment 2) on the effects of social exclusion were found on response times (RTs) in the emotion recognition task. Specific differences between males and females in the reaction to social exclusion were also found in Experiment 2: excluded male participants tended to be less accurate and faster than included male participants, while excluded females showed a more accurate and slower performance than included female participants. No influence of social exclusion on PANAS or EQ scores was found. 
Results are discussed in the context of the importance of identifying gaze direction in appraisal theories. PMID:29617410

  7. A Metric to Quantify Shared Visual Attention in Two-Person Teams

    NASA Technical Reports Server (NTRS)

    Gontar, Patrick; Mulligan, Jeffrey B.

    2015-01-01

    1) Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar & Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12-dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window duration (t) as a variable parameter. For example, when we average with a t of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one-minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors of the two crew members at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods. First, a permutation test was applied to the subjects' data: when one crew member's gaze proportion vector is correlated with a random time sample from the other crew member's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. 
In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention and can distinguish between activities involving different instruments. We plan to analyze whether pilots' strategies for sharing visual attention can predict performance. Possible measures of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups by using intra-class correlation coefficients instead of the Pearson product-moment correlation.
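    The metric described in the Methods can be sketched as follows: each gaze sample becomes a one-hot 12-dimensional AOI indicator, the indicators are averaged within a window to give gaze-proportion vectors, and the two crew members' vectors are compared with a Pearson correlation. The window contents, window length, and AOI indices below are illustrative, not real flight-deck data.

    ```python
    from math import sqrt

    N_AOI = 12

    def one_hot(aoi):
        v = [0.0] * N_AOI
        v[aoi] = 1.0
        return v

    def window_proportions(aoi_samples):
        """Average the one-hot vectors: proportion of time spent on each AOI."""
        total = [0.0] * N_AOI
        for a in aoi_samples:
            for k, x in enumerate(one_hot(a)):
                total[k] += x
        return [t / len(aoi_samples) for t in total]

    def pearson(u, v):
        mu, mv = sum(u) / len(u), sum(v) / len(v)
        du = [x - mu for x in u]
        dv = [x - mv for x in v]
        num = sum(a * b for a, b in zip(du, dv))
        den = sqrt(sum(a * a for a in du)) * sqrt(sum(b * b for b in dv))
        return num / den

    # Hypothetical one-minute windows of fixated AOI indices for two crew members.
    pilot   = [0, 0, 1, 1, 2, 2, 2, 3]
    copilot = [0, 1, 1, 1, 2, 2, 3, 3]
    print(pearson(window_proportions(pilot), window_proportions(copilot)))
    ```

    Sliding this computation over successive windows yields the time-varying correlation signal; the permutation test then pairs one crew member's windows with randomly shifted windows from the other to build the null distribution.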

  8. Interactive effects between gaze direction and facial expression on attentional resources deployment: the task instruction and context matter

    PubMed Central

    Ricciardelli, Paola; Lugli, Luisa; Pellicano, Antonello; Iani, Cristina; Nicoletti, Roberto

    2016-01-01

    In three experiments, we tested whether the amount of attentional resources needed to process a face displaying neutral/angry/fearful facial expressions with direct or averted gaze depends on task instructions, and face presentation. To this end, we used a Rapid Serial Visual Presentation paradigm in which participants in Experiment 1 were first explicitly asked to discriminate whether the expression of a target face (T1) with direct or averted gaze was angry or neutral, and then to judge the orientation of a landscape (T2). Experiment 2 was identical to Experiment 1 except that participants had to discriminate the gender of the face of T1 and fearful faces were also presented randomly inter-mixed within each block of trials. Experiment 3 differed from Experiment 2 only because angry and fearful faces were never presented within the same block. The findings indicated that the presence of the attentional blink (AB) for face stimuli depends on specific combinations of gaze direction and emotional facial expressions and crucially revealed that the contextual factors (e.g., explicit instruction to process the facial expression and the presence of other emotional faces) can modify and even reverse the AB, suggesting a flexible and more contextualized deployment of attentional resources in face processing. PMID:26898473

  9. Look over There! Unilateral Gaze Increases Geographical Memory of the 50 United States

    ERIC Educational Resources Information Center

    Propper, Ruth E.; Brunye, Tad T.; Christman, Stephen D.; Januszewskia, Ashley

    2012-01-01

    Based on their specialized processing abilities, the left and right hemispheres of the brain may not contribute equally to recall of general world knowledge. US college students recalled the verbal names and spatial locations of the 50 US states while sustaining leftward or rightward unilateral gaze, a procedure that selectively activates the…

  10. Sexual dimorphism of male face shape, partnership status and the temporal context of relationship sought modulate women's preferences for direct gaze.

    PubMed

    Conway, Claire A; Jones, Benedict C; DeBruine, Lisa M; Little, Anthony C

    2010-02-01

    Most previous studies of face preferences have investigated the physical cues that influence face preferences. Far fewer studies have investigated the effects of cues to the direction of others' social interest (i.e. gaze direction) on face preferences. Here we found that unpartnered women demonstrated stronger preferences for direct gaze (indicating social interest) from feminine male faces than from masculine male faces when judging men's attractiveness for long-term relationships, but not when judging men's attractiveness for short-term relationships. Moreover, unpartnered women's preferences for direct gaze from feminine men were stronger for long-term than short-term relationships, but there was no comparable effect for judgements of masculine men. No such effects were evident among women with romantic partners, potentially reflecting different motivations underlying partnered and unpartnered women's judgements of men's attractiveness. Collectively these findings (1) complement previous findings whereby women demonstrated stronger preferences for feminine men as long-term than short-term partners, (2) demonstrate context-sensitivity in the integration of physical and social cues in face preferences, and (3) suggest that gaze preferences may function, at least in part, to facilitate efficient allocation of mating effort.

  11. Gaze-contingent control for minimally invasive robotic surgery.

    PubMed

    Mylonas, George P; Darzi, Ara; Yang, Guang Zhong

    2006-09-01

    Recovering tissue depth and deformation during robotically assisted minimally invasive procedures is an important step towards motion compensation, stabilization and co-registration with preoperative data. This work demonstrates that eye gaze derived from binocular eye tracking can be effectively used to recover 3D motion and deformation of the soft tissue. A binocular eye-tracking device was integrated into the stereoscopic surgical console. After calibration, the 3D fixation point of the participating subjects could be accurately resolved in real time. A CT-scanned phantom heart model was used to demonstrate the accuracy of gaze-contingent depth extraction and motion stabilization of the soft tissue. The dynamic response of the oculomotor system was assessed with the proposed framework by using autoregressive modeling techniques. In vivo data were also used to perform gaze-contingent decoupling of cardiac and respiratory motion. Depth reconstruction, deformation tracking, and motion stabilization of the soft tissue were possible with binocular eye tracking. The dynamic response of the oculomotor system was able to cope with frequencies likely to occur under most routine minimally invasive surgical operations. The proposed framework presents a novel approach towards the tight integration of a human and a surgical robot where interaction in response to sensing is required to be under the control of the operating surgeon.

  12. Robust gaze-steering of an active vision system against errors in the estimated parameters

    NASA Astrophysics Data System (ADS)

    Han, Youngmo

    2015-01-01

    Gaze-steering is often used to broaden the viewing range of an active vision system. Gaze-steering procedures are usually based on estimated parameters such as image position, image velocity, depth and camera calibration parameters. However, there may be uncertainties in these estimated parameters because of measurement noise and estimation errors. In this case, robust gaze-steering cannot be guaranteed. To compensate for such problems, this paper proposes a gaze-steering method based on a linear matrix inequality (LMI). In this method, we first propose a proportional derivative (PD) control scheme on the unit sphere that does not use depth parameters. This proposed PD control scheme can avoid uncertainties in the estimated depth and camera calibration parameters, as well as inconveniences in their estimation process, including the use of auxiliary feature points and highly non-linear computation. Furthermore, the control gain of the proposed PD control scheme on the unit sphere is designed using LMI such that the designed control is robust in the presence of uncertainties in the other estimated parameters, such as image position and velocity. Simulation results demonstrate that the proposed method provides a better compensation for uncertainties in the estimated parameters than the contemporary linear method and steers the gaze of the camera more steadily over time than the contemporary non-linear method.
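    The PD idea at the core of the proposed scheme can be illustrated with a generic discrete-time controller. This sketch does not reproduce the paper's unit-sphere formulation or the LMI-based gain design; the gains, the one-dimensional plant, and the time step are all assumptions chosen only to show the proportional and derivative terms at work.

    ```python
    def pd_step(error, prev_error, kp, kd, dt):
        """PD control command from the current and previous tracking error."""
        return kp * error + kd * (error - prev_error) / dt

    # Toy 1-D simulation: a gaze angle chases a fixed target at 10 degrees.
    target, gaze, prev_err, dt = 10.0, 0.0, 0.0, 0.05
    for _ in range(100):
        err = target - gaze
        gaze += pd_step(err, prev_err, kp=1.0, kd=0.05, dt=dt) * dt
        prev_err = err
    print(abs(target - gaze) < 0.5)   # the gaze has converged near the target
    ```

    In the paper's setting the error would instead live on the unit sphere and the gain would be chosen via an LMI so that convergence holds despite uncertainty in the estimated image position and velocity.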

  13. MEG Evidence for Dynamic Amygdala Modulations by Gaze and Facial Emotions

    PubMed Central

    Dumas, Thibaud; Dubal, Stéphanie; Attal, Yohan; Chupin, Marie; Jouvent, Roland; Morel, Shasha; George, Nathalie

    2013-01-01

    Background The amygdala is a key brain region for face perception. While the role of the amygdala in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions is scarcely known. Methodology/Principal Findings Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources in the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310–350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Conclusion Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception. PMID:24040190

  14. Does gaze cueing produce automatic response activation: a lateralized readiness potential (LRP) study.

    PubMed

    Vainio, L; Heimola, M; Heino, H; Iljin, I; Laamanen, P; Seesjärvi, E; Paavilainen, P

    2014-05-01

    Previous research has shown that gaze cues facilitate responses to an upcoming target if the target location is compatible with the direction of the cue. Similar cueing effects have also been observed with central arrow cues. Both of these cueing effects have been attributed to a reflexive orienting of attention triggered by the cue. In addition, orienting of attention has been proposed to result in a partial response activation of the corresponding hand that, in turn, can be observed in the lateralized readiness potential (LRP), an electrophysiological indicator of automatic hand-motor response preparation. For instance, a central arrow cue has been observed to produce automatic hand-motor activation as indicated by the LRPs. The present study investigated whether gaze cues could also produce similar activation patterns in LRP. Although the standard gaze cueing effect was observed in the behavioural data, the LRP data did not reveal any consistent automatic hand-motor activation. The study suggests that motor processes associated with gaze cueing effect may operate exclusively at the level of oculomotor programming. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  15. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    PubMed Central

    Hardiess, Gregor; Hansmann-Roth, Sabrina; Mallot, Hanspeter A.

    2013-01-01

    Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars appear with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements), the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective on the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) Which sensory variables can be identified that support adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present, and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss? 
In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined operation of external (bottom-up) and internal (top-down) cues in a traffic intersection task. PMID:23760667

  16. Saccadic movement deficiencies in adults with ADHD tendencies.

    PubMed

    Lee, Yun-Jeong; Lee, Sangil; Chang, Munseon; Kwak, Ho-Wan

    2015-12-01

    The goal of the present study was to explore deficits in gaze detection and emotional value judgment during a saccadic eye movement task in adults with attention deficit/hyperactivity disorder (ADHD) tendencies. Thirty-two participants, consisting of 16 adults with ADHD tendencies and 16 controls, were recruited from a pool of 243 university students. Among the many problems experienced by adults with ADHD, our research focused on deficits in the processing of nonverbal cues, such as gaze direction and the emotional value of others' faces. In Experiment 1, a cue display containing a face with emotional value and gaze direction was followed by a target display containing two faces located on the left and right sides of the display. The participant's task was to make an anti-saccade opposite to the gaze direction if the cue face was not emotionally neutral. Participants with ADHD tendencies made more overall errors than controls in making anti-saccades. Based on the hypothesis that the exposure duration of the cue display in Experiment 1 may have been too long, we presented the cue and target displays simultaneously to prevent participants from preparing saccades in advance. Participants in Experiment 2 were asked to make either a pro-saccade or an anti-saccade depending on the emotional value of the central cue face. Interestingly, significant group differences were observed for errors of omission and commission. In addition, a significant three-way interaction among group, cue emotion, and target gaze direction suggests that the emotional recognition and gaze control systems might somehow be interconnected. The results also show that adults with ADHD tendencies are more easily distracted by a task-irrelevant gaze direction. Taken together, these results suggest that tasks requiring both response inhibition (anti-saccade) and gaze-emotion recognition might be useful in developing a diagnostic test for discriminating adults with ADHD tendencies from healthy adults.

  17. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness

    PubMed Central

    Sajjacholapunt, Pitch; Ball, Linden J.

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information. PMID:24624104

  18. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    PubMed

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  19. Older Adult Multitasking Performance Using a Gaze-Contingent Useful Field of View.

    PubMed

    Ward, Nathan; Gaspar, John G; Neider, Mark B; Crowell, James; Carbonari, Ronald; Kaczmarski, Hank; Ringer, Ryan V; Johnson, Aaron P; Loschky, Lester C; Kramer, Arthur F

    2018-03-01

Objective: We implemented a gaze-contingent useful field of view paradigm to examine older adult multitasking performance in a simulated driving environment. Background: Multitasking refers to the ability to manage multiple simultaneous streams of information. Recent work suggests that multitasking declines with age, yet the mechanisms underlying these declines are still debated. One possible framework for understanding this phenomenon is the useful field of view, the area in the visual field where information can be attended and processed. In particular, the useful field of view allows for the discrimination of two competing theories of real-time multitasking: a general interference account and a tunneling account. Methods: Twenty-five older adult subjects completed a useful field of view task that involved discriminating the orientation of lines in gaze-contingent Gabor patches appearing at varying eccentricities (based on distance from the fovea) as they operated a vehicle in a driving simulator. In half of the driving scenarios, subjects also completed an auditory two-back task to manipulate cognitive workload, and during some trials, wind was introduced to alter general driving difficulty. Results: Consistent with prior work, indices of driving performance were sensitive to both wind and workload. Interestingly, we also observed a decline in Gabor patch discrimination accuracy under high cognitive workload regardless of eccentricity, which supports a general interference account of multitasking. Conclusion: The results showed that our gaze-contingent useful field of view paradigm was able to successfully examine older adult multitasking performance in a simulated driving environment. Application: This study represents the first attempt to measure dynamic changes in the useful field of view for older adults completing a multitasking scenario involving driving.

  20. Gaze and Feet as Additional Input Modalities for Interacting with Geospatial Interfaces

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Hempel, J.; Brychtova, A.; Giannopoulos, I.; Stellmach, S.; Dachselt, R.

    2016-06-01

Geographic Information Systems (GIS) are complex software environments, and working with them often involves multiple tasks and multiple displays. However, user input is still limited to the mouse and keyboard in most workplace settings. In this project, we demonstrate how using gaze and feet as additional input modalities can overcome time-consuming and annoying mode switches between frequently performed tasks. In an iterative design process, we developed gaze- and foot-based methods for zooming and panning map visualizations. We first collected appropriate gestures in a preliminary user study with a small group of experts and designed two interaction concepts based on their input. After implementation, we evaluated the two concepts comparatively in another user study to identify the strengths and shortcomings of each. We found that continuous foot input combined with implicit gaze input is promising for supportive tasks.
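Gaze-anchored zooming driven by continuous foot input, as described above, can be sketched as follows. The function, parameter names, and coordinate conventions are hypothetical illustrations, not the authors' implementation: the gaze point serves as the zoom anchor, and a signed foot deflection controls the zoom rate.

```python
# Sketch of gaze-anchored zooming driven by continuous foot input.
# All names and parameters are hypothetical; the paper does not specify code.

def zoom_viewport(viewport, gaze_xy, foot_tilt, dt, rate=1.5):
    """Zoom the map viewport toward the gaze point.

    viewport:  (cx, cy, width, height) of the visible map region
    gaze_xy:   current gaze position in map coordinates (zoom anchor)
    foot_tilt: signed foot-pedal deflection in [-1, 1] (positive = zoom in)
    dt:        time step in seconds
    rate:      zoom factor per second at full pedal deflection
    """
    cx, cy, w, h = viewport
    gx, gy = gaze_xy
    scale = rate ** (-foot_tilt * dt)   # < 1 zooms in, > 1 zooms out
    # Move the viewport center so the gazed-at map point stays at the same
    # relative screen position while the viewport scales.
    new_cx = gx + (cx - gx) * scale
    new_cy = gy + (cy - gy) * scale
    return (new_cx, new_cy, w * scale, h * scale)
```

Keeping the gaze point fixed on screen avoids the disorienting jump of center-anchored zooming, which is the usual motivation for gaze-assisted pan/zoom designs.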

  1. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  2. Controlling Attention to Gaze and Arrows in Childhood: An fMRI Study of Typical Development and Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Vaidya, Chandan J.; Foss-Feig, Jennifer; Shook, Devon; Kaplan, Lauren; Kenworthy, Lauren; Gaillard, William D.

    2011-01-01

    Functional magnetic resonance imaging was used to examine functional anatomy of attention to social (eye gaze) and nonsocial (arrow) communicative stimuli in late childhood and in a disorder defined by atypical processing of social stimuli, Autism Spectrum Disorders (ASD). Children responded to a target word ("LEFT"/"RIGHT") in the context of a…

  3. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  4. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  5. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  6. Spontaneous Facial Mimicry is Modulated by Joint Attention and Autistic Traits.

    PubMed

    Neufeld, Janina; Ioannou, Christina; Korb, Sebastian; Schilbach, Leonhard; Chakrabarti, Bhismadev

    2016-07-01

Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have been usually observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent to the participants' gaze. SFM of happy expressions was measured by Electromyography (EMG) recording over the Zygomaticus Major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism. Autism Res 2016, 9: 781-789. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.

  7. Facing victims: forensics, visual technologies, and sexual assault examination.

    PubMed

    Mulla, Sameena

    2011-05-01

    This article analyzes a particular legal-medical artifact: the photos of wounds and injuries collected by forensic nurses who work with sexual assault victim-patients. I show how forensic expertise draws on multiple medical practices and adapts these practices with the goal of preserving the integrity of the evidence collection processes. In particular, forensic nurse examiners practice a rigid regime of draping and avoiding the victim-patient's gaze at some points in the forensic routine while engaging the victim's gaze at other points in the examination. Unlike the examination, the photograph itself deliberately pictures the patient's gaze to break the plane of the image, giving the photographic artifact an affective charge as a truth-preserving object within a juridical process. Focusing on forensic photography sheds light on the techno-scientific possibilities that enable forensic encounters as they align therapeutic techniques with legal directives in new and problematic ways.

  8. Unaddressed participants’ gaze in multi-person interaction: optimizing recipiency

    PubMed Central

    Holler, Judith; Kendrick, Kobin H.

    2015-01-01

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogs to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation toward on-line processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question–response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze toward the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns. PMID:25709592

  9. [Autoshaping of a button-push response and eye movement in human subjects].

    PubMed

    Kimura, H; Fukui, I; Inaki, K

    1990-12-01

Two experiments were conducted with human subjects to investigate the similarities and differences between animal and human behaviors under autoshaping procedures. In these experiments, a light served as the CS, and a display on a TV served as the US. We examined whether a button-push response or a gazing response to the CS could be obtained in human subjects under a Pavlovian conditioning procedure. In Experiment 1, uninstructed naive subjects were placed in a room containing a push-button and a TV display. Within the experimental sessions, the push-button was lit for 8 s as the CS and then paired with the display of a soft pornographic program on TV for 10 s. The results indicated that modeling of the button push increased response probability among the subjects. Trials conducted after the rest period also showed an increase in response probability. In Experiment 2, a 4 cm square translucent panel was lit for 20 s as the CS and then paired with the display of a computer graphic picture on TV for 8 s as the US. Some subjects started gazing at the CS for several seconds. These results indicated that some subjects could acquire the gazing response under the autoshaping procedure.

  10. Prior Knowledge Facilitates Mutual Gaze Convergence and Head Nodding Synchrony in Face-to-face Communication

    PubMed Central

    Thepsoonthorn, C.; Yokozuka, T.; Miura, S.; Ogawa, K.; Miyake, Y.

    2016-01-01

As prior knowledge is claimed to be an essential key to effective education, we are interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony are observed as indicators of communication effectiveness. We conducted an experiment on a lecture task between a lecturer and a student under 2 conditions: prior knowledge and non-prior knowledge. The students in the prior knowledge condition were provided basic information about the lecture content and had their understanding assessed by the experimenter before the lecture started, while the students in the non-prior knowledge condition received none. The result shows that the interaction in the prior knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) than the non-prior knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony. PMID:27910902

  11. Prior Knowledge Facilitates Mutual Gaze Convergence and Head Nodding Synchrony in Face-to-face Communication.

    PubMed

    Thepsoonthorn, C; Yokozuka, T; Miura, S; Ogawa, K; Miyake, Y

    2016-12-02

As prior knowledge is claimed to be an essential key to effective education, we are interested in exploring whether prior knowledge enhances communication effectiveness. To demonstrate the effects of prior knowledge, mutual gaze convergence and head nodding synchrony are observed as indicators of communication effectiveness. We conducted an experiment on a lecture task between a lecturer and a student under 2 conditions: prior knowledge and non-prior knowledge. The students in the prior knowledge condition were provided basic information about the lecture content and had their understanding assessed by the experimenter before the lecture started, while the students in the non-prior knowledge condition received none. The result shows that the interaction in the prior knowledge condition establishes significantly higher mutual gaze convergence (t(15.03) = 6.72, p < 0.0001; α = 0.05, n = 20) and head nodding synchrony (t(16.67) = 1.83, p = 0.04; α = 0.05, n = 19) than the non-prior knowledge condition. This study reveals that prior knowledge facilitates mutual gaze convergence and head nodding synchrony. Furthermore, interaction with and without prior knowledge can be evaluated by measuring or observing mutual gaze convergence and head nodding synchrony.
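The group comparisons above report t statistics with fractional degrees of freedom, which is characteristic of Welch's unequal-variances t test. A minimal standard-library sketch of that computation (illustrative data only, not the study's):

```python
import math

def welch_t(sample1, sample2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # Unbiased sample variances.
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se1, se2 = v1 / n1, v2 / n2
    t = (m1 - m2) / math.sqrt(se1 + se2)
    # Welch-Satterthwaite approximation, generally non-integer.
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df
```

When the two groups have equal variances and sizes, the Welch degrees of freedom reduce to the familiar n1 + n2 - 2.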

  12. Subliminal gaze cues increase preference levels for items in the gaze direction.

    PubMed

    Mitsuda, Takashi; Masaki, Syuta

    2017-08-29

    Another individual's gaze automatically shifts an observer's attention to a location. This reflexive response occurs even when the gaze is presented subliminally over a short period. Another's gaze also increases the preference level for items in the gaze direction; however, it was previously unclear if this effect occurs when the gaze is presented subliminally. This study showed that the preference levels for nonsense figures looked at by a subliminal gaze were significantly greater than those for items that were subliminally looked away from (Task 1). Targets that were looked at by a subliminal gaze were detected faster (Task 2); however, the participants were unable to detect the gaze direction (Task 3). These results indicate that another individual's gaze automatically increases the preference levels for items in the gaze direction without conscious awareness.

  13. Eye Gaze During Face Processing in Children and Adolescents with 22q11.2 Deletion Syndrome

    ERIC Educational Resources Information Center

    Glaser, Bronwyn; Debbane, Martin; Ottet, Marie-Christine; Vuilleumier, Patrik; Zesiger, Pascal; Antonarakis, Stylianos E.; Eliez, Stephan

    2010-01-01

    Objective: The 22q11.2 deletion syndrome (22q11DS) is a neurogenetic syndrome with high risk for the development of psychiatric disorder. There is interest in identifying reliable markers for measuring and monitoring socio-emotional impairments in 22q11DS during development. The current study investigated eye gaze as a potential marker during a…

  14. Marked referential communicative behaviours, but no differentiation of the "knowledge state" of humans in untrained pet dogs versus 1-year-old infants.

    PubMed

    Gaunet, Florence; Massioui, Farid El

    2014-09-01

The study examines whether untrained dogs and infants take their caregiver's visual experience into account when communicating with them. Fifteen adult dogs and 15 one-year-old infants engaged in play with their caregivers with one of their own toys. The caregiver gave the toy to the experimenter, who, in different conditions, placed it either above or under one of two containers, with both the infant or dog and the caregiver witnessing the positioning; in a third condition, the caregiver left the room before the toy was placed under one of the two containers and later returned. Afterwards, for each condition, the caregiver asked the participant to indicate the location of the toy. Neither dogs nor infants, untrained in the use of the partner's knowledge state, showed much difference in behaviour between the three conditions. However, dogs showed more persistence for most behaviours (gaze at the owner, gaze at the toy, and gaze alternation) and conditions, suggesting that the situation made more demands on dogs' communicative behaviours than on those of infants. When all deictic behaviours of infants (arm points towards the toy and gaze at the toy) were taken into account, dogs and infants did not differ. Phylogeny, early experience, and ontogeny may all play a role in the ways that both species communicate with adult humans.

  15. Examining the durability of incidentally learned trust from gaze cues.

    PubMed

    Strachan, James W A; Tipper, Steven P

    2017-10-01

In everyday interactions, our attention follows the eye gaze of the faces around us. Because this cueing is powerful and difficult to inhibit, gaze can be used to facilitate or disrupt visual processing of the environment, and when we experience this we infer information about the trustworthiness of the cueing face. However, to date no studies have investigated how long these impressions last. To explore this, we used a gaze-cueing paradigm in which faces consistently demonstrated either valid or invalid cueing behaviours. Previous experiments show that valid faces are subsequently rated as more trustworthy than invalid faces. We replicate this effect (Experiment 1) and then include a brief interference task in Experiment 2 between gaze cueing and trustworthiness rating, which weakens but does not completely eliminate the effect. In Experiment 3, we explore whether greater familiarity with the faces improves the durability of trust learning and find that the effect is more resilient with familiar faces. Finally, in Experiment 4, we push this further and show that evidence of trust learning can be seen up to an hour after cueing has ended. Taken together, our results suggest that incidentally learned trust can be durable, especially for faces that deceive.

  16. Gaze shifts and fixations dominate gaze behavior of walking cats

    PubMed Central

    Rivers, Trevor J.; Sirota, Mikhail G.; Guttentag, Andrew I.; Ogorodnikov, Dmitri A.; Shah, Neet A.; Beloozerova, Irina N.

    2014-01-01

Vision is important for locomotion in complex environments. How it is used to guide stepping is not well understood. We used an eye search coil technique combined with an active marker-based head recording system to characterize the gaze patterns of cats walking over terrains of different complexity: (1) on a flat surface in the dark when no visual information was available, (2) on the flat surface in light when visual information was available but not required, (3) along the highly structured but regular and familiar surface of a horizontal ladder, a task for which visual guidance of stepping was required, and (4) along a pathway cluttered with many small stones, an irregularly structured surface that was new each day. Three cats walked in a 2.5 m corridor, and 958 passages were analyzed. Gaze activity during the time when the gaze was directed at the walking surface was subdivided into four behaviors based on speed of gaze movement along the surface: gaze shift (fast movement), gaze fixation (no movement), constant gaze (movement at the body's speed), and slow gaze (the remainder). We found that gaze shifts and fixations dominated the cats' gaze behavior during all locomotor tasks, jointly occupying 62–84% of the time when the gaze was directed at the surface. As visual complexity of the surface and demand on visual guidance of stepping increased, cats spent more time looking at the surface, looked closer to themselves, and switched between gaze behaviors more often. During both visually guided locomotor tasks, gaze behaviors predominantly followed a repeated cycle of forward gaze shift followed by fixation. We call this behavior "gaze stepping". Each gaze shift took gaze to a site approximately 75–80 cm in front of the cat, which the cat reached in 0.7–1.2 s and 1.1–1.6 strides. Constant gaze occupied only 5–21% of the time cats spent looking at the walking surface. PMID:24973656
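The four gaze behaviors above are defined by the speed of gaze movement along the surface relative to body speed. A minimal sketch of such a sample classifier follows; the threshold values are illustrative assumptions, not the paper's exact criteria:

```python
def classify_gaze(gaze_speed, body_speed,
                  shift_thresh=3.0, fix_thresh=0.1, tol=0.2):
    """Label one gaze sample by its speed along the walking surface.

    gaze_speed: speed of the gaze point along the surface (m/s)
    body_speed: forward speed of the body (m/s)
    Thresholds are hypothetical multiples of body speed.
    """
    if gaze_speed >= shift_thresh * body_speed:
        return "gaze shift"        # fast movement along the surface
    if gaze_speed <= fix_thresh * body_speed:
        return "gaze fixation"     # (nearly) no movement along the surface
    if abs(gaze_speed - body_speed) <= tol * body_speed:
        return "constant gaze"     # moving at roughly the body's speed
    return "slow gaze"             # the remainder
```

Applied per sample, the labels can then be aggregated into the time-budget percentages the abstract reports (e.g., the fraction of surface-directed gaze time spent in shifts plus fixations).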

  17. Perception of co-speech gestures in aphasic patients: a visual exploration study during the observation of dyadic conversations.

    PubMed

    Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M

    2015-03-01

    Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, they prompt as nonverbal cues the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit to integrate audio-visual information may cause aphasic patients to explore less the speaker's face. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. In the presence of conflicting gaze cues, fearful expression and eye-size guide attention.

    PubMed

    Carlson, Joshua M; Aday, Jacob

    2017-10-19

Humans are social beings that often interact in multi-individual environments. As such, we are frequently confronted with nonverbal social signals, including eye-gaze direction, from multiple individuals. Yet, the factors that allow for the prioritisation of certain gaze cues over others are poorly understood. Using a modified conflicting gaze paradigm, we tested the hypothesis that fearful gaze would be favoured amongst competing gaze cues. We further hypothesised that this effect is related to the increased sclera exposure that is characteristic of fearful expressions. Across three experiments, we found that fearful, but not happy, gaze guides observers' attention over competing non-emotional gaze. The guidance of attention by fearful gaze appears to be linked to increased sclera exposure. However, differences in sclera exposure do not prioritise competing gazes of other types. Thus, fearful gaze guides attention among competing cues, and this effect is facilitated by increased sclera exposure; increased sclera exposure per se, however, does not guide attention. The prioritisation of fearful gaze over non-emotional gaze likely represents an adaptive means of selectively attending to survival-relevant spatial locations.

  19. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologist's gaze behavior and image content.
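    The merging-and-evaluation step described above can be sketched in a few lines: concatenate each case's gaze-behavior and image-texture features, score them with a learned model (a plain linear scorer stands in here for the study's machine learning algorithms), and evaluate with a rank-based AUC. All function names, weights, and data are illustrative assumptions, not the study's actual pipeline.

```python
def merge_features(gaze, image):
    """Concatenate gaze-behavior and image-texture features for one case."""
    return gaze + image

def score(features, weights):
    """Linear stand-in for the learned error predictor."""
    return sum(f * w for f, w in zip(features, weights))

def auc(scores, labels):
    """Rank-based AUC: probability that a positive case outranks a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

    An AUC of 0.5 corresponds to chance-level prediction and 1.0 to perfect ranking, which is how figures such as the group model's AUC = 0.79 are read.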

  20. Gaze stabilization in chronic vestibular-loss and in cerebellar ataxia: interactions of feedforward and sensory feedback mechanisms.

    PubMed

    Sağlam, M; Lehnen, N

    2014-01-01

    During gaze shifts, humans can use visual, vestibular, and proprioceptive feedback, as well as feedforward mechanisms, for stabilization against active and passive head movements. The contributions of feedforward and sensory feedback control, and the role of the cerebellum, are still under debate. To quantify these contributions, we increased the head moment of inertia in three groups (ten healthy, five chronic vestibular-loss and nine cerebellar-ataxia patients) while they performed large gaze shifts to flashed targets in darkness. This induces undesired head oscillations. Consequently, both active (desired) and passive (undesired) head movements had to be compensated for to stabilize gaze. All groups compensated for active and passive head movements, vestibular-loss patients less than the other groups (P < 0.001, passive/active compensatory gains: vestibular-loss 0.23 ± 0.09/0.43 ± 0.12, healthy 0.80 ± 0.17/0.83 ± 0.15, cerebellar-ataxia 0.68 ± 0.17/0.77 ± 0.30, mean ± SD). The compensation gain ratio against passive and active movements was smaller than one in vestibular-loss patients (0.54 ± 0.10, P=0.001). Healthy and cerebellar-ataxia patients did not differ in active and passive compensation. In summary, vestibular-loss patients can better stabilize gaze against active than against passive head movements. Therefore, feedforward mechanisms substantially contribute to gaze stabilization. Proprioception alone is not sufficient (gain 0.2). Stabilization against active and passive head movements was not impaired in our cerebellar ataxia patients.

  1. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    PubMed

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders-of-magnitude reductions in power consumption and form factor. The key idea is that eye images are extremely redundant; therefore, we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a Bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70 mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
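    The sparsity-inducing idea can be illustrated with a minimal sketch: fit a linear gaze predictor under an L1 penalty via proximal gradient descent (soft-thresholding), so most pixel weights go exactly to zero and only the surviving pixels need to be sensed. The actual iShadow model is a multi-layer neural network; this single-layer toy, with hypothetical names and parameters, only shows the mechanism.

```python
def soft_threshold(w, lam):
    """Proximal step for an L1 penalty: shrinks a weight toward zero."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def train_sparse(pixels, targets, lam=0.05, lr=0.1, epochs=200):
    """Fit gaze = w . pixels with an L1 penalty, so that most weights
    become exactly zero -- only those pixels need to be sensed."""
    n = len(pixels[0])
    w = [0.0] * n
    for _ in range(epochs):
        grad = [0.0] * n
        for x, y in zip(pixels, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) - y
            for j in range(n):
                grad[j] += err * x[j]
        w = [soft_threshold(wi - lr * g / len(pixels), lr * lam)
             for wi, g in zip(w, grad)]
    return w

def active_pixels(w, eps=1e-9):
    """Indices of the pixels the sparse model actually reads."""
    return [j for j, wj in enumerate(w) if abs(wj) > eps]
```

    On toy data where the gaze target depends on only one pixel, the remaining pixel weights shrink to exactly zero, which is the property the hardware exploits through random pixel access.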

  3. Measurement of ocular aberrations in downward gaze using a modified clinical aberrometer

    PubMed Central

    Ghosh, Atanu; Collins, Michael J; Read, Scott A; Davis, Brett A; Iskander, D. Robert

    2011-01-01

    Changes in corneal optics have been measured after downward gaze. However, ocular aberrations during downward gaze have not been previously measured. A commercial Shack-Hartmann aberrometer (COAS-HD) was modified by adding a relay lens system and a rotatable beam splitter to allow on-axis aberration measurements in primary gaze and downward gaze with binocular fixation. Measurements with the modified aberrometer (COAS-HD relay system) in primary and downward gaze were validated against a conventional aberrometer. In human eyes, there were significant changes (p<0.05) in defocus C(2,0), primary astigmatism C(2,2) and vertical coma C(3,−1) in downward gaze (25 degrees) compared to primary gaze, indicating the potential influence of biomechanical forces on the optics of the eye in downward gaze. To demonstrate a further clinical application of this modified aberrometer, we measured ocular aberrations when wearing a progressive addition lens (PAL) in primary gaze (0 degree), 15 degrees downward gaze and 25 degrees downward gaze. PMID:21412451

  4. The impact of fatigue on latent print examinations as revealed by behavioral and eye gaze testing.

    PubMed

    Busey, Thomas; Swofford, Henry J; Vanderkolk, John; Emerick, Brandi

    2015-06-01

    Eye tracking and behavioral methods were used to assess the effects of fatigue on performance in latent print examiners. Eye gaze was measured both before and after a fatiguing exercise involving fine-grained examination decisions. The eye tracking tasks used similar images, often laterally reversed versions of previously viewed prints, which holds image detail constant while minimizing prior recognition. These methods, as well as a within-subject design with fine grained analyses of the eye gaze data, allow fairly strong conclusions despite a relatively small subject population. Consistent with the effects of fatigue on practitioners in other fields such as radiology, behavioral performance declined with fatigue, and the eye gaze statistics suggested a smaller working memory capacity. Participants also terminated the search/examination process sooner when fatigued. However, fatigue did not produce changes in inter-examiner consistency as measured by the Earth Mover Metric. Implications for practice are discussed. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Genuine eye contact elicits self-referential processing.

    PubMed

    Hietanen, Jonne O; Hietanen, Jari K

    2017-05-01

    The effect of eye contact on self-awareness was investigated with implicit measures based on the use of first-person singular pronouns in sentences. The measures were proposed to tap into self-referential processing, that is, information processing associated with self-awareness. In addition, participants filled in a questionnaire measuring explicit self-awareness. In Experiment 1, the stimulus was a video clip showing another person and, in Experiment 2, the stimulus was a live person. In both experiments, participants were divided into two groups and presented with the stimulus person either making eye contact or gazing downward, depending on the group assignment. During the task, the gaze stimulus was presented before each trial of the pronoun-selection task. Eye contact was found to increase the use of first-person pronouns, but only when participants were facing a real person, not when they were looking at a video of a person. No difference in self-reported self-awareness was found between the two gaze direction groups in either experiment. The results indicate that eye contact elicits self-referential processing, but the effect may be stronger, or possibly limited to, live interaction. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. 4-aminopyridine restores vertical and horizontal neural integrator function in downbeat nystagmus.

    PubMed

    Kalla, Roger; Glasauer, Stefan; Büttner, Ulrich; Brandt, Thomas; Strupp, Michael

    2007-09-01

    Downbeat nystagmus (DBN), the most common form of acquired fixation nystagmus, is often caused by cerebellar degeneration, especially if the vestibulo-cerebellum is involved. The upward ocular drift in DBN has a spontaneous and a vertical gaze-evoked component. Since cerebellar involvement is suspected to be the underlying pathomechanism of DBN, we tested in 15 patients with DBN whether the application of the potassium-channel blocker 4-aminopyridine (4-AP), which increases the excitability of cerebellar Purkinje cells as shown in animal experiments, reduces the vertical ocular drift leading to nystagmus. Fifteen age-matched healthy subjects served as the control group. 4-AP may affect spontaneous drift or gaze-evoked drift by either enhancing visual fixation ability or restoring vision-independent gaze holding. We therefore recorded 3D slow-phase eye movements using search coils during attempted fixation in nine different eye positions, with or without a continuously visible target, before and 45 min after ingestion of 10 mg 4-AP. Since the effect of 4-AP may depend on the associated etiology, we divided our patients into three groups (cerebellar atrophy, n = 4; idiopathic DBN, n = 5; other etiology, n = 6). 4-AP decreased DBN during gaze straight ahead in 12 of 15 patients. Statistical analysis showed that improvement occurred predominantly in patients with cerebellar atrophy, in whom the drift was reduced from -4.99 ± 1.07 deg/s (mean ± SE) before treatment to -0.60 ± 0.82 deg/s afterwards. Regression analysis of slow-phase velocity (SPV) in different eye positions revealed that vertical and horizontal gaze-evoked drift was significantly reduced independently of the patient group and, on average, restored essentially perfect gaze holding. Since the observed improvements were independent of target visibility, 4-AP improved fixation by restoring gaze-holding ability. All in all, the present study demonstrates that 4-AP has a differential effect on DBN: drift with gaze straight ahead was predominantly reduced in patients with cerebellar atrophy, but less so in the remaining patients; 4-AP on average improved neural integrator function, i.e. gaze-evoked drift, regardless of etiology. Our results thus show that 4-AP was a successful treatment option in the majority of DBN patients, possibly by increasing Purkinje cell excitability in the cerebellar flocculi. It may work best when DBN is associated with cerebellar atrophy. Furthermore, 4-AP may be a promising treatment option for patients with a dominant gaze-evoked component of nystagmus, regardless of its etiology.

  7. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

  8. Getting a Grip on Social Gaze: Control over Others' Gaze Helps Gaze Detection in High-Functioning Autism

    ERIC Educational Resources Information Center

    Dratsch, Thomas; Schwartz, Caroline; Yanev, Kliment; Schilbach, Leonhard; Vogeley, Kai; Bente, Gary

    2013-01-01

    We investigated the influence of control over a social stimulus on the ability to detect direct gaze in high-functioning autism (HFA). In a pilot study, 19 participants with and 19 without HFA were compared on a gaze detection and a gaze setting task. Participants with HFA were less accurate in detecting direct gaze in the detection task, but did…

  9. Optimal eye movement strategies: a comparison of neurosurgeons' gaze patterns when using a surgical microscope.

    PubMed

    Eivazi, Shahram; Hafez, Ahmad; Fuhl, Wolfgang; Afkari, Hoorieh; Kasneci, Enkelejda; Lehecka, Martin; Bednarik, Roman

    2017-06-01

    Previous studies have consistently demonstrated gaze behaviour differences related to expertise during various surgical procedures. In micro-neurosurgery, however, there is a lack of evidence of empirically demonstrated individual differences associated with visual attention. It is unknown exactly how neurosurgeons see a stereoscopic magnified view in the context of micro-neurosurgery and what this implies for medical training. We report on an investigation of the eye movement patterns in micro-neurosurgery using a state-of-the-art eye tracker. We studied the eye movements of nine neurosurgeons while performing cutting and suturing tasks under a surgical microscope. Eye-movement characteristics, such as fixation (focus level) and saccade (visual search pattern), were analysed. The results show a strong relationship between the level of microsurgical skill and the gaze pattern: greater expertise is associated with more eye control, stability, and focus in eye behaviour. For example, in the cutting task, well-trained surgeons increased their fixation durations on the operating field twice as much as the novices (expert, 848 ms; novice, 402 ms). Maintaining steady visual attention on the target (fixation), as well as being able to quickly make eye jumps from one target to another (saccades), are two important elements for the success of neurosurgery. The captured gaze patterns can be used to improve medical education, as part of an assessment system or in a gaze-training application.

  10. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    NASA Astrophysics Data System (ADS)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
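    A fuzzy-logic discrimination of this kind can be sketched with two membership functions, one for dwell time and one for gaze dispersion (fixation stability), combined with a fuzzy AND (minimum) and a crisp decision threshold. The membership shapes, thresholds, and variable names below are illustrative assumptions, not the paper's actual multimodal rule base.

```python
def mu_long_dwell(dwell_ms, lo=300.0, hi=800.0):
    """Membership in 'long dwell': 0 below lo, 1 above hi, linear between."""
    if dwell_ms <= lo:
        return 0.0
    if dwell_ms >= hi:
        return 1.0
    return (dwell_ms - lo) / (hi - lo)

def mu_stable_gaze(dispersion_px, lo=5.0, hi=30.0):
    """Membership in 'stable gaze': 1 for tight fixations, 0 for scattered ones."""
    if dispersion_px <= lo:
        return 1.0
    if dispersion_px >= hi:
        return 0.0
    return (hi - dispersion_px) / (hi - lo)

def intentional(dwell_ms, dispersion_px, threshold=0.5):
    """Fuzzy AND (min) of the two memberships, then a crisp decision."""
    degree = min(mu_long_dwell(dwell_ms), mu_stable_gaze(dispersion_px))
    return degree, degree >= threshold
```

    A long, tight fixation (e.g. 900 ms, 4 px dispersion) is classified as an intentional selection, whereas a brief, scattered one is not; the graded membership degree is what distinguishes this from a hard dwell-time cutoff.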

  11. Cross-coupling between accommodation and convergence is optimized for a broad range of directions and distances of gaze.

    PubMed

    Nguyen, Dorothy; Vedamurthy, Indu; Schor, Clifton

    2008-03-01

    Accommodation and convergence systems are cross-coupled so that stimulation of one system produces responses by both systems. Ideally, the cross-coupled responses of accommodation and convergence match their respective stimuli. When expressed in diopters and meter angles, respectively, stimuli for accommodation and convergence are equal in the mid-sagittal plane when viewed with symmetrical convergence, where historically, the gains of the cross coupling (AC/A and CA/C ratios) have been quantified. However, targets at non-zero azimuth angles, when viewed with asymmetric convergence, present unequal stimuli for accommodation and convergence. Are the cross-links between the two systems calibrated to compensate for stimulus mismatches that increase with gaze-azimuth? We measured the response AC/A and stimulus CA/C ratios at zero azimuth, 17.5 and 30 deg of rightward gaze eccentricities with a Badal Optometer and Wheatstone-mirror haploscope. AC/A ratios were measured under open-loop convergence conditions along the iso-accommodation circle (locus of points that stimulate approximately equal amounts of accommodation to the two eyes at all azimuth angles). CA/C ratios were measured under open-loop accommodation conditions along the iso-vergence circle (locus of points that stimulate constant convergence at all azimuth angles). Our results show that the gain of accommodative-convergence (AC/A ratio) decreased and the bias of convergence-accommodation increased at the 30 deg gaze eccentricity. These changes are in directions that compensate for stimulus mismatches caused by spatial-viewing geometry during asymmetric convergence.

  12. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    NASA Astrophysics Data System (ADS)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades, and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. Here we describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5 ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
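    The pyramid-based contrast manipulation can be sketched in one dimension: build a Laplacian pyramid of band-pass layers, then resynthesize with per-band gains (in the real system the gains additionally vary per pixel with gaze position). This is a minimal pure-Python sketch, with nearest-neighbour resampling standing in for the proper low-pass filters used in practice.

```python
def downsample(sig):
    """Halve resolution by averaging adjacent pairs."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def upsample(sig, n):
    """Nearest-neighbour expansion back to length n."""
    return [sig[min(i // 2, len(sig) - 1)] for i in range(n)]

def laplacian_pyramid(sig, levels):
    """Band-pass pyramid: each level stores sig - upsample(downsample(sig))."""
    bands = []
    for _ in range(levels):
        low = downsample(sig)
        up = upsample(low, len(sig))
        bands.append([s - u for s, u in zip(sig, up)])
        sig = low
    bands.append(sig)  # residual low-pass
    return bands

def reconstruct(bands, gains):
    """Weighted resynthesis; varying the gains modulates contrast per band."""
    sig = bands[-1]
    for band, g in zip(reversed(bands[:-1]), reversed(gains)):
        up = upsample(sig, len(band))
        sig = [u + g * b for u, b in zip(up, band)]
    return sig
```

    With all gains set to 1 the reconstruction is exact; setting the finest band's gain to 0 removes its high-frequency contrast, which is how local, frequency-selective contrast changes are introduced.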

  13. Fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots.

    PubMed

    Yoo, Jeong-Ki; Kim, Jong-Hwan

    2012-02-01

    When a humanoid robot moves in a dynamic environment, a simple process of planning and following a path may not guarantee competent performance for dynamic obstacle avoidance because the robot acquires limited information from the environment using a local vision sensor. Thus, it is essential to update its local map as frequently as possible to obtain more information through gaze control while walking. This paper proposes a fuzzy integral-based gaze control architecture incorporated with modified-univector field-based navigation for humanoid robots. To determine the gaze direction, four criteria based on local map confidence, waypoint, self-localization, and obstacles are defined, along with their corresponding partial evaluation functions. Using the partial evaluation values and the degree of consideration for each criterion, the fuzzy integral is applied to each candidate gaze direction for global evaluation. For effective dynamic obstacle avoidance, the partial evaluation functions for self-localization error and surrounding obstacles are also used to generate a virtual dynamic obstacle for the modified-univector field method, which generates the path and velocity of the robot toward the next waypoint. The proposed architecture is verified through comparison with a conventional weighted-sum-based approach in simulations using a simulator developed for HanSaRam-IX (HSR-IX).
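    The global evaluation step can be sketched with a discrete Choquet integral, one common form of fuzzy integral: per-criterion scores are sorted and weighted by a fuzzy measure defined on coalitions of criteria, which, unlike a weighted sum, can express interactions between criteria. The measure values and criterion names below are illustrative assumptions, not the paper's.

```python
def choquet(values, measure):
    """Choquet integral of per-criterion scores w.r.t. a fuzzy measure.
    `values` maps criterion -> partial evaluation in [0, 1];
    `measure` maps frozensets of criteria -> importance in [0, 1]."""
    crits = sorted(values, key=values.get)  # ascending by score
    total, prev = 0.0, 0.0
    for i, c in enumerate(crits):
        coalition = frozenset(crits[i:])    # criteria scoring >= values[c]
        total += (values[c] - prev) * measure[coalition]
        prev = values[c]
    return total

def best_direction(candidates, measure):
    """Pick the candidate gaze direction with the highest global evaluation."""
    return max(candidates, key=lambda d: choquet(candidates[d], measure))
```

    For an additive measure the Choquet integral reduces to an ordinary weighted sum, so the conventional approach the paper compares against is a special case.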

  14. Hierarchical control of two-dimensional gaze saccades

    PubMed Central

    Optican, Lance M.; Blohm, Gunnar; Lefèvre, Philippe

    2014-01-01

    Coordinating the movements of different body parts is a challenging process for the central nervous system because of several problems. Four of the main difficulties are: first, moving one part can move others; second, the parts can have different dynamics; third, some parts can have different motor goals; and fourth, some parts may be perturbed by outside forces. Here, we propose a novel approach for the control of linked systems with feedback loops for each part. The proximal parts have separate goals, but critically the most distal part has only the common goal. We apply this new control policy to eye-head coordination in two dimensions, specifically head-unrestrained gaze saccades. Paradoxically, the hierarchical structure has controllers for the gaze and the head, but not for the eye (the most distal part). Our simulations demonstrate that the proposed control structure reproduces much of the published empirical data about gaze movements, e.g., it compensates for perturbations, accurately reaches goals for gaze and head from arbitrary initial positions, simulates the nine relationships of the head-unrestrained main sequence, and reproduces observations from lesion and single-unit recording experiments. We conclude by showing how our model can be easily extended to control structures with more linked segments, such as the control of coordinated eye on head on trunk movements. PMID:24062206

  15. Coordination of eye and head components of movements evoked by stimulation of the paramedian pontine reticular formation.

    PubMed

    Gandhi, Neeraj J; Barton, Ellen J; Sparks, David L

    2008-07-01

    Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF stimulation-evoked gaze shifts differently.

  16. Perceptual and Gaze Biases during Face Processing: Related or Not?

    PubMed Central

    Samson, Hélène; Fiori-Duharcourt, Nicole; Doré-Mazars, Karine; Lemoine, Christelle; Vergilino-Perez, Dorine

    2014-01-01

    Previous studies have demonstrated a left perceptual bias while looking at faces, due to the fact that observers mainly use information from the left side of a face (from the observer's point of view) to perform a judgment task. Such a bias is consistent with the right-hemisphere dominance for face processing and has sometimes been linked to a left gaze bias, i.e., more and/or longer fixations on the left side of the face. Here, we recorded eye movements in two experiments during a gender judgment task, using normal and chimeric faces presented above, below, to the right or left of the central fixation point, or on it (central position). Participants performed the judgment task while remaining fixated on the fixation point or after executing several saccades (up to three). A left perceptual bias was not systematically found, as it depended on the number of allowed saccades and face position. Moreover, the gaze bias clearly depended on the face position, as the initial fixation was guided by face position and landed on the closest half-face, toward the center of gravity of the face. The analysis of the subsequent fixations revealed that observers move their eyes from one side to the other. More importantly, no apparent link between gaze and perceptual biases was found here. This implies that we do not necessarily look toward the side of the face that we use to make a gender judgment. Despite the fact that these results may be limited by the absence of perceptual and gaze biases in some conditions, we emphasize the inter-individual differences observed in terms of perceptual bias, hinting at the importance of performing individual analyses and drawing attention to the influence of the method used to study this bias. PMID:24454927

  17. To Gaze or Not to Gaze: Visual Communication in Eastern Zaire. Sociolinguistic Working Paper Number 87.

    ERIC Educational Resources Information Center

    Blakely, Thomas D.

    The nature of gazing at someone or something, as a form of communication among the Bahemba people in eastern Zaire, is analyzed across a range of situations. Variations of steady gazing, a common eye contact routine, are outlined, including: (1) negative non-gazing or glance routines, especially in situations in which gazing would ordinarily…

  18. Etracker: A Mobile Gaze-Tracking System with Near-Eye Display Based on a Combined Gaze-Tracking Algorithm.

    PubMed

    Li, Bin; Fu, Hong; Wen, Desheng; Lo, WaiLun

    2018-05-19

    Eye tracking technology has become increasingly important for psychological analysis, medical diagnosis, driver assistance systems, and many other applications. Various gaze-tracking models have been established by previous researchers. However, there is currently no near-eye display system with accurate gaze-tracking performance and a convenient user experience. In this paper, we constructed a complete prototype of the mobile gaze-tracking system 'Etracker' with a near-eye viewing device for human gaze tracking, and we proposed a combined gaze-tracking algorithm. In this algorithm, a convolutional neural network is used to remove blinking images and predict coarse gaze position, and then a geometric model is defined for accurate human gaze tracking. Moreover, we proposed using the mean value of gazes to resolve pupil center changes caused by nystagmus in the calibration algorithm, so that an individual user only needs to calibrate the first time, which makes our system more convenient. The experiments on gaze data from 26 participants show that the eye center detection accuracy is 98% and Etracker can provide an average gaze accuracy of 0.53° at a rate of 30–60 Hz.
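    The one-time calibration idea, averaging gazes so that transient pupil-center shifts cancel out, can be sketched as a stored mean offset between predicted and target points. The function names and the simple offset model are assumptions for illustration; the paper's actual calibration is part of its combined CNN-plus-geometric algorithm.

```python
def calibration_offset(predicted, targets):
    """Per-user offset: mean difference between predicted gaze points and
    known calibration targets, averaging out nystagmus-induced jitter."""
    n = len(predicted)
    dx = sum(p[0] - t[0] for p, t in zip(predicted, targets)) / n
    dy = sum(p[1] - t[1] for p, t in zip(predicted, targets)) / n
    return dx, dy

def apply_offset(gaze, offset):
    """Correct a raw gaze estimate with the stored one-time calibration."""
    return gaze[0] - offset[0], gaze[1] - offset[1]
```

    Because the offset is a mean over many calibration samples, individual samples perturbed by small involuntary eye movements contribute little, which is why the user need not recalibrate on later sessions.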

  19. Eyes on the Mind: Investigating the Influence of Gaze Dynamics on the Perception of Others in Real-Time Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Schilbach, Leonhard; Jording, Mathis; Timmermans, Bert; Bente, Gary; Vogeley, Kai

    2012-01-01

    Social gaze provides a window into the interests and intentions of others and allows us to actively point out our own. It enables us to engage in triadic interactions involving human actors and physical objects and to build an indispensable basis for coordinated action and collaborative efforts. The object-related aspect of gaze in combination with the fact that any motor act of looking encompasses both input and output of the minds involved makes this non-verbal cue system particularly interesting for research in embodied social cognition. Social gaze comprises several core components, such as gaze-following or gaze aversion. Gaze-following can result in situations of either “joint attention” or “shared attention.” The former describes situations in which the gaze-follower is aware of sharing a joint visual focus with the gazer. The latter refers to a situation in which gazer and gaze-follower focus on the same object and both are aware of their reciprocal awareness of this joint focus. Here, a novel interactive eye-tracking paradigm suited for studying triadic interactions was used to explore two aspects of social gaze. Experiments 1a and 1b assessed how the latency of another person’s gaze reactions (i.e., gaze-following or gaze aversion) affected participants’ sense of agency, which was measured by their experience of relatedness of these reactions. Results demonstrate that both timing and congruency of a gaze reaction as well as the other’s action options influence the sense of agency. Experiment 2 explored differences in gaze dynamics when participants were asked to establish either joint or shared attention. Findings indicate that establishing shared attention takes longer and requires a larger number of gaze shifts as compared to joint attention, which more closely seems to resemble simple visual detection. Taken together, novel insights into the sense of agency and the awareness of others in gaze-based interaction are provided. PMID:23227017

  20. Seductive eyes: attractiveness and direct gaze increase desire for associated objects.

    PubMed

    Strick, Madelijn; Holland, Rob W; van Knippenberg, Ad

    2008-03-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel objects were associated with either attractive or unattractive female faces, either displaying direct or averted gaze. An affective priming task showed more positive automatic evaluations of objects that were paired with attractive faces with direct gaze than attractive faces with averted gaze and unattractive faces, irrespective of gaze direction. Participants' self-reported desire for the objects matched the affective priming data. The results are discussed against the background of recent findings on affective consequences of gaze cueing.

  1. The effects of social pressure and emotional expression on the cone of gaze in patients with social anxiety disorder.

    PubMed

    Harbort, Johannes; Spiegel, Julia; Witthöft, Michael; Hecht, Heiko

    2017-06-01

    Patients with social anxiety disorder suffer from pronounced fears in social situations. As gaze perception is crucial in these situations, we examined which factors influence the range of gaze directions within which mutual gaze is experienced (the cone of gaze). The social stimulus was modified by changing the number of people (heads) present and the emotional expression of their faces. Participants completed a psychophysical task in which they had to adjust the eyes of a virtual head to gaze at the edge of the range where mutual eye contact was experienced. The number of heads affected the width of the gaze cone: the more heads, the wider the gaze cone. The emotional expression of the virtual head had no consistent effect on the width of the gaze cone; it did, however, affect the emotional state of the participants. Angry expressions produced the highest arousal values. Highest valence emerged from happy faces, lowest valence from angry faces. These results suggest that the widening of the gaze cone in social anxiety disorder is not primarily mediated by altered emotional reactivity. Implications for gaze assessment and gaze training in therapeutic contexts are discussed. Due to interindividual variability, enlarged gaze cones are not necessarily indicative of social anxiety disorder; they merely constitute a correlate at the group level. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. The IMISS-1 Experiment for Recording and Analysis of Accelerations in Orbital Flight

    NASA Astrophysics Data System (ADS)

    Sadovnichii, V. A.; Alexandrov, V. V.; Bugrov, D. I.; Lemak, S. S.; Pakhomov, V. B.; Panasyuk, M. I.; Petrov, V. L.; Yashin, I. V.

    2018-03-01

    The IMISS-1 experiment represents the second step in solving the problem of creating a gaze stabilization corrector. This device is designed to correct the effect of gaze stabilization delay under microgravity. IMISS-1 continues research started by the Tat'yana-2 satellite, and this research will be continued on board the International Space Station. At this stage we study the possibility of recording the angular and linear accelerations acting on the sensitive mass under Low Earth Orbit flight conditions, using MEMS sensors.

  3. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for drivers to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; however, with a long focal length the on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, to obtain the necessary images from a long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, so as not to interfere with driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled accordingly. We adopt a non-wearable detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners and can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images in the direction of the subject's gaze.

  4. Oxytocin enhances gaze-following responses to videos of natural social behavior in adult male rhesus monkeys

    PubMed Central

    Putnam, P.T.; Roman, J.M.; Zimmerman, P.E.; Gothard, K.M.

    2017-01-01

    Gaze following is a basic building block of social behavior that has been observed in multiple species, including primates. The absence of gaze following is associated with abnormal development of social cognition, such as in autism spectrum disorders (ASD). Some social deficits in ASD, including the failure to look at eyes and the inability to recognize facial expressions, are ameliorated by intranasal administration of oxytocin (IN-OT). Here we tested the hypothesis that IN-OT might enhance social processes that require active engagement with a social partner, such as gaze following. Alternatively, IN-OT may only enhance the perceptual salience of the eyes, and may not modify behavioral responses to social signals. To test this hypothesis, we presented four monkeys with videos of conspecifics displaying natural behaviors. Each video was viewed multiple times before and after the monkeys received intranasally either 50 IU of OT or saline. We found that despite a gradual decrease in attention to the repeated viewing of the same videos (habituation), IN-OT consistently increased the frequency of gaze following saccades. Further analysis confirmed that these behaviors did not occur randomly, but rather predictably in response to the same segments of the videos. These findings suggest that in response to more naturalistic social stimuli IN-OT enhances the propensity to interact with a social partner rather than merely elevating the perceptual salience of the eyes. In light of these findings, gaze following may serve as a metric for pro-social effects of oxytocin that target social action more than social perception. PMID:27343726

  5. Coordinates of Human Visual and Inertial Heading Perception.

    PubMed

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results.

  6. Coordinates of Human Visual and Inertial Heading Perception

    PubMed Central

    Crane, Benjamin Thomas

    2015-01-01

    Heading estimation involves both inertial and visual cues. Inertial motion is sensed by the labyrinth, somatic sensation by the body, and optic flow by the retina. Because the eye and head are mobile, these stimuli are sensed relative to different reference frames, and it remains unclear whether perception occurs in a common reference frame. Recent neurophysiologic evidence has suggested the reference frames remain separate even at higher levels of processing but has not addressed the resulting perception. Seven human subjects experienced a 2 s, 16 cm/s translation and/or a visual stimulus corresponding with this translation. For each condition, 72 stimuli (360° in 5° increments) were delivered in random order. After each stimulus the subject identified the perceived heading using a mechanical dial. Some trial blocks included interleaved conditions in which the influence of ±28° of gaze and/or head position was examined. The observations were fit using a two-degree-of-freedom population vector decoder (PVD) model which considered the relative sensitivity to lateral motion and coordinate system offset. For visual stimuli, gaze shifts caused shifts in perceived heading estimates in the direction opposite the gaze shift in all subjects. These perceptual shifts averaged 13 ± 2° for eye-only gaze shifts and 17 ± 2° for eye-head gaze shifts. This finding indicates visual headings are biased towards retinal coordinates. Similar gaze and head direction shifts prior to inertial headings had no significant influence on heading direction. Thus inertial headings are perceived in body-centered coordinates. Combined visual and inertial stimuli yielded intermediate results. PMID:26267865

  7. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    NASA Astrophysics Data System (ADS)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing, enabling display of current and past gaze points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
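    The feedback rules described in this record can be sketched as follows (a simplified illustration; function and parameter names are assumptions, and the real AVFM operates on EyeTracker callbacks rather than plain tuples):

    ```python
    import math

    def aoi_feedback(gaze, prev_gaze, center, radius, base_freq=880.0):
        """Compute feedback for one gaze sample relative to a circular AOI.

        Returns (click, tone_freq_hz, show_patch):
        - click: True when the gaze point crosses the AOI boundary
          (enters or leaves), triggering the click-tone mode,
        - tone_freq_hz: sinusoid frequency inversely proportional to the
          distance from the gaze point to the AOI center,
        - show_patch: True while the gaze point is inside the AOI,
          controlling the circular light patch.
        """
        def dist(p):
            return math.hypot(p[0] - center[0], p[1] - center[1])

        inside_now = dist(gaze) <= radius
        inside_before = dist(prev_gaze) <= radius
        click = inside_now != inside_before       # boundary was crossed
        tone = base_freq / (1.0 + dist(gaze))     # higher pitch near center
        return click, tone, inside_now
    ```

    A driving loop would call this once per sample and route `click`/`tone` to the audio backend and `show_patch` to the display; the base frequency here is an arbitrary placeholder.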

  8. Iris recognition via plenoptic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.

    Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.

  9. Adaptive control for eye-gaze input system

    NASA Astrophysics Data System (ADS)

    Zhao, Qijie; Tu, Dawei; Yin, Hairong

    2004-01-01

    The characteristics of vision-based human-computer interaction systems are analyzed, and current practical applications and their limiting factors are discussed. Information-processing methods are put forward. To make communication flexible and spontaneous, algorithms for adaptive control of the user's head movement were designed, and event-based methods and an object-oriented programming language were used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI system.

  10. Functional changes of the reward system underlie blunted response to social gaze in cocaine users

    PubMed Central

    Preller, Katrin H.; Herdener, Marcus; Schilbach, Leonhard; Stämpfli, Philipp; Hulka, Lea M.; Vonmoos, Matthias; Ingold, Nina; Vogeley, Kai; Tobler, Philippe N.; Seifritz, Erich; Quednow, Boris B.

    2014-01-01

    Social interaction deficits in drug users likely impede treatment, increase the burden of the affected families, and consequently contribute to the high costs for society associated with addiction. Despite its significance, the neural basis of altered social interaction in drug users is currently unknown. Therefore, we investigated basal social gaze behavior in cocaine users by applying behavioral, psychophysiological, and functional brain-imaging methods. In study I, 80 regular cocaine users and 63 healthy controls completed an interactive paradigm in which the participants’ gaze was recorded by an eye-tracking device that controlled the gaze of an anthropomorphic virtual character. Valence ratings of different eye-contact conditions revealed that cocaine users show diminished emotional engagement in social interaction, which was also supported by reduced pupil responses. Study II investigated the neural underpinnings of changes in social reward processing observed in study I. Sixteen cocaine users and 16 controls completed a similar interaction paradigm as used in study I while undergoing functional magnetic resonance imaging. In response to social interaction, cocaine users displayed decreased activation of the medial orbitofrontal cortex, a key region of reward processing. Moreover, blunted activation of the medial orbitofrontal cortex was significantly correlated with a decreased social network size, reflecting problems in real-life social behavior because of reduced social reward. In conclusion, basic social interaction deficits in cocaine users as observed here may arise from altered social reward processing. Consequently, these results point to the importance of reinstatement of social reward in the treatment of stimulant addiction. PMID:24449854

  11. The Eyes Are the Windows to the Mind: Direct Eye Gaze Triggers the Ascription of Others' Minds.

    PubMed

    Khalid, Saara; Deska, Jason C; Hugenberg, Kurt

    2016-12-01

    Eye gaze is a potent source of social information with direct eye gaze signaling the desire to approach and averted eye gaze signaling avoidance. In the current work, we proposed that eye gaze signals whether or not to impute minds into others. Across four studies, we manipulated targets' eye gaze (i.e., direct vs. averted eye gaze) and measured explicit mind ascriptions (Study 1a, Study 1b, and Study 2) and beliefs about the likelihood of targets having mind (Study 3). In all four studies, we find novel evidence that the ascription of sophisticated humanlike minds to others is signaled by the display of direct eye gaze relative to averted eye gaze. Moreover, we provide evidence suggesting that this differential mentalization is due, at least in part, to beliefs that direct gaze targets are more likely to instigate social interaction. In short, eye contact triggers mind perception. © 2016 by the Society for Personality and Social Psychology, Inc.

  12. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion.

    PubMed

    Guo, Kun; Soornack, Yoshi; Settle, Rebecca

    2018-03-05

    Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed

    Duranton, Charlotte; Range, Friederike; Virányi, Zsófia

    2017-07-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably due to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following into distant space and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs' following gaze with gaze as well as their choices: we found that dogs in the ostensive group chose the indicated container at chance level, whereas they avoided this container in the non-ostensive group. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or they alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's gaze and actually choosing one of the two containers in an object-choice task need to be considered as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour.

  14. A non-invasive method for studying an index of pupil diameter and visual performance in the rhesus monkey.

    PubMed

    Fairhall, Sarah J; Dickson, Carol A; Scott, Leah; Pearce, Peter C

    2006-04-01

    A non-invasive model has been developed to estimate gaze direction and relative pupil diameter, in minimally restrained rhesus monkeys, to investigate the effects of low doses of ocularly administered cholinergic compounds on visual performance. Animals were trained to co-operate with a novel device, which enabled eye movements to be recorded using modified human eye-tracking equipment, and to perform a task which determined visual threshold contrast. Responses were made by gaze transfer under twilight conditions. 4% w/v pilocarpine nitrate was studied to demonstrate the suitability of the model. Pilocarpine induced marked miosis for >3 h which was accompanied by a decrement in task performance. The method obviates the need for invasive surgery and, as the position of point of gaze can be approximately defined, the approach may have utility in other areas of research involving non-human primates.

  15. Same-Sex Gaze Attraction Influences Mate-Choice Copying in Humans

    PubMed Central

    Yorzinski, Jessica L.; Platt, Michael L.

    2010-01-01

    Mate-choice copying occurs when animals rely on the mating choices of others to inform their own mating decisions. The proximate mechanisms underlying mate-choice copying remain unknown. To address this question, we tracked the gaze of men and women as they viewed a series of photographs in which a potential mate was pictured beside an opposite-sex partner; the participants then indicated their willingness to engage in a long-term relationship with each potential mate. We found that both men and women expressed more interest in engaging in a relationship with a potential mate if that mate was paired with an attractive partner. Men and women's attention to partners varied with partner attractiveness and this gaze attraction influenced their subsequent mate choices. These results highlight the prevalence of non-independent mate choice in humans and implicate social attention and reward circuitry in these decisions. PMID:20161739

  16. Language/Culture Modulates Brain and Gaze Processes in Audiovisual Speech Perception.

    PubMed

    Hisanaga, Satoko; Sekiyama, Kaoru; Igasaki, Tomohiko; Murayama, Nobuki

    2016-10-13

    Several behavioural studies have shown that the interplay between voice and face information in audiovisual speech perception is not universal. Native English speakers (ESs) are influenced by visual mouth movement to a greater degree than native Japanese speakers (JSs) when listening to speech. However, the biological basis of these group differences is unknown. Here, we demonstrate the time-varying processes of group differences in terms of event-related brain potentials (ERP) and eye gaze for audiovisual and audio-only speech perception. On a behavioural level, while congruent mouth movement shortened the ESs' response time for speech perception, the opposite effect was observed in JSs. Eye-tracking data revealed a gaze bias to the mouth for the ESs but not the JSs, especially before the audio onset. Additionally, the ERP P2 amplitude indicated that ESs processed multisensory speech more efficiently than auditory-only speech; however, the JSs exhibited the opposite pattern. Taken together, the ESs' early visual attention to the mouth was likely to promote phonetic anticipation, which was not the case for the JSs. These results clearly indicate the impact of language and/or culture on multisensory speech processing, suggesting that linguistic/cultural experiences lead to the development of unique neural systems for audiovisual speech perception.

  17. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracker to measure gaze position in real time. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object onto the screen plane. User preference between the standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real-time gaze determination. Depth quality is also improved, but the difference is not significant.

  18. Is eye to eye contact really threatening and avoided in social anxiety?--An eye-tracking and psychophysiology study.

    PubMed

    Wieser, Matthias J; Pauli, Paul; Alpers, Georg W; Mühlberger, Andreas

    2009-01-01

    The effects of direct and averted gaze on autonomic arousal and gaze behavior in social anxiety were investigated using a new paradigm including animated movie stimuli and eye-tracking methodology. While high, medium, and low socially anxious (HSA vs. MSA vs. LSA) women watched animated movie clips, in which faces responded to the gaze of the participants with either direct or averted gaze, their eye movements, heart rate (HR) and skin conductance responses (SCR) were continuously recorded. Groups did not differ in their gaze behavior concerning direct vs. averted gaze, but high socially anxious women tended to fixate the eye region of the presented face longer than MSA and LSA, respectively. Furthermore, they responded to direct gaze with more pronounced cardiac acceleration. This physiological finding indicates that direct gaze may be a fear-relevant feature for socially anxious individuals in social interaction. However, this seems not to result in gaze avoidance. Future studies should examine the role of gaze direction and its interaction with facial expressions in social anxiety and its consequences for avoidance behavior and fear responses. Additionally, further research is needed to clarify the role of gaze perception in social anxiety.

  19. Brain stem omnipause neurons and the control of combined eye-head gaze saccades in the alert cat.

    PubMed

    Paré, M; Guitton, D

    1998-06-01

    When the head is unrestrained, rapid displacements of the visual axis-gaze shifts (eye-re-space)-are made by coordinated movements of the eyes (eye-re-head) and head (head-re-space). To address the problem of the neural control of gaze shifts, we studied and contrasted the discharges of omnipause neurons (OPNs) during a variety of combined eye-head gaze shifts and head-fixed eye saccades executed by alert cats. OPNs discharged tonically during intersaccadic intervals and at a reduced level during slow perisaccadic gaze movements sometimes accompanying saccades. Their activity ceased for the duration of the saccadic gaze shifts the animal executed, either by head-fixed eye saccades alone or by combined eye-head movements. This was true for all types of gaze shifts studied: active movements to visual targets; passive movements induced by whole-body rotation or by head rotation about stationary body; and electrically evoked movements by stimulation of the caudal part of the superior colliculus (SC), a central structure for gaze control. For combined eye-head gaze shifts, the OPN pause was therefore not correlated to the eye-in-head trajectory. For instance, in active gaze movements, the end of the pause was better correlated with the gaze end than with either the eye saccade end or the time of eye counterrotation. The hypothesis that cat OPNs participate in controlling gaze shifts is supported by these results, and also by the observation that the movements of both the eyes and the head were transiently interrupted by stimulation of OPNs during gaze shifts. However, we found that the OPN pause could be dissociated from the gaze-motor-error signal producing the gaze shift. First, OPNs resumed discharging when perturbation of head motion briefly interrupted a gaze shift before its intended amplitude was attained. 
Second, stimulation of caudal SC sites in head-free cat elicited large head-free gaze shifts consistent with the creation of a large gaze-motor-error signal. However, stimulation of the same sites in head-fixed cat produced small "goal-directed" eye saccades, and OPNs paused only for the duration of the latter; neither a pause nor an eye movement occurred when the same stimulation was applied with the eyes at the goal location. We conclude that OPNs can be controlled by neither a simple eye control system nor an absolute gaze control system. Our data cannot be accounted for by existing models describing the control of combined eye-head gaze shifts and therefore put new constraints on future models, which will have to incorporate all the various signals that act synergistically to control gaze shifts.

  20. [Eye contact effects: A therapeutic issue?].

    PubMed

    Baltazar, M; Conty, L

    2016-12-01

    The perception of a direct gaze, that is, of another individual's gaze directed at the observer so as to produce eye contact, is known to influence a wide range of cognitive processes and behaviors. We stress that these effects mainly reflect positive impacts on human cognition and may thus be used as relevant tools for therapeutic purposes. In this review, we aim (1) to provide an exhaustive review of eye contact effects while discussing the limits of the dominant models used to explain these effects, (2) to illustrate the therapeutic potential of eye contact by targeting those pathologies that show both preserved gaze processing and deficits in one or several functions that are targeted by the eye contact effects, and (3) to propose concrete ways in which eye contact could be employed as a therapeutic tool. (1) We regroup the variety of eye contact effects into four categories, including memory effects, activation of prosocial behavior, positive appraisals of self and others, and the enhancement of self-awareness. We emphasize that the models proposed to account for these effects have poor predictive value and that further description of these effects is needed. (2) We then emphasize that people with pathologies that affect memory, social behavior, self and/or other appraisal, and self-awareness could benefit from eye contact effects. We focus on depression, autism and Alzheimer's disease to illustrate our proposal. To our knowledge, no anomaly of eye contact has been reported in depression. Patients suffering from Alzheimer's disease, at the early and moderate stages, have been shown to maintain a normal amount of eye contact with their interlocutor. We acknowledge that autism is controversial with regard to whether gaze processing is preserved or altered. On the first view, individuals are thought to avoid or omit gazing at another's eyes, while on the second, they are considered unable to process the gaze of others. 
We adopt the first stance, following the view that people with autism are not interested in processing social signals such as gaze but could do so efficiently if properly motivated. For each pathology, we emphasize that eye contact could be used, for example, to enhance sensitivity to bodily states, thus improving emotional decision making (in autism); to lead to more positive appraisal of the self and others (in depression); to improve memory performance (in Alzheimer's disease); and, more generally, to motivate the recipient to engage in the therapeutic process. (3) Finally, we propose two concrete ways to employ eye contact effects as a therapeutic tool. The first is to develop cognitive-behavioral tools that teach and/or motivate the recipient to create frequent and prolonged periods of eye contact. The second is to raise awareness among caregivers of the beneficial effects of eye contact and to teach them how to use eye contact to achieve its optimum effects. Future investigations are, however, needed to explore the ways in which eye contact effects can be efficiently integrated into therapeutic strategies, as well as to identify the clinical populations that can benefit from such therapeutic interventions. Copyright © 2016 L’Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  1. Gazing behavior reactions of Vietnamese and Austrian consumers to Austrian wafers and their relations to wanting, expected and tasted liking.

    PubMed

    Vu, Thi Minh Hang; Tu, Viet Phu; Duerrschmid, Klaus

    2018-05-01

    Predictability of consumers' food choice based on their gazing behavior using eye-tracking has been shown and discussed in recent research. By applying this observational technique and conventional methods to a specific food product, this study aims at investigating consumers' reactions associated with gazing behavior, wanting, building up expectations, and the experience of tasting. The tested food products were wafers from Austria with hazelnut, whole wheat, lemon and vanilla flavors, which are very well known in Austria and not known in Vietnam. 114 Vietnamese and 128 Austrian participants took part in three sessions. The results indicate that: i) the gazing behavior parameters are highly and positively correlated with the wanting-to-try choice; ii) wanting to try agrees with the expected liking for the Austrian consumer panel only, which is very familiar with the products; iii) the expected and tasted liking of the products are highly country and product dependent. The expected liking is strongly correlated with the tasted liking for the Austrian panel only. Differences between the reactions of the Vietnamese and Austrian consumers are discussed in detail. The results, which reflect the complex process from gazing for "wanting to try" to the expected and tasted liking, are discussed in the context of cognitive theory and the food choice habits of the consumers. Copyright © 2018. Published by Elsevier Ltd.

  2. Algorithms for High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Morookian, John-Michael; Lambert, James

    2010-01-01

    Two image-data-processing algorithms are essential to the successful operation of a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. The system was described in "High-Speed Noninvasive Eye-Tracking System" (NPO-30700), NASA Tech Briefs, Vol. 31, No. 8 (August 2007), page 51. To recapitulate from the cited article: Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Most of the prior commercial noninvasive eye-tracking systems rely on standard video cameras, which operate at frame rates of about 30 Hz. Such systems are limited to slow, full-frame operation. The video camera in the present system includes a charge-coupled-device (CCD) image detector plus electronic circuitry capable of implementing an advanced control scheme that effects readout from a small region of interest (ROI), or subwindow, of the full image. Inasmuch as the image features of interest (the cornea and pupil) typically occupy a small part of the camera frame, this ROI capability can be exploited to determine the direction of gaze at a high frame rate by reading out from the ROI that contains the cornea and pupil (but not from the rest of the image) repeatedly. One of the present algorithms exploits the ROI capability. The algorithm takes horizontal row slices and takes advantage of the symmetry of the pupil and cornea circles and of the gray-scale contrasts of the pupil and cornea with respect to other parts of the eye. 
The algorithm determines which horizontal image slices contain the pupil and cornea, and, on each valid slice, the end coordinates of the pupil and cornea. Information from multiple slices is then combined to robustly locate the centroids of the pupil and cornea images. The other of the two present algorithms is a modified version of an older algorithm for estimating the direction of gaze from the centroids of the pupil and cornea. The modification lies in the use of the coordinates of the centroids, rather than differences between the coordinates of the centroids, in a gaze-mapping equation. The equation locates a gaze point, defined as the intersection of the gaze axis with a surface of interest, which is typically a computer display screen (see figure). The expected advantage of the modification is to make the gaze computation less dependent on some simplifying assumptions that are sometimes not accurate
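The row-slice idea described above can be sketched in a few lines. This is a hypothetical minimal reconstruction, not the NPO-30700 implementation: each horizontal slice that crosses the dark pupil contributes the end coordinates of its dark run, and the chord midpoints from all valid slices are combined into a centroid estimate.

```python
import numpy as np

def pupil_centroid_from_slices(img, thresh=50):
    """Estimate the pupil centroid by scanning horizontal row slices.

    Pixels darker than `thresh` are treated as pupil. Each row that
    contains a dark run contributes the midpoint of that run (using the
    run's end coordinates); the centroid is the mean over valid slices.
    """
    xs, ys = [], []
    for y, row in enumerate(img):
        dark = np.where(row < thresh)[0]
        if dark.size < 2:
            continue  # this slice does not cross the pupil
        x_left, x_right = dark[0], dark[-1]   # end coordinates of the run
        xs.append((x_left + x_right) / 2.0)   # midpoint of the chord
        ys.append(y)
    if not xs:
        return None
    return float(np.mean(xs)), float(np.mean(ys))

# Demo: synthetic 50x50 bright frame with a dark 10x10 "pupil"
frame = np.full((50, 50), 200, dtype=np.uint8)
frame[10:20, 30:40] = 10
cx, cy = pupil_centroid_from_slices(frame)
```

A real implementation would additionally reject slices inconsistent with a circular outline; the thresholds and the square test pattern here are illustrative only.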

  3. Group Differences in the Mutual Gaze of Chimpanzees (Pan Troglodytes)

    ERIC Educational Resources Information Center

    Bard, Kim A.; Myowa-Yamakoshi, Masako; Tomonaga, Masaki; Tanaka, Masayuki; Costall, Alan; Matsuzawa, Tetsuro

    2005-01-01

    A comparative developmental framework was used to determine whether mutual gaze is unique to humans and, if not, whether common mechanisms support the development of mutual gaze in chimpanzees and humans. Mother-infant chimpanzees engaged in approximately 17 instances of mutual gaze per hour. Mutual gaze occurred in positive, nonagonistic…

  4. 3D gaze tracking method using Purkinje images on eye optical model and pupil

    NASA Astrophysics Data System (ADS)

    Lee, Ji Woo; Cho, Chul Woo; Shin, Kwang Yong; Lee, Eui Chul; Park, Kang Ryoung

    2012-05-01

    Gaze tracking detects the position a user is looking at. Most research on gaze estimation has focused on calculating the X, Y gaze position on a 2D plane. However, as the importance of stereoscopic displays and 3D applications has increased greatly, research into 3D gaze estimation of not only the X, Y gaze position, but also the Z gaze position has gained attention for the development of next-generation interfaces. In this paper, we propose a new method for estimating the 3D gaze position based on the illuminative reflections (Purkinje images) on the surface of the cornea and lens by considering the 3D optical structure of the human eye model. This research is novel in the following four ways compared with previous work. First, we theoretically analyze the generated models of Purkinje images based on the 3D human eye model for 3D gaze estimation. Second, the relative positions of the first and fourth Purkinje images to the pupil center, the inter-distance between these two Purkinje images, and the pupil size are used as the features for calculating the Z gaze position. The pupil size is used on the basis of the fact that pupil accommodation occurs according to the gaze position in the Z direction. Third, with these features as inputs, the final Z gaze position is calculated using a multi-layered perceptron (MLP). Fourth, the X, Y gaze position on the 2D plane is calculated from the position of the pupil center based on a geometric transform considering the calculated Z gaze position. Experimental results showed that the average errors of the 3D gaze estimation were about 0.96° (0.48 cm) on the X-axis, 1.60° (0.77 cm) on the Y-axis, and 4.59 cm along the Z-axis in 3D space.
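The feature vector and MLP step can be sketched as follows. The exact feature layout, the network size, and all weights below are hypothetical; the abstract specifies only the feature categories (Purkinje positions relative to the pupil center, inter-Purkinje distance, pupil size) and that an MLP maps them to Z.

```python
import numpy as np

def z_gaze_features(p1, p4, pupil_center, pupil_radius):
    """Assemble the abstract's Z-gaze features (hypothetical layout):
    first and fourth Purkinje-image positions relative to the pupil
    center, the P1-P4 distance, and the pupil size."""
    p1, p4, c = map(np.asarray, (p1, p4, pupil_center))
    rel1, rel4 = p1 - c, p4 - c
    inter = np.linalg.norm(p1 - p4)
    return np.concatenate([rel1, rel4, [inter, pupil_radius]])

def mlp_forward(x, W1, b1, W2, b2):
    """One-hidden-layer perceptron with tanh units returning the
    scalar Z-gaze estimate for feature vector x."""
    h = np.tanh(W1 @ x + b1)
    return float(W2 @ h + b2)

# Demo with placeholder geometry and zero weights (bias only)
x = z_gaze_features((2.0, 0.0), (-1.0, 0.0), (0.0, 0.0), 3.0)
z = mlp_forward(x, np.zeros((4, 6)), np.zeros(4), np.zeros(4), 1.0)
```

In practice the weights would be trained on calibration data pairing these features with known Z positions.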

  5. SU-E-J-187: Management of Optic Organ Motion in Fractionated Stereotactic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manning, M; Maurer, J

    2015-06-15

    Purpose: Fractionated stereotactic radiotherapy (FSRT) for optic nerve tumors can potentially use planning target volume (PTV) expansions as small as 1–5 mm. However, the motion of the intraorbital segment of the optic nerve has not been studied. Methods: A subject with a right optic nerve sheath meningioma underwent CT simulation in three fixed gaze positions: right, left, and fixed forward at a marker. The gross tumor volume (GTV) and the organs-at-risk (OAR) were contoured on all three scans. An IMRT plan using 10 static non-coplanar fields to 50.4 Gy in 28 fractions was designed to treat the fixed-forward gazing GTV with a 1 mm PTV, then the resulting coverage was evaluated for the GTV in the three positions. As an alternative, the composite structures were computed to generate the internal target volume (ITV), 1 mm expansion free-gazing PTV, and planning organ-at-risk volumes (PRVs) for free-gazing treatment. A comparable IMRT plan was created for the free-gazing PTV. Results: If the patient were treated using the fixed forward gaze plan while looking straight, right, and left, the V100% for the GTV was 100.0%, 33.1%, and 0.1%, respectively. The volumes of the PTVs for the fixed gaze and free-gazing plans were 0.79 and 2.21 cc, respectively, increasing the PTV by a factor of 2.6. The V100% for the fixed gaze and free-gazing plans were 0.85 cc and 2.8 cc, respectively, increasing the treated volume by a factor of 3.3. Conclusion: Fixed gaze treatment appears to provide greater organ sparing than free-gazing. However, unanticipated intrafraction right or left gaze can produce a geometric miss. Further study of optic nerve motion appears to be warranted in areas such as intrafraction optical confirmation of fixed gaze and optimized gaze directions to minimize lens and other normal organ dose in cranial radiotherapy.

  6. Gaze-evoked nystagmus induced by alcohol intoxication.

    PubMed

    Romano, Fausto; Tarnutzer, Alexander A; Straumann, Dominik; Ramat, Stefano; Bertolini, Giovanni

    2017-03-15

    The cerebellum is the core structure controlling gaze stability. Chronic cerebellar diseases and acute alcohol intoxication affect cerebellar function, inducing, among other deficits, gaze instability in the form of gaze-evoked nystagmus. Gaze-evoked nystagmus is characterized by increased centripetal eye-drift. It is used as an important diagnostic sign for patients with cerebellar degeneration and to assess the 'driving while intoxicated' condition. We quantified the effect of alcohol on gaze-holding using an approach allowing, for the first time, the comparison of deficits induced by alcohol intoxication and cerebellar degeneration. Our results showed that alcohol intoxication induces a two-fold increase of centripetal eye-drift. We establish analysis techniques for using controlled alcohol intake as a model to support the study of cerebellar deficits. The observed similarity between the effect of alcohol and the clinical signs observed in cerebellar patients suggests a possible pathomechanism for gaze-holding deficits. Gaze-evoked nystagmus (GEN) is an ocular-motor finding commonly observed in cerebellar disease, characterized by increased centripetal eye-drift with centrifugal correcting saccades at eccentric gaze. Because cerebellar degeneration is a rare and clinically heterogeneous disease, data from patients are limited. We hypothesized that a transient inhibition of cerebellar function by defined amounts of alcohol may provide a suitable model to study gaze-holding deficits in cerebellar disease. We recorded gaze-holding at varying horizontal eye positions in 15 healthy participants before and 30 min after the alcohol intake required to reach 0.6‰ blood alcohol content (BAC). Changes in ocular-motor behaviour were quantified by measuring eye-drift velocity as a continuous function of gaze eccentricity over a large range (±40 deg) of horizontal gaze angles and characterized using a two-parameter tangent model. 
The effect of alcohol on gaze stability was assessed analysing: (1) overall effects on the gaze-holding system, (2) specific effects on each eye and (3) differences between gaze angles in the temporal and nasal hemifields. For all subjects, alcohol consumption induced gaze instability, causing a two-fold increase [2.21 (0.55), median (median absolute deviation); P = 0.002] of eye-drift velocity at all eccentricities. Results were confirmed analysing each eye and hemifield independently. The alcohol-induced transient global deficit in gaze-holding matched the pattern previously described in patients with late-onset cerebellar degeneration. Controlled intake of alcohol seems a suitable disease model to study cerebellar GEN. With alcohol resulting in global cerebellar hypofunction, we hypothesize that patients matching the gaze-holding behaviour observed here suffered from diffuse deficits in the gaze-holding system as well. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
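The two-parameter tangent model is not spelled out in the abstract; one plausible form is v(θ) = a·tan(θ/b), with drift velocity v, eccentricity θ, and hypothetical parameters a (gain) and b (angular scale). Because the model is linear in a, it can be fitted with a simple grid search over b:

```python
import numpy as np

def tangent_model(theta_deg, a, b):
    """Eye-drift velocity v = a * tan(theta/b), theta in degrees
    (converted to radians inside)."""
    return a * np.tan(np.radians(theta_deg) / b)

def fit_tangent(theta_deg, v, b_grid):
    """Grid-search b; for each candidate b the model is linear in a,
    so a has a closed-form least-squares solution."""
    best = None
    for b in b_grid:
        basis = np.tan(np.radians(theta_deg) / b)
        a = float(basis @ v / (basis @ basis))
        sse = float(np.sum((v - a * basis) ** 2))
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

# Synthetic noiseless drift-velocity data over +/-40 deg of gaze
theta = np.linspace(-40.0, 40.0, 81)
v = tangent_model(theta, 2.0, 1.2)
a_fit, b_fit = fit_tangent(theta, v, np.linspace(0.8, 2.0, 13))
```

A doubling of centripetal drift, as reported after alcohol intake, would appear as roughly a doubling of the fitted gain a at matched eccentricities.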

  7. Speaker gaze increases information coupling between infant and adult brains.

    PubMed

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah; Wass, Sam

    2017-12-12

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 ( n = 17), infants viewed videos of an adult who was singing nursery rhymes with ( i ) direct gaze (looking forward), ( ii ) indirect gaze (head and eyes averted by 20°), or ( iii ) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 ( n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. Copyright © 2017 the Author(s). Published by PNAS.

  8. Speaker gaze increases information coupling between infant and adult brains

    PubMed Central

    Leong, Victoria; Byrne, Elizabeth; Clackson, Kaili; Georgieva, Stanimira; Lam, Sarah

    2017-01-01

    When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers’ and listeners’ neural activity. However, it is not known whether similar neural contingencies exist within adult–infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult–infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants’ neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult–infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning. PMID:29183980

  9. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention.

    PubMed

    Montague, Enid; Asan, Onur

    2014-03-01

    The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors, where patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results showed that several eye gaze patterns are significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. Published by Elsevier Ireland Ltd.
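The core of lag sequential analysis is a table of lag-1 transition counts over the coded gaze states, from which conditional transition probabilities are estimated. A minimal sketch (the state labels, e.g. "DGP" for "doctor gazes patient", are hypothetical placeholders, not this study's published coding scheme):

```python
from collections import Counter

def lag1_probability(codes, given, then):
    """P(next state == `then` | current state == `given`), estimated
    from lag-1 transition counts of a coded gaze sequence."""
    pairs = Counter(zip(codes, codes[1:]))
    from_given = sum(c for (a, _), c in pairs.items() if a == given)
    if from_given == 0:
        return 0.0
    return pairs[(given, then)] / from_given

# Demo: a short coded sequence of gaze states
seq = ["DGP", "PGD", "DGP", "PGD", "DGT"]
p1 = lag1_probability(seq, "DGP", "PGD")
p2 = lag1_probability(seq, "PGD", "DGT")
```

A full analysis would then compare each observed transition probability against its expectation under independence (e.g. with a z-score) to decide which sequences are significant.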

  10. Dynamic modeling of patient and physician eye gaze to understand the effects of electronic health records on doctor-patient communication and attention

    PubMed Central

    Montague, Enid; Asan, Onur

    2014-01-01

    Objective The aim of this study was to examine eye gaze patterns between patients and physicians while electronic health records were used to support patient care. Background Eye gaze provides an indication of physician attention to the patient, patient/physician interaction, and physician behaviors such as searching for information and documenting information. Methods A field study was conducted in which 100 patient visits were observed and video recorded in a primary care clinic. Videos were then coded for gaze behaviors, where patients' and physicians' gaze at each other and at artifacts such as electronic health records was coded using a pre-established objective coding scheme. Gaze data were then analyzed using lag sequential methods. Results Results showed that several eye gaze patterns are significantly dependent on each other. All doctor-initiated gaze patterns were followed by patient gaze patterns. Some patient-initiated gaze patterns were also significantly followed by doctor gaze patterns, unlike the findings in previous studies. Health information technology appears to contribute to some of the new significant patterns that have emerged. Differences were also found in gaze patterns related to technology that differ from patterns identified in studies with paper charts. Several sequences related to patient-doctor-technology were also significant. Electronic health records affect the patient-physician eye contact dynamic differently than paper charts. Conclusion This study identified several patterns of patient-physician interaction with electronic health record systems. Consistent with previous studies, physician-initiated gaze is an important driver of the interactions between patient and physician and between patient and technology. PMID:24380671

  11. Restoring effects of oxytocin on the attentional preference for faces in autism.

    PubMed

    Kanat, M; Spenthof, I; Riedel, A; van Elst, L T; Heinrichs, M; Domes, G

    2017-04-18

    Reduced attentional preference for faces and symptoms of social anxiety are common in autism spectrum disorders (ASDs). The neuropeptide oxytocin triggers anxiolytic functions and enhances eye gaze, facial emotion recognition and neural correlates of face processing in ASD. Here we investigated whether a single dose of oxytocin increases attention to faces in ASD. As a secondary question, we explored the influence of social anxiety on these effects. We tested for oxytocin's effects on attention to neutral faces as compared to houses in a sample of 29 autistic individuals and 30 control participants using a dot-probe paradigm with two different presentation times (100 or 500 ms). A single dose of 24 IU oxytocin was administered in a randomized, double-blind placebo-controlled, cross-over design. Under placebo, ASD individuals paid less attention to faces presented for 500 ms than did controls. Oxytocin administration increased the allocation of attention toward faces in ASD to a level observed in controls. Secondary analyses revealed that these oxytocin effects primarily occurred in ASD individuals with high levels of social anxiety who were characterized by attentional avoidance of faces under placebo. Our results confirm a positive influence of intranasal oxytocin on social attention processes in ASD. Further, they suggest that oxytocin may in particular restore the attentional preference for facial information in ASD individuals with high social anxiety. We conclude that oxytocin's anxiolytic properties may partially account for its positive effects on socio-cognitive functioning in ASD, such as enhanced eye gaze and facial emotion recognition.

  12. Restoring effects of oxytocin on the attentional preference for faces in autism

    PubMed Central

    Kanat, M; Spenthof, I; Riedel, A; van Elst, L T; Heinrichs, M; Domes, G

    2017-01-01

    Reduced attentional preference for faces and symptoms of social anxiety are common in autism spectrum disorders (ASDs). The neuropeptide oxytocin triggers anxiolytic functions and enhances eye gaze, facial emotion recognition and neural correlates of face processing in ASD. Here we investigated whether a single dose of oxytocin increases attention to faces in ASD. As a secondary question, we explored the influence of social anxiety on these effects. We tested for oxytocin's effects on attention to neutral faces as compared to houses in a sample of 29 autistic individuals and 30 control participants using a dot-probe paradigm with two different presentation times (100 or 500 ms). A single dose of 24 IU oxytocin was administered in a randomized, double-blind placebo-controlled, cross-over design. Under placebo, ASD individuals paid less attention to faces presented for 500 ms than did controls. Oxytocin administration increased the allocation of attention toward faces in ASD to a level observed in controls. Secondary analyses revealed that these oxytocin effects primarily occurred in ASD individuals with high levels of social anxiety who were characterized by attentional avoidance of faces under placebo. Our results confirm a positive influence of intranasal oxytocin on social attention processes in ASD. Further, they suggest that oxytocin may in particular restore the attentional preference for facial information in ASD individuals with high social anxiety. We conclude that oxytocin's anxiolytic properties may partially account for its positive effects on socio-cognitive functioning in ASD, such as enhanced eye gaze and facial emotion recognition. PMID:28418399

  13. How physician electronic health record screen sharing affects patient and doctor non-verbal communication in primary care.

    PubMed

    Asan, Onur; Young, Henry N; Chewning, Betty; Montague, Enid

    2015-03-01

    Use of electronic health records (EHRs) in primary-care exam rooms changes the dynamics of patient-physician interaction. This study examines and compares doctor-patient non-verbal communication (eye-gaze patterns) during primary care encounters for three different screen/information sharing groups: (1) active information sharing, (2) passive information sharing, and (3) technology withdrawal. Researchers video recorded 100 primary-care visits and coded the direction and duration of doctor and patient gaze. Descriptive statistics compared the length of gaze patterns as a percentage of visit length. Lag sequential analysis determined whether physician eye-gaze influenced patient eye gaze, and vice versa, and examined variations across groups. Significant differences were found in duration of gaze across groups. Lag sequential analysis found significant associations between several gaze patterns. Some, such as DGP-PGD ("doctor gaze patient" followed by "patient gaze doctor"), were significant for all groups. Others, such as DGT-PGU ("doctor gaze technology" followed by "patient gaze unknown"), were unique to one group. Some technology use styles (active information sharing) seem to create more patient engagement, while others (passive information sharing) lead to patient disengagement. Doctors can engage patients in communication by using EHRs in the visits. EHR training and design should facilitate this. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not provide it publicly. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of user's head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system shows high performance in terms of accuracy, user convenience and interest.
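The kind of design calculation implied here can be sketched with simple geometry (the flat-geometry assumption, the margin for the eye region, and all numbers below are hypothetical, not the paper's measured values): the camera's horizontal viewing angle must cover the measured lateral head-movement range plus the eye region at the working distance.

```python
import math

def required_viewing_angle(head_range_cm, eye_box_cm, z_dist_cm):
    """Horizontal viewing angle (degrees) needed to keep the eye region
    in frame, for a camera centered on the user at distance z_dist_cm.
    head_range_cm: measured lateral head-movement range;
    eye_box_cm: extra width so the eye region itself stays in frame."""
    half_width = head_range_cm / 2.0 + eye_box_cm / 2.0
    return math.degrees(2.0 * math.atan(half_width / z_dist_cm))

# Demo: 20 cm of lateral head movement, 4 cm eye box, 60 cm distance
angle = required_viewing_angle(20.0, 4.0, 60.0)
```

The measured head-movement *velocity* would constrain the camera's frame rate in an analogous way; only the geometric half of the trade-off is sketched here.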

  15. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users’ head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest. PMID:27589768

  16. Do pet dogs (Canis familiaris) follow ostensive and non-ostensive human gaze to distant space and to objects?

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2017-01-01

    Dogs are renowned for being skilful at using human-given communicative cues such as pointing. Results are contradictory, however, when it comes to dogs' following human gaze, probably owing to methodological discrepancies. Here we investigated whether dogs follow human gaze to one of two food locations better than into distant space, even after comparable pre-training. In Experiments 1 and 2, the gazing direction of dogs was recorded in a gaze-following-into-distant-space task and in an object-choice task where no choice was allowed, in order to allow a direct comparison between tasks, varying the ostensive nature of the gazes. We found that dogs only followed repeated ostensive human gaze into distant space, whereas they followed all gaze cues in the object-choice task. Dogs followed human gaze better in the object-choice task than when there was no obvious target to look at. In Experiment 3, dogs were tested in another object-choice task and were allowed to approach a container. Ostensive cues facilitated the dogs' gaze following as well as their choices: dogs in the ostensive group chose the indicated container at chance level, whereas dogs in the non-ostensive group avoided this container. We propose that dogs may perceive the object-choice task as a competition over food and may interpret non-ostensive gaze as an intentional cue that indicates the experimenter's interest in the food location she has looked at. Whether ostensive cues simply mitigate the competitive perception of this situation or alter how dogs interpret communicative gaze needs further investigation. Our findings also show that following gaze with one's own gaze and actually choosing one of the two containers in an object-choice task need to be treated as different variables. The present study clarifies a number of questions related to gaze-following in dogs and adds to a growing body of evidence showing that human ostensive cues can strongly modify dog behaviour. PMID:28791164

  17. Embodied female experience through the lens of imagination.

    PubMed

    Green, Sharon R

    2010-06-01

    In 1971, I made a film entitled Self Portrait of a Nude Model Turned Cinematographer in which I explore the objectifying 'male' gaze on my body in contrast to the subjective lived experience of my body. The film was a radical challenge to the gaze that objectifies woman - and thus imprisons her - which had hitherto dominated narrative cinema. Since the objectification of women has largely excluded us from the privileged phallogocentric discourses, in this paper I hope to bring into the psychoanalytic dialogue a woman's lived experience. I will approach this by exploring how remembering this film has become a personally transformative experience as I look back on it through the lens of postmodern and feminist discourses that have emerged since it was made. In addition, I will explore how this process of imaginatively looking back on an artistic creation to generate new discourses in the present is similar to the transformative process of analysis. Lastly, I will present a clinical example, where my embodied countertransference response to a patient's subjection to the objectifying male gaze opens space for a new discourse about her body to emerge.

  18. Qualitative modeling of the decision-making process using electrooculography.

    PubMed

    Zargari Marandi, Ramtin; Sabzpoushan, S H

    2015-12-01

    A novel method based on electrooculography (EOG) has been introduced in this work to study the decision-making process. An experiment was designed and implemented wherein subjects were asked to choose between two items from the same category that were presented within a limited time. The EOG and voice signals of the subjects were recorded during the experiment. A calibration task was performed to map the EOG signals to their corresponding gaze positions on the screen by using an artificial neural network. To analyze the data, 16 parameters were extracted from the response time and EOG signals of the subjects. Evaluation and comparison of the parameters, together with subjects' choices, revealed functional information. On the basis of this information, subjects switched their eye gaze between items about three times on average. We also found, on the basis of statistical hypothesis testing (a t test, t(10) = 71.62, SE = 1.25, p < .0001), that the correspondence rate of a subject's gaze at the moment of selection with the selected item was significant. Ultimately, on the basis of these results, we propose a qualitative choice model for the decision-making task.
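    The calibration step, mapping EOG amplitudes to gaze positions on the screen, can be sketched with a simple least-squares linear map. The paper used an artificial neural network; the linear fit below is a simplified stand-in, and all amplitudes and target coordinates are hypothetical:

```python
import numpy as np

# Calibration data: the subject fixates known screen targets while the
# horizontal (h) and vertical (v) EOG channel amplitudes are recorded.
# Assumption: EOG amplitude is roughly proportional to gaze angle over
# small ranges, so a linear map suffices for illustration.
eog = np.array([  # [h, v] amplitudes (arbitrary units, hypothetical)
    [-1.0, -1.0], [0.0, -1.0], [1.0, -1.0],
    [-1.0,  0.0], [0.0,  0.0], [1.0,  0.0],
    [-1.0,  1.0], [0.0,  1.0], [1.0,  1.0],
])
targets = np.array([  # matching screen coordinates in pixels
    [160, 120], [320, 120], [480, 120],
    [160, 240], [320, 240], [480, 240],
    [160, 360], [320, 360], [480, 360],
])

# Solve targets ≈ [h, v, 1] @ W by least squares (affine fit).
X = np.hstack([eog, np.ones((len(eog), 1))])
W, *_ = np.linalg.lstsq(X, targets, rcond=None)

def eog_to_gaze(h, v):
    """Map one EOG sample to an estimated screen position (x, y)."""
    return tuple(np.array([h, v, 1.0]) @ W)

print(eog_to_gaze(0.5, 0.0))  # ≈ (400.0, 240.0)
```

    A trained network would replace the affine map with a nonlinear one, which helps with the drift and saturation real EOG signals exhibit.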

  19. Visual Representation of Eye Gaze Is Coded by a Nonopponent Multichannel System

    ERIC Educational Resources Information Center

    Calder, Andrew J.; Jenkins, Rob; Cassel, Anneli; Clifford, Colin W. G.

    2008-01-01

    To date, there is no functional account of the visual perception of gaze in humans. Previous work has demonstrated that left gaze and right gaze are represented by separate mechanisms. However, these data are consistent with either a multichannel system comprising separate channels for distinct gaze directions (e.g., left, direct, and right) or an…

  20. High-Speed Noninvasive Eye-Tracking System

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin

    2007-01-01

    The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
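    Step (4), recovering gaze from the centroids of the pupil and the corneal reflection, can be sketched as below. This is a toy illustration of the general pupil-minus-glint technique; a real system adds robust segmentation and a per-user calibration to convert the offset vector into screen coordinates:

```python
def centroid(pixels):
    """Centroid (x, y) of a set of segmented pixel coordinates."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def gaze_offset(pupil_pixels, glint_pixels):
    """Pupil centre minus corneal-reflection (glint) centre.

    In NIR eye tracking this difference vector is roughly invariant to
    small head translations; after calibration it maps to gaze direction.
    """
    px, py = centroid(pupil_pixels)
    gx, gy = centroid(glint_pixels)
    return (px - gx, py - gy)

# Toy segmented regions of the kind a thresholding step might produce
pupil = [(10, 10), (11, 10), (10, 11), (11, 11)]
glint = [(9, 9), (10, 9)]
print(gaze_offset(pupil, glint))  # → (1.0, 1.5)
```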

  1. Observing Third-Party Attentional Relationships Affects Infants' Gaze Following: An Eye-Tracking Study

    PubMed Central

    Meng, Xianwei; Uto, Yusuke; Hashiya, Kazuhide

    2017-01-01

    Infants not only respond to direct social actions toward themselves but also pay attention to relevant information from third-party interactions. However, it is unclear whether and how infants recognize the structure of these interactions. The current study aimed to investigate how infants' observation of third-party attentional relationships influences their subsequent gaze following. Nine-month-old, 1-year-old, and 1.5-year-old infants (N = 72, 37 girls) observed video clips in which a female actor gazed at one of two toys after she and her partner either silently faced each other (face-to-face condition) or looked in opposite directions (back-to-back condition). An eye tracker was used to record the infants' looking behavior (e.g., looking time, looking frequency). The analyses revealed that younger infants followed the actor's gaze toward the target object in both conditions, but this was not the case for the 1.5-year-old infants in the back-to-back condition. Furthermore, we found that infants' gaze following could be negatively predicted by their expectation of the partner's response to the actor's head turn (i.e., they shift their gaze toward the partner immediately after they realize that the actor's head will turn). These findings suggested that the sensitivity to the difference in knowledge and attentional states in the second year of human life could be extended to third-party interactions, even without any direct involvement in the situation. Additionally, a spontaneous concern with the epistemic gap between self and other, as well as between others, develops by this age. These processes might be considered part of the fundamental basis for human communication. PMID:28149284

  2. A novel attention training paradigm based on operant conditioning of eye gaze: Preliminary findings.

    PubMed

    Price, Rebecca B; Greven, Inez M; Siegle, Greg J; Koster, Ernst H W; De Raedt, Rudi

    2016-02-01

    Inability to engage with positive stimuli is a widespread problem associated with negative mood states across many conditions, from low self-esteem to anhedonic depression. Though attention retraining procedures have shown promise as interventions in some clinical populations, novel procedures may be necessary to reliably attenuate chronic negative mood in refractory clinical populations (e.g., clinical depression) through, for example, more active, adaptive learning processes. In addition, a focus on individual difference variables predicting intervention outcome may improve the ability to provide such targeted interventions efficiently. To provide preliminary proof-of-principle, we tested a novel paradigm using operant conditioning to train eye gaze patterns toward happy faces. Thirty-two healthy undergraduates were randomized to receive operant conditioning of eye gaze toward happy faces (train-happy) or neutral faces (train-neutral). At the group level, the train-happy condition attenuated sad mood increases following a stressful task, in comparison to train-neutral. In individual differences analysis, greater physiological reactivity (pupil dilation) in response to happy faces (during an emotional face-search task at baseline) predicted decreased mood reactivity after stress. These preliminary results suggest that operant conditioning of eye gaze toward happy faces buffers against stress-induced effects on mood, particularly in individuals who show sufficient baseline neural engagement with happy faces. Eye gaze patterns to emotional face arrays may have a causal relationship with mood reactivity. Personalized medicine research in depression may benefit from novel cognitive training paradigms that shape eye gaze patterns through feedback. Baseline neural function (pupil dilation) may be a key mechanism, aiding in iterative refinement of this approach. (c) 2016 APA, all rights reserved.
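    The operant-conditioning idea, reinforcing gaze toward a target face under a criterion that adapts across trials, can be sketched as a shaping loop. Everything below (parameter values, the reward rule, the criterion update) is hypothetical and not the authors' protocol:

```python
def run_training(trials, get_dwell_on_target, criterion_ms=200, step_ms=50):
    """Sketch of an operant gaze-shaping loop (hypothetical parameters).

    On each trial, dwell time on the target face (e.g. the happy face)
    is measured; meeting the criterion earns positive feedback and the
    criterion is raised (shaping), otherwise it is relaxed slightly.
    Returns a per-trial log of (dwell, rewarded, next_criterion).
    """
    log = []
    for t in range(trials):
        dwell = get_dwell_on_target(t)
        rewarded = dwell >= criterion_ms
        if rewarded:
            criterion_ms += step_ms  # demand a little more next trial
        else:
            criterion_ms = max(step_ms, criterion_ms - step_ms // 2)
        log.append((dwell, rewarded, criterion_ms))
    return log

# Simulated participant whose dwell on the target grows across trials
log = run_training(5, lambda t: 100 + 100 * t)
print(sum(r for _, r, _ in log))  # → 4 rewarded trials
```

    In the real paradigm the dwell measurement would come from an eye tracker and the feedback would be delivered on-screen; the loop structure is the point of the sketch.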

  3. Follow My Eyes: The Gaze of Politicians Reflexively Captures the Gaze of Ingroup Voters

    PubMed Central

    Liuzza, Marco Tullio; Cazzato, Valentina; Vecchione, Michele; Crostella, Filippo; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2011-01-01

    Studies in human and non-human primates indicate that basic socio-cognitive operations are inherently linked to the power of gaze in capturing reflexively the attention of an observer. Although monkey studies indicate that the automatic tendency to follow the gaze of a conspecific is modulated by the leader-follower social status, evidence for such effects in humans is meager. Here, we used a gaze following paradigm where the directional gaze of right- or left-wing Italian political characters could influence the oculomotor behavior of ingroup or outgroup voters. We show that the gaze of Berlusconi, the right-wing leader currently dominating the Italian political landscape, potentiates and inhibits gaze following behavior in ingroup and outgroup voters, respectively. Importantly, the higher the perceived similarity in personality traits between voters and Berlusconi, the stronger the gaze interference effect. Thus, higher-order social variables such as political leadership and affiliation prepotently affect reflexive shifts of attention. PMID:21957479

  4. Gaze perception in social anxiety and social anxiety disorder

    PubMed Central

    Schulze, Lars; Renneberg, Babette; Lobmaier, Janek S.

    2013-01-01

    Clinical observations suggest abnormal gaze perception to be an important indicator of social anxiety disorder (SAD). Experimental research has so far paid relatively little attention to the study of gaze perception in SAD. In this article we first discuss gaze perception in healthy human beings before reviewing self-referential and threat-related biases of gaze perception in clinical and non-clinical socially anxious samples. Relative to controls, socially anxious individuals exhibit an enhanced self-directed perception of gaze directions and demonstrate a pronounced fear of direct eye contact, though findings are less consistent regarding the avoidance of mutual gaze in SAD. Prospects for future research and clinical implications are discussed. PMID:24379776

  5. Evaluating Gaze-Based Interface Tools to Facilitate Point-and-Select Tasks with Small Targets

    ERIC Educational Resources Information Center

    Skovsgaard, Henrik; Mateo, Julio C.; Hansen, John Paulin

    2011-01-01

    Gaze interaction affords hands-free control of computers. Pointing to and selecting small targets using gaze alone is difficult because of the limited accuracy of gaze pointing. This is the first experimental comparison of gaze-based interface tools for small-target (e.g. less than 12 x 12 pixels) point-and-select tasks. We conducted two…

  6. The role of emotion in learning trustworthiness from eye-gaze: Evidence from facial electromyography

    PubMed Central

    Manssuer, Luis R.; Pawling, Ralph; Hayes, Amy E.; Tipper, Steven P.

    2016-01-01

    Gaze direction can be used to rapidly and reflexively lead or mislead others’ attention as to the location of important stimuli. When perception of gaze direction is congruent with the location of a target, responses are faster compared to when incongruent. Faces that consistently gaze congruently are also judged more trustworthy than faces that consistently gaze incongruently. However, it is unclear how gaze cues elicit changes in trust. We measured facial electromyography (EMG) during an identity-contingent gaze-cueing task to examine whether embodied emotional reactions to gaze cues mediate trust learning. Gaze-cueing effects were found to be equivalent regardless of whether participants showed learning of trust in the expected direction or did not. In contrast, we found distinctly different patterns of EMG activity in these two populations. In a further experiment we showed the learning effects were specific to viewing faces, as no changes in liking were detected when viewing arrows that evoked similar attentional orienting responses. These findings implicate embodied emotion in learning trust from identity-contingent gaze-cueing, possibly due to the social value of shared attention or deception rather than domain-general attentional orienting. PMID:27153239

  7. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction.

    PubMed

    Xu, Tian Linger; Zhang, Hui; Yu, Chen

    2016-05-01

    We focus on a fundamental looking behavior in human-robot interactions - gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face as a response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

  8. Acquired simulated brown syndrome following surgical repair of medial orbital wall fracture.

    PubMed

    Hwang, Jong-uk; Lim, Hyun Taek

    2005-03-01

    Simulated Brown syndrome is a term applied to a myriad of disorders that cause a Brown syndrome-like motility. We encountered a case of acquired simulated Brown syndrome in a 41-year-old man following surgical repair of fractures of both medial orbital walls. He suffered from diplopia in primary gaze, associated with hypotropia of the affected eye. We performed an ipsilateral recession of the left inferior rectus muscle as a single-stage intraoperative adjustment procedure under topical anesthesia, rather than the direct approach to the superior oblique tendon. Postoperatively, the patient was asymptomatic in all diagnostic gaze positions.

  9. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability.

    PubMed

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive-affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot's characteristic traits. In a gaze-contingent eye-tracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition), and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human-human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze over the one with disjoint attention behavior, rated it as more human-like, and found it more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings.

  10. Owners' direct gazes increase dogs' attention-getting behaviors.

    PubMed

    Ohkita, Midori; Nagasawa, Miho; Kazutaka, Mogi; Kikusui, Takefumi

    2016-04-01

    This study examined whether dogs gain information about a human's attention via gaze and whether they change their attention-getting behaviors (i.e., whining and whimpering, looking at their owners' faces, pawing, and approaching their owners) in response to their owners' direct gazes. The results showed that when the owners gazed at their dogs, the durations of whining and whimpering and of looking at the owners' faces were longer than when the owners averted their gazes. In contrast, there were no differences in duration of pawing or likelihood of approaching the owners between the direct and averted gaze conditions. Therefore, owners' direct gazes increased the behaviors that acted as distant signals and did not necessarily involve touching the owners. We suggest that dogs are sensitive to human gazes, that this sensitivity may act as an attachment signal to humans, and that it may contribute to close relationships between humans and dogs. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Orienting in Response to Gaze and the Social Use of Gaze among Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Rombough, Adrienne; Iarocci, Grace

    2013-01-01

    Potential relations between gaze cueing, social use of gaze, and ability to follow line of sight were examined in children with autism and typically developing peers. Children with autism (mean age = 10 years) demonstrated intact gaze cueing. However, they preferred to follow arrows instead of eyes to infer mental state, and showed decreased…

  12. Interactions between gaze-evoked blinks and gaze shifts in monkeys.

    PubMed

    Gandhi, Neeraj J

    2012-02-01

    Rapid eyelid closure, or a blink, often accompanies head-restrained and head-unrestrained gaze shifts. This study examines the interactions between such gaze-evoked blinks and gaze shifts in monkeys. Blink probability increases with gaze amplitude and at a faster rate for head-unrestrained movements. Across animals, blink likelihood is inversely correlated with the average gaze velocity of large-amplitude control movements. Gaze-evoked blinks induce robust perturbations in eye velocity. Peak and average velocities are reduced, duration is increased, but accuracy is preserved. The temporal features of the perturbation depend on factors such as the time of the blink relative to gaze onset, inherent velocity kinematics of control movements, and perhaps initial eye-in-head position. Although variable across animals, the initial effect is a reduction in eye velocity, followed by a reacceleration that yields two or more peaks in its waveform. Interestingly, head velocity is not attenuated; instead, it peaks slightly later and with a larger magnitude. Gaze latency is slightly reduced on trials with gaze-evoked blinks, although the effect was more variable during head-unrestrained movements; no reduction in head latency is observed. Preliminary data also demonstrate a similar perturbation by gaze-evoked blinks during vertical saccades. The results are compared with previously reported effects of reflexive blinks (evoked by an air-puff delivered to one eye or by supraorbital nerve stimulation) and discussed in terms of effects of blinks on saccadic suppression, neural correlates of the altered eye velocity signals, and implications for the hypothesis that the attenuation in eye velocity is produced by a head movement command.

  13. What You Learn is What You See: Using Eye Movements to Study Infant Cross-Situational Word Learning

    PubMed Central

    Smith, Linda

    2016-01-01

    Recent studies show that both adults and young children possess powerful statistical learning capabilities to solve the word-to-world mapping problem. However, the underlying mechanisms that make statistical learning possible and powerful are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old infants in statistical learning tasks. Various measures, such as looking duration and shift rate (the number of shifts in gaze from one visual object to the other) trial by trial, are applied to these fine-grained temporal data, revealing different eye movement patterns between strong and weak statistical learners. Moreover, an information-theoretic measure is developed and applied to gaze data to quantify the degree of learning uncertainty trial by trial. Next, a simple associative statistical learning model is applied to the eye movement data, and these simulation results are compared with empirical results from young children, showing strong correlations between the two. This suggests that an associative learning mechanism with selective attention can provide a cognitively plausible model of cross-situational statistical learning. The work represents a first step toward using eye movement data to infer underlying real-time processes in statistical word learning. PMID:22213894
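    An information-theoretic uncertainty measure of the kind described can be computed as the Shannon entropy of the looking-time distribution across the candidate referents on a trial. This is a generic sketch of the idea; the paper's exact measure may differ:

```python
from math import log2

def gaze_entropy(looking_times):
    """Shannon entropy (bits) of the distribution of looking time
    across the objects on a trial. Higher values mean the learner's
    gaze is split more evenly, i.e. greater referential uncertainty."""
    total = sum(looking_times)
    probs = [t / total for t in looking_times if t > 0]
    return -sum(p * log2(p) for p in probs)

# Early trial: looks split evenly between two objects (maximal uncertainty)
print(gaze_entropy([500, 500]))  # → 1.0
# Later trial: looks concentrated on one object (uncertainty reduced)
print(gaze_entropy([900, 100]))  # ≈ 0.469
```

    Tracking this quantity trial by trial gives a single number whose decline indicates that the learner is converging on a word-object mapping.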

  14. Relations between 18-month-olds' gaze pattern and target action performance: a deferred imitation study with eye tracking.

    PubMed

    Óturai, Gabriella; Kolling, Thorsten; Knopf, Monika

    2013-12-01

    Deferred imitation studies are used to assess infants' declarative memory performance. These studies have found that deferred imitation performance improves with age, which is usually attributed to advancing memory capabilities. Imitation studies, however, are also used to assess infants' action understanding. In this second research program it has been observed that infants around the age of one year imitate selectively, i.e., they imitate certain kinds of target actions and omit others. In contrast to this, two-year-olds usually imitate the model's exact actions. 18-month-olds imitate more exactly than one-year-olds, but more selectively than two-year-olds, a fact which makes this age group especially interesting, since the processes underlying selective vs. exact imitation are widely debated. The question of whether, for example, selective attention to certain kinds of target actions accounts for preferential imitation of these actions in young infants is still open. Additionally, relations between memory capabilities and selective imitation processes, as well as their role in shaping 18-month-olds' neither completely selective nor completely exact imitation, have not been thoroughly investigated yet. The present study therefore assessed 18-month-olds' gaze toward two types of actions (functional vs. arbitrary target actions) and toward the model's face during target action demonstration, as well as infants' deferred imitation performance. Although infants' fixation times for functional target actions were not longer than for arbitrary target actions, they imitated the functional target actions more frequently than the arbitrary ones. This suggests that selective imitation does not rely on selective gaze toward functional target actions during the demonstration phase. In addition, a post hoc analysis of interindividual differences suggested that infants' attention to the model's social-communicative cues might play an important role in exact imitation, that is, the imitation of both functional and arbitrary target actions. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Gaze leading is associated with liking.

    PubMed

    Grynszpan, Ouriel; Martin, Jean-Claude; Fossati, Philippe

    2017-02-01

    Gaze plays a pivotal role in human communication, especially for coordinating attention. The ability to guide the gaze orientation of others forms the backbone of joint attention. Recent research has raised the possibility that gaze following behaviors could induce liking. The present study seeks to investigate this hypothesis. We designed two physically different human avatars that could follow the gaze of users via eye-tracking technology. In a preliminary experiment, 20 participants assessed the baseline appeal of the two avatars and confirmed that the avatars differed in this respect. In the main experiment, we compared how 19 participants rated the two avatars in terms of pleasantness, trustworthiness and closeness when the avatars were following their gaze versus when the avatar generated gaze movements autonomously. Although the same avatar as in the preliminary experiment was rated more favorably, the pleasantness attributed to the two avatars increased when they followed the gaze of the participants. This outcome provides evidence that gaze following fosters liking independently of the baseline appeal of the individual. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Dissociation of eye and head components of gaze shifts by stimulation of the omnipause neuron region.

    PubMed

    Gandhi, Neeraj J; Sparks, David L

    2007-07-01

    Natural movements often include actions integrated across multiple effectors. Coordinated eye-head movements are driven by a command to shift the line of sight by a desired displacement vector. Yet because extraocular and neck motoneurons are separate entities, the gaze shift command must be separated into independent signals for eye and head movement control. We report that this separation occurs, at least partially, at or before the level of pontine omnipause neurons (OPNs). Stimulation of the OPNs prior to and during gaze shifts temporally decoupled the eye and head components by inhibiting gaze and eye saccades. In contrast, head movements were consistently initiated before gaze onset, and ongoing head movements continued along their trajectories, albeit with some characteristic modulations. After stimulation offset, a gaze shift composed of an eye saccade and a reaccelerated head movement was produced to preserve gaze accuracy. We conclude that signals subject to OPN inhibition produce the eye-movement component of a coordinated eye-head gaze shift and are not the only signals involved in the generation of the head component of the gaze shift.

  17. Perception of direct vs. averted gaze in portrait paintings: An fMRI and eye-tracking study.

    PubMed

    Kesner, Ladislav; Grygarová, Dominika; Fajnerová, Iveta; Lukavský, Jiří; Nekovářová, Tereza; Tintěra, Jaroslav; Zaytseva, Yuliya; Horáček, Jiří

    2018-06-15

    In this study, we use separate eye-tracking measurements and functional magnetic resonance imaging to investigate the neuronal and behavioral response to painted portraits with direct versus averted gaze. We further explored modulatory effects of several painting characteristics (premodern vs modern period, influence of style and pictorial context). In the fMRI experiment, we show that direct versus averted gaze elicited increased activation in the lingual and inferior occipital gyri and the fusiform face area, as well as in several areas involved in attentional and social cognitive processes, especially the theory of mind: angular gyrus/temporo-parietal junction, inferior frontal gyrus and dorsolateral prefrontal cortex. The additional eye-tracking experiment showed that participants spent more time viewing the portrait's eyes and mouth when the portrait's gaze was directed towards the observer. These results suggest that static and, in some cases, highly stylized depictions of human beings in artistic portraits elicit brain activation commensurate with the experience of being observed by a watchful intelligent being. They thus involve observers in implicit inferences of the painted subject's mental states and emotions. We further confirm the substantial influence of representational medium on brain activity. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Attention Orienting by Gaze and Facial Expressions Across Development

    PubMed Central

    Neath, Karly; Nilsen, Elizabeth S.; Gittsovich, Katarzyna; Itier, Roxane J.

    2014-01-01

    Processing of facial expressions has been shown to potentiate orienting of attention toward the direction signaled by gaze in adults, an important social–cognitive function. However, little is known about how this social attention skill develops. This study is the first to examine the developmental trajectory of the gaze orienting effect (GOE), its modulations by facial expressions, and its links with theory of mind (ToM) abilities. Dynamic emotional stimuli were presented to 222 participants (7–25 years old) with normal trait anxiety using a gaze-cuing paradigm. The GOE was found as early as 7 years of age and decreased linearly until 12–13 years, at which point adult levels were reached. Both fearful and surprised expressions enhanced the GOE compared with neutral expressions. The GOE for fearful faces was also larger than for joyful and angry expressions. These effects did not interact with age and were not driven by intertrial variance. Importantly, the GOE did not correlate with ToM abilities as assessed by the “Reading the Mind in the Eyes” test. The implication of these findings for clinical and typically developing populations is discussed. PMID:23356559

  19. Enabling Disabled Persons to Gain Access to Digital Media

    NASA Technical Reports Server (NTRS)

    Beach, Glenn; O'Grady, Ryan

    2011-01-01

    A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.

  20. Modification of Eccentric Gaze-Holding

    NASA Technical Reports Server (NTRS)

    Reschke, M. F.; Paloski, W. H.; Somers, J. T.; Leigh, R. J.; Wood, S. J.; Kornilova, L.

    2006-01-01

    Clear vision and accurate localization of objects in the environment are prerequisites for reliable performance of motor tasks. Space flight confronts the crewmember with a stimulus rearrangement that requires adaptation to function effectively with the new requirements of altered spatial orientation and motor coordination. Adaptation and motor learning driven by the effects of cerebellar disorders may share some of the same demands that face our astronauts. One measure of spatial localization shared by the astronauts and those suffering from cerebellar disorders that is easily quantified, and for which a neurobiological substrate has been identified, is the control of the angle of gaze (the "line of sight"). The disturbances of gaze control that have been documented to occur in astronauts and cosmonauts, both in-flight and postflight, can be directly related to changes in the extrinsic gravitational environment and intrinsic proprioceptive mechanisms, thus lending themselves to description by simple non-linear statistical models. Because of the necessity of developing robust normal response populations and normative populations against which abnormal responses can be evaluated, the basic models can be formulated using normal, non-astronaut test subjects and subsequently extended using centrifugation techniques to alter the gravitational and proprioceptive environment of these subjects. Further tests and extensions of the models can be made by studying abnormalities of gaze control in patients with cerebellar disease. A series of investigations were conducted in which a total of 62 subjects were tested to: (1) Define eccentric gaze-holding parameters in a normative population, and (2) explore the effects of linear acceleration on gaze-holding parameters. For these studies gaze-holding was evaluated with the subjects seated upright (the normative values), rolled 45 degrees to both the left and right, or pitched back 30 and 90 degrees. 
In a separate study the further effects of acceleration on gaze stability were examined during centrifugation (+2 Gx and +2 Gz) using a total of 23 subjects. In all of our investigations eccentric gaze-holding was established by having the subjects acquire an eccentric target (+/-30 degrees horizontal, +/-15 degrees vertical) that was flashed for 750 msec in an otherwise dark room. Subjects were instructed to hold gaze on the remembered position of the flashed target for 20 sec. Immediately following the 20 sec period, subjects were cued to return to the remembered center position and to hold gaze there for an additional 20 sec. Following this 20 sec period the center target was briefly flashed and the subject made any corrective eye movement back to the true center position. Conventionally, the ability to hold eccentric gaze is estimated by fitting the natural log of centripetal eye drifts by linear regression and calculating the time constant (tau_c) of these slow phases of "gaze-evoked nystagmus". However, because our normative subjects sometimes showed essentially no drift (tau_c = infinity), statistical estimation and inference on the effect of target direction was performed on values of the decay constant theta = 1/tau_c, which we found was well modeled by a gamma distribution. Subjects showed substantial variance of their eye drifts, which were centrifugal in approximately 20% of cases, and in more than 40% for down gaze. Using the ensuing estimated gamma distributions, we were able to conclude that rightward and leftward gaze holding were not significantly different, but that upward gaze holding was significantly worse than downward (p<0.05). We also concluded that vertical gaze holding was significantly worse than horizontal (p<0.05). In the case of left and right roll, we found that both produced a similar improvement in horizontal gaze holding (p<0.05), but did not have a significant effect on vertical gaze holding. 
For pitch tilts, both tilt angles significantly decreased gaze-holding ability in all directions (p<0.05). Finally, we found that hyper-g centrifugation significantly decreased gaze-holding ability in the vertical plane. The main findings of this study are as follows: (1) vertical gaze-holding is less stable than horizontal, (2) gaze-holding to upward targets is less stable than to downward targets, (3) tilt affects gaze holding, and (4) hyper-g affects gaze holding. This difference between horizontal and vertical gaze-holding may be ascribed to separate components of the velocity-to-position neural integrator for eye movements, and to differences in orbital mechanics. The differences between upward and downward gaze-holding may be ascribed to an inherent vertical imbalance in the vestibular system. Because whole body tilt and hyper-g affect gaze-holding, it is implied that the otolith organs have direct connections to the neural integrator, and further studies of astronaut gaze-holding are warranted. Our statistical method for representing the range of normal eccentric gaze stability can be readily applied to normal subjects who may be exposed to environments that may modify the central integrator and require monitoring, and to evaluate patients with gaze-evoked nystagmus by comparison with the above established normative criteria.
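    The drift-fitting procedure described in this record can be sketched as follows: log-transform the centripetal drift, regress linearly against time, and report the decay constant theta = 1/tau_c so that zero-drift trials stay finite. This is a minimal illustration of that idea; the simulated positions and the tau_c value are illustrative, not study data:

```python
import math

def decay_constant(times, positions):
    """Estimate theta = 1/tau_c for centripetal eye drift by linear
    regression on the natural log of eccentric eye position; theta
    near zero corresponds to stable gaze holding (tau_c -> infinity)."""
    logs = [math.log(p) for p in positions]
    n = len(times)
    mt, ml = sum(times) / n, sum(logs) / n
    slope = (sum((t - mt) * (l - ml) for t, l in zip(times, logs))
             / sum((t - mt) ** 2 for t in times))
    return -slope  # log e(t) = log e(0) - t / tau_c, so slope = -theta

# Simulated slow-phase drift from 30 deg eccentricity with tau_c = 25 s.
times = [i * 0.5 for i in range(40)]
positions = [30.0 * math.exp(-t / 25.0) for t in times]
theta = decay_constant(times, positions)  # recovers 1/25 = 0.04 per second
```

    Working with theta rather than tau_c is what makes the gamma-distribution modeling in the abstract possible, since no-drift trials map to theta near zero instead of an infinite time constant.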

  1. Evaluation of a gaze-controlled vision enhancement system for reading in visually impaired people

    PubMed Central

    Aguilar, Carlos; Castet, Eric

    2017-01-01

    People with low vision, especially those with Central Field Loss (CFL), need magnification to read. The flexibility of Electronic Vision Enhancement Systems (EVES) offers several ways of magnifying text. Due to the restricted field of view of EVES, the need for magnification is conflicting with the need to navigate through text (panning). We have developed and implemented a real-time gaze-controlled system whose goal is to optimize the possibility of magnifying a portion of text while maintaining global viewing of the other portions of the text (condition 1). Two other conditions were implemented that mimicked commercially available advanced systems known as CCTV (closed-circuit television systems)—conditions 2 and 3. In these two conditions, magnification was uniformly applied to the whole text without any possibility to specifically select a region of interest. The three conditions were implemented on the same computer to remove differences that might have been induced by dissimilar equipment. A gaze-contingent artificial 10° scotoma (a mask continuously displayed in real time on the screen at the gaze location) was used in the three conditions in order to simulate macular degeneration. Ten healthy subjects with a gaze-contingent scotoma read aloud sentences from a French newspaper in nine experimental one-hour sessions. Reading speed was measured and constituted the main dependent variable to compare the three conditions. All subjects were able to use condition 1 and they found it slightly more comfortable to use than condition 2 (and similar to condition 3). Importantly, reading speed results did not show any significant difference between the three systems. In addition, learning curves were similar in the three conditions. This proof of concept study suggests that the principles underlying the gaze-controlled enhanced system might be further developed and fruitfully incorporated in different kinds of EVES for low vision reading. PMID:28380004

  2. Where We Look When We Drive with or without Active Steering Wheel Control

    PubMed Central

    Mars, Franck; Navarro, Jordan

    2012-01-01

    Current theories on the role of visuomotor coordination in driving agree that active sampling of the road by the driver informs the arm-motor system in charge of performing actions on the steering wheel. Still under debate, however, is the nature of visual cues and gaze strategies used by drivers. In particular, the tangent point hypothesis, which states that drivers look at a specific point on the inside edge line, has recently become the object of controversy. An alternative hypothesis proposes that drivers orient gaze toward the desired future path, which happens to be often situated in the vicinity of the tangent point. The present study contributed to this debate through the analyses of the distribution of gaze orientation with respect to the tangent point. The results revealed that drivers sampled the roadway in the close vicinity of the tangent point rather than the tangent point proper. This supports the idea that drivers look at the boundary of a safe trajectory envelope near the inside edge line. Furthermore, the study investigated for the first time the reciprocal influence of manual control on gaze control in the context of driving. This was achieved through the comparison of gaze behavior when drivers actively steered the vehicle or when steering was performed by an automatic controller. The results showed an increase in look-ahead fixations in the direction of the bend exit and a small but consistent reduction in the time spent looking in the area of the tangent point when steering was passive. This may be the consequence of a change in the balance between cognitive and sensorimotor anticipatory gaze strategies. It might also reflect bidirectional coordination control between the eye and arm-motor systems, which goes beyond the common assumption that the eyes lead the hands when driving. PMID:22928043

  3. How children with specific language impairment view social situations: an eye tracking study.

    PubMed

    Hosozawa, Mariko; Tanaka, Kyoko; Shimizu, Toshiaki; Nakano, Tamami; Kitazawa, Shigeru

    2012-06-01

    Children with specific language impairment (SLI) face risks for social difficulties. However, the nature and developmental course of these difficulties remain unclear. Gaze behaviors have been studied by using eye tracking among those with autism spectrum disorders (ASDs). Using this method, we compared the gaze behaviors of children with SLI with those of individuals with ASD and typically developing (TD) children to explore the social perception of children with SLI. The eye gazes of 66 children (16 with SLI, 25 with ASD, and 25 TD) were studied while viewing videos of social interactions. Gaze behaviors were summarized with multidimensional scaling, and participants with similar gaze behaviors were represented proximally in a 2-dimensional plane. The SLI and TD groups each formed a cluster near the center of the multidimensional scaling plane, whereas the ASD group was distributed around the periphery. Frame-by-frame analyses showed that children with SLI and TD children viewed faces in a manner consistent with the story line, but children with ASD devoted less attention to faces and social interactions. During speech scenes, children with SLI were significantly more fixated on the mouth, whereas TD children viewed the eyes and the mouth. Children with SLI viewed social situations in ways similar to those of TD children but different from those of children with ASD. However, children with SLI concentrated on the speaker's mouth, possibly to compensate for audiovisual processing deficits. Because eyes carry important information, this difference may influence the social development of children with SLI.

  4. The Development of Joint Visual Attention: A Longitudinal Study of Gaze following during Interactions with Mothers and Strangers

    ERIC Educational Resources Information Center

    Gredeback, Gustaf; Fikke, Linn; Melinder, Annika

    2010-01-01

    Two- to 8-month-old infants interacted with their mother or a stranger in a prospective longitudinal gaze following study. Gaze following, as assessed by eye tracking, emerged between 2 and 4 months and stabilized between 6 and 8 months of age. Overall, infants followed the gaze of a stranger more than they followed the gaze of their mothers,…

  5. See You See Me: the Role of Eye Contact in Multimodal Human-Robot Interaction

    PubMed Central

    Xu, Tian (Linger); Zhang, Hui; Yu, Chen

    2016-01-01

    We focus on a fundamental looking behavior in human-robot interactions – gazing at each other’s face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user’s face as a response to the human’s gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot’s gaze toward the human partner’s face in real time and then analyzed the human’s gaze behavior as a response to the robot’s gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot’s face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained. PMID:28966875

  6. Why we interact: on the functional role of the striatum in the subjective experience of social interaction.

    PubMed

    Pfeiffer, Ulrich J; Schilbach, Leonhard; Timmermans, Bert; Kuzmanovic, Bojana; Georgescu, Alexandra L; Bente, Gary; Vogeley, Kai

    2014-11-01

    There is ample evidence that human primates strive for social contact and experience interactions with conspecifics as intrinsically rewarding. Focusing on gaze behavior as a crucial means of human interaction, this study employed a unique combination of neuroimaging, eye-tracking, and computer-animated virtual agents to assess the neural mechanisms underlying this component of behavior. In the interaction task, participants believed that during each interaction the agent's gaze behavior could either be controlled by another participant or by a computer program. Their task was to indicate whether they experienced a given interaction as an interaction with another human participant or the computer program based on the agent's reaction. Unbeknownst to them, the agent was always controlled by a computer to enable a systematic manipulation of gaze reactions by varying the degree to which the agent engaged in joint attention. This allowed creating a tool to distinguish neural activity underlying the subjective experience of being engaged in social and non-social interaction. In contrast to previous research, this allows measuring neural activity while participants experience active engagement in real-time social interactions. Results demonstrate that gaze-based interactions with a perceived human partner are associated with activity in the ventral striatum, a core component of reward-related neurocircuitry. In contrast, interactions with a computer-driven agent activate attention networks. Comparisons of neural activity during interaction with behaviorally naïve and explicitly cooperative partners demonstrate different temporal dynamics of the reward system and indicate that the mere experience of engagement in social interaction is sufficient to recruit this system. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  8. On the use of hidden Markov models for gaze pattern modeling

    NASA Astrophysics Data System (ADS)

    Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph

    2016-05-01

    Some of the conventional metrics derived from gaze patterns (on computer screens) to study visual attention, engagement and fatigue are saccade counts, nearest neighbor index (NNI) and duration of dwells/fixations. Each of these metrics has drawbacks in modeling the behavior of gaze patterns; one such drawback comes from the fact that some portions of the screen are not as important as others. This is addressed by computing the eye gaze metrics corresponding to important areas of interest (AOI) on the screen. There are some challenges in developing accurate AOI-based metrics: firstly, the definition of AOI is always fuzzy; secondly, it is possible that the AOI may change adaptively over time. Hence, there is a need to introduce eye-gaze metrics that are aware of the AOI in the field of view; at the same time, the new metrics should be able to automatically select the AOI based on the nature of the gazes. In this paper, we propose a novel way of computing NNI based on continuous hidden Markov models (HMM) that model the gazes as 2D Gaussian observations (x-y coordinates of the gaze) with the mean at the center of the AOI and covariance that is related to the concentration of gazes. The proposed modeling allows us to accurately compute the NNI metric in the presence of multiple, undefined AOI on the screen and of intermittent casual gazing that is modeled as random gazes on the screen.
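    The classic nearest neighbor index that the proposed HMM formulation generalizes can be computed directly from gaze coordinates. A minimal sketch of the plain NNI over the whole screen (the HMM-based AOI estimation from the paper is omitted; the screen dimensions and gaze points are illustrative):

```python
import math

def nearest_neighbor_index(points, width, height):
    """Nearest neighbor index (NNI) for gaze points on a screen:
    mean observed nearest-neighbor distance divided by the distance
    expected for a uniformly random pattern, 0.5 * sqrt(A / n).
    NNI < 1 indicates clustering (e.g. gazes concentrated on an AOI),
    NNI near 1 indicates spatial randomness."""
    n = len(points)
    mean_nn = sum(
        min(math.dist(p, q) for q in points if q is not p)
        for p in points
    ) / n
    expected = 0.5 * math.sqrt(width * height / n)
    return mean_nn / expected

# A tight cluster of gazes on a large screen yields a small NNI
# (here mean_nn = 1 and expected = 25, so NNI = 0.04).
cluster = [(1.0, 1.0), (1.0, 2.0), (2.0, 1.0), (2.0, 2.0)]
nni = nearest_neighbor_index(cluster, 100.0, 100.0)
```

    The drawback the abstract notes is visible here: this NNI treats all screen regions equally, which is the motivation for replacing fixed regions with Gaussian observation densities learned by the HMM.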

  9. Specificity of Age-Related Differences in Eye-Gaze Following: Evidence From Social and Nonsocial Stimuli.

    PubMed

    Slessor, Gillian; Venturini, Cristina; Bonny, Emily J; Insch, Pauline M; Rokaszewicz, Anna; Finnerty, Ailbhe N

    2016-01-01

    Eye-gaze following is a fundamental social skill, facilitating communication. The present series of studies explored adult age-related differences in this key social-cognitive ability. In Study 1 younger and older adult participants completed a cueing task in which eye-gaze cues were predictive or non-predictive of target location. Another eye-gaze cueing task, assessing the influence of congruent and incongruent eye-gaze cues relative to trials which provided no cue to target location, was administered in Study 2. Finally, in Study 3 the eye-gaze cue was replaced by an arrow. In Study 1 older adults showed less evidence of gaze following than younger participants when required to strategically follow predictive eye-gaze cues and when making automatic shifts of attention to non-predictive eye-gaze cues. Findings from Study 2 suggested that, unlike younger adults, older participants showed no facilitation effect and thus did not follow congruent eye-gaze cues. They also had significantly weaker attentional costs than their younger counterparts. These age-related differences were not found in the non-social arrow cueing task. Taken together these findings suggest older adults do not use eye-gaze cues to engage in joint attention, and have specific social difficulties decoding critical information from the eye region. © The Author 2014. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  10. SOCIAL AND NON-SOCIAL CUEING OF VISUOSPATIAL ATTENTION IN AUTISM AND TYPICAL DEVELOPMENT

    PubMed Central

    Pruett, John R.; LaMacchia, Angela; Hoertel, Sarah; Squire, Emma; McVey, Kelly; Todd, Richard D.; Constantino, John N.; Petersen, Steven E.

    2013-01-01

    Three experiments explored attention to eye gaze, which is incompletely understood in typical development and is hypothesized to be disrupted in autism. Experiment 1 (n=26 typical adults) involved covert orienting to box, arrow, and gaze cues at two probabilities and cue-target times to test whether reorienting for gaze is endogenous, exogenous, or unique; experiment 2 (total n=80: male and female children and adults) studied age and sex effects on gaze cueing. Gaze cueing appears endogenous and may strengthen in typical development. Experiment 3 tested exogenous, endogenous, and/or gaze-based orienting in 25 typical and 27 Autistic Spectrum Disorder (ASD) children. ASD children made more saccades, slowing their reaction times; however, exogenous and endogenous orienting, including gaze cueing, appear intact in ASD. PMID:20809377

  11. Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts

    PubMed Central

    Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John

    2012-01-01

    How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883

  12. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters

    PubMed Central

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother’s and infant’s nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers’ and infants’ gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother’s depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother’s emotion dysregulation. PMID:26657941

  13. Gaze Synchrony between Mothers with Mood Disorders and Their Infants: Maternal Emotion Dysregulation Matters.

    PubMed

    Lotzin, Annett; Romer, Georg; Schiborr, Julia; Noga, Berit; Schulte-Markwort, Michael; Ramsauer, Brigitte

    2015-01-01

    A lowered and heightened synchrony between the mother's and infant's nonverbal behavior predicts adverse infant development. We know that maternal depressive symptoms predict lowered and heightened mother-infant gaze synchrony, but it is unclear whether maternal emotion dysregulation is related to mother-infant gaze synchrony. This cross-sectional study examined whether maternal emotion dysregulation in mothers with mood disorders is significantly related to mother-infant gaze synchrony. We also tested whether maternal emotion dysregulation is relatively more important than maternal depressive symptoms in predicting mother-infant gaze synchrony, and whether maternal emotion dysregulation mediates the relation between maternal depressive symptoms and mother-infant gaze synchrony. We observed 68 mothers and their 4- to 9-month-old infants in the Still-Face paradigm during two play interactions, before and after social stress was induced. The mothers' and infants' gaze behaviors were coded using microanalysis with the Maternal Regulatory Scoring System and Infant Regulatory Scoring System, respectively. The degree of mother-infant gaze synchrony was computed using time-series analysis. Maternal emotion dysregulation was measured by the Difficulties in Emotion Regulation Scale; depressive symptoms were assessed using the Beck Depression Inventory. Greater maternal emotion dysregulation was significantly related to heightened mother-infant gaze synchrony. The overall effect of maternal emotion dysregulation on mother-infant gaze synchrony was relatively more important than the effect of maternal depressive symptoms in the five tested models. Maternal emotion dysregulation fully mediated the relation between maternal depressive symptoms and mother-infant gaze synchrony. Our findings suggest that the effect of the mother's depressive symptoms on the mother-infant gaze synchrony may be mediated by the mother's emotion dysregulation.

  14. Surface coverage with single vs. multiple gaze surface topography to fit scleral lenses.

    PubMed

    DeNaeyer, Gregory; Sanders, Donald R; Farajian, Timothy S

    2017-06-01

    To determine surface coverage of measurements using the sMap3D® corneo-scleral topographer in patients presenting for scleral lens fitting. Twenty-five eyes of 23 scleral lens patients were examined. Up-gaze, straight-gaze, and down-gaze positions of each eye were "stitched" into a single map. The percentage surface coverage between 10mm and 20mm diameter circles from corneal center was compared between the straight-gaze and stitched images. Scleral toricity magnitude was calculated at 100% coverage and at the same diameter after 50% of the data was removed. At a 10mm diameter from corneal center, the straight-gaze and stitched images both had 100% coverage. At the 14, 15, 16, 18 and 20mm diameters, the straight-gaze image only covered 68%, 53%, 39%, 18%, and 6% of the ocular surface diameters while the stitched image covered 98%, 96%, 93%, 75%, and 32% respectively. In the case showing the most scleral coverage at 16mm (straight-gaze), there was only 75% coverage (straight-gaze) compared to 100% (stitched image); the case with the least coverage had 7% (straight gaze) and 92% (stitched image). The 95% limits of agreement between the 50% and 100% coverage scleral toricity was between -1.4D (50% coverage value larger) and 1.2D (100% coverage larger), a 2.6D spread. The absolute difference between 50% to 100% coverage scleral toricity was ≥0.50D in 28% and ≥1.0D in 16% of cases. It appears that a single straight-gaze image would introduce significant measurement inaccuracy in fitting scleral lenses using the sMap3D while a 3-gaze stitched image would not. Copyright © 2017 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
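    The 95% limits of agreement quoted in this record (between -1.4D and 1.2D, a 2.6D spread) follow the standard Bland-Altman form: mean difference +/- 1.96 * SD of the paired differences. A minimal sketch with illustrative toricity readings, not the study's measurements:

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement for paired measurements:
    mean difference +/- 1.96 * sample SD of the differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

# Illustrative paired toricity readings in diopters (hypothetical data):
# 50%-coverage values vs 100%-coverage values for the same eyes.
lo, hi = limits_of_agreement([10.1, 9.9, 10.1, 9.9],
                             [10.0, 10.0, 10.0, 10.0])
```

    A wide interval between lo and hi, as in the reported 2.6D spread, indicates that the two measurement conditions cannot be used interchangeably for lens fitting.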

  15. Kinematics and eye-head coordination of gaze shifts evoked from different sites in the superior colliculus of the cat.

    PubMed

    Guillaume, Alain; Pélisson, Denis

    2006-12-15

    Shifting gaze requires precise coordination of eye and head movements. It is clear that the superior colliculus (SC) is involved with saccadic gaze shifts. Here we investigate its role in controlling both eye and head movements during gaze shifts. Gaze shifts of the same amplitude can be evoked from different SC sites by controlled electrical microstimulation. To describe how the SC coordinates the eye and the head, we compare the characteristics of these amplitude-matched gaze shifts evoked from different SC sites. We show that matched amplitude gaze shifts elicited from progressively more caudal sites are progressively slower and associated with a greater head contribution. Stimulation at more caudal SC sites decreased the peak velocity of the eye but not of the head, suggesting that the lower peak gaze velocity for the caudal sites is due to the increased contribution of the slower-moving head. Eye-head coordination across the SC motor map is also indicated by the relative latencies of the eye and head movements. For some amplitudes of gaze shift, rostral stimulation evoked eye movement before head movement, whereas this reversed with caudal stimulation, which caused the head to move before the eyes. These results show that gaze shifts of similar amplitude evoked from different SC sites are produced with different kinematics and coordination of eye and head movements. In other words, gaze shifts evoked from different SC sites follow different amplitude-velocity curves, with different eye-head contributions. These findings shed light on mechanisms used by the central nervous system to translate a high-level motor representation (a desired gaze displacement on the SC map) into motor commands appropriate for the involved body segments (the eye and the head).

  16. Distribution of light in the human retina under natural viewing conditions

    NASA Astrophysics Data System (ADS)

    Gibert, Jorge C.

    Age-related macular degeneration (AMD) is the leading cause of blindness in America. The fact that AMD wreaks most of the damage in the center of the retina raises the question of whether light, integrated over long periods, is more concentrated in the macula. A method, based on eye-tracking, was developed to measure the distribution of light in the retina under natural viewing conditions. The hypothesis was that integrated over time, retinal illumination peaked in the macula. Additionally, a possible relationship between age and retinal illumination was investigated. The eye tracker superimposed the subject's gaze position on a video recorded by a scene camera. Five informed subjects were employed in feasibility tests, and 58 naive subjects participated in 5 phases. In phase 1 the subjects viewed a gray-scale image. In phase 2, they observed a sequence of photographic images. In phase 3 they viewed a video. In phase 4, they worked on a computer; in phase 5, the subjects walked around freely. The informed subjects were instructed to gaze at bright objects in the field of view and then at dark objects. Naive subjects were allowed to gaze freely for all phases. Using the subject's gaze coordinates, and the video provided by the scene camera, the cumulative light distribution on the retina was calculated for ~15° around the fovea. As expected for control subjects, cumulative retinal light distributions peaked and dipped in the fovea when they gazed at bright or dark objects respectively. The light distribution maps obtained from the naive subjects presented a tendency to peak in the macula for phases 1, 2, and 3, a consistent tendency in phase 4 and a variable tendency in phase 5. The feasibility of using an eye-tracker system to measure the distribution of light in the retina was demonstrated, thus helping to understand the role played by light exposure in the etiology of AMD.
Results showed that a tendency for light to peak in the macula is a characteristic of some individuals and of certain tasks. In these situations, risk of AMD could be increased. No significant difference was observed based on age.

  17. Timing of gazes in child dialogues: a time-course analysis of requests and back channelling in referential communication.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2012-01-01

    This study investigates gaze behaviour in child dialogues. In earlier studies the authors have investigated the use of requests for clarification and responses in order to study the co-creation of understanding in a referential communication task. By adding eye tracking, this line of research is now expanded to include non-verbal contributions in conversation. The aims were to investigate the timing of gazes in face-to-face interaction and to relate the gaze behaviour to the use of requests for clarification. Eight conversational pairs of typically developing 10- to 15-year-olds participated. The pairs (director and executor) performed a referential communication task requiring the description of faces. During the dialogues both participants wore head-mounted eye trackers. All gazes were recorded and categorized according to the area fixated (Task, Face, Off). The verbal context for all instances of gaze at the partner's face was identified and categorized using time-course analysis. The results showed that the executor spends almost 90% of the time fixating the gaze on the task, 10% on the director's face and less than 0.5% elsewhere. Turn shifts, primarily requests for clarification, and back channelling significantly predicted the executors' gaze to the face of the task director. The distribution of types of requests showed that requests for previously unmentioned information were significantly more likely to be associated with gaze at the director. The study shows that the executors' gaze at the director accompanies important dynamic shifts in the dialogue. The association with requests for clarification indicates that gaze at the director can be used to monitor the response with two modalities. Furthermore, the significantly higher association with requests for previously unmentioned information indicates that gaze may be used to emphasize the verbal content. 
The results will be used as a reference for studies of gaze behaviour in clinical populations with hearing and language impairments. © 2012 Royal College of Speech and Language Therapists.

  18. Robot Faces that Follow Gaze Facilitate Attentional Engagement and Increase Their Likeability

    PubMed Central

    Willemse, Cesco; Marchesi, Serena; Wykowska, Agnieszka

    2018-01-01

    Gaze behavior of humanoid robots is an efficient mechanism for cueing our spatial orienting, but less is known about the cognitive–affective consequences of robots responding to human directional cues. Here, we examined how the extent to which a humanoid robot (iCub) avatar directed its gaze to the same objects as our participants affected engagement with the robot, subsequent gaze-cueing, and subjective ratings of the robot’s characteristic traits. In a gaze-contingent eyetracking task, participants were asked to indicate a preference for one of two objects with their gaze while an iCub avatar was presented between the object photographs. In one condition, the iCub then shifted its gaze toward the object chosen by a participant in 80% of the trials (joint condition) and in the other condition it looked at the opposite object 80% of the time (disjoint condition). Based on the literature in human–human social cognition, we took the speed with which the participants looked back at the robot as a measure of facilitated reorienting and robot-preference, and found these return saccade onset times to be quicker in the joint condition than in the disjoint condition. As indicated by results from a subsequent gaze-cueing task, the gaze-following behavior of the robot had little effect on how our participants responded to gaze cues. Nevertheless, subjective reports suggested that our participants preferred the iCub that followed their gaze to the one with disjoint attention behavior, rated it as more human-like and as more likeable. Taken together, our findings show a preference for robots that follow our gaze. Importantly, such subtle differences in gaze behavior are sufficient to influence our perception of humanoid agents, which clearly provides hints about the design of behavioral characteristics of humanoid robots in more naturalistic settings. PMID:29459842

  19. Interactions of Neonates and Infants with Prenatal Cocaine Exposure.

    ERIC Educational Resources Information Center

    Sparks, Shirley N.; Gushurst, Colette

    1995-01-01

    The effect of prenatal cocaine exposure on gaze of neonates and recovery of gaze of 2-month-old infants (n=11) was studied. Compared to nonexposed controls, cocaine-exposed neonates had shorter gaze, and 2-month-old exposed infants had longer gaze. (Author/SW)

  20. The Malleability of Age-Related Positive Gaze Preferences: Training to Change Gaze and Mood

    PubMed Central

    Isaacowitz, Derek M.; Choi, YoonSun

    2010-01-01

    Older adults show positive gaze preferences, but to what extent are these preferences malleable? Examining the plasticity of age-related gaze preferences may provide a window into their origins. We therefore designed an attentional training procedure to assess the degree to which we could shift gaze and gaze-related mood in both younger and older adults. Participants completed either a positive or negative dot-probe training. Before and after the attentional training, we obtained measures of fixations to negatively-valenced images along with concurrent mood ratings. We found differential malleability of gaze and mood by age: for young adults, negative training resulted in fewer post-training fixations to the most negative areas of the images, whereas positive training appeared more successful in changing older adults’ fixation patterns. Young adults did not differ in their moods as a function of training, whereas older adults in the train negative group had the worst moods after training. Implications for the etiology of age-related positive gaze preferences are considered. PMID:21401229

  1. Flexible Coordination of Stationary and Mobile Conversations with Gaze: Resource Allocation among Multiple Joint Activities

    PubMed Central

    Mayor, Eric; Bangerter, Adrian

    2016-01-01

    Gaze is instrumental in coordinating face-to-face social interactions. But little is known about gaze use when social interactions co-occur with other joint activities. We investigated the case of walking while talking. We assessed how gaze gets allocated among various targets in mobile conversations, whether allocation of gaze to other targets affects conversational coordination, and whether reduced availability of gaze for conversational coordination affects conversational performance and content. In an experimental study, pairs were videotaped in four conditions of mobility (standing still, talking while walking along a straight-line itinerary, talking while walking along a complex itinerary, or walking along a complex itinerary with no conversational task). Gaze to partners was substantially reduced in mobile conversations, but gaze was still used to coordinate conversation via displays of mutual orientation, and conversational performance and content were not different between stationary and mobile conditions. Results expand the phenomena of multitasking to joint activities. PMID:27822189

  2. Gaze-based assistive technology in daily activities in children with severe physical impairments-An intervention study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Ahlsten, Gunnar; Falkmer, Torbjörn; Hemmingsson, Helena

    2017-04-01

    To establish the impact of a gaze-based assistive technology (AT) intervention on activity repertoire, autonomous use, and goal attainment in children with severe physical impairments, and to examine parents' satisfaction with the gaze-based AT and with services related to the gaze-based AT intervention. Non-experimental multiple case study with before, after, and follow-up design. Ten children with severe physical impairments without speaking ability (aged 1-15 years) participated in gaze-based AT intervention for 9-10 months, during which period the gaze-based AT was implemented in daily activities. Repertoire of computer activities increased for seven children. All children had sustained usage of gaze-based AT in daily activities at follow-up, all had attained goals, and parents' satisfaction with the AT and with services was high. The gaze-based AT intervention was effective in guiding parents and teachers to continue supporting the children to perform activities with the AT after the intervention program.

  3. Audience gaze while appreciating a multipart musical performance.

    PubMed

    Kawase, Satoshi; Obata, Satoshi

    2016-11-01

    Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence of the dominance of musical part in auditory attention when listening to multipart music that contains different melody lines and the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions that play a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.

  4. Rebound upbeat nystagmus after lateral gaze in episodic ataxia type 2.

    PubMed

    Kim, Hyo-Jung; Kim, Ji-Soo; Choi, Jae-Hwan; Shin, Jin-Hong; Choi, Kwang-Dong; Zee, David S

    2014-06-01

    Rebound nystagmus is a transient nystagmus that occurs on resuming the straight-ahead position after prolonged eccentric gaze. Even though rebound nystagmus is commonly associated with gaze-evoked nystagmus (GEN), development of rebound nystagmus in a different plane of gaze has not been described. We report a patient with episodic ataxia type 2 who showed transient upbeat nystagmus on resuming the straight-ahead position after sustained lateral gaze that had induced GEN and downbeat nystagmus. The rebound upbeat nystagmus may be ascribed to a shifting null in the vertical plane as a result of an adaptation to the downbeat nystagmus that developed during lateral gaze.

  5. Development of Gaze Following Abilities in Wolves (Canis Lupus)

    PubMed Central

    Range, Friederike; Virányi, Zsófia

    2011-01-01

    The ability to coordinate with others' head and eye orientation to look in the same direction is considered a key step towards an understanding of others' mental states like attention and intention. Here, we investigated the ontogeny and habituation patterns of gaze following into distant space and behind barriers in nine hand-raised wolves. We found that these wolves could use conspecific as well as human gaze cues even in the barrier task, which is thought to be more cognitively advanced than gazing into distant space. Moreover, while gaze following into distant space was already present at the age of 14 weeks and subjects did not habituate to repeated cues, gazing around a barrier developed considerably later and animals quickly habituated, supporting the hypothesis that different cognitive mechanisms may underlie the two gaze following modalities. More importantly, this study demonstrated that following another individual's gaze around a barrier is not restricted to primates and corvids but is also present in canines, with remarkable between-group similarities in the ontogeny of this behaviour. This sheds new light on the evolutionary origins of and selective pressures on gaze following abilities as well as on the sensitivity of domestic dogs towards human communicative cues. PMID:21373192

  6. Role of Gaze Cues in Interpersonal Motor Coordination: Towards Higher Affiliation in Human-Robot Interaction.

    PubMed

    Khoramshahi, Mahdi; Shukla, Ashwini; Raffard, Stéphane; Bardy, Benoît G; Billard, Aude

    2016-01-01

    The ability to follow one another's gaze plays an important role in our social cognition; especially when we synchronously perform tasks together. We investigate how gaze cues can improve performance in a simple coordination task (i.e., the mirror game), whereby two players mirror each other's hand motions. In this game, each player is either a leader or follower. To study the effect of gaze in a systematic manner, the leader's role is played by a robotic avatar. We contrast two conditions, in which the avatar provides or not explicit gaze cues that indicate the next location of its hand. Specifically, we investigated (a) whether participants are able to exploit these gaze cues to improve their coordination, (b) how gaze cues affect action prediction and temporal coordination, and (c) whether introducing active gaze behavior for avatars makes them more realistic and human-like (from the user point of view). 43 subjects participated in 8 trials of the mirror game. Each subject performed the game in the two conditions (with and without gaze cues). In this within-subject study, the order of the conditions was randomized across participants, and subjective assessment of the avatar's realism was assessed by administering a post-hoc questionnaire. When gaze cues were provided, a quantitative assessment of synchrony between participants and the avatar revealed a significant improvement in subject reaction-time (RT). This confirms our hypothesis that gaze cues improve the follower's ability to predict the avatar's action. An analysis of the pattern of frequency across the two players' hand movements reveals that the gaze cues improve the overall temporal coordination across the two players. Finally, analysis of the subjective evaluations from the questionnaires reveals that, in the presence of gaze cues, participants found the avatar not only more human-like/realistic, but also easier to interact with. 
This work confirms that people can exploit gaze cues to predict another person's movements and to better coordinate their motions with their partners, even when the partner is a computer-animated avatar. Moreover, this study contributes further evidence that implementing biological features, here task-relevant gaze cues, enables the humanoid robotic avatar to appear more human-like, and thus increases the user's sense of affiliation.

  7. Simple gaze-contingent cues guide eye movements in a realistic driving simulator

    NASA Astrophysics Data System (ADS)

    Pomarjanschi, Laura; Dorr, Michael; Bex, Peter J.; Barth, Erhardt

    2013-03-01

    Looking at the right place at the right time is a critical component of driving skill. Therefore, gaze guidance has the potential to become a valuable driving assistance system. In previous work, we have already shown that complex gaze-contingent stimuli can guide attention and reduce the number of accidents in a simple driving simulator. We here set out to investigate whether cues that are simple enough to be implemented in a real car can also capture gaze during a more realistic driving task in a high-fidelity driving simulator. We used a state-of-the-art, wide-field-of-view driving simulator with an integrated eye tracker. Gaze-contingent warnings were implemented using two arrays of light-emitting diodes horizontally fitted below and above the simulated windshield. Thirteen volunteering subjects drove along predetermined routes in a simulated environment populated with autonomous traffic. Warnings were triggered during the approach to half of the intersections, cueing either towards the right or to the left. The remaining intersections were not cued, and served as controls. The analysis of the recorded gaze data revealed that the gaze-contingent cues did indeed have a gaze guiding effect, triggering a significant shift in gaze position towards the highlighted direction. This gaze shift was not accompanied by changes in driving behaviour, suggesting that the cues do not interfere with the driving task itself.

  8. Eye gaze performance for children with severe physical impairments using gaze-based assistive technology-A longitudinal study.

    PubMed

    Borgestig, Maria; Sandqvist, Jan; Parsons, Richard; Falkmer, Torbjörn; Hemmingsson, Helena

    2016-01-01

    Gaze-based assistive technology (gaze-based AT) has the potential to provide children affected by severe physical impairments with opportunities for communication and activities. This study aimed to examine changes in eye gaze performance over time (time on task and accuracy) in children with severe physical impairments, without speaking ability, using gaze-based AT. A longitudinal study with a before and after design was conducted on 10 children (aged 1-15 years) with severe physical impairments, who were beginners to gaze-based AT at baseline. Thereafter, all children used the gaze-based AT in daily activities over the course of the study. Compass computer software was used to measure time on task and accuracy with eye selection of targets on screen, and tests were performed with the children at baseline, after 5 months, 9-11 months, and after 15-20 months. Findings showed that the children improved in time on task after 5 months and became more accurate in selecting targets after 15-20 months. This study indicates that these children with severe physical impairments, who were unable to speak, could improve in eye gaze performance. However, the children needed time to practice on a long-term basis to acquire skills needed to develop fast and accurate eye gaze performance.

  9. Seeing direct and averted gaze activates the approach-avoidance motivational brain systems.

    PubMed

    Hietanen, Jari K; Leppänen, Jukka M; Peltola, Mikko J; Linna-Aho, Kati; Ruuhiala, Heidi J

    2008-01-01

    Gaze direction is known to be an important factor in regulating social interaction. Recent evidence suggests that direct and averted gaze can signal the sender's motivational tendencies of approach and avoidance, respectively. We aimed at determining whether seeing another person's direct vs. averted gaze has an influence on the observer's neural approach-avoidance responses. We also examined whether it would make a difference if the participants were looking at the face of a real person or a picture. Measurements of hemispheric asymmetry in the frontal electroencephalographic activity indicated that another person's direct gaze elicited a relative left-sided frontal EEG activation (indicative of a tendency to approach), whereas averted gaze activated right-sided asymmetry (indicative of avoidance). Skin conductance responses were larger to faces than to control objects and to direct relative to averted gaze, indicating that faces, in general, and faces with direct gaze, in particular, elicited more intense autonomic activation and strength of the motivational tendencies than did control stimuli. Gaze direction also influenced subjective ratings of emotional arousal and valence. However, all these effects were observed only when participants were facing a real person, not when looking at a picture of a face. This finding was suggested to be due to the motivational responses to gaze direction being activated in the context of enhanced self-awareness by the presence of another person. The present results, thus, provide direct evidence that eye contact and gaze aversion between two persons influence the neural mechanisms regulating basic motivational-emotional responses and differentially activate the motivational approach-avoidance brain systems.

  10. Steering by hearing: a bat's acoustic gaze is linked to its flight motor output by a delayed, adaptive linear law.

    PubMed

    Ghose, Kaushik; Moss, Cynthia F

    2006-02-08

    Adaptive behaviors require sensorimotor computations that convert information represented initially in sensory coordinates to commands for action in motor coordinates. Fundamental to these computations is the relationship between the region of the environment sensed by the animal (gaze) and the animal's locomotor plan. Studies of visually guided animals have revealed an anticipatory relationship between gaze direction and the locomotor plan during target-directed locomotion. Here, we study an acoustically guided animal, an echolocating bat, and relate acoustic gaze (direction of the sonar beam) to flight planning as the bat searches for and intercepts insect prey. We show differences in the relationship between gaze and locomotion as the bat progresses through different phases of insect pursuit. We define the acoustic gaze angle, θ_gaze, to be the angle between the sonar beam axis and the bat's flight path. We show that there is a strong linear linkage between acoustic gaze angle at time t, θ_gaze(t), and flight turn rate at time t + τ into the future, θ_flight(t + τ), which can be expressed by the formula θ_flight(t + τ) = k·θ_gaze(t). The gain, k, of this linkage depends on the bat's behavioral state, which is indexed by its sonar pulse rate. For high pulse rates, associated with insect attacking behavior, k is twice as high compared with low pulse rates, associated with searching behavior. We suggest that this adjustable linkage between acoustic gaze and motor output in a flying echolocating bat simplifies the transformation of auditory information to flight motor commands.
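
    The delayed linear law can be read directly as a discrete-time prediction rule: the turn-rate sample at step t + delay is just k times the gaze-angle sample at step t. A minimal sketch of that rule (a hypothetical sampled version; the function name, NaN padding, and integer delay are our assumptions):

```python
import math

def predict_turn_rate(gaze_angles, delay_steps, k):
    """Predict flight turn rate from acoustic gaze angle via the delayed
    linear law theta_flight(t + tau) = k * theta_gaze(t), with the delay
    tau expressed as a whole number of sample steps. Samples earlier than
    the delay have no corresponding gaze input and are left as NaN."""
    out = [math.nan] * len(gaze_angles)
    for t in range(len(gaze_angles) - delay_steps):
        out[t + delay_steps] = k * gaze_angles[t]
    return out
```

    The state-dependent gain reported in the study would amount to swapping in a roughly twofold larger k for segments whose sonar pulse rate indicates attack rather than search.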

  11. Temporal Statistics of Natural Image Sequences Generated by Movements with Insect Flight Characteristics

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens Peter; Egelhaaf, Martin

    2014-01-01

    Many flying insects, such as flies, wasps and bees, pursue a saccadic flight and gaze strategy. This behavioral strategy is thought to separate the translational and rotational components of self-motion and, thereby, to reduce the computational efforts to extract information about the environment from the retinal image flow. Because of the distinguishing dynamic features of this active flight and gaze strategy of insects, the present study analyzes systematically the spatiotemporal statistics of image sequences generated during saccades and intersaccadic intervals in cluttered natural environments. We show that, in general, rotational movements with saccade-like dynamics elicit fluctuations and overall changes in brightness, contrast and spatial frequency of up to two orders of magnitude larger than translational movements at velocities that are characteristic of insects. Distinct changes in image parameters during translations are only caused by nearby objects. Image analysis based on larger patches in the visual field reveals smaller fluctuations in brightness and spatial frequency composition compared to small patches. The temporal structure and extent of these changes in image parameters define the temporal constraints imposed on signal processing performed by the insect visual system under behavioral conditions in natural environments. PMID:25340761
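
    Two of the image parameters tracked in this kind of analysis, patch brightness and contrast, are simple to compute. A toy sketch (RMS contrast normalized by mean luminance is one common definition, not necessarily the study's exact estimator):

```python
import numpy as np

def patch_statistics(patch):
    """Mean brightness and a simple RMS-contrast measure for an image
    patch given as a 2-D array of intensity values."""
    p = np.asarray(patch, dtype=float)
    brightness = p.mean()
    # RMS contrast: intensity standard deviation normalized by the mean
    contrast = p.std() / brightness if brightness > 0 else 0.0
    return brightness, contrast
```

    Comparing such statistics frame-by-frame during simulated saccades versus intersaccadic translations is one way to quantify the fluctuations the study describes.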

  12. Gaze Fluctuations Are Not Additively Decomposable: Reply to Bogartz and Staub

    ERIC Educational Resources Information Center

    Kelty-Stephen, Damian G.; Mirman, Daniel

    2013-01-01

    Our previous work interpreted single-lognormal fits to inter-gaze distance (i.e., "gaze steps") histograms as evidence of multiplicativity and hence interactions across scales in visual cognition. Bogartz and Staub (2012) proposed that gaze steps are additively decomposable into fixations and saccades, matching the histograms better and…

  13. Observing Shared Attention Modulates Gaze Following

    ERIC Educational Resources Information Center

    Bockler, Anne; Knoblich, Gunther; Sebanz, Natalie

    2011-01-01

    Humans' tendency to follow others' gaze is considered to be rather resistant to top-down influences. However, recent evidence indicates that gaze following depends on prior eye contact with the observed agent. Does observing two people engaging in eye contact also modulate gaze following? Participants observed two faces looking at each other or…

  14. Inertial vestibular coding of motion: concepts and evidence

    NASA Technical Reports Server (NTRS)

    Hess, B. J.; Angelaki, D. E.

    1997-01-01

    Central processing of inertial sensory information about head attitude and motion in space is crucial for motor control. Vestibular signals are coded relative to a non-inertial system, the head, that is virtually continuously in motion. Evidence for transformation of vestibular signals from head-fixed sensory coordinates to gravity-centered coordinates has been provided by studies of the vestibulo-ocular reflex. The underlying central processing depends on otolith afferent information that needs to be resolved in terms of head-translation-related inertial forces and the head-attitude-dependent pull of gravity. Theoretical solutions have been suggested, but experimental evidence is still scarce. It appears, along these lines, that gaze control systems are intimately linked to motor control of head attitude and posture.

  15. A new mapping function in table-mounted eye tracker

    NASA Astrophysics Data System (ADS)

    Tong, Qinqin; Hua, Xiao; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is a new human-computer interaction apparatus that has attracted much attention in recent years. Eye tracking technology obtains the subject's current "visual attention (gaze)" direction by mechanical, electronic, optical, image-processing, and other means of detection. The mapping function is one of the key technologies in the image processing and determines the accuracy of the whole eye tracker system. In this paper, we present a new mapping model based on the relationship among the eyes, the camera, and the screen at which the eye gazes. First, according to the geometrical relationship among the eyes, the camera, and the screen, the framework of the mapping function between the pupil center and the screen coordinates is constructed. Second, in order to simplify the vector inversion of the mapping function, the coordinates of the eyes, the camera, and the screen were modeled as coaxial systems. A corresponding experiment was carried out to verify the mapping function, and the method was also compared with the traditional quadratic polynomial function. The results show that our approach improves the accuracy of determining the gaze point. Compared with other methods, this mapping function is simple and valid.
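
    The "traditional quadratic polynomial function" used as the baseline here is typically a second-order polynomial fit from pupil-center coordinates to screen coordinates over a set of calibration points. A generic least-squares sketch of that baseline (not the paper's proposed coaxial model; function names are ours):

```python
import numpy as np

def _design(pupil_xy):
    """Design matrix with the second-order terms: 1, x, y, xy, x^2, y^2."""
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2])

def fit_quadratic_mapping(pupil_xy, screen_xy):
    """Least-squares fit of X = a0 + a1*x + a2*y + a3*xy + a4*x^2 + a5*y^2
    (and likewise for Y) over calibration points; returns a (6, 2) matrix
    of coefficients, one column per screen coordinate."""
    coeffs, *_ = np.linalg.lstsq(_design(pupil_xy), screen_xy, rcond=None)
    return coeffs

def map_gaze(coeffs, pupil_xy):
    """Apply a fitted mapping to new pupil-center samples."""
    return _design(pupil_xy) @ coeffs
```

    A 3x3 grid of calibration targets is the classic minimum for conditioning this fit well; the accuracy comparison in the paper is against exactly this kind of polynomial baseline.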

  16. The dolphin's (Tursiops truncatus) understanding of human gazing and pointing: knowing what and where.

    PubMed

    Pack, Adam A; Herman, Louis M

    2007-02-01

    The authors tested whether the understanding by dolphins (Tursiops truncatus) of human pointing and head-gazing cues extends to knowing the identity of an indicated object as well as its location. In Experiment 1, the dolphins Phoenix and Akeakamai processed the identity of a cued object (of 2 that were present), as shown by their success in selecting a matching object from among 2 alternatives remotely located. Phoenix was errorless on first trials in this task. In Experiment 2, Phoenix reliably responded to a cued object in alternate ways, either by matching it or by acting directly on it, with each type of response signaled by a distinct gestural command given after the indicative cue. She never confused matching and acting. In Experiment 3, Akeakamai was able to process the geometry of pointing cues (but not head-gazing cues), as revealed by her errorless responses to either a proximal or distal object simultaneously present, when each object was indicated only by the angle at which the informant pointed. The overall results establish that these dolphins could identify, through indicative cues alone, what a human is attending to as well as where.

  17. Visual laterality in belugas (Delphinapterus leucas) and Pacific white-sided dolphins (Lagenorhynchus obliquidens) when viewing familiar and unfamiliar humans.

    PubMed

    Yeater, Deirdre B; Hill, Heather M; Baus, Natalie; Farnell, Heather; Kuczaj, Stan A

    2014-11-01

    Lateralization of cognitive processes and motor functions has been demonstrated in a number of species, including humans, elephants, and cetaceans. For example, bottlenose dolphins (Tursiops truncatus) have exhibited preferential eye use during a variety of cognitive tasks. The present study investigated the possibility of visual lateralization in 12 belugas (Delphinapterus leucas) and six Pacific white-sided dolphins (Lagenorhynchus obliquidens) located at two separate marine mammal facilities. During free swim periods, the belugas and Pacific white-sided dolphins were presented a familiar human, an unfamiliar human, or no human during 10-15 min sessions. Session videos were coded for gaze duration, eye presentation at approach, and eye preference while viewing each stimulus. Although we did not find any clear group level lateralization, we found individual left eye lateralized preferences related to social stimuli for most belugas and some Pacific white-sided dolphins. Differences in gaze durations were also observed. The majority of individual belugas had longer gaze durations for unfamiliar rather than familiar stimuli. These results suggest that lateralization occurs during visual processing of human stimuli in belugas and Pacific white-sided dolphins and that these species can distinguish between familiar and unfamiliar humans.

  18. Rapid detection of person information in a naturalistic scene.

    PubMed

    Fletcher-Watson, Sue; Findlay, John M; Leekam, Susan R; Benson, Valerie

    2008-01-01

    A preferential-looking paradigm was used to investigate how gaze is distributed in naturalistic scenes. Two scenes were presented side by side: one contained a single person (person-present) and one did not (person-absent). Eye movements were recorded, the principal measures being the time spent looking at each region of the scenes, and the latency and location of the first fixation within each trial. We studied gaze patterns during free viewing, and also in a task requiring gender discrimination of the human figure depicted. Results indicated a strong bias towards looking to the person-present scene. This bias was present on the first fixation after image presentation, confirming previous findings of ultra-rapid processing of complex information. Faces attracted disproportionately many fixations, the preference emerging in the first fixation and becoming stronger in the following ones. These biases were exaggerated in the gender-discrimination task. A tendency to look at the object being fixated by the person in the scene was shown to be strongest at a slightly later point in the gaze sequence. We conclude that human bodies and faces are subject to special perceptual processing when presented as part of a naturalistic scene.

  19. Can upbeat nystagmus increase in downward, but not upward, gaze?

    PubMed

    Kim, Hyun-Ah; Yi, Hyon-Ah; Lee, Hyung

    2012-04-01

    Upbeat nystagmus (UBN) is typically increased with upward gaze and decreased with downward gaze. We describe a patient with acute multiple sclerosis who developed primary position UBN with a linear slow phase waveform, in which the velocity of nystagmus was intensified in downward gaze and decreased during upward gaze. Brain MRI showed high signal lesions in the paramedian dorsal area of the caudal medulla encompassing the most caudal part of the perihypoglossal nuclei. Clinicians should be aware of the possibility of a caudal medullary lesion in a patient with UBN, especially when the velocity of the UBN is increased in downward gaze. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. A testimony to Muzil: Hervé Guibert, Foucault, and the medical gaze.

    PubMed

    Rendell, Joanne

    2004-01-01

    Testimony to Muzil: Hervé Guibert, Michel Foucault, and the "Medical Gaze" examines the fictional/autobiographical AIDS writings of the French writer Hervé Guibert. Locating Guibert's writings alongside the work of his friend Michel Foucault, the article explores how they echo Foucault's evolving notions of the "medical gaze." The article also explores how Guibert's narrators and Guibert himself (as writer) resist and challenge the medical gaze; a gaze which, particularly in the era of AIDS, has subjected, objectified, and even sometimes punished the body of the gay man. It is argued that these resistances to the gaze offer a literary extension to Foucault's later work on power and resistance strategies.

  1. Effects of galvanic skin response feedback on user experience in gaze-controlled gaming: A pilot study.

    PubMed

    Larradet, Fanny; Barresi, Giacinto; Mattos, Leonardo S

    2017-07-01

    Eye-tracking (ET) is one of the most intuitive solutions for enabling people with severe motor impairments to control devices. Nevertheless, even such an effective assistive solution can detrimentally affect user experience during demanding tasks because of, for instance, the user's mental workload - using gaze-based controls for an extensive period of time can generate fatigue and cause frustration. Thus, it is necessary to design novel solutions for ET contexts able to improve the user experience, with particular attention to its aspects related to workload. In this paper, a pilot study evaluates the effects of a relaxation biofeedback system on the user experience in the context of a gaze-controlled task that is mentally and temporally demanding: ET-based gaming. Different aspects of the subjects' experience were investigated under two conditions of a gaze-controlled game. In the Biofeedback group (BF), the user triggered a command by means of voluntary relaxation, monitored through Galvanic Skin Response (GSR) and represented by visual feedback. In the No Biofeedback group (NBF), the same feedback was timed according to the average frequency of commands in BF. After the experiment, each subject filled out a user experience questionnaire. The results showed a general appreciation for BF, with a significant between-group difference in the perceived session time duration, with the latter being shorter for subjects in BF than for the ones in NBF. This result implies a lower mental workload for BF than for NBF subjects. Other results point toward a potential role of user's engagement in the improvement of user experience in BF. Such an effect highlights the value of relaxation biofeedback for improving the user experience in a demanding gaze-controlled task.
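
    The biofeedback condition described here boils down to a threshold rule: a command fires once the skin-conductance signal has stayed sufficiently below a resting baseline for long enough. This is a hypothetical sketch of that kind of trigger; the threshold, drop fraction, and hold window are illustrative, not values from the study.

    ```python
    def relaxation_trigger(gsr_samples, baseline, drop_fraction=0.1, hold=5):
        """Return the sample index at which a command fires, i.e. the first
        point where skin conductance has stayed below
        baseline * (1 - drop_fraction) for `hold` consecutive samples,
        or None if relaxation is never sustained long enough."""
        threshold = baseline * (1 - drop_fraction)
        below = 0
        for i, g in enumerate(gsr_samples):
            below = below + 1 if g < threshold else 0
            if below >= hold:
                return i
        return None
    ```

    The hold window is what distinguishes voluntary relaxation from momentary dips in the signal; in the no-biofeedback control group the same visual feedback would instead be replayed on a fixed schedule.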

  2. Factors leading to the computer vision syndrome: an issue at the contemporary workplace.

    PubMed

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2007-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, and ergonomic factors, as well as the physiologic response of computer users. The distance from the eye to the computer's monitor (A), the monitor height (B), and the visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1; and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was performed. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
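
    The study's gaze-angle formula and group assignment can be written out directly; the function and parameter names below are illustrative.

    ```python
    import math

    def gaze_angle_deg(monitor_height_cm, eye_height_cm, eye_to_screen_cm):
        """Downward gaze angle from the study's formula angle = tan^-1(D/A),
        where D is the visual-axis height minus the monitor height and
        A is the eye-to-monitor distance."""
        d = eye_height_cm - monitor_height_cm  # positive when gazing downward
        return math.degrees(math.atan(d / eye_to_screen_cm))

    def cvs_group(angle_deg):
        """Group 1: gaze angles from 0 to 13.9 degrees; Group 2: 14 degrees
        or more (the group reporting less pain)."""
        return 1 if angle_deg < 14.0 else 2
    ```

    For example, eyes 10 cm above the monitor centre at a 60 cm viewing distance give an angle of about 9.5 degrees (Group 1), whereas a 20 cm drop at the same distance gives about 18.4 degrees (Group 2).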

  3. Factors leading to the Computer Vision Syndrome: an issue at the contemporary workplace.

    PubMed

    Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J

    2004-01-01

    Vision and eye related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, and ergonomic factors, as well as the physiologic response of computer users. The distance from the eye to the computer's monitor (A), the monitor height (B), and the visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1; and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was performed. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.

  4. What Do Eye Gaze Metrics Tell Us about Motor Imagery?

    PubMed

    Poiroux, Elodie; Cavaro-Ménard, Christine; Leruez, Stéphanie; Lemée, Jean Michel; Richard, Isabelle; Dinomais, Mickael

    2015-01-01

    Many of the brain structures involved in performing real movements also have increased activity during imagined movements or during motor observation, and this could be the neural substrate underlying the effects of motor imagery in motor learning or motor rehabilitation. In the absence of any objective physiological method of measurement, it is currently impossible to be sure that the patient is indeed performing the task as instructed. Eye gaze recording during a motor imagery task could be a possible way to "spy" on the activity an individual is really engaged in. The aim of the present study was to compare the pattern of eye movement metrics during motor observation, visual and kinesthetic motor imagery (VI, KI), target fixation, and mental calculation. Twenty-two healthy subjects (16 females and 6 males) were required to perform tests in five conditions using imagery in the Box and Block Test tasks following the procedure described by Liepert et al. Eye movements were analysed by a non-invasive oculometric measure (SMI RED250 system). Two parameters describing gaze pattern were calculated: the index of ocular mobility (saccade duration over saccade + fixation duration) and the number of midline crossings (i.e. the number of times the subject's gaze crossed the midline of the screen when performing the different tasks). Both parameters were significantly different between visual imagery and kinesthetic imagery, visual imagery and mental calculation, and visual imagery and target fixation. For the first time we were able to show that eye movement patterns are different during VI and KI tasks. Our results suggest gaze metric parameters could be used as an objective unobtrusive approach to assess engagement in a motor imagery task. Further studies should define how oculomotor parameters could be used as an indicator of the rehabilitation task a patient is engaged in.
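
    The two gaze-pattern parameters defined in this abstract are simple enough to compute from a labeled event stream. The event representation below (a list of kind/duration pairs and a list of horizontal positions) is an assumption for illustration; real eye-tracker exports differ.

    ```python
    def ocular_mobility_index(events):
        """Index of ocular mobility: total saccade duration over total
        saccade + fixation duration. `events` is a sequence of
        (kind, duration) pairs with kind in {"saccade", "fixation"}."""
        sacc = sum(d for k, d in events if k == "saccade")
        fix = sum(d for k, d in events if k == "fixation")
        return sacc / (sacc + fix)

    def midline_crossings(x_positions, midline_x):
        """Number of times the horizontal gaze position crosses the
        screen midline (samples exactly on the midline are ignored)."""
        sides = [x > midline_x for x in x_positions if x != midline_x]
        return sum(a != b for a, b in zip(sides, sides[1:]))
    ```

    For instance, one second of saccades against three seconds of fixation gives a mobility index of 0.25, and a gaze trace alternating between the two screen halves counts one crossing per alternation.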

  5. Vertical gaze angle: absolute height-in-scene information for the programming of prehension.

    PubMed

    Gardner, P L; Mon-Williams, M

    2001-02-01

    One possible source of information regarding the distance of a fixated target is provided by the height of the object within the visual scene. It is accepted that this cue can provide ordinal information, but generally it has been assumed that the nervous system cannot extract "absolute" information from height-in-scene. In order to use height-in-scene, the nervous system would need to be sensitive to ocular position with respect to the head and to head orientation with respect to the shoulders (i.e. vertical gaze angle or VGA). We used a perturbation technique to establish whether the nervous system uses vertical gaze angle as a distance cue. Vertical gaze angle was perturbed using ophthalmic prisms with the base oriented either up or down. In experiment 1, participants were required to carry out an open-loop pointing task whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. In experiment 2, the participants reached to grasp an object under closed-loop viewing conditions whilst wearing: (1) no prisms; (2) a base-up prism; or (3) a base-down prism. Experiment 1 and 2 provided clear evidence that the human nervous system uses vertical gaze angle as a distance cue. It was found that the weighting attached to VGA decreased with increasing target distance. The weighting attached to VGA was also affected by the discrepancy between the height of the target, as specified by all other distance cues, and the height indicated by the initial estimate of the position of the supporting surface. We conclude by considering the use of height-in-scene information in the perception of surface slant and highlight some of the complexities that must be involved in the computation of environmental layout.

  6. Adaptive eye-gaze tracking using neural-network-based user profiles to assist people with motor disability.

    PubMed

    Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando

    2008-01-01

    This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. For both results, the outcomes led to trajectories that were significantly smoother and apt at reaching fixed or moving targets with relative ease and within a 5% error margin or deviation from desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
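
    The per-user network described here is a multilayer perceptron trained on a short recording of the user's own gaze behaviour, so that jittery raw coordinates are mapped to a steadier cursor position. The following is a minimal one-hidden-layer stand-in for that idea; the layer size, learning rate, and training loop are illustrative assumptions, not the authors' configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    class JitterMLP:
        """Tiny MLP mapping raw (jittery) 2-D gaze samples to smoothed
        2-D cursor coordinates, trained by full-batch gradient descent
        on a user's recorded calibration session."""

        def __init__(self, hidden=16, lr=0.05):
            self.w1 = rng.normal(0, 0.5, (2, hidden))
            self.b1 = np.zeros(hidden)
            self.w2 = rng.normal(0, 0.5, (hidden, 2))
            self.b2 = np.zeros(2)
            self.lr = lr

        def forward(self, x):
            self.h = np.tanh(x @ self.w1 + self.b1)
            return self.h @ self.w2 + self.b2

        def train_step(self, x, y):
            """One gradient step on mean squared error; returns the loss."""
            err = self.forward(x) - y            # (n, 2)
            n = len(x)
            gw2 = self.h.T @ err / n
            gb2 = err.mean(0)
            gh = (err @ self.w2.T) * (1 - self.h**2)   # backprop through tanh
            gw1 = x.T @ gh / n
            gb1 = gh.mean(0)
            self.w2 -= self.lr * gw2
            self.b2 -= self.lr * gb2
            self.w1 -= self.lr * gw1
            self.b1 -= self.lr * gb1
            return float((err**2).mean())
    ```

    Training pairs would come from the embedded graphical interface the abstract mentions: the target's true on-screen position is the label, and the raw gaze sample recorded while the user follows it is the input.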

  7. Playing checkers: detection and eye hand coordination in simulated prosthetic vision

    NASA Astrophysics Data System (ADS)

    Dagnelie, Gislin; Walter, Matthias; Yang, Liancheng

    2006-09-01

    In order to assess the potential for visual inspection and eye hand coordination without tactile feedback under conditions that may be available to future retinal prosthesis wearers, we studied the ability of sighted individuals to act upon pixelized visual information at very low resolution, equivalent to 20/2400 visual acuity. Live images from a head-mounted camera were low-pass filtered and presented in a raster of 6 × 10 circular Gaussian dots. Subjects could either freely move their gaze across the raster (free-viewing condition) or the raster position was locked to the subject's gaze by means of video-based pupil tracking (gaze-locked condition). Four normally sighted and one severely visually impaired subject with moderate nystagmus participated in a series of four experiments. Subjects' task was to count 1 to 16 white fields randomly distributed across an otherwise black checkerboard (counting task) or to place a black checker on each of the white fields (placing task). We found that all subjects were capable of learning both tasks after varying amounts of practice, both in the free-viewing and in the gaze-locked conditions. Normally sighted subjects all reached very similar performance levels independent of the condition. The practiced performance level of the visually impaired subject in the free-viewing condition was indistinguishable from that of the normally sighted subjects, but required approximately twice the amount of time to place checkers in the gaze-locked condition; this difference is most likely attributable to this subject's nystagmus. Thus, if early retinal prosthesis wearers can achieve crude form vision, then on the basis of these results they too should be able to perform simple eye hand coordination tasks without tactile feedback.

  8. A novel video-based paradigm to study the mechanisms underlying age- and falls risk-related differences in gaze behaviour during walking.

    PubMed

    Stanley, Jennifer; Hollands, Mark

    2014-07-01

    The current study aimed to quantitatively assess differences in gaze behaviour between participants grouped on the basis of their age and measures of functional mobility during a virtual walking paradigm. The gaze behaviour of nine young adults, seven older adults with a relatively low risk of falling and seven older adults with a relatively higher risk of falling was measured while they watched five first-person perspective movies representing the viewpoint of a pedestrian walking through various environments. Participants also completed a number of cognitive tests: Stroop task, visual search, trail making task, Mini Mental Status Examination, and reaction time, visual tests (visual acuity and contrast sensitivity) and assessments of balance (Activities Balance Confidence Scale and Berg Balance Scale) to aid in the interpretation of differences in gaze behaviour. The high risk older adult group spent significantly more time fixating aspects of the travel path than the low risk and young adult groups. High risk older adults were also significantly slower in performing a number of the cognitive tasks than young adults. Correlations were conducted to compare the extent to which travel path fixation durations co-varied with scores on the tests of visual search, motor, and cognitive function. A positive significant correlation was found between the speed of response to the incongruent Stroop task and travel path fixation duration, r(21) = 0.44, p < 0.05. The results indicate that our movie-viewing paradigm can identify differences in gaze behaviour between participants grouped on the basis of their age and measures of functional mobility and that these differences are associated with cognitive decline. © 2014 The Authors Ophthalmic & Physiological Optics © 2014 The College of Optometrists.

  9. Mental state attribution and the gaze cueing effect.

    PubMed

    Cole, Geoff G; Smith, Daniel T; Atkinson, Mark A

    2015-05-01

    Theory of mind is said to be possessed by an individual if he or she is able to impute mental states to others. Recently, some authors have demonstrated that such mental state attributions can mediate the "gaze cueing" effect, in which observation of another individual shifts an observer's attention. One question that follows from this work is whether such mental state attributions produce mandatory modulations of gaze cueing. Employing the basic gaze cueing paradigm, together with a technique commonly used to assess mental-state attribution in nonhuman animals, we manipulated whether the gazing agent could see the same thing as the participant (i.e., the target) or had this view obstructed by a physical barrier. We found robust gaze cueing effects, even when the observed agent in the display could not see the same thing as the participant. These results suggest that the attribution of "seeing" does not necessarily modulate the gaze cueing effect.

  10. Eye Gaze in Creative Sign Language

    ERIC Educational Resources Information Center

    Kaneko, Michiko; Mesch, Johanna

    2013-01-01

    This article discusses the role of eye gaze in creative sign language. Because eye gaze conveys various types of linguistic and poetic information, it is an intrinsic part of sign language linguistics in general and of creative signing in particular. We discuss various functions of eye gaze in poetic signing and propose a classification of gaze…

  11. Seductive Eyes: Attractiveness and Direct Gaze Increase Desire for Associated Objects

    ERIC Educational Resources Information Center

    Strick, Madelijn; Holland, Rob W.; van Knippenberg, Ad

    2008-01-01

    Recent research in neuroscience shows that observing attractive faces with direct gaze is more rewarding than observing attractive faces with averted gaze. On the basis of this research, it was hypothesized that object evaluations can be enhanced by associating them with attractive faces displaying direct gaze. In a conditioning paradigm, novel…

  12. Aversive eye gaze during a speech in virtual environment in patients with social anxiety disorder.

    PubMed

    Kim, Haena; Shin, Jung Eun; Hong, Yeon-Ju; Shin, Yu-Bin; Shin, Young Seok; Han, Kiwan; Kim, Jae-Jin; Choi, Soo-Hee

    2018-03-01

    One of the main characteristics of social anxiety disorder is excessive fear of social evaluation. In such situations, anxiety can influence gaze behaviour. Thus, the current study adopted virtual reality to examine eye gaze pattern of social anxiety disorder patients while presenting different types of speeches. A total of 79 social anxiety disorder patients and 51 healthy controls presented prepared speeches on general topics and impromptu speeches on self-related topics to a virtual audience while their eye gaze was recorded. Their presentation performance was also evaluated. Overall, social anxiety disorder patients showed less eye gaze towards the audience than healthy controls. Types of speech did not influence social anxiety disorder patients' gaze allocation towards the audience. However, patients with social anxiety disorder showed significant correlations between the amount of eye gaze towards the audience while presenting self-related speeches and social anxiety cognitions. The current study confirms that eye gaze behaviour of social anxiety disorder patients is aversive and that their anxiety symptoms are more dependent on the nature of topic.

  13. Mobile gaze tracking system for outdoor walking behavioral studies

    PubMed Central

    Tomasi, Matteo; Pundlik, Shrinivas; Bowers, Alex R.; Peli, Eli; Luo, Gang

    2016-01-01

    Most gaze tracking techniques estimate gaze points on screens, on scene images, or in confined spaces. Tracking of gaze in open-world coordinates, especially in walking situations, has rarely been addressed. We use a head-mounted eye tracker combined with two inertial measurement units (IMU) to track gaze orientation relative to the heading direction in outdoor walking. Head movements relative to the body are measured by the difference in output between the IMUs on the head and body trunk. The use of the IMU pair reduces the impact of environmental interference on each sensor. The system was tested in busy urban areas and allowed drift compensation for long (up to 18 min) gaze recording. Comparison with ground truth revealed an average error of 3.3° while walking straight segments. The range of gaze scanning in walking is frequently larger than the estimation error by about one order of magnitude. Our proposed method was also tested with real cases of natural walking and it was found to be suitable for the evaluation of gaze behaviors in outdoor environments. PMID:26894511
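
    The core of the method is an angle bookkeeping step: head-on-trunk orientation is the difference between the two IMU yaw outputs, and gaze relative to the walking direction adds the eye-in-head azimuth on top. A minimal sketch of that composition, with illustrative names and degrees as units:

    ```python
    def gaze_relative_to_heading(eye_in_head_deg, head_imu_yaw_deg, trunk_imu_yaw_deg):
        """Gaze azimuth relative to the heading direction: eye-in-head
        azimuth plus head-on-trunk yaw, where head-on-trunk is the
        difference between the head and trunk IMU yaw readings.
        Result is wrapped to (-180, 180]."""
        head_on_trunk = head_imu_yaw_deg - trunk_imu_yaw_deg
        gaze = eye_in_head_deg + head_on_trunk
        return (gaze + 180.0) % 360.0 - 180.0
    ```

    Because any common drift or magnetic interference affects both IMUs similarly, it largely cancels in the head-minus-trunk difference, which is what makes the long outdoor recordings reported here feasible.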

  14. A closer look at the size of the gaze-liking effect: a preregistered replication.

    PubMed

    Tipples, Jason; Pecchinenda, Anna

    2018-04-30

    This study is a direct replication of the gaze-liking effect using the same design, stimuli and procedure. The gaze-liking effect describes the tendency for people to rate objects as more likeable when they have recently seen a person repeatedly gaze toward rather than away from the object. However, as subsequent studies show considerable variability in the size of this effect, we sampled a larger number of participants (N = 98) than the original study (N = 24) to gain a more precise estimate of the gaze-liking effect size. Our results indicate a much smaller standardised effect size (d_z = 0.02) than that of the original study (d_z = 0.94). Our smaller effect size was not due to general insensitivity to eye-gaze effects because the same sample showed a clear (d_z = 1.09) gaze-cueing effect - faster reaction times when eyes looked toward vs away from target objects. We discuss the implications of our findings for future studies wishing to study the gaze-liking effect.
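
    The standardised effect size reported here, d_z, is the within-subject (paired) variant of Cohen's d: the mean of the paired differences divided by the standard deviation of those differences. A minimal computation:

    ```python
    from statistics import mean, stdev

    def cohens_dz(condition_a, condition_b):
        """Cohen's d_z for a paired design: mean of the per-subject
        differences divided by their (sample) standard deviation."""
        diffs = [a - b for a, b in zip(condition_a, condition_b)]
        return mean(diffs) / stdev(diffs)
    ```

    With per-subject scores from two conditions, d_z = 0.02 means the average difference is only two hundredths of its own standard deviation, which is why the replication's estimate is described as much smaller than the original d_z = 0.94.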

  15. Perceptual Training in Beach Volleyball Defence: Different Effects of Gaze-Path Cueing on Gaze and Decision-Making

    PubMed Central

    Klostermann, André; Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim

    2015-01-01

    For perceptual-cognitive skill training, a variety of intervention methods has been proposed, including the so-called “color-cueing method” which aims on superior gaze-path learning by applying visual markers. However, recent findings challenge this method, especially, with regards to its actual effects on gaze behavior. Consequently, after a preparatory study on the identification of appropriate visual cues for life-size displays, a perceptual-training experiment on decision-making in beach volleyball was conducted, contrasting two cueing interventions (functional vs. dysfunctional gaze path) with a conservative control condition (anticipation-related instructions). Gaze analyses revealed learning effects for the dysfunctional group only. Regarding decision-making, all groups showed enhanced performance with largest improvements for the control group followed by the functional and the dysfunctional group. Hence, the results confirm cueing effects on gaze behavior, but they also question its benefit for enhancing decision-making. However, before completely denying the method’s value, optimisations should be checked regarding, for instance, cueing-pattern characteristics and gaze-related feedback. PMID:26648894

  16. Frames of reference for gaze saccades evoked during stimulation of lateral intraparietal cortex.

    PubMed

    Constantin, A G; Wang, H; Martinez-Trujillo, J C; Crawford, J D

    2007-08-01

    Previous studies suggest that stimulation of lateral intraparietal cortex (LIP) evokes saccadic eye movements toward eye- or head-fixed goals, whereas most single-unit studies suggest that LIP uses an eye-fixed frame with eye-position modulations. The goal of our study was to determine the reference frame for gaze shifts evoked during LIP stimulation in head-unrestrained monkeys. Two macaques (M1 and M2) were implanted with recording chambers over the right intraparietal sulcus and with search coils for recording three-dimensional eye and head movements. The LIP region was microstimulated using pulse trains of 300 Hz, 100-150 microA, and 200 ms. Eighty-five putative LIP sites in M1 and 194 putative sites in M2 were used in our quantitative analysis throughout this study. Average amplitude of the stimulation-evoked gaze shifts was 8.67 degrees for M1 and 7.97 degrees for M2 with very small head movements. When these gaze-shift trajectories were rotated into three coordinate frames (eye, head, and body), gaze endpoint distribution for all sites was most convergent to a common point when plotted in eye coordinates. Across all sites, the eye-centered model provided a significantly better fit compared with the head, body, or fixed-vector models (where the latter model signifies no modulation of the gaze trajectory as a function of initial gaze position). Moreover, the probability of evoking a gaze shift from any one particular position was modulated by the current gaze direction (independent of saccade direction). These results provide causal evidence that the motor commands from LIP encode gaze command in eye-fixed coordinates but are also subtly modulated by initial gaze position.

  17. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. It is therefore an important issue how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  18. Mirror Neurons of Ventral Premotor Cortex Are Modulated by Social Cues Provided by Others' Gaze.

    PubMed

    Coudé, Gino; Festante, Fabrizia; Cilia, Adriana; Loiacono, Veronica; Bimbi, Marco; Fogassi, Leonardo; Ferrari, Pier Francesco

    2016-03-16

    Mirror neurons (MNs) in the inferior parietal lobule and ventral premotor cortex (PMv) can code the intentions of other individuals using contextual cues. Gaze direction is an important social cue that can be used for understanding the meaning of actions made by other individuals. Here we addressed the issue of whether PMv MNs are influenced by the gaze direction of another individual. We recorded single-unit activity in macaque PMv while the monkey was observing an experimenter performing a grasping action and orienting his gaze either toward (congruent gaze condition) or away (incongruent gaze condition) from a target object. The results showed that one-half of the recorded MNs were modulated by the gaze direction of the human agent. These gaze-modulated neurons were evenly distributed between those preferring a gaze direction congruent with the direction where the grasping action was performed and those that preferred an incongruent gaze. Whereas the presence of congruent responses is in line with the usual coupling of hand and gaze in both executed and observed actions, the incongruent responses can be explained by the long exposure of the monkeys to this condition. Our results reveal that the representation of observed actions in PMv is influenced by contextual information extracted not only from physical cues, but also from cues endowed with biological or social value. In this study, we present the first evidence showing that social cues modulate MNs in the monkey ventral premotor cortex. These data suggest that there is an integrated representation of others' hand actions and gaze direction at the single-neuron level in the ventral premotor cortex, and support the hypothesis of a functional role of MNs in decoding actions and understanding motor intentions. Copyright © 2016 the authors.

  19. Training for eye contact modulates gaze following in dogs.

    PubMed

    Wallis, Lisa J; Range, Friederike; Müller, Corsin A; Serisier, Samuel; Huber, Ludwig; Virányi, Zsófia

    2015-08-01

    Following human gaze in dogs and human infants can be considered a socially facilitated orientation response, which in object choice tasks is modulated by human-given ostensive cues. Despite their similarities to human infants, and extensive skills in reading human cues in foraging contexts, no evidence that dogs follow gaze into distant space has been found. We re-examined this question, and additionally whether dogs' propensity to follow gaze was affected by age and/or training to pay attention to humans. We tested a cross-sectional sample of 145 border collies aged 6 months to 14 years with different amounts of training over their lives. The dogs' gaze-following response in test and control conditions before and after training for initiating eye contact with the experimenter was compared with that of a second group of 13 border collies trained to touch a ball with their paw. Our results provide the first evidence that dogs can follow human gaze into distant space. Although we found no age effect on gaze following, the youngest and oldest age groups were more distractible, which resulted in a higher number of looks in the test and control conditions. Extensive lifelong formal training as well as short-term training for eye contact decreased dogs' tendency to follow gaze and increased their duration of gaze to the face. The reduction in gaze following after training for eye contact cannot be explained by fatigue or short-term habituation, as in the second group gaze following increased after a different training of the same length. Training for eye contact created a competing tendency to fixate the face, which prevented the dogs from following the directional cues. We conclude that following human gaze into distant space in dogs is modulated by training, which may explain why dogs perform poorly in comparison to other species in this task.

  20. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings

    PubMed Central

    2011-01-01

    Background Rapid discrete goal-directed movements are characterized by a well-known coordination pattern between the gaze and the hand displacements. The gaze always starts prior to the hand movement and reaches the target before the hand velocity peak. Surprisingly, the effect of the target size on temporal gaze-hand coordination has not been directly investigated. Moreover, goal-directed movements are often produced in a reciprocal rather than in a discrete manner. The objectives of this work were to assess the effect of the target size on temporal gaze-hand coordination during fast 1) discrete and 2) reciprocal pointings. Methods Subjects performed fast discrete (experiment 1) and reciprocal (experiment 2) pointings with an amplitude of 50 cm and four target diameters (7.6, 3.8, 1.9 and 0.95 cm) leading to indexes of difficulty (ID = log2[2A/D]) of 3.7, 4.7, 5.7 and 6.7 bits. Gaze and hand displacements were synchronously recorded. Temporal gaze-hand coordination parameters were compared between experiments (discrete and reciprocal pointings) and IDs using analyses of variance (ANOVAs). Results Data showed that the magnitude of the gaze-hand lead pattern was much higher for discrete than for reciprocal pointings. Moreover, while it was constant for discrete pointings, it decreased systematically with increasing ID for reciprocal pointings because of the longer duration of gaze anchoring on the target. Conclusion Overall, the temporal gaze-hand coordination analysis revealed that even for high IDs, fast reciprocal pointings could not be considered as a concatenation of discrete units. Moreover, our data clearly illustrate the smooth adaptation of temporal gaze-hand coordination to terminal accuracy requirements during fast reciprocal pointings. It will be interesting for future research to investigate whether the methodology used in experiment 2 allows assessment of the effect of sensori-motor deficits on gaze-hand coordination. PMID:21320315
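
    The index-of-difficulty values quoted above follow directly from the Fitts-style formula ID = log2(2A/D). A quick check with the reported amplitude and target diameters reproduces the stated 3.7 to 6.7 bits:

```python
import math

# ID = log2(2A / D), with amplitude A = 50 cm and the four
# target diameters from the experiment.
A = 50.0
diameters = [7.6, 3.8, 1.9, 0.95]  # cm
ids = [math.log2(2 * A / D) for D in diameters]
print([round(i, 1) for i in ids])  # [3.7, 4.7, 5.7, 6.7] bits
```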

  1. Contribution of the frontal eye field to gaze shifts in the head-unrestrained rhesus monkey: neuronal activity.

    PubMed

    Knight, T A

    2012-12-06

    The frontal eye field (FEF) has a strong influence on saccadic eye movements with the head restrained. With the head unrestrained, eye saccades combine with head movements to produce large gaze shifts, and microstimulation of the FEF evokes both eye and head movements. To test whether the dorsomedial FEF provides commands for the entire gaze shift or its separate eye and head components, we recorded extracellular single-unit activity in monkeys trained to make large head-unrestrained gaze shifts. We recorded 80 units active during gaze shifts, and closely examined 26 of these that discharged a burst of action potentials that preceded horizontal gaze movements. These units were movement or visuomovement related and most exhibited open movement fields with respect to amplitude. To reveal the relations of burst parameters to gaze, eye, and/or head movement metrics, we used behavioral dissociations of gaze, eye, and head movements and linear regression analyses. The burst number of spikes (NOS) was strongly correlated with movement amplitude and burst temporal parameters were strongly correlated with movement temporal metrics for eight gaze-related burst neurons and five saccade-related burst neurons. For the remaining 13 neurons, the NOS was strongly correlated with the head movement amplitude, but burst temporal parameters were most strongly correlated with eye movement temporal metrics (head-eye-related burst neurons, HEBNs). These results suggest that FEF units do not encode a command for the unified gaze shift only; instead, different units may carry signals related to the overall gaze shift or its eye and/or head components. Moreover, the HEBNs exhibit bursts whose magnitude and timing may encode a head displacement signal and a signal that influences the timing of the eye saccade, thereby serving as a mechanism for coordinating the eye and head movements of a gaze shift. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Eye-gaze determination of user intent at the computer interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, J.H.; Schryver, J.C.

    1993-12-31

    Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location, at 30 Hz, while the user looks at controlled stimuli, just prior to the user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments, based upon these variables. Future work will enhance the accuracy and precision of the modeling technique, and will empirically test users in controlled experiments.
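
    A minimal sketch of the clustering step described above, assuming a plain Prim's-algorithm minimum spanning tree over one data frame's gaze samples, with clusters formed by cutting edges longer than a user-defined threshold. The sample points and threshold are invented for illustration.

```python
# Connect one frame's gaze samples into a minimum spanning tree
# (Prim's algorithm), then cut edges above a threshold to form
# fixation-like clusters.

def mst_edges(points):
    in_tree = {0}
    edges = []
    while len(in_tree) < len(points):
        best = None
        for i in in_tree:
            for j in range(len(points)):
                if j in in_tree:
                    continue
                d = ((points[i][0] - points[j][0])**2 +
                     (points[i][1] - points[j][1])**2) ** 0.5
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append(best)
        in_tree.add(best[2])
    return edges

def clusters(points, threshold):
    # union-find over the MST edges that survive the cut
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for d, i, j in mst_edges(points):
        if d <= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for k in range(len(points)):
        groups.setdefault(find(k), []).append(k)
    return list(groups.values())

# two tight gaze clusters far apart -> the cut yields 2 clusters
pts = [(0, 0), (1, 0), (0, 1), (20, 20), (21, 20), (20, 21)]
print(len(clusters(pts, threshold=5.0)))  # 2
```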

  3. Moving Triadic Gaze Intervention Into Practice: Measuring Clinician Attitude and Implementation Fidelity

    PubMed Central

    Olswang, Lesley B.; Greenslade, Kathryn; Pinder, Gay Lloyd; Dowden, Patricia; Madden, Jodi

    2017-01-01

    Purpose This research investigated a first step in implementing the dynamic assessment (DA) component of Triadic Gaze Intervention (Olswang, Feuerstein, Pinder, & Dowden, 2013; Olswang et al., 2014), an evidence-based protocol for teaching early signals of communication to young children with physical disabilities. Clinician attitudes about adopting external evidence into practice and implementation fidelity in DA protocol delivery were examined following training. Method Seven early intervention clinicians from multiple disciplines were trained to deliver the four essential elements of the DA protocol: (a) provide communication opportunity, (b) recognize child's potentially communicative signal, (c) shape child's signal toward triadic gaze, and (d) reinforce with play. Clinician attitude regarding adopting evidence into practice was measured at baseline and follow-up, with the Evidence-Based Practice Attitude Scale (Aarons, 2004). Implementation fidelity in delivering the protocol was measured for adherence (accuracy) and competence (quality) during trial implementation. Results Clinicians' attitudes about trying new evidence that at first was perceived as incongruent with their practice improved over the course of the research. Clinicians demonstrated strong adherence to the DA protocol; however, competence varied across clinicians and appeared related to child performance. Conclusions The results provided insight into moving Triadic Gaze Intervention into practice and yielded valuable information regarding the implementation process, with implications for future research. PMID:28525577

  4. Genetics Home Reference: horizontal gaze palsy with progressive scoliosis

    MedlinePlus

    ... to track moving objects. Up-and-down (vertical) eye movements are typically normal. In people with HGPPS , an ... the brainstem is the underlying cause of the eye movement abnormalities associated with the disorder. The cause of ...

  5. Factors influencing young chimpanzees' (Pan troglodytes) recognition of attention.

    PubMed

    Povinelli, D J; Eddy, T J

    1996-12-01

    By 2 1/2 years of age, human infants appear to understand how others are connected to the external world through the mental state of attention and also appear to understand the specific role that the eyes play in deploying this attention. Previous research with chimpanzees suggests that, although they track the gaze of others, they may simultaneously be unaware of the underlying state of attention behind gaze. In a series of 3 experiments, the investigators systematically explored how the presence of eyes, direct eye contact, and head orientation and movement affected young chimpanzees' choice between 2 experimenters from whom to request food. The results indicate that young chimpanzees may be selectively attracted to other organisms making direct eye contact with them or engaged in postures or movements that indicate attention, even though they may not appreciate the underlying mentalistic significance of these behaviors.

  6. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments.

    PubMed

    Li, Songpo; Zhang, Xiaoli; Webb, Jeremy D

    2017-12-01

    The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. It is the first time that 3-D gaze is utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction especially for disabled and elderly people who cannot handle the conventional interaction modalities.
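
    One common way to realize a gaze vector method, offered here only as a hedged illustration (the paper's exact formulation may differ), is to triangulate: cast a ray from each eye along its gaze direction and take the 3-D fixation estimate as the midpoint of the shortest segment connecting the two rays. The eye positions and directions below are invented.

```python
# Estimate the 3-D fixation point as the midpoint of the common
# perpendicular between the two eyes' gaze rays p + t*d.

def gaze_point(p1, d1, p2, d2):
    dot = lambda u, v: sum(u[i] * v[i] for i in range(3))
    w0 = [p1[i] - p2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p1[i] + t * d1[i] for i in range(3)]   # closest point on ray 1
    q2 = [p2[i] + s * d2[i] for i in range(3)]   # closest point on ray 2
    return [(q1[i] + q2[i]) / 2 for i in range(3)]

# both eyes (6 cm apart) fixate a target at (0, 0, 50) cm
left, right = [-3.0, 0.0, 0.0], [3.0, 0.0, 0.0]
print(gaze_point(left, [3.0, 0.0, 50.0], right, [-3.0, 0.0, 50.0]))
# [0.0, 0.0, 50.0]
```

    In practice the two rays rarely intersect exactly (tracker noise), which is why the midpoint of the common perpendicular, rather than a true intersection, is used.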

  7. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions.

    PubMed

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest.
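
    The cross-correlational analysis can be sketched as sliding one binary signal against the other and scoring the overlap at each lag; the lag with the highest score estimates who leads whom. The two signals below are invented samples (1 = speaking, or gazing at the partner), not data from the study.

```python
# Lagged cross-correlation of two binary time series.

def xcorr(a, b, max_lag):
    # score of a[t] against b[t - lag]; positive lag means a trails b
    scores = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0
        for t in range(len(a)):
            u = t - lag
            if 0 <= u < len(b):
                s += a[t] * b[u]
        scores[lag] = s
    return scores

speech = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
gaze = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0]   # same pattern, 2 samples later
scores = xcorr(gaze, speech, max_lag=4)
best = max(scores, key=scores.get)
print(best)  # 2: gaze-at-partner trails speech by two samples
```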

  8. Assessing Self-Awareness through Gaze Agency

    PubMed Central

    Crespi, Sofia Allegra; de’Sperati, Claudio

    2016-01-01

    We define gaze agency as the awareness of the causal effect of one’s own eye movements in gaze-contingent environments, which might soon become a widespread reality with the diffusion of gaze-operated devices. Here we propose a method for measuring gaze agency based on self-monitoring propensity and sensitivity. In one task, naïf observers watched bouncing balls on a computer monitor with the goal of discovering the cause of concurrently presented beeps, which were generated in real-time by their saccades or by other events (Discovery Task). We manipulated observers’ self-awareness by pre-exposing them to a condition in which beeps depended on gaze direction or by focusing their attention to their own eyes. These manipulations increased propensity to agency discovery. In a second task, which served to monitor agency sensitivity at the sensori-motor level, observers were explicitly asked to detect gaze agency (Detection Task). Both tasks turned out to be well suited to measure both increases and decreases of gaze agency. We did not find evident oculomotor correlates of agency discovery or detection. A strength of our approach is that it probes self-monitoring propensity–difficult to evaluate with traditional tasks based on bodily agency. In addition to putting a lens on this novel cognitive function, measuring gaze agency could reveal subtle self-awareness deficits in pathological conditions and during development. PMID:27812138

  9. Gaze cueing by pareidolia faces.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  10. Gaze cueing by pareidolia faces

    PubMed Central

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process. PMID:25165505

  11. E-ducating the Gaze: The Idea of a Poor Pedagogy

    ERIC Educational Resources Information Center

    Masschelein, Jan

    2010-01-01

    Educating the gaze is easily understood as becoming conscious about what is "really" happening in the world and becoming aware of the way our gaze is itself bound to a perspective and particular position. However, the paper explores a different idea. It understands educating the gaze not in the sense of "educare" (teaching) but of "e-ducere" as…

  12. Can Infants Use a Nonhuman Agent's Gaze Direction to Establish Word-Object Relations?

    ERIC Educational Resources Information Center

    O'Connell, Laura; Poulin-Dubois, Diane; Demke, Tamara; Guay, Amanda

    2009-01-01

    Adopting a procedure developed with human speakers, we examined infants' ability to follow a nonhuman agent's gaze direction and subsequently to use its gaze to learn new words. When a programmable robot acted as the speaker (Experiment 1), infants followed its gaze toward the word referent whether or not it coincided with their own focus of…

  13. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions.

    PubMed

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.

  14. Elevated amygdala response to faces and gaze aversion in autism spectrum disorder.

    PubMed

    Tottenham, Nim; Hertzig, Margaret E; Gillespie-Lynch, Kristen; Gilhooly, Tara; Millner, Alexander J; Casey, B J

    2014-01-01

    Autism spectrum disorders (ASD) are often associated with impairments in judgment of facial expressions. This impairment is often accompanied by diminished eye contact and atypical amygdala responses to face stimuli. The current study used a within-subjects design to examine the effects of natural viewing and an experimental eye-gaze manipulation on amygdala responses to faces. Individuals with ASD showed less gaze toward the eye region of faces relative to a control group. Among individuals with ASD, reduced eye gaze was associated with higher threat ratings of neutral faces. Amygdala signal was elevated in the ASD group relative to controls. This elevated response was further potentiated by experimentally manipulating gaze to the eye region. Potentiation by the gaze manipulation was largest for those individuals who exhibited the least amount of naturally occurring gaze toward the eye region and was associated with their subjective threat ratings. Effects were largest for neutral faces, highlighting the importance of examining neutral faces in the pathophysiology of autism and questioning their use as control stimuli with this population. Overall, our findings provide support for the notion that gaze direction modulates affective response to faces in ASD.

  15. Gaze Behavior of Children with ASD toward Pictures of Facial Expressions

    PubMed Central

    Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi

    2015-01-01

    Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions. PMID:26090223

  16. A model of face selection in viewing video stories.

    PubMed

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-19

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the "peak" face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths; 2) gaze behaviours remained unchanged whether the sound was provided or not; 3) the gaze behaviours were sensitive to time reversal; and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment.
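
    The face-saliency rule in point 4 amounts to a weighted combination of a few per-face features, with gaze predicted to land on the highest-scoring face. The weights and feature values below are invented placeholders, not the fitted model from the paper.

```python
# Toy face-saliency model: score each visible face as a weighted sum
# of size, novelty, head movement, and mouth movement; predict that
# gaze lands on the argmax face in each frame.

WEIGHTS = {"size": 0.4, "novelty": 0.3, "head_move": 0.2, "mouth_move": 0.1}

def saliency(face):
    return sum(WEIGHTS[k] * face[k] for k in WEIGHTS)

def predicted_gaze_target(faces):
    return max(faces, key=lambda name: saliency(faces[name]))

# one invented video frame with three faces (features in [0, 1])
frame = {
    "speaker":  {"size": 0.5, "novelty": 0.1, "head_move": 0.4, "mouth_move": 1.0},
    "listener": {"size": 0.6, "novelty": 0.0, "head_move": 0.1, "mouth_move": 0.0},
    "newcomer": {"size": 0.3, "novelty": 1.0, "head_move": 0.2, "mouth_move": 0.0},
}
print(predicted_gaze_target(frame))  # newcomer: novelty dominates here
```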

  17. Hard to “tune in”: neural mechanisms of live face-to-face interaction with high-functioning autistic spectrum disorder

    PubMed Central

    Tanabe, Hiroki C.; Kosaka, Hirotaka; Saito, Daisuke N.; Koike, Takahiko; Hayashi, Masamichi J.; Izuma, Keise; Komeda, Hidetsugu; Ishitobi, Makoto; Omori, Masao; Munesue, Toshio; Okazawa, Hidehiko; Wada, Yuji; Sadato, Norihiro

    2012-01-01

    Persons with autism spectrum disorders (ASD) are known to have difficulty with eye contact (EC), which may make face-to-face communication difficult for their partners. To elucidate the neural substrates of live inter-subject interaction between ASD patients and normal subjects, we conducted hyper-scanning functional MRI with 21 ASD subjects paired with typically developed (normal) subjects, and with 19 pairs of normal subjects as a control. Baseline EC was maintained while subjects performed a real-time joint-attention task. The task-related effects were modeled out, and inter-individual correlation analysis was performed on the residual time-course data. ASD–Normal pairs were less accurate at detecting gaze direction than Normal–Normal pairs. Performance was impaired both in ASD subjects and in their normal partners. The left occipital pole (OP) activation by gaze processing was reduced in ASD subjects, suggesting that deterioration of eye-cue detection in ASD is related to impairment of early visual processing of gaze. On the other hand, their normal partners showed greater activity in the bilateral occipital cortex and the right prefrontal area, indicating a compensatory workload. Inter-brain coherence in the right IFG that was observed in Normal–Normal pairs (Saito et al., 2010) during EC diminished in ASD–Normal pairs. Intra-brain functional connectivity between the right IFG and right superior temporal sulcus (STS) in normal subjects paired with ASD subjects was reduced compared with Normal–Normal pairs. This functional connectivity was positively correlated with the normal partners' performance on eye-cue detection. Considering the integrative role of the right STS in gaze processing, inter-subject synchronization during EC may be a prerequisite for eye-cue detection by the normal partner. PMID:23060772

  18. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity – Evidence from Gazing Patterns

    PubMed Central

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V.; Hänninen, Laura; Krause, Christina M.; Vainio, Outi

    2016-01-01

    Appropriate response to companions’ emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs’ gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs’ gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that was dependent on the depicted species: threatening conspecifics’ faces evoked heightened attention, whereas threatening human faces evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on understanding the processing of emotional expressions and sensitivity to social threat in non-primates. PMID:26761433

  19. Gaze-based assistive technology used in daily life by children with severe physical impairments - parents' experiences.

    PubMed

    Borgestig, Maria; Rytterström, Patrik; Hemmingsson, Helena

    2017-07-01

    To describe and explore parents' experiences when their children with severe physical impairments receive gaze-based assistive technology (gaze-based AT) for use in daily life. Semi-structured interviews were conducted twice, with one year in between, with parents of eight children with cerebral palsy who used gaze-based AT in their daily activities. To understand the parents' experiences, hermeneutical interpretation was used during data analysis. The findings demonstrate that for parents, children's gaze-based AT usage meant that the children demonstrated agency, had opportunities to show personality and competencies, and gained possibilities to develop. Overall, children's gaze-based AT use gave parents hope for a better future for their children with severe physical impairments; a future in which the children can develop and gain influence in life. Gaze-based AT provides children with new opportunities to perform activities and take the initiative to communicate, giving parents hope about their children's future.

  20. Gaze and visual search strategies of children with Asperger syndrome/high functioning autism viewing a magic trick.

    PubMed

    Joosten, Annette; Girdler, Sonya; Albrecht, Matthew A; Horlin, Chiara; Falkmer, Marita; Leung, Denise; Ordqvist, Anna; Fleischer, Håkan; Falkmer, Torbjörn

    2016-01-01

    To examine visual search patterns and strategies used by children with and without Asperger syndrome/high functioning autism (AS/HFA) while watching a magic trick. Limited responsivity to gaze cues is hypothesised to contribute to social deficits in children with AS/HFA. Twenty-one children with AS/HFA and 31 matched peers viewed a video of a gaze-cued magic trick twice. Between the viewings, they were informed about how the trick was performed. Participants' eye movements were recorded using a head-mounted eye-tracker. Children with AS/HFA looked less frequently and had shorter fixations on the magician's direct and averted gaze during both viewings, and looked more frequently at objects that were not gaze-cued and at areas outside the magician's face. After being informed of how the trick was conducted, both groups made fewer fixations on gaze-cued objects and direct gaze. Information may enhance effective visual strategies in children with and without AS/HFA.

  1. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction. Copyright © 2015 Cognitive Science Society, Inc.

  2. Active head rotations and eye-head coordination

    NASA Technical Reports Server (NTRS)

    Zangemeister, W. H.; Stark, L.

    1981-01-01

    It is pointed out that head movements play an important role in gaze. The interaction between eye and head movements involves both their shared role in directing gaze and the compensatory vestibular ocular reflex. The dynamics of head trajectories are discussed, taking into account the use of parameterization to obtain the peak velocity, peak accelerations, the times of these extrema, and the duration of the movement. Attention is given to the main sequence, neck muscle EMG and details of the head-movement trajectory, types of head model accelerations, the latency of eye and head movement in coordinated gaze, gaze latency as a function of various factors, and coordinated gaze types. Clinical examples of gaze-plane analysis are considered along with the instantaneous change of compensatory eye movement (CEM) gain, and aspects of variability.
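    The parameterization mentioned above (peak velocity, peak acceleration, the times of these extrema, and movement duration) can be sketched for a sampled head trajectory. This is an illustrative reconstruction, not the authors' code; the 5%-of-peak-speed duration criterion and all names are my assumptions.

    ```python
    import numpy as np

    def head_movement_params(pos, fs):
        """Extract main-sequence parameters from a sampled head trajectory.

        pos: head position samples (deg); fs: sampling rate (Hz).
        Duration uses an assumed 5%-of-peak-speed criterion.
        """
        pos = np.asarray(pos, float)
        vel = np.gradient(pos) * fs            # deg/s, central differences
        acc = np.gradient(vel) * fs            # deg/s^2
        speed = np.abs(vel)
        i_pv = int(np.argmax(speed))           # time of peak velocity
        i_pa = int(np.argmax(np.abs(acc)))     # time of peak acceleration
        moving = speed > 0.05 * speed.max()
        idx = np.flatnonzero(moving)
        duration = (idx[-1] - idx[0]) / fs if idx.size else 0.0
        return {"peak_velocity": speed[i_pv], "t_peak_velocity": i_pv / fs,
                "peak_acceleration": np.abs(acc)[i_pa],
                "t_peak_acceleration": i_pa / fs, "duration": duration}
    ```

    For a smooth 30-degree raised-cosine movement, the sketch recovers the expected mid-movement velocity peak.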

  3. FAR and NEAR Target Dynamic Visual Acuity: A Functional Assessment of Canal and Otolith Performance

    NASA Technical Reports Server (NTRS)

    Peters, Brian T.; Brady, Rachel A.; Landsness, Eric C.; Black, F. Owen; Bloomberg, Jacob J.

    2004-01-01

    Upon their return to Earth, astronauts experience the effects of vestibular adaptation to microgravity. The postflight changes in vestibular information processing can affect postural and locomotor stability and may lead to oscillopsia during activities of daily living. However, it is likely that time spent in microgravity affects canal and otolith function differently. As a result, the isolated rotational stimuli used in traditional tests of canal function may fail to identify vestibular deficits after spaceflight. Also, the functional consequences of deficits that are identified often remain unknown. In a gaze control task, the relative contributions of the canal and otolith organs are modulated with viewing distance. The ability to stabilize gaze on visual targets placed at different distances from the head during a perturbation may therefore provide independent insight into the function of these systems. Our goal was to develop a functional measure of gaze control that can also offer independent information about the function of the canal and otolith organs.
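    The viewing-distance dependence mentioned above has a simple small-angle geometry: because the eyes sit in front of the head's rotation axis, stabilizing gaze on a near target requires a higher rotational gain than for a far target. The following sketch is a textbook approximation, not the authors' method; the 0.1 m eye-to-axis offset is an illustrative assumption.

    ```python
    def ideal_vor_gain(target_distance_m: float, eye_axis_offset_m: float = 0.1) -> float:
        """Approximate VOR gain needed for gaze stability (small-angle model).

        For an eye located eye_axis_offset_m in front of the head's rotation
        axis, the required eye-in-head velocity is (d + r) / d times head
        velocity for a target at distance d, i.e. gain = 1 + r / d.
        """
        if target_distance_m <= 0:
            raise ValueError("target distance must be positive")
        return 1.0 + eye_axis_offset_m / target_distance_m
    ```

    Under this model a FAR target (4 m) needs a gain near 1, while a NEAR target (0.25 m) needs roughly 1.4, which is why near-target testing weights otolith-dependent mechanisms more heavily.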

  4. Stabilization of gaze during circular locomotion in darkness. II. Contribution of velocity storage to compensatory eye and head nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. Yaw eye in head (Eh) and head on body velocities (Hb) were measured in two monkeys that ran around the perimeter of a circular platform in darkness. The platform was stationary or could be counterrotated to reduce body velocity in space (Bs) while increasing gait velocity on the platform (Bp). The animals were also rotated while seated in a primate chair at eccentric locations to provide linear and angular accelerations similar to those experienced while running. 2. Both animals had head and eye nystagmus while running in darkness during which slow phase gaze velocity on the body (Gb) partially compensated for body velocity in space (Bs). The eyes, driven by the vestibuloocular reflex (VOR), supplied high-frequency characteristics, bringing Gb up to compensatory levels at the beginning and end of the slow phases. The head provided substantial gaze compensation during the slow phases, probably through the vestibulocollic reflex (VCR). Synchronous eye and head quick phases moved gaze in the direction of running. Head movements occurred consistently only when animals were running. This indicates that active body and limb motion may be essential for inducing the head-eye gaze synergy. 3. Gaze compensation was good when running in both directions in one animal and in one direction in the other animal. The animals had long VOR time constants in these directions. The VOR time constant was short to one side in one animal, and it had poor gaze compensation in this direction. Postlocomotory nystagmus was weaker after running in directions with a long VOR time constant than when the animals were passively rotated in darkness. We infer that velocity storage in the vestibular system had been activated to produce continuous Eh and Hb during running and to counteract postrotatory afterresponses. 4. Continuous compensatory gaze nystagmus was not produced by passive eccentric rotation with the head stabilized or free. 
This indicates that an aspect of active locomotion, most likely somatosensory feedback, was responsible for activating velocity storage. 5. Nystagmus was compared when an animal ran in darkness and in light. The beat frequency of eye and head nystagmus was lower, and the quick phases were larger in darkness. The duration of head and eye quick phases covaried. Eye quick phases were larger when animals ran in darkness than when they were passively rotated. The maximum velocity and duration of eye quick phases were the same in both conditions. 6. The platform was counterrotated under one monkey in darkness while it ran in the direction of its long vestibular time constant. (ABSTRACT TRUNCATED AT 400 WORDS)

  5. No Evidence of Emotional Dysregulation or Aversion to Mutual Gaze in Preschoolers with Autism Spectrum Disorder: An Eye-Tracking Pupillometry Study

    ERIC Educational Resources Information Center

    Nuske, Heather J.; Vivanti, Giacomo; Dissanayake, Cheryl

    2015-01-01

    The "gaze aversion hypothesis", suggests that people with Autism Spectrum Disorder (ASD) avoid mutual gaze because they experience it as hyper-arousing. To test this hypothesis we showed mutual and averted gaze stimuli to 23 mixed-ability preschoolers with ASD ("M" Mullen DQ = 68) and 21 typically-developing preschoolers, aged…

  6. Eye, head, and body coordination during large gaze shifts in rhesus monkeys: movement kinematics and the influence of posture.

    PubMed

    McCluskey, Meaghan K; Cullen, Kathleen E

    2007-04-01

    Coordinated movements of the eye, head, and body are used to redirect the axis of gaze between objects of interest. However, previous studies of eye-head gaze shifts in head-unrestrained primates generally assumed the contribution of body movement to be negligible. Here we characterized eye-head-body coordination during horizontal gaze shifts made by trained rhesus monkeys to visual targets while they sat upright in a standard primate chair and assumed a more natural sitting posture in a custom-designed chair. In both postures, gaze shifts were characterized by the sequential onset of eye, head, and body movements, which could be described by predictable relationships. Body motion made a small but significant contribution to gaze shifts that were ≥40 degrees in amplitude. Furthermore, as gaze shift amplitude increased (40-120 degrees), body contribution and velocity increased systematically. In contrast, peak eye and head velocities plateaued at approximately 250-300 degrees/s, and the rotation of the eye-in-orbit and head-on-body remained well within the physical limits of ocular and neck motility during large gaze shifts, saturating at approximately 35 and 60 degrees, respectively. Gaze shifts initiated with the eye more contralateral in the orbit were accompanied by smaller body and head movement amplitudes, and movement amplitudes and velocities were greater when monkeys were seated in the more natural body posture. Taken together, our findings show that body movement makes a predictable contribution to gaze shifts that is systematically influenced by factors such as orbital position and posture. We conclude that body movements are part of a coordinated series of motor events that are used to voluntarily reorient gaze and that these movements can be significant even in a typical laboratory setting. 
Our results emphasize the need for caution in the interpretation of data from neurophysiological studies of the control of saccadic eye movements and/or eye-head gaze shifts because single neurons can code motor commands to move the body as well as the head and eyes.

  7. Limitations of gaze transfer: without visual context, eye movements do not help to coordinate joint action, whereas mouse movements do.

    PubMed

    Müller, Romy; Helmert, Jens R; Pannasch, Sebastian

    2014-10-01

    Remote cooperation can be improved by transferring the gaze of one participant to the other. However, based on a partner's gaze, an interpretation of his communicative intention can be difficult. Thus, gaze transfer has been inferior to mouse transfer in remote spatial referencing tasks where locations had to be pointed out explicitly. Given that eye movements serve as an indicator of visual attention, it remains to be investigated whether gaze and mouse transfer differentially affect the coordination of joint action when the situation demands an understanding of the partner's search strategies. In the present study, a gaze or mouse cursor was transferred from a searcher to an assistant in a hierarchical decision task. The assistant could use this cursor to guide his movement of a window which continuously opened up the display parts the searcher needed to find the right solution. In this context, we investigated how the ease of using gaze transfer depended on whether a link could be established between the partner's eye movements and the objects he was looking at. Therefore, in addition to the searcher's cursor, the assistant either saw the positions of these objects or only a grey background. When the objects were visible, performance and the number of spoken words were similar for gaze and mouse transfer. However, without them, gaze transfer resulted in longer solution times and more verbal effort as participants relied more strongly on speech to coordinate the window movement. Moreover, an analysis of the spatio-temporal coupling of the transmitted cursor and the window indicated that when no visual object information was available, assistants confidently followed the searcher's mouse but not his gaze cursor. Once again, the results highlight the importance of carefully considering task characteristics when applying gaze transfer in remote cooperation. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Gaze Behavior in a Natural Environment with a Task-Relevant Distractor: How the Presence of a Goalkeeper Distracts the Penalty Taker

    PubMed Central

    Kurz, Johannes; Hegele, Mathias; Munzert, Jörn

    2018-01-01

    Gaze behavior in natural scenes has been shown to be influenced not only by top–down factors such as task demands and action goals but also by bottom–up factors such as stimulus salience and scene context. Whereas gaze behavior in the context of static pictures emphasizes spatial accuracy, gazing in natural scenes seems to rely more on where to direct the gaze involving both anticipative components and an evaluation of ongoing actions. Not much is known about gaze behavior in far-aiming tasks in which multiple task-relevant targets and distractors compete for the allocation of visual attention via gaze. In the present study, we examined gaze behavior in the far-aiming task of taking a soccer penalty. This task contains a proximal target, the ball; a distal target, an empty location within the goal; and a salient distractor, the goalkeeper. Our aim was to investigate where participants direct their gaze in a natural environment with multiple potential fixation targets that differ in task relevance and salience. Results showed that the early phase of the run-up seems to be driven by both the salience of the stimulus setting and the need to perform a spatial calibration of the environment. The late run-up, in contrast, seems to be controlled by attentional demands of the task with penalty takers having habitualized a visual routine that is not disrupted by external influences (e.g., the goalkeeper). In addition, when trying to shoot a ball as accurately as possible, penalty takers directed their gaze toward the ball in order to achieve optimal foot-ball contact. These results indicate that whether gaze is driven by salience of the stimulus setting or by attentional demands depends on the phase of the actual task. PMID:29434560

  9. Speaking and Listening with the Eyes: Gaze Signaling during Dyadic Interactions

    PubMed Central

    Ho, Simon; Foulsham, Tom; Kingstone, Alan

    2015-01-01

    Cognitive scientists have long been interested in the role that eye gaze plays in social interactions. Previous research suggests that gaze acts as a signaling mechanism and can be used to control turn-taking behaviour. However, early research on this topic employed methods of analysis that aggregated gaze information across an entire trial (or trials), which masks any temporal dynamics that may exist in social interactions. More recently, attempts have been made to understand the temporal characteristics of social gaze but little research has been conducted in a natural setting with two interacting participants. The present study combines a temporally sensitive analysis technique with modern eye tracking technology to 1) validate the overall results from earlier aggregated analyses and 2) provide insight into the specific moment-to-moment temporal characteristics of turn-taking behaviour in a natural setting. Dyads played two social guessing games (20 Questions and Heads Up) while their eyes were tracked. Our general results are in line with past aggregated data, and using cross-correlational analysis on the specific gaze and speech signals of both participants we found that 1) speakers end their turn with direct gaze at the listener and 2) the listener in turn begins to speak with averted gaze. Convergent with theoretical models of social interaction, our data suggest that eye gaze can be used to signal both the end and the beginning of a speaking turn during a social interaction. The present study offers insight into the temporal dynamics of live dyadic interactions and also provides a new method of analysis for eye gaze data when temporal relationships are of interest. PMID:26309216
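    A minimal version of the cross-correlational analysis described above could look like the following. This is my own sketch, not the authors' pipeline; the binary speaking/gaze coding is an assumption, and with this convention a positive returned lag means the gaze signal trails the speech signal.

    ```python
    import numpy as np

    def peak_lag(speech: np.ndarray, gaze: np.ndarray, max_lag: int) -> int:
        """Return the lag (in samples) that maximizes the normalized
        cross-correlation between two time-aligned signals."""
        s = (speech - speech.mean()) / (speech.std() + 1e-12)
        g = (gaze - gaze.mean()) / (gaze.std() + 1e-12)
        lags = list(range(-max_lag, max_lag + 1))
        # For lag k, correlate s[t] with g[t + k] over the overlapping span.
        corrs = [np.mean(s[max(0, -k):len(s) - max(0, k)] *
                         g[max(0, k):len(g) - max(0, -k)]) for k in lags]
        return lags[int(np.argmax(corrs))]
    ```

    Applied per dyad to a "speaking" indicator and a "direct gaze" indicator, the sign and size of the peak lag would show whether gaze changes systematically precede or follow turn boundaries.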

  10. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    PubMed

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. To investigate listeners' eye-gaze behaviour during face-to-face conversation with normal, laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speaker and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective regions of interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and its potential implications on auditory-visual speech perception. 
© 2011 Royal College of Speech & Language Therapists.

  11. Electrical stimulation of rhesus monkey nucleus reticularis gigantocellularis. II. Effects on metrics and kinematics of ongoing gaze shifts to visual targets.

    PubMed

    Freedman, Edward G; Quessy, Stephan

    2004-06-01

    Saccade kinematics are altered by ongoing head movements. The hypothesis that a head movement command signal, proportional to head velocity, transiently reduces the gain of the saccadic burst generator (Freedman 2001, Biol Cybern 84:453-462) can account for this observation. Using electrical stimulation of the rhesus monkey nucleus reticularis gigantocellularis (NRG) to alter the head contribution to ongoing gaze shifts, two critical predictions of this gaze control hypothesis were tested. First, this hypothesis predicts that activation of the head command pathway will cause a transient reduction in the gain of the saccadic burst generator. This should alter saccade kinematics by initially reducing velocity without altering saccade amplitude. Second, because this hypothesis does not assume that gaze amplitude is controlled via feedback, the added head contribution (produced by NRG stimulation on the side ipsilateral to the direction of an ongoing gaze shift) should lead to hypermetric gaze shifts. At every stimulation site tested, saccade kinematics were systematically altered in a way that was consistent with transient reduction of the gain of the saccadic burst generator. In addition, gaze shifts produced during NRG stimulation were hypermetric compared with control movements. For example, when targets were briefly flashed 30 degrees from an initial fixation location, gaze shifts during NRG stimulation were on average 140% larger than control movements. These data are consistent with the predictions of the tested hypothesis, and may be problematic for gaze control models that rely on feedback control of gaze amplitude, as well as for models that do not posit an interaction between head commands and the saccade burst generator.
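    The hypothesis being tested (a head-velocity signal transiently reduces the gain of the saccadic burst generator, lowering saccade velocity while motor-error feedback preserves amplitude) can be illustrated with a toy first-order simulation. All constants and names below are illustrative assumptions of mine, not values or code from Freedman (2001).

    ```python
    def simulate_saccade(amplitude_deg, head_velocity_deg_s=0.0,
                         k_burst=30.0, c_head=0.002, dt=0.001):
        """Toy burst-generator model; returns (amplitude, peak velocity, duration).

        The burst is proportional to remaining motor error, so reducing the
        gain slows the movement and lengthens it without changing where it
        ends, which is the signature the stimulation experiments looked for.
        """
        pos, t, peak_v = 0.0, 0.0, 0.0
        gain = max(0.1, 1.0 - c_head * head_velocity_deg_s)  # head vel lowers gain
        while amplitude_deg - pos > 0.01 and t < 1.0:
            v = gain * k_burst * (amplitude_deg - pos)       # burst ∝ motor error
            peak_v = max(peak_v, v)
            pos += v * dt
            t += dt
        return pos, peak_v, t
    ```

    In this sketch a concurrent 200 deg/s head movement leaves the 30-degree amplitude intact but lowers peak velocity and stretches duration, matching the qualitative prediction described in the abstract.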

  12. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    PubMed

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  13. ASB clinical biomechanics award winner 2016: Assessment of gaze stability within 24-48hours post-concussion.

    PubMed

    Murray, Nicholas G; D'Amico, Nathan R; Powell, Douglas; Mormile, Megan E; Grimes, Katelyn E; Munkasy, Barry A; Gore, Russell K; Reed-Jones, Rebecca J

    2017-05-01

    Approximately 90% of athletes with concussion experience some degree of visual system dysfunction immediately post-concussion. Of these abnormalities, gaze stability deficits are noted as among the most common. Little research quantitatively explores these variables post-concussion. As such, the purpose of this study was to investigate and compare gaze stability between a control group of healthy non-injured athletes and a group of athletes with concussions 24-48 hours post-injury. Ten collegiate NCAA Division I athletes with concussions and ten healthy control collegiate athletes completed two trials of a sport-like antisaccade postural control task, the Wii Fit Soccer Heading Game. During play, all participants were instructed to minimize gaze deviations away from a central fixed area. Athletes with concussions were assessed within 24-48 hours post-concussion, while healthy control data were collected during pre-season athletic screening. Raw ocular point-of-gaze coordinates were tracked with a monocular eye-tracking device (240 Hz) and motion capture during the postural task to determine the instantaneous gaze coordinates. These data were exported and analyzed using a custom algorithm. Independent t-tests analyzed gaze resultant distance, prosaccade errors, mean vertical velocity, and mean horizontal velocity. Athletes with concussions had significantly greater gaze resultant distance (p=0.006), prosaccade errors (p<0.001), and horizontal velocity (p=0.029) when compared to healthy controls. These data suggest that athletes with concussions had less control of gaze during play of the Wii Fit Soccer Heading Game. This could indicate a gaze stability deficit, via potentially reduced cortical inhibition, that is present within 24-48 hours post-concussion. Copyright © 2017 Elsevier Ltd. All rights reserved.
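    The kind of gaze metrics reported above can be sketched as follows; the function name, coordinate conventions, and the use of simple frame-to-frame differences for velocity are my assumptions, not the authors' custom algorithm.

    ```python
    import numpy as np

    def gaze_metrics(x, y, center=(0.0, 0.0), fs=240.0):
        """Summarize point-of-gaze samples relative to a central fixed area.

        x, y: instantaneous gaze coordinates; fs: sampling rate (Hz).
        Returns mean resultant distance from the center and mean absolute
        horizontal/vertical gaze velocities (units per second).
        """
        x, y = np.asarray(x, float), np.asarray(y, float)
        dist = np.hypot(x - center[0], y - center[1])   # sample-wise distance
        vx = np.abs(np.diff(x)) * fs                    # frame-to-frame velocity
        vy = np.abs(np.diff(y)) * fs
        return {"mean_resultant_distance": dist.mean(),
                "mean_horizontal_velocity": vx.mean(),
                "mean_vertical_velocity": vy.mean()}
    ```

    Larger resultant distances and velocities under this scheme correspond to the poorer gaze control the study reports for concussed athletes.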

  14. The attracting power of the gaze of politicians is modulated by the personality and ideological attitude of their voters: a functional magnetic resonance imaging study.

    PubMed

    Cazzato, Valentina; Liuzza, Marco Tullio; Caprara, Gian Vittorio; Macaluso, Emiliano; Aglioti, Salvatore Maria

    2015-10-01

    Observing someone rapidly moving their eyes induces reflexive shifts of overt and covert attention in the onlooker. Previous studies have shown that this process can be modulated by the onlooker's personality, as well as by the social features of the person depicted in the cued face. Here, we investigated whether an individual's preference for social dominance orientation, in-group perceived similarity (PS), and political affiliation of the cued-face modulated neural activity within specific nodes of the social attention network. During functional magnetic resonance imaging, participants were requested to perform a gaze-following task to investigate whether the directional gaze of various Italian political personages might influence the oculomotor behaviour of in-group or out-group voters. After scanning, we acquired measures of PS in personality traits with each political personage and preference for social dominance orientation. Behavioural data showed that higher gaze interference for in-group than out-group political personages was predicted by a higher preference for social hierarchy. Higher blood oxygenation level-dependent activity in incongruent vs. congruent conditions was found in areas associated with orienting to socially salient events and monitoring response conflict, namely the left frontal eye field, right supramarginal gyrus, mid-cingulate cortex and left anterior insula. Interestingly, higher ratings of PS with the in-group and less preference for social hierarchy predicted increased activity in the left frontal eye field during distracting gaze movements of in-group as compared with out-group political personages. Our results suggest that neural activity in the social orienting circuit is modulated by higher-order social dimensions, such as in-group PS and individual differences in ideological attitudes. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Self-Regulation and Infant-Directed Singing in Infants with Down Syndrome.

    PubMed

    de l'Etoile, Shannon K

    2015-01-01

    Infants learn how to regulate internal states and subsequent behavior through dyadic interactions with caregivers. During infant-directed (ID) singing, mothers help infants practice attentional control and arousal modulation, thus providing critical experience in self-regulation. Infants with Down syndrome are known to have attention deficits and delayed information processing as well as difficulty managing arousability, factors that may disrupt their efforts at self-regulation. The researcher explored responses to ID singing in infants with Down syndrome (DS) and compared them with those of typically developing (TD) infants. Behaviors measured included infant gaze and affect as indicators of self-regulation. Participants included 3- to 9-month-old infants with and without DS who were videotaped throughout a 2-minute face-to-face interaction during which their mothers sang to them any song(s) of their choosing. Infant behavior was then coded for percentage of time spent demonstrating a specific gaze or affect type. All infants displayed sustained gaze more than any other gaze type. TD infants demonstrated intermittent gaze significantly more often than infants with DS. Infant status had no effect on affect type, and all infants showed predominantly neutral affect. Findings suggest that ID singing effectively maintains infant attention for both TD infants and infants with DS. However, infants with DS may have difficulty shifting attention during ID singing as needed to adjust arousal levels and self-regulate. High levels of neutral affect for all infants imply that ID singing is likely to promote a calm, curious state, regardless of infant status. © the American Music Therapy Association 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. GazeAppraise v. 0.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, Andrew; Haass, Michael; Rintoul, Mark Daniel

    GazeAppraise advances the state of the art of gaze pattern analysis using methods that simultaneously analyze spatial and temporal characteristics of gaze patterns. GazeAppraise enables novel research in visual perception and cognition; for example, using shape features as distinguishing elements to assess individual differences in visual search strategy. Given a set of point-to-point gaze sequences, hereafter referred to as scanpaths, the method constructs multiple descriptive features for each scanpath. Once the scanpath features have been calculated, they are used to form a multidimensional vector representing each scanpath, and cluster analysis is performed on the set of vectors from all scanpaths. An additional benefit of this method is the identification of causal or correlated characteristics of the stimuli, subjects, and visual task through statistical analysis of descriptive metadata distributions within and across clusters.
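    The described pipeline (per-scanpath descriptive feature vectors followed by cluster analysis) might be sketched like this. The specific shape features and the tiny k-means below are illustrative choices of mine, not GazeAppraise internals.

    ```python
    import numpy as np

    def scanpath_features(points):
        """points: (n, 2) fixation coordinates in temporal order -> feature vector."""
        p = np.asarray(points, float)
        steps = np.diff(p, axis=0)
        lengths = np.hypot(steps[:, 0], steps[:, 1])
        extent = p.max(axis=0) - p.min(axis=0)
        return np.array([
            lengths.sum(),                    # total path length
            lengths.mean(),                   # mean saccade amplitude
            extent[0] / (extent[1] + 1e-9),   # bounding-box aspect ratio
        ])

    def kmeans(X, k, iters=50, seed=0):
        """Minimal k-means for clustering the scanpath feature vectors."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return labels
    ```

    With features like these, predominantly horizontal sweeps and predominantly vertical sweeps land in different clusters, which is the sort of shape-based separation of search strategies the record describes.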

  17. Considerations for the Use of Remote Gaze Tracking to Assess Behavior in Flight Simulators

    NASA Technical Reports Server (NTRS)

    Kalar, Donald J.; Liston, Dorion; Mulligan, Jeffrey B.; Beutter, Brent; Feary, Michael

    2016-01-01

    Complex user interfaces (such as those found in an aircraft cockpit) may be designed from first principles, but inevitably must be evaluated with real users. User gaze data can provide valuable information that can help to interpret other actions that change the state of the system. However, care must be taken to ensure that any conclusions drawn from gaze data are well supported. Through a combination of empirical and simulated data, we identify several considerations and potential pitfalls when measuring gaze behavior in high-fidelity simulators. We show that physical layout, behavioral differences, and noise levels can all substantially alter the quality of fit for algorithms that segment gaze measurements into individual fixations. We provide guidelines to help investigators ensure that conclusions drawn from gaze tracking data are not artifactual consequences of data quality or analysis techniques.
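    The fixation-segmentation algorithms whose quality of fit the authors examine are typically dispersion-based. A minimal sketch of the common I-DT (dispersion-threshold) approach is below; the threshold values are illustrative, and, as the record argues, their interaction with noise level is exactly what an investigator must check.

```python
def idt_fixations(x, y, t, max_disp=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    `max_disp` is the maximum allowed dispersion (same units as x/y,
    e.g. degrees) and `min_dur` the minimum fixation duration in
    seconds; both thresholds are illustrative defaults. Returns
    (start_index, end_index) pairs with end exclusive.
    """
    fixations = []
    i, n = 0, len(x)
    while i < n:
        j = i + 1
        # grow the window while its bounding-box dispersion stays small
        while j < n and ((max(x[i:j + 1]) - min(x[i:j + 1])) +
                         (max(y[i:j + 1]) - min(y[i:j + 1]))) <= max_disp:
            j += 1
        if t[j - 1] - t[i] >= min_dur:
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```

    Higher sample noise inflates the window dispersion and fragments fixations, which is one way poor data quality produces artifactual "behavioral" differences.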

  18. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion.

    PubMed

    Klostermann, André; Küng, Philip

    2017-03-01

    This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a plane surface. The stable gaze at 5 different areas of interest was calculated regarding its relative duration as well as its temporal order. During the approach phase, an interaction between area of interest and obstacle condition, F(3, 24) = 12.91, p < .05, ηp² = .62, was found, with longer stable-gaze locations at the takeoff area in attempts with an obstacle (p < .05, ηp² = .47). In contrast, in attempts over a plane surface, longer stable-gaze locations at the skateboard were revealed (p < .05, ηp² = .73). Regarding the trick difficulty factor, the skateboarders descriptively showed longer stable-gaze locations at the skateboard for the "Kickflip" than for the "Ollie" in the no-obstacle condition only (p > .05, d = 0.74). Finally, during the jump phase, neither obstacle condition nor trick difficulty affected gaze behavior differentially. This study underlines the functional adaptability of the visuomotor system to changing demands in highly dynamic situations. As a function of certain constraints, different gaze strategies were observed that can be considered as highly relevant for successfully performing skateboard jumps.

  19. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
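    The conventional PCCR baseline that this record contrasts against can be sketched in a few lines. In PCCR, the gaze cue is the vector from the corneal reflection (glint) to the pupil center; mapping that vector to screen coordinates is what normally requires the per-user calibration the paper seeks to avoid. The polynomial mapping and its coefficient layout below are illustrative assumptions, not the paper's method.

```python
def pccr_vector(pupil_center, glint_center):
    """Gaze cue of the classic PCCR approach: the vector from the
    corneal reflection (glint) to the pupil center. Because the glint
    is roughly fixed relative to the camera, this vector changes with
    eye rotation but is fairly insensitive to small head translations.
    """
    px, py = pupil_center
    gx, gy = glint_center
    return (px - gx, py - gy)

def map_to_screen(vec, coeffs):
    """Illustrative second-order polynomial mapping from the PCCR
    vector to screen coordinates; `coeffs` (two rows of six
    coefficients) would come from the user calibration procedure.
    """
    vx, vy = vec
    basis = (1.0, vx, vy, vx * vy, vx * vx, vy * vy)
    return tuple(sum(c * b for c, b in zip(row, basis)) for row in coeffs)
```

    Glare, blur, and lighting changes corrupt the pupil and glint estimates feeding this pipeline, which is the failure mode motivating the deep-learning alternative.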

  20. A comparison of facial color pattern and gazing behavior in canid species suggests gaze communication in gray wolves (Canis lupus).

    PubMed

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication.

  1. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods. PMID:29401681

  2. Vicarious Social Touch Biases Gazing at Faces and Facial Emotions.

    PubMed

    Schirmer, Annett; Ng, Tabitha; Ebstein, Richard P

    2018-02-01

    Research has suggested that interpersonal touch promotes social processing and other-concern, and that women may respond to it more sensitively than men. In this study, we asked whether this phenomenon would extend to third-party observers who experience touch vicariously. In an eye-tracking experiment, participants (N = 64, 32 men and 32 women) viewed prime and target images with the intention of remembering them. Primes comprised line drawings of dyadic interactions with and without touch. Targets comprised two faces shown side-by-side, with one being neutral and the other being happy or sad. Analysis of prime fixations revealed that faces in touch interactions attracted longer gazing than faces in no-touch interactions. In addition, touch enhanced gazing at the area of touch in women but not men. Analysis of target fixations revealed that touch priming increased looking at both faces immediately after target onset, and subsequently, at the emotional face in the pair. Sex differences in target processing were nonsignificant. Together, the present results imply that vicarious touch biases visual attention to faces and promotes emotion sensitivity. In addition, they suggest that, compared with men, women are more aware of tactile exchanges in their environment. As such, vicarious touch appears to share important qualities with actual physical touch. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex

    PubMed Central

    Morin, Elyse L.; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G.; Bell, Andrew H.

    2015-01-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which being sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. PMID:24836688

  4. Brief Report: Using a Point-of-View Camera to Measure Eye Gaze in Young Children with Autism Spectrum Disorder during Naturalistic Social Interactions--A Pilot Study

    ERIC Educational Resources Information Center

    Edmunds, Sarah R.; Rozga, Agata; Li, Yin; Karp, Elizabeth A.; Ibanez, Lisa V.; Rehg, James M.; Stone, Wendy L.

    2017-01-01

    Children with autism spectrum disorder (ASD) show reduced gaze to social partners. Eye contact during live interactions is often measured using stationary cameras that capture various views of the child, but determining a child's precise gaze target within another's face is nearly impossible. This study compared eye gaze coding derived from…

  5. The Brainstem Switch for Gaze Shifts in Humans

    DTIC Science & Technology

    2001-10-25

    THE BRAINSTEM SWITCH FOR GAZE SHIFTS IN HUMANS. A. N. Kumar (1), R. J. Leigh (1,2), S. Ramat (3). Departments of (1) Biomedical Engineering, Case... omnipause neurons during gaze shifts. Using the scleral search coil technique, eye movements were measured in seven normal subjects as they made voluntary, disjunctive gaze shifts comprising saccades and vergence movements. Conjugate oscillations of small amplitude and high frequency were identified

  6. Visual–Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey

    PubMed Central

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P.; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-01-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual–motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. PMID:25491118

  7. A model of face selection in viewing video stories

    PubMed Central

    Suda, Yuki; Kitazawa, Shigeru

    2015-01-01

    When typical adults watch TV programs, they show surprisingly stereotyped gaze behaviours, as indicated by the almost simultaneous shifts of their gazes from one face to another. However, a standard saliency model based on low-level physical features alone failed to explain such typical gaze behaviours. To find rules that explain the typical gaze behaviours, we examined temporo-spatial gaze patterns in adults while they viewed video clips with human characters that were played with or without sound, and in the forward or reverse direction. We here show the following: 1) the “peak” face scanpath, which followed the face that attracted the largest number of views but ignored other objects in the scene, still retained the key features of actual scanpaths, 2) gaze behaviours remained unchanged whether the sound was provided or not, 3) the gaze behaviours were sensitive to time reversal, and 4) nearly 60% of the variance of gaze behaviours was explained by the face saliency that was defined as a function of its size, novelty, head movements, and mouth movements. These results suggest that humans share a face-oriented network that integrates several visual features of multiple faces, and directs our eyes to the most salient face at each moment. PMID:25597621
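    A face-saliency model of the kind this record describes, scoring each face from size, novelty, head movements, and mouth movements and predicting gaze toward the highest-scoring face, can be sketched as below. The weights and the linear form are assumptions for illustration; the paper fits the actual contribution of each feature to the observed gaze data.

```python
def face_saliency(size, novelty, head_motion, mouth_motion,
                  w=(0.4, 0.3, 0.2, 0.1)):
    """Illustrative face-saliency score: a weighted sum of the four
    features named in the model. The weights are made up for this
    sketch, not taken from the paper."""
    return (w[0] * size + w[1] * novelty +
            w[2] * head_motion + w[3] * mouth_motion)

def most_salient_face(faces, **kw):
    """Predicted gaze target at a given moment: the index of the face
    with the highest saliency score."""
    return max(range(len(faces)), key=lambda i: face_saliency(*faces[i], **kw))
```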

  8. Investigating the Link Between Radiologists Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted the radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging the radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
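    The modeling idea, concatenating image features with per-reader gaze metrics and predicting whether an interpretation was an error, can be sketched with a minimal logistic-regression learner. This is a stand-in for whatever learners the study actually compared; the feature layout is assumed.

```python
import numpy as np

def train_error_model(X, y, lr=0.5, iters=500):
    """Logistic regression by gradient descent. Each row of X would
    concatenate computer-extracted image features with a reader's gaze
    metrics (e.g. dwell time) and opinion; y marks whether the
    interpretation was an error. Illustrative only."""
    X = np.hstack([np.ones((len(X), 1)), np.asarray(X, float)])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted error probability
        w -= lr * X.T @ (p - y) / len(y)        # gradient step
    return w

def predict_error(w, features):
    """Classify one case from its combined feature vector."""
    f = np.concatenate([[1.0], features])
    return 1.0 / (1.0 + np.exp(-f @ w)) > 0.5
```

    Training one model per reader versus pooling all readers corresponds to the individualized versus group-based models the study contrasts.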

  9. Examining the influence of a spatially irrelevant working memory load on attentional allocation.

    PubMed

    McDonnell, Gerald P; Dodd, Michael D

    2013-08-01

    The present study examined the influence of holding task-relevant gaze cues in working memory during a target detection task. Gaze cues shift attention in gaze-consistent directions, even when they are irrelevant to a primary detection task. It is unclear, however, whether gaze cues need to be perceived online to elicit these effects, or how these effects may be moderated if the gaze cues are relevant to a secondary task. In Experiment 1, participants encoded a face for a subsequent memory task, after which they performed an unrelated target detection task. Critically, gaze direction was irrelevant to the target detection task, but memory for the perceived face was tested at trial conclusion. Surprisingly, participants exhibited inhibition-of-return (IOR) and not facilitation, with slower response times for the gazed-at location. In Experiment 2, presentation duration and cue-target stimulus-onset asynchrony were manipulated, and we continued to observe IOR with no early facilitation. Experiment 3 revealed facilitation but not IOR when the memory task was removed; Experiment 4 also revealed facilitation when the gaze cue memory task was replaced with arrow cues. The present experiments provide an important dissociation between perceiving cues online versus holding them in memory as it relates to attentional allocation. ((c) 2013 APA, all rights reserved).

  10. Use of a Remote Eye-Tracker for the Analysis of Gaze during Treadmill Walking and Visual Stimuli Exposition.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2016-01-01

    The knowledge of the visual strategies adopted while walking in cognitively engaging environments is extremely valuable. Analyzing gaze when a treadmill and a virtual reality environment are used as motor rehabilitation tools is therefore critical. Being completely unobtrusive, remote eye-trackers are the most appropriate way to measure the point of gaze. Still, the point of gaze measurements are affected by experimental conditions such as head range of motion and visual stimuli. This study assesses the usability limits and measurement reliability of a remote eye-tracker during treadmill walking while visual stimuli are projected. During treadmill walking, the head remained within the remote eye-tracker workspace. Generally, the quality of the point of gaze measurements declined as the distance from the remote eye-tracker increased and data loss occurred for large gaze angles. The stimulus location (a dot-target) did not influence the point of gaze accuracy, precision, and trackability during both standing and walking. Similar results were obtained when the dot-target was replaced by a static or moving 2D target and "region of interest" analysis was applied. These findings foster the feasibility of the use of a remote eye-tracker for the analysis of gaze during treadmill walking in virtual reality environments.
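    The point-of-gaze quality measures this record assesses, accuracy and precision, have standard working definitions that can be sketched directly: accuracy as the mean offset of measured gaze from a known target, precision as the RMS of sample-to-sample distances. These are common illustrative definitions; the study's exact formulations may differ.

```python
import numpy as np

def gaze_quality(gaze, target):
    """Accuracy and precision of point-of-gaze measurements.

    `gaze` is an (N, 2) array of gaze samples recorded while the
    participant looked at the known `target` position. Accuracy is
    the mean offset from the target; precision is the RMS of
    sample-to-sample distances (dispersion/noise). Units follow the
    input coordinates, e.g. degrees of visual angle.
    """
    gaze = np.asarray(gaze, float)
    offsets = np.linalg.norm(gaze - np.asarray(target, float), axis=1)
    accuracy = offsets.mean()
    s2s = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
    precision_rms = np.sqrt(np.mean(s2s ** 2))
    return accuracy, precision_rms
```

    Computing these per target location and per head distance is one way to map the usability limits of a remote eye-tracker workspace, as done here for treadmill walking.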

  11. Coordination of gaze and speech in communication between children with hearing impairment and normal-hearing peers.

    PubMed

    Sandgren, Olof; Andersson, Richard; van de Weijer, Joost; Hansson, Kristina; Sahlén, Birgitta

    2014-06-01

    To investigate gaze behavior during communication between children with hearing impairment (HI) and normal-hearing (NH) peers. Ten HI-NH and 10 NH-NH dyads performed a referential communication task requiring description of faces. During task performance, eye movements and speech were tracked. Using verbal event (questions, statements, back channeling, and silence) as the predictor variable, group characteristics in gaze behavior were expressed with Kaplan-Meier survival functions (estimating time to gaze-to-partner) and odds ratios (comparing number of verbal events with and without gaze-to-partner). Analyses compared the listeners in each dyad (HI: n = 10, mean age = 12;6 years, mean better ear pure-tone average = 33.0 dB HL; NH: n = 10, mean age = 13;7 years). Log-rank tests revealed significant group differences in survival distributions for all verbal events, reflecting a higher probability of gaze to the partner's face for participants with HI. Expressed as odds ratios (OR), participants with HI displayed greater odds for gaze-to-partner (ORs ranging between 1.2 and 2.1) during all verbal events. The results show an increased probability for listeners with HI to gaze at the speaker's face in association with verbal events. Several explanations for the finding are possible, and implications for further research are discussed.
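    The odds-ratio comparison used in this record, comparing verbal events with and without gaze-to-partner across the two listener groups, reduces to a simple 2x2 computation. The sketch below assumes raw event counts per group; the study's estimates were model-based, so this is only the underlying arithmetic.

```python
def odds_ratio(hi_with_gaze, hi_without_gaze, nh_with_gaze, nh_without_gaze):
    """Odds ratio for gaze-to-partner during verbal events, HI group
    versus NH group. Inputs are counts of events with and without a
    gaze to the partner's face in each group (hypothetical counts,
    for illustration)."""
    odds_hi = hi_with_gaze / hi_without_gaze
    odds_nh = nh_with_gaze / nh_without_gaze
    return odds_hi / odds_nh
```

    An odds ratio above 1 (the study reports values between 1.2 and 2.1) means listeners with hearing impairment were more likely to gaze at the speaker's face during a given verbal event.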

  12. [Case of acute ophthalmoparesis with gaze nystagmus].

    PubMed

    Ikuta, Naomi; Tada, Yukiko; Koga, Michiaki

    2012-01-01

    A 61-year-old man developed double vision subsequent to a diarrheal illness. Mixed horizontal-vertical gaze palsy in both eyes, diminution of tendon reflexes, and gaze nystagmus were noted. His horizontal gaze palsy was accompanied by gaze nystagmus in the abducent direction, indicative of a disturbance in the central nervous system. Neither limb weakness nor ataxia was noted. Serum anti-GQ1b antibody was detected. Brain magnetic resonance imaging (MRI) findings were normal. The patient was diagnosed as having acute ophthalmoparesis. The ophthalmoparesis and nystagmus gradually disappeared over 3 months. The accompanying nystagmus suggests that central nervous system disturbance may also be present in acute ophthalmoparesis.

  13. Europa and Callisto under the Watchful Gaze of Jupiter

    NASA Image and Video Library

    2000-12-21

    One moment in an ancient orbital dance is caught in this color picture taken by NASA's Cassini spacecraft on Dec. 7, 2000, just as two of Jupiter's four major moons, Europa and Callisto, were nearly perfectly aligned with each other.

  14. Neurocognitive mechanisms behind emotional attention: Inverse effects of anodal tDCS over the left and right DLPFC on gaze disengagement from emotional faces.

    PubMed

    Sanchez-Lopez, Alvaro; Vanderhasselt, Marie-Anne; Allaert, Jens; Baeken, Chris; De Raedt, Rudi

    2018-06-01

    Attention to relevant emotional information in the environment is an important process related to vulnerability and resilience for mood and anxiety disorders. In the present study, the effects of left and right dorsolateral prefrontal cortex (i.e., DLPFC) stimulation on attentional mechanisms of emotional processing were tested and contrasted. A sample of 54 healthy participants received 20 min of active and sham anodal transcranial direct current stimulation (i.e., tDCS) either of the left (n = 27) or of the right DLPFC (n = 27) on two separate days. The anode electrode was placed over the left or the right DLPFC, the cathode over the corresponding contralateral supraorbital area. After each neurostimulation session, participants completed an eye-tracking task assessing direct processes of attentional engagement towards and attentional disengagement away from emotional faces (happy, disgusted, and sad expressions). Compared to sham, active tDCS over the left DLPFC led to faster gaze disengagement, whereas active tDCS over the right DLPFC led to slower gaze disengagement from emotional faces. Between-group comparisons showed that such inverse change patterns were significantly different and generalized across all types of emotion. Our findings support a lateralized role of left and right DLPFC activity in enhancing/worsening the top-down regulation of emotional attention processing. These results support the rationale of new therapies for affective disorders aimed to increase the activation of the left over the right DLPFC in combination with attentional control training, and identify specific target attention mechanisms to be trained.

  15. Anxiety and sensitivity to gaze direction in emotionally expressive faces.

    PubMed

    Fox, Elaine; Mathews, Andrew; Calder, Andrew J; Yiend, Jenny

    2007-08-01

    This study investigated the role of neutral, happy, fearful, and angry facial expressions in enhancing orienting to the direction of eye gaze. Photographs of faces with either direct or averted gaze were presented. A target letter (T or L) appeared unpredictably to the left or the right of the face, either 300 ms or 700 ms after gaze direction changed. Response times were faster in congruent conditions (i.e., when the eyes gazed toward the target) relative to incongruent conditions (when the eyes gazed away from the target letter). Facial expression did influence reaction times, but these effects were qualified by individual differences in self-reported anxiety. High trait-anxious participants showed an enhanced orienting to the eye gaze of faces with fearful expressions relative to all other expressions. In contrast, when the eyes stared straight ahead, trait anxiety was associated with slower responding when the facial expressions depicted anger. Thus, in anxiety-prone people attention is more likely to be held by an expression of anger, whereas attention is guided more potently by fearful facial expressions. ((c) 2007 APA, all rights reserved).

  16. Design and Evaluation of Fusion Approach for Combining Brain and Gaze Inputs for Target Selection

    PubMed Central

    Évain, Andéol; Argelaguet, Ferran; Casiez, Géry; Roussel, Nicolas; Lécuyer, Anatole

    2016-01-01

    Gaze-based interfaces and Brain-Computer Interfaces (BCIs) allow for hands-free human–computer interaction. In this paper, we investigate the combination of gaze and BCIs. We propose a novel selection technique for 2D target acquisition based on input fusion. This new approach combines the probabilistic models for each input, in order to better estimate the intent of the user. We evaluated its performance against the existing gaze and brain–computer interaction techniques. Twelve participants took part in our study, in which they had to search and select 2D targets with each of the evaluated techniques. Our fusion-based hybrid interaction technique was found to be more reliable than the previous gaze and BCI hybrid interaction techniques for 10 of 12 participants, while being 29% faster on average. However, similarly to what has been observed in hybrid gaze-and-speech interaction, the gaze-only interaction technique still provides the best performance. Our results should encourage the use of input fusion, as opposed to sequential interaction, in order to design better hybrid interfaces. PMID:27774048
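    The input-fusion idea, combining per-target probability estimates from the gaze model and the BCI classifier rather than using the inputs sequentially, can be sketched with a normalized-product (naive-Bayes-style) combination. The paper's exact combination rule may differ; this is an assumption for illustration.

```python
def fuse_probabilities(p_gaze, p_bci):
    """Combine two probability distributions over the same candidate
    targets by normalized product. Assumes the gaze and BCI estimates
    are conditionally independent given the intended target."""
    joint = [g * b for g, b in zip(p_gaze, p_bci)]
    total = sum(joint)
    return [j / total for j in joint]

def select_target(p_gaze, p_bci):
    """Selected target: the index with the highest fused probability."""
    fused = fuse_probabilities(p_gaze, p_bci)
    return max(range(len(fused)), key=fused.__getitem__)
```

    When the two inputs disagree, the product downweights targets that either model considers unlikely, which is what makes the fused estimate more reliable than taking either input alone.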

  17. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed Central

    Galvez-Ruiz, Alberto

    2015-01-01

    Purpose: The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). Methods: The clinical characteristics and management are presented of nine patients with a history of CET secondary to motor vehicle accidents with associated vertical gaze paresis. Results: Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region in all nine patients, corresponding to the artery of Percheron region; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of the upward and downward gaze to a slight limitation of upward gaze. Discussion: Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of CET. PMID:26180479

  18. Post-traumatic Vertical Gaze Paresis in Nine Patients: Special Vulnerability of the Artery of Percheron in Trauma?

    PubMed

    Galvez-Ruiz, Alberto

    2015-01-01

    The purpose was to present a case series of vertical gaze paresis in patients with a history of cranioencephalic trauma (CET). The clinical characteristics and management are presented of nine patients with a history of CET secondary to motor vehicle accidents with associated vertical gaze paresis. Neuroimaging studies indicated posttraumatic contusion of the thalamic-mesencephalic region in all nine patients, corresponding to the artery of Percheron region; four patients had signs of hemorrhagic transformation. Vertical gaze paresis was present in all patients, ranging from complete paralysis of the upward and downward gaze to a slight limitation of upward gaze. Posttraumatic vertical gaze paresis is a rare phenomenon that can occur in isolation or in association with other neurological deficits and can cause a significant limitation in quality of life. Studies in the literature have postulated that the unique anatomy of the angle of penetration of the thalamoperforating and lenticulostriate arteries makes these vessels more vulnerable to isolated selective damage in certain individuals and can cause specific patterns of CET.

  19. Gaze patterns reveal how situation models and text representations contribute to episodic text memory.

    PubMed

    Johansson, Roger; Oren, Franziska; Holmqvist, Kenneth

    2018-06-01

    When recalling something you have previously read, to what degree will such episodic remembering activate a situation model of described events versus a memory representation of the text itself? The present study was designed to address this question by recording eye movements of participants who recalled previously read texts while looking at a blank screen. An accumulating body of research has demonstrated that spontaneous eye movements occur during episodic memory retrieval and that fixation locations from such gaze patterns to a large degree overlap with the visuospatial layout of the recalled information. Here we used this phenomenon to investigate to what degree participants' gaze patterns corresponded with the visuospatial configuration of the text itself versus a visuospatial configuration described in it. The texts to be recalled were scene descriptions, where the spatial configuration of the scene content was manipulated to be either congruent or incongruent with the spatial configuration of the text itself. Results show that participants' gaze patterns were more likely to correspond with a visuospatial representation of the described scene than with a visuospatial representation of the text itself, but also that the contribution of those representations of space is sensitive to the text content. This is the first demonstration that eye movements can be used to discriminate on which representational level texts are remembered and the findings provide novel insight into the underlying dynamics in play. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Abnormal center-periphery gradient in spatial attention in simultanagnosia.

    PubMed

    Balslev, Daniela; Odoj, Bartholomaeus; Rennig, Johannes; Karnath, Hans-Otto

    2014-12-01

    Patients suffering from simultanagnosia cannot perceive more than one object at a time. The underlying mechanism is incompletely understood. One hypothesis is that simultanagnosia reflects "tunnel vision," a constricted attention window around gaze, which precludes the grouping of individual objects. Although this idea has a long history in neuropsychology, the question whether the patients indeed have an abnormal attention gradient around the gaze has so far not been addressed. Here we tested this hypothesis in two simultanagnosia patients with bilateral parieto-occipital lesions and two control groups, with and without brain damage. We assessed the participants' ability to discriminate letters presented briefly at fixation with and without a peripheral distractor or in the visual periphery, with or without a foveal distractor. A constricted span of attention around gaze would predict an increased susceptibility to foveated versus peripheral distractors. Contrary to this prediction and unlike both control groups, the patients' ability to discriminate the target decreased more in the presence of peripheral compared with foveated distractors. Thus, the attentional spotlight in simultanagnosia does not fall on foveated objects as previously assumed, but rather abnormally highlights the periphery. Furthermore, we found the same center-periphery gradient in the patients' ability to recognize multiple objects. They detected multiple, but not single objects more accurately in the periphery than at fixation. These results suggest that an abnormal allocation of attention around the gaze can disrupt the grouping of individual objects into an integrated visual scene.

  1. Sexual affordances, perceptual-motor invariance extraction and intentional nonlinear dynamics: sexually deviant and non-deviant patterns in male subjects.

    PubMed

    Renaud, Patrice; Goyette, Mathieu; Chartier, Sylvain; Zhornitski, Simon; Trottier, Dominique; Rouleau, Joanne-L; Proulx, Jean; Fedoroff, Paul; Bradford, John-P; Dassylva, Benoit; Bouchard, Stephane

    2010-10-01

    Sexual arousal and gaze behavior dynamics are used to characterize deviant sexual interests in male subjects. Pedophile patients and non-deviant subjects are immersed with virtual characters depicting relevant sexual features. Gaze behavior dynamics, as indexed by correlation dimensions (D2), appear to be fractal in nature and significantly different from colored noise (surrogate data tests and recurrence plot analyses were performed). This perceptual-motor fractal dynamics parallels sexual arousal and differs between pedophile patients and non-deviant subjects when critical sexual information is processed. Results are interpreted in terms of sexual affordances, perceptual-motor invariance extraction and intentional nonlinear dynamics.
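
    The D2 analysis described above can be sketched with the Grassberger-Procaccia algorithm: delay-embed the gaze time series, compute the correlation sum C(r), and fit its log-log slope; comparison against phase-randomized surrogates tests whether the estimate differs from colored noise. The embedding dimension, lag, and radius range below are illustrative choices, not the parameters used in the study.

```python
import numpy as np

def delay_embed(series, emb_dim=3, lag=1):
    """Time-delay embedding of a scalar series into emb_dim dimensions."""
    n = len(series) - (emb_dim - 1) * lag
    return np.column_stack([series[i * lag : i * lag + n] for i in range(emb_dim)])

def correlation_dimension(series, emb_dim=3, lag=1, n_radii=15):
    """Grassberger-Procaccia estimate of the correlation dimension D2:
    the log-log slope of the correlation sum C(r) over a mid-range of radii."""
    emb = delay_embed(np.asarray(series, float), emb_dim, lag)
    diff = emb[:, None, :] - emb[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    d = dist[np.triu_indices(len(emb), k=1)]  # unique pairwise distances
    radii = np.logspace(np.log10(np.percentile(d, 5)),
                        np.log10(np.percentile(d, 50)), n_radii)
    c = np.array([(d < r).mean() for r in radii])  # correlation sum C(r)
    slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
    return slope

def phase_randomized_surrogate(series, rng):
    """Surrogate sharing the original power spectrum but with random phases;
    comparing D2 against such surrogates tests for structure beyond colored noise."""
    f = np.fft.rfft(series)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(f))
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length series)
    return np.fft.irfft(np.abs(f) * np.exp(1j * phases), n=len(series))
```

For a trajectory confined to a line, the estimated D2 comes out near 1; a genuinely fractal gaze record would yield a non-integer value that exceeds that of its surrogates.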

  2. Affine Transform to Reform Pixel Coordinates of EOG Signals for Controlling Robot Manipulators Using Gaze Motions

    PubMed Central

    Rusydi, Muhammad Ilhamdi; Sasaki, Minoru; Ito, Satoshi

    2014-01-01

    Biosignals will play an important role in building communication between machines and humans. One type of biosignal that is widely used in neuroscience is the electrooculography (EOG) signal. An EOG signal has a linear relationship with eye movement displacement. Experiments were performed to construct a gaze motion tracking method indicated by robot manipulator movements. Three operators looked at 24 target points displayed on a monitor that was 40 cm in front of them. Two channels (Ch1 and Ch2) produced EOG signals for every single eye movement. These signals were converted to pixel units by using the linear relationship between EOG signals and gaze motion distances. The conversion outcomes were actual pixel locations. An affine transform method is proposed to determine the shift from actual pixels to target pixels. This method consisted of a sequence of five geometry processes: translation-1, rotation, translation-2, shear and dilatation. The accuracy was approximately 0.86° ± 0.67° in the horizontal direction and 0.54° ± 0.34° in the vertical direction. This system successfully tracked gaze motions not only in direction, but also in distance. Using this system, three operators could operate a robot manipulator to point at targets. This result shows that the method is reliable for building communication between humans and machines using EOGs. PMID:24919013
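
    The five-step correction can be sketched as a composition of homogeneous 2-D transforms applied to measured gaze pixels. The function names and the parameter values in the usage example below are illustrative placeholders, not the calibration values fitted in the study.

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

def shear(kx, ky):
    return np.array([[1, kx, 0], [ky, 1, 0], [0, 0, 1]], float)

def dilatation(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], float)

def correct_gaze_pixels(points, t1, theta, t2, k, s):
    """Apply translation-1, rotation, translation-2, shear, and dilatation
    in sequence to an (N, 2) array of measured gaze pixel coordinates."""
    M = dilatation(*s) @ shear(*k) @ translation(*t2) @ rotation(theta) @ translation(*t1)
    homog = np.column_stack([points, np.ones(len(points))])
    return (homog @ M.T)[:, :2]
```

With identity parameters (zero translations and angle, zero shear, unit scale) the pipeline returns the input pixels unchanged, which is a convenient sanity check before fitting real calibration values.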

  3. Interactions among the effects of head orientation, emotional expression, and physical attractiveness on face preferences.

    PubMed

    Main, Julie C; DeBruine, Lisa M; Little, Anthony C; Jones, Benedict C

    2010-01-01

    Previous studies have shown that preferences for direct versus averted gaze are modulated by emotional expressions and physical attractiveness. For example, preferences for direct gaze are stronger when judging happy or physically attractive faces than when judging disgusted or physically unattractive faces. Here we show that preferences for front versus three-quarter views of faces, in which gaze direction was always congruent with head orientation, are also modulated by emotional expressions and physical attractiveness; participants demonstrated preferences for front views of faces over three-quarter views of faces when judging the attractiveness of happy, physically attractive individuals, but not when judging the attractiveness of relatively unattractive individuals or those with disgusted expressions. Moreover, further analyses indicated that these interactions did not simply reflect differential perceptions of the intensity of the emotional expressions shown in each condition. Collectively, these findings present novel evidence that the effect of the direction of the attention of others on attractiveness judgments is modulated by cues to the physical attractiveness and emotional state of the depicted individual, potentially reflecting psychological adaptations for efficient allocation of social effort. These data also present the first behavioural evidence that the effect of the direction of the attention of others on attractiveness judgments reflects viewer-referenced, rather than face-referenced, coding and/or processing of gaze direction.

  4. The order of information processing alters economic gain-loss framing effects.

    PubMed

    Kwak, Youngbin; Huettel, Scott

    2018-01-01

    Adaptive decision making requires analysis of available information during the process of choice. In many decisions that information is presented visually - which means that variations in visual properties (e.g., salience, complexity) can potentially influence the process of choice. In the current study, we demonstrate that variation in the left-right positioning of risky and safe decision options can influence the canonical gain-loss framing effect. Two experiments were conducted using an economic framing task in which participants chose between gambles and certain outcomes. The first experiment demonstrated that the magnitude of the gain-loss framing effect was greater when the certain option signaling the current frame was presented on the left side of the visual display. Eye-tracking data during task performance showed a left-gaze bias for initial fixations, suggesting that the option presented on the left side was processed first. Combining eye-tracking and choice data revealed a significant effect of the direction of first gaze (i.e., left vs. right) as well as an interaction between gaze direction and the identity of the first fixated information (i.e., certain vs. gamble), regardless of frame. A second experiment presented the gamble and certain options in a random order, with a temporal delay between their presentations. We found that the magnitude of gain-loss framing was larger when the certain option was presented first, regardless of left-right positioning, but only in individuals with lower risk-taking tendencies. The effect of presentation order on framing was not present in high risk-takers. These results suggest that the sequence of visual information processing, as well as the left-right positioning of options, can bias choices by changing the impact of the presented information during risky decision making. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Auditory, Vestibular and Cognitive Effects due to Repeated Blast Exposure on the Warfighter

    DTIC Science & Technology

    2012-10-01

    Gaze Horizontal (Left and Right) Description: The primary purpose of the Gaze Horizontal subtest was to detect nystagmus when the head is fixed and the eyes are gazing off center from the primary (straight ahead) gaze position. This test is designed...physiological target area and examiner instructions for testing): Spontaneous Nystagmus, Smooth Harmonic Acceleration (.01, .08, .32, .64, 1.75

  6. An Experimental Evaluation of a Field Sobriety Test Battery in the Marine Environment

    DTIC Science & Technology

    1990-06-01

    Turn, Horizontal Gaze Nystagmus, Finger to Nose, Finger Count, and Tracing. Of these six tests, Walk and Turn, One-Leg Stand, and Horizontal Gaze ...served as the lead officer, administering the tests while the other two officers observed. All officers administered the Horizontal Gaze Nystagmus (HGN) individually. After giving a test or pair of tests (as designated) each officer on the team gave a

  7. Stimulus exposure and gaze bias: a further test of the gaze cascade model.

    PubMed

    Glaholt, Mackenzie G; Reingold, Eyal M

    2009-04-01

    We tested predictions derived from the gaze cascade model of preference decision making (Shimojo, Simion, Shimojo, & Scheier, 2003; Simion & Shimojo, 2006, 2007). In each trial, participants' eye movements were monitored while they performed an eight-alternative decision task in which four of the items in the array were preexposed prior to the trial. Replicating previous findings, we found a gaze bias toward the chosen item prior to the response. However, contrary to the prediction of the gaze cascade model, preexposure of stimuli decreased, rather than increased, the magnitude of the gaze bias in preference decisions. Furthermore, unlike the prediction of the model, preexposure did not affect the likelihood of an item being chosen, and the pattern of looking behavior in preference decisions and in a non-preference control task was remarkably similar. Implications of the present findings for multistage models of decision making are discussed.

  8. Pointing control using a moving base of support.

    PubMed

    Hondzinski, Jan M; Kwon, Taegyong

    2009-07-01

    The purposes of this study were to determine whether gaze direction provides a control signal for movement direction in a pointing task requiring a step, and to gain insight into why previously identified discrepancies in the literature exist for endpoint accuracy when gaze is directed eccentrically. Straight-arm pointing movements were performed to real and remembered target locations, either toward or 30 degrees eccentric to gaze direction. Pointing occurred in normal room lighting or darkness while subjects sat, stood still, or side-stepped left or right. Trunk rotation contributed 22-65% to gaze orientations when it was not constrained. Error differences for different target locations explained discrepancies among previous experiments. Variable pointing errors were influenced by gaze direction, while mean systematic pointing errors and trunk orientations were influenced by step direction. These data support the use of a control strategy that relies on gaze direction and equilibrium inputs for whole-body goal-directed movements.

  9. Radiologically defining horizontal gaze using EOS imaging-a prospective study of healthy subjects and a retrospective audit.

    PubMed

    Hey, Hwee Weng Dennis; Tan, Kimberly-Anne; Ho, Vivienne Chien-Lin; Azhar, Syifa Bte; Lim, Joel-Louis; Liu, Gabriel Ka-Po; Wong, Hee-Kit

    2018-06-01

    As sagittal alignment of the cervical spine is important for maintaining horizontal gaze, it is important to determine the former for surgical correction. However, horizontal gaze remains poorly-defined from a radiological point of view. The objective of this study was to establish radiological criteria to define horizontal gaze. This study was conducted at a tertiary health-care institution over a 1-month period. A prospective cohort of healthy patients was used to determine the best radiological criteria for defining horizontal gaze. A retrospective cohort of patients without rigid spinal deformities was used to audit the incidence of horizontal gaze. Two categories of radiological parameters for determining horizontal gaze were tested: (1) the vertical offset distances of key identifiable structures from the horizontal gaze axis and (2) imaginary lines convergent with the horizontal gaze axis. Sixty-seven healthy subjects underwent whole-body EOS radiographs taken in a directed standing posture. Horizontal gaze was radiologically defined using each parameter, as represented by their means, 95% confidence intervals (CIs), and associated 2 standard deviations (SDs). Subsequently, applying the radiological criteria, we conducted a retrospective audit of such radiographs (before the implementation of a strict radioimaging standardization). The mean age of our prospective cohort was 46.8 years, whereas that of our retrospective cohort was 37.2 years. Gender was evenly distributed across both cohorts. The four parameters with the lowest 95% CI and 2 SD were the distance offsets of the midpoint of the hard palate (A) and the base of the sella turcica (B), the horizontal convergents formed by the tangential line to the hard palate (C), and the line joining the center of the orbital orifice with the internal occipital protuberance (D). In the prospective cohort, good sensitivity (>98%) was attained when two or more parameters were used. 
Audit using Criterion B+D yielded compliance rates of 76.7%, a figure much closer to that of A+B+C+D (74.8%). From a practical viewpoint, Criterion B+D were most suitable for clinical use and could be simplified to the "3-6-12 rule" as a form of cursory assessment. Verbal instructions in the absence of stringent postural checks only ensured that ~75% of subjects achieved horizontal gaze. Fulfillment of Criterion B+D is sufficient to evaluate for horizontal gaze. Further criteria can be added to increase sensitivity. Verbal instructions alone yield high rates of inaccuracy when attempting to image patients in horizontal gaze. Apart from improving methods for obtaining radiographs, a radiological definition of horizontal gaze should be routinely applied for better evaluation of sagittal spinal alignment. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Gaze-contingent displays: a review.

    PubMed

    Duchowski, Andrew T; Cournia, Nathan; Murphy, Hunter

    2004-12-01

    Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives with similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
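
    The screen-based GCD idea (full resolution at the tracked gaze point, degraded resolution in the periphery) can be illustrated with a minimal sketch. The box blur and the linear eccentricity weighting are stand-ins for the retina-matched multi-resolution filtering a real GCD would use; the fovea radius is an arbitrary placeholder.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box blur of a grayscale image, with edge padding so the
    output has the same shape as the input."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    h = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, h)

def gaze_contingent(img, gaze_xy, fovea_px=40):
    """Blend the full-resolution image with a blurred copy, weighted by
    each pixel's eccentricity from the current gaze point: weight 0
    (sharp) inside the fovea, ramping to 1 (blurred) in the periphery."""
    height, width = img.shape
    yy, xx = np.mgrid[0:height, 0:width]
    ecc = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    w = np.clip(ecc / fovea_px - 1.0, 0.0, 1.0)
    return (1.0 - w) * img + w * box_blur(img)
```

In a running system this blend would be recomputed every time the eye tracker reports a new fixation, which is what makes the display "contingent" on gaze.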

  11. Face-to-face interference in typical and atypical development

    PubMed Central

    Riby, Deborah M; Doherty-Sneddon, Gwyneth; Whittle, Lisa

    2012-01-01

    Visual communication cues facilitate interpersonal communication. It is important that we look at faces to retrieve and subsequently process such cues. It is also important that we sometimes look away from faces as they increase cognitive load that may interfere with online processing. Indeed, when typically developing individuals hold face gaze it interferes with task completion. In this novel study we quantify face interference for the first time in Williams syndrome (WS) and Autism Spectrum Disorder (ASD). These disorders of development impact on cognition and social attention, but how do faces interfere with cognitive processing? Individuals developing typically as well as those with ASD (n = 19) and WS (n = 16) were recorded during a question and answer session that involved mathematics questions. In phase 1 gaze behaviour was not manipulated, but in phase 2 participants were required to maintain eye contact with the experimenter at all times. Looking at faces decreased task accuracy for individuals who were developing typically. Critically, the same pattern was seen in WS and ASD, whereby task performance decreased when participants were required to hold face gaze. The results show that looking at faces interferes with task performance in all groups. This finding requires the caveat that individuals with WS and ASD found it harder than individuals who were developing typically to maintain eye contact throughout the interaction. Individuals with ASD struggled to hold eye contact at all points of the interaction while those with WS found it especially difficult when thinking. PMID:22356183

  12. Biases in orienting and maintenance of attention among weight dissatisfied women: an eye-movement study.

    PubMed

    Gao, Xiao; Wang, Quanchuan; Jackson, Todd; Zhao, Guang; Liang, Yi; Chen, Hong

    2011-04-01

    Despite evidence indicating fatness and thinness information are processed differently among weight-preoccupied and eating-disordered individuals, the exact nature of these attentional biases is not clear. In this research, eye movement (EM) tracking assessed biases in specific component processes of visual attention (i.e., orientation, detection, maintenance and disengagement of gaze) in relation to body-related stimuli among 20 weight dissatisfied (WD) and 20 weight satisfied young women. Eye movements were recorded while participants completed a dot-probe task that featured fatness-neutral and thinness-neutral word pairs. Compared to controls, WD women were more likely to direct their initial gaze toward fatness words, had a shorter mean latency of first fixation on both fatness and thinness words, had longer first fixations on fatness words but shorter first fixations on thinness words, and shorter total gaze duration on thinness words. Reaction time data showed a maintenance bias towards fatness words among the WD women. In sum, results indicated WD women show initial orienting, speeded detection and initial maintenance biases towards fat body words, in addition to a speeded detection then avoidance pattern of biases in relation to thin body words. These results highlight the utility of EM-tracking as a means of identifying subtle attentional biases among weight dissatisfied women drawn from a non-clinical setting, and the need to assess attentional biases as a dynamic process. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. Visuomotor control, eye movements, and steering: A unified approach for incorporating feedback, feedforward, and internal models.

    PubMed

    Lappi, Otto; Mole, Callum

    2018-06-11

    The authors present an approach to the coordination of eye movements and locomotion in naturalistic steering tasks. It is based on recent empirical research, in particular, on driver eye movements, that poses challenges for existing accounts of how we visually steer a course. They first analyze how the ideas of feedback and feedforward processes and internal models are treated in control theoretical steering models within vision science and engineering, which share an underlying architecture but have historically developed in very separate ways. The authors then show how these traditions can be naturally (re)integrated with each other and with contemporary neuroscience, to better understand the skill and gaze strategies involved. They then propose a conceptual model that (a) gives a unified account to the coordination of gaze and steering control, (b) incorporates higher-level path planning, and (c) draws on the literature on paired forward and inverse models in predictive control. Although each of these (a-c) has been considered before (also in the context of driving), integrating them into a single framework and the authors' multiple waypoint identification hypothesis within that framework are novel. The proposed hypothesis is relevant to all forms of visually guided locomotion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
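
    The three ingredients the authors integrate - feedforward control, feedback correction, and paired forward/inverse internal models - can be illustrated with a toy lane-keeping loop. The dynamics, gains, and straight-path assumption below are deliberate simplifications for illustration, not the authors' model.

```python
def simulate_steering(y0=1.0, heading0=0.0, steps=200, dt=0.1,
                      k_y=1.0, k_h=2.0, path_curvature=0.0):
    """Toy lane-keeping loop: a feedforward term anticipates the path's
    curvature, feedback terms correct the sensed lateral error y and the
    heading error, and a forward model predicts the sensory outcome of
    each command from an efference copy. In this noise-free sketch the
    forward model and the plant share the same dynamics, so prediction
    and outcome coincide exactly."""
    y, heading = y0, heading0
    predicted_y = y0
    for _ in range(steps):
        # control command = feedforward (curvature) + feedback (errors)
        steer_rate = path_curvature - (k_y * y + k_h * heading)
        # forward model: predict the next state from the efference copy
        pred_heading = heading + dt * steer_rate
        predicted_y = y + dt * pred_heading
        # plant: small-angle vehicle dynamics (identical here by design)
        heading = pred_heading
        y = y + dt * heading
    return y, heading, predicted_y
```

Starting one unit off a straight path, the loop steers the lateral error toward zero; in a biologically plausible version the forward-model prediction would be compared against delayed, noisy visual feedback rather than matching it exactly.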

  14. [Face recognition in patients with schizophrenia].

    PubMed

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  15. A Comparison of Facial Color Pattern and Gazing Behavior in Canid Species Suggests Gaze Communication in Gray Wolves (Canis lupus)

    PubMed Central

    Ueda, Sayoko; Kumagai, Gaku; Otaki, Yusuke; Yamaguchi, Shinya; Kohshima, Shiro

    2014-01-01

    As facial color pattern around the eyes has been suggested to serve various adaptive functions related to the gaze signal, we compared the patterns among 25 canid species, focusing on the gaze signal, to estimate the function of facial color pattern in these species. The facial color patterns of the studied species could be categorized into the following three types based on contrast indices relating to the gaze signal: A-type (both pupil position in the eye outline and eye position in the face are clear), B-type (only the eye position is clear), and C-type (both the pupil and eye position are unclear). A-type faces with light-colored irises were observed in most studied species of the wolf-like clade and some of the red fox-like clade. A-type faces tended to be observed in species living in family groups all year-round, whereas B-type faces tended to be seen in solo/pair-living species. The duration of gazing behavior during which the facial gaze-signal is displayed to the other individual was longest in gray wolves with typical A-type faces, of intermediate length in fennec foxes with typical B-type faces, and shortest in bush dogs with typical C-type faces. These results suggest that the facial color pattern of canid species is related to their gaze communication and that canids with A-type faces, especially gray wolves, use the gaze signal in conspecific communication. PMID:24918751

  16. Surgical planning and innervation in pontine gaze palsy with ipsilateral esotropia.

    PubMed

    Somer, Deniz; Cinar, Fatma Gul; Kaderli, Ahmet; Ornek, Firdevs

    2016-10-01

    To discuss surgical intervention strategies among patients with horizontal gaze palsy with concurrent esotropia. Five consecutive patients with dorsal pontine lesions are presented. Each patient had horizontal gaze palsy with symptomatic diplopia as a consequence of esotropia in primary gaze and an anomalous head turn to attain single binocular vision. Clinical findings in the first 2 patients led us to presume there was complete loss of rectus muscle function from rectus muscle palsy. Based on this assumption, medial rectus recessions with simultaneous partial vertical muscle transposition (VRT) on the ipsilateral eye of the gaze palsy and recession-resection surgery on the contralateral eye were performed, resulting in significant motility limitation. Sequential recession-resection surgery without simultaneous VRT on the 3rd patient created an unexpected motility improvement to the side of gaze palsy, an observation differentiating rectus muscle palsy from paresis. Recession combined with VRT approach in the esotropic eye was abandoned on subsequent patients. Simultaneous recession-resection surgery without VRT in the next 2 patients resulted in alleviation of head postures, resolution of esotropia, and also substantial motility improvements to the ipsilateral hemifield of gaze palsy without limitations in adduction and vertical deviations. Ocular misalignment and abnormal head posture as a result of conjugate gaze palsy can be successfully treated by basic recession-resection surgery, with the advantage of increasing versions to the ipsilateral side of the gaze palsy. Improved motility after surgery presumably represents paresis, not "paralysis," with residual innervation in rectus muscles. Copyright © 2016 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.

  17. Combined Effects of Gaze and Orientation of Faces on Person Judgments in Social Situations

    PubMed Central

    Kaisler, Raphaela E.; Leder, Helmut

    2017-01-01

    In social situations, faces of others can vary simultaneously in gaze and orientation. How these variations affect different kinds of social judgments, such as attractiveness or trustworthiness, is only partly understood. Therefore, we studied how different gaze directions, head angles, and levels of facial attractiveness affect perceived attractiveness and trustworthiness. We always presented pairs of faces – either two average attractive faces or a highly attractive together with a less attractive face. We also varied gaze and head angles, showing faces in three different orientations: front, three-quarter and profile view. In Experiment 1 (N = 62), participants rated averted gaze in three-quarter views as more attractive than in front and profile views, and evaluated faces with direct gaze in front views as most trustworthy. Moreover, faces that were being looked at by another face were seen as more attractive. Independent of head orientation or gaze direction, highly attractive faces were rated as more attractive and more trustworthy. In Experiment 2 (N = 54), we found that the three-quarter advantage vanished when the second face was blurred during judgments, which demonstrates the importance of the presence of another person, as in a triadic social situation, as well as the importance of their visible gaze. The findings emphasize that social evaluations such as trustworthiness are unaffected by the esthetic advantage of three-quarter views of two average attractive faces, and that the effect of a face's attractiveness is more powerful than the more subtle effects of gaze and orientation. PMID:28275364

  18. Upward gaze-evoked nystagmus with organoarsenic poisoning.

    PubMed

    Nakamagoe, Kiyotaka; Ishii, Kazuhiro; Tamaoka, Akira; Shoji, Shin'ichi

    2006-01-10

    The authors report assessment of abnormal ocular movements in three patients after organoarsenic poisoning from diphenylarsinic acid. The characteristic and principal sign is upward gaze-evoked nystagmus. Moreover, vertical gaze holding impairment was shown by electronystagmography on direct current recording.

  19. Gaze Cueing of Attention

    PubMed Central

    Frischen, Alexandra; Bayliss, Andrew P.; Tipper, Steven P.

    2007-01-01

    During social interactions, people’s eyes convey a wealth of information about their direction of attention and their emotional and mental states. This review aims to provide a comprehensive overview of past and current research into the perception of gaze behavior and its effect on the observer. This encompasses the perception of gaze direction and its influence on perception of the other person, as well as gaze-following behavior such as joint attention, in infant, adult, and clinical populations. Particular focus is given to the gaze-cueing paradigm that has been used to investigate the mechanisms of joint attention. The contribution of this paradigm has been significant and will likely continue to advance knowledge across diverse fields within psychology and neuroscience. PMID:17592962

  20. Social evolution. Oxytocin-gaze positive loop and the coevolution of human-dog bonds.

    PubMed

    Nagasawa, Miho; Mitsui, Shouhei; En, Shiori; Ohtani, Nobuyo; Ohta, Mitsuaki; Sakuma, Yasuo; Onaka, Tatsushi; Mogi, Kazutaka; Kikusui, Takefumi

    2015-04-17

    Human-like modes of communication, including mutual gaze, in dogs may have been acquired during domestication with humans. We show that gazing behavior from dogs, but not wolves, increased urinary oxytocin concentrations in owners, which consequently facilitated owners' affiliation and increased oxytocin concentration in dogs. Further, nasally administered oxytocin increased gazing behavior in dogs, which in turn increased urinary oxytocin concentrations in owners. These findings support the existence of an interspecies oxytocin-mediated positive loop facilitated and modulated by gazing, which may have supported the coevolution of human-dog bonding by engaging common modes of communicating social attachment. Copyright © 2015, American Association for the Advancement of Science.

  1. Gaze failure, drifting eye movements, and centripetal nystagmus in cerebellar disease.

    PubMed Central

    Leech, J; Gresty, M; Hess, K; Rudge, P

    1977-01-01

    Three abnormalities of eye movement in man are described which are indicative of cerebellar system disorder, namely, centripetally beating nystagmus, failure to maintain lateral gaze either in darkness or with eye closure, and slow drifting movements of the eyes in the absence of fixation. Similar eye movement signs follow cerebellectomy in the primate and the cat. These abnormalities of eye movement, together with other signs of cerebellar disease, such as rebound, alternating, and gaze-paretic nystagmus, are explained by the hypothesis that the cerebellum helps to maintain lateral gaze and that brain stem mechanisms which monitor gaze position generate compensatory biases in the absence of normal cerebellar function. PMID:603785

  2. A Retrospective Look at Engineering Innovations in the Peanut Industry

    USDA-ARS?s Scientific Manuscript database

    As research scientists and engineers, we are able to gaze into the future of peanut production and processing because we stand on the shoulders of those who blazed the way before us. We have made tremendous progress in the areas of peanut harvest, curing, transportation, storage, and processing duri...

  3. Visual-Motor Transformations Within Frontal Eye Fields During Head-Unrestrained Gaze Shifts in the Monkey.

    PubMed

    Sajad, Amirsaman; Sadeh, Morteza; Keith, Gerald P; Yan, Xiaogang; Wang, Hongying; Crawford, John Douglas

    2015-10-01

    A fundamental question in sensorimotor control concerns the transformation of spatial signals from the retina into eye and head motor commands required for accurate gaze shifts. Here, we investigated these transformations by identifying the spatial codes embedded in visually evoked and movement-related responses in the frontal eye fields (FEFs) during head-unrestrained gaze shifts. Monkeys made delayed gaze shifts to the remembered location of briefly presented visual stimuli, with delay serving to dissociate visual and movement responses. A statistical analysis of nonparametric model fits to response field data from 57 neurons (38 with visual and 49 with movement activities) eliminated most effector-specific, head-fixed, and space-fixed models, but confirmed the dominance of eye-centered codes observed in head-restrained studies. More importantly, the visual response encoded target location, whereas the movement response mainly encoded the final position of the imminent gaze shift (including gaze errors). This spatiotemporal distinction between target and gaze coding was present not only at the population level, but even at the single-cell level. We propose that an imperfect visual-motor transformation occurs during the brief memory interval between perception and action, and further transformations from the FEF's eye-centered gaze motor code to effector-specific codes in motor frames occur downstream in the subcortical areas. © The Author 2014. Published by Oxford University Press.

  4. Eye contact with neutral and smiling faces: effects on autonomic responses and frontal EEG asymmetry

    PubMed Central

    Pönkänen, Laura M.; Hietanen, Jari K.

    2012-01-01

    In our previous studies we have shown that seeing another person “live” with a direct vs. averted gaze results in enhanced skin conductance responses (SCRs) indicating autonomic arousal and in greater relative left-sided frontal activity in the electroencephalography (asymmetry in the alpha-band power), associated with approach motivation. In our studies, however, the stimulus persons had a neutral expression. In real-life social interaction, eye contact is often associated with a smile, which is another signal of the sender's approach-related motivation. A smile could, therefore, enhance the affective-motivational responses to eye contact. In the present study, we investigated whether the facial expression (neutral vs. social smile) would modulate autonomic arousal and frontal EEG alpha-band asymmetry to seeing a direct vs. an averted gaze in faces presented “live” through a liquid crystal (LC) shutter. The results showed that the SCRs were greater for the direct than the averted gaze and that the effect of gaze direction was more pronounced for a smiling than a neutral face. However, in this study, gaze direction and facial expression did not affect the frontal EEG asymmetry, although, for gaze direction, we found a marginally significant correlation between the degree of an overall bias for asymmetric frontal activity and the degree to which direct gaze elicited stronger left-sided frontal activity than did averted gaze. PMID:22586387

  5. Keep Your Scanners Peeled: Gaze Behavior as a Measure of Automation Trust During Highly Automated Driving.

    PubMed

    Hergeth, Sebastian; Lorenz, Lutz; Vilimek, Roman; Krems, Josef F

    2016-05-01

    The feasibility of measuring drivers' automation trust via gaze behavior during highly automated driving was assessed with eye tracking and validated with self-reported automation trust in a driving simulator study. Earlier research from other domains indicates that drivers' automation trust might be inferred from gaze behavior, such as monitoring frequency. The gaze behavior and self-reported automation trust of 35 participants attending to a visually demanding non-driving-related task (NDRT) during highly automated driving were evaluated. The relationships of dispositional, situational, and learned automation trust with gaze behavior were compared. Overall, there was a consistent relationship between drivers' automation trust and gaze behavior. Participants reporting higher automation trust tended to monitor the automation less frequently. Further analyses revealed that higher automation trust was associated with lower monitoring frequency of the automation during NDRTs, and an increase in trust over the experimental session was connected with a decrease in monitoring frequency. We suggest that (a) the current results indicate a negative relationship between drivers' self-reported automation trust and monitoring frequency, (b) gaze behavior provides a more direct measure of automation trust than other behavioral measures, and (c) with further refinement, drivers' automation trust during highly automated driving might be inferred from gaze behavior. Potential applications of this research include the estimation of drivers' automation trust and reliance during highly automated driving. © 2016, Human Factors and Ergonomics Society.

  6. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology

    PubMed Central

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have made hardware more affordable, gaze data are still costly and time-consuming to collect; some therefore propose using mouse movements instead, as these are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, allowing us to apply and test our new methodology on a real case. Further, as our experimental task mimics route-tracing on a map, it is more than a data collection exercise: it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide information similar to eye tracking and therefore be used as a proxy for attention.
However, more research is needed to confirm this. PMID:28777822

  7. Quantifying gaze and mouse interactions on spatial visual interfaces with a new movement analytics methodology.

    PubMed

    Demšar, Urška; Çöltekin, Arzu

    2017-01-01

    Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have made hardware more affordable, gaze data are still costly and time-consuming to collect; some therefore propose using mouse movements instead, as these are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, allowing us to apply and test our new methodology on a real case. Further, as our experimental task mimics route-tracing on a map, it is more than a data collection exercise: it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide information similar to eye tracking and therefore be used as a proxy for attention.
However, more research is needed to confirm this.
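
    The dynamic-interaction idea above can be illustrated with a much simpler stand-in: instead of the paper's volumetric space-time densities, the sketch below scores gaze-mouse coupling as the fraction of samples in which the two pointers stay within a pixel threshold of each other. All data, names, and the threshold are invented for illustration; this is not the authors' method.

```python
import numpy as np

def interaction_index(gaze_xy, mouse_xy, dist_threshold=50.0):
    """Fraction of samples in which gaze and mouse pointer lie within
    dist_threshold pixels of each other -- a crude proxy for the level of
    dynamic interaction between the two trajectories."""
    gaze_xy = np.asarray(gaze_xy, dtype=float)
    mouse_xy = np.asarray(mouse_xy, dtype=float)
    dists = np.linalg.norm(gaze_xy - mouse_xy, axis=1)
    return float(np.mean(dists <= dist_threshold))

rng = np.random.default_rng(0)
t = np.arange(200)

# Synthetic gaze trajectory sweeping across an 800x600 screen.
gaze = np.column_stack([t * 4.0, 300 + 50 * np.sin(t / 20)])

# Coupled mouse: trails the gaze with small jitter. Uncoupled mouse:
# an independent uniform scatter over the screen.
coupled_mouse = gaze + rng.normal(0, 10, gaze.shape)
random_mouse = np.column_stack([rng.uniform(0, 800, 200),
                                rng.uniform(0, 600, 200)])

print(interaction_index(gaze, coupled_mouse))  # high: pointers move together
print(interaction_index(gaze, random_mouse))   # low: no coupling
```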

  8. Constraining eye movement in individuals with Parkinson's disease during walking turns.

    PubMed

    Ambati, V N Pradeep; Saucedo, Fabricio; Murray, Nicholas G; Powell, Douglas W; Reed-Jones, Rebecca J

    2016-10-01

    Walking and turning is a movement that places individuals with Parkinson's disease (PD) at increased risk for fall-related injury. However, turning is an essential movement in activities of daily living, making up to 45% of the total steps taken in a given day. Hypotheses regarding how turning is controlled suggest an essential role of anticipatory eye movements in providing feedforward information for body coordination. However, little research has investigated the control of turning in individuals with PD with specific consideration for eye movements. The purpose of this study was to examine eye movement behavior and body segment coordination in individuals with PD during walking turns. Three experimental groups, a group of individuals with PD, a group of healthy young adults (YAC), and a group of healthy older adults (OAC), performed walking and turning tasks under two visual conditions: free gaze and fixed gaze. Whole-body motion capture and eye tracking characterized body segment coordination and eye movement behavior during walking trials. Statistical analysis revealed significant main effects of group (PD, YAC, and OAC) and visual condition (free and fixed gaze) on the timing of segment rotation and horizontal eye movement. Within-group comparisons revealed that the timing of eye and head movements was significantly different between the free and fixed gaze conditions for YAC (p < 0.001) and OAC (p < 0.05), but not for the PD group (p > 0.05). In addition, while intersegment timings (reflecting segment coordination) were significantly different for YAC and OAC during free gaze (p < 0.05), they were not significantly different in PD. These results suggest that individuals with PD do not make anticipatory eye and head movements ahead of turning and that this may result in altered segment coordination during turning.
As such, eye movements may be an important addition to training programs for those with PD, possibly promoting better coordination during turning and potentially reducing the risk of falls.

  9. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments.

    PubMed

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers' attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) examine whether the gaze of a featured face possesses the ability to direct consumers' attention toward specific elements (i.e., the product) in an advertisement, and (c) establish whether the gaze direction of an advertised face influences consumers' subsequent evaluation of the advertised product. We recorded participants' eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants' attention was more attracted by, and they looked longer at, animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants' likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product.
The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers' visual attention, gaze cuing can be an effective tool for driving viewers' attention toward specific elements in the advertisement and even shaping consumers' intentions to purchase the advertised product.

  10. Judgments at Gaze Value: Gaze Cuing in Banner Advertisements, Its Effect on Attention Allocation and Product Judgments

    PubMed Central

    Palcu, Johanna; Sudkamp, Jennifer; Florack, Arnd

    2017-01-01

    Banner advertising is a popular means of promoting products and brands online. Although banner advertisements are often designed to be particularly attention grabbing, they frequently go unnoticed. Applying an eye-tracking procedure, the present research aimed to (a) determine whether presenting human faces (static or animated) in banner advertisements is an adequate tool for capturing consumers’ attention and thus overcoming the frequently observed phenomenon of banner blindness, (b) examine whether the gaze of a featured face possesses the ability to direct consumers’ attention toward specific elements (i.e., the product) in an advertisement, and (c) establish whether the gaze direction of an advertised face influences consumers’ subsequent evaluation of the advertised product. We recorded participants’ eye gaze while they viewed a fictional online shopping page displaying banner advertisements that featured either no human face or a human face that was either static or animated and involved different gaze directions (toward or away from the advertised product). Moreover, we asked participants to subsequently evaluate a set of products, one of which was the product previously featured in the banner advertisement. Results showed that, when advertisements included a human face, participants’ attention was more attracted by, and they looked longer at, animated compared with static banner advertisements. Moreover, when a face gazed toward the product region, participants’ likelihood of looking at the advertised product increased regardless of whether the face was animated or not. Most important, gaze direction influenced subsequent product evaluations; that is, consumers indicated a higher intention to buy a product when it was previously presented in a banner advertisement that featured a face that gazed toward the product.
The results suggest that while animation in banner advertising constitutes a salient feature that captures consumers’ visual attention, gaze cuing can be an effective tool for driving viewers’ attention toward specific elements in the advertisement and even shaping consumers’ intentions to purchase the advertised product. PMID:28626436

  11. Gaze holding deficits discriminate early from late onset cerebellar degeneration.

    PubMed

    Tarnutzer, Alexander A; Weber, K P; Schuknecht, B; Straumann, D; Marti, S; Bertolini, G

    2015-08-01

    The vestibulo-cerebellum calibrates the output of the inherently leaky brainstem neural velocity-to-position integrator to provide stable gaze holding. In healthy humans small-amplitude centrifugal nystagmus is present at extreme gaze-angles, with a non-linear relationship between eye-drift velocity and eye eccentricity. In cerebellar degeneration this calibration is impaired, resulting in pathological gaze-evoked nystagmus (GEN). For cerebellar dysfunction, increased eye drift may be present at any gaze angle (reflecting pure scaling of eye drift found in controls) or restricted to far-lateral gaze (reflecting changes in shape of the non-linear relationship), and the resulting eye-drift patterns could be related to specific disorders. We recorded horizontal eye positions in 21 patients with cerebellar neurodegeneration (gaze-angle = ±40°) and clinically confirmed GEN. Eye-drift velocity, linearity and symmetry of drift were determined. MR-images were assessed for cerebellar atrophy. In our patients, the relation between eye-drift velocity and gaze eccentricity was non-linear, yielding (compared to controls) significant GEN at gaze-eccentricities ≥20°. Pure scaling was most frequently observed (n = 10/18), followed by pure shape-changing (n = 4/18) and a mixed pattern (n = 4/18). Pure shape-changing patients were significantly (p = 0.001) younger at disease-onset compared to pure scaling patients. Atrophy centered around the superior/dorsal vermis, flocculus/paraflocculus and dentate nucleus and did not correlate with the specific drift behaviors observed. Eye drift in cerebellar degeneration varies in magnitude; however, it retains its non-linear properties. With different drift patterns being linked to age at disease-onset, we propose that the gaze-holding pattern (scaling vs. shape-changing) may discriminate early- from late-onset cerebellar degeneration. Whether this allows a distinction among specific cerebellar disorders remains to be determined.

  12. I Reach Faster When I See You Look: Gaze Effects in Human–Human and Human–Robot Face-to-Face Cooperation

    PubMed Central

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human–human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human–human cooperation experiment demonstrating that an agent’s vision of her/his partner’s gaze can significantly improve that agent’s performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human–robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human–robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times. PMID:22563315

  13. Responses to familiar and unfamiliar objects by belugas (Delphinapterus leucas), bottlenose dolphins (Tursiops truncatus), and Pacific white-sided dolphins (Lagenorhynchus obliquidens).

    PubMed

    Guarino, Sara; Yeater, Deirdre; Lacy, Steve; Dees, Tricia; Hill, Heather M

    2017-09-01

    Previous research with bottlenose dolphins (Tursiops truncatus) demonstrated their ability to discriminate between familiar and unfamiliar stimuli. Dolphins gazed longer at unfamiliar stimuli. The current study attempted to extend this original research by examining the responses of three species of cetaceans to objects that differed in familiarity. Eleven belugas from two facilities, along with five bottlenose dolphins and five Pacific white-sided dolphins housed at one facility, were presented with different objects in a free-swim scenario. The results indicated that the animals gazed the longest at unfamiliar objects, but these gaze durations did not significantly differ from gaze durations when viewing familiar objects. Rather, the animals gazed longer at unfamiliar objects when compared to the apparatus alone. Species differences emerged, with longer gaze durations exhibited by belugas and bottlenose dolphins and significantly shorter gaze durations for Pacific white-sided dolphins. It is likely that the animals categorized objects into familiar and unfamiliar categories, but the free-swim paradigm in naturalistic social groupings did not elicit clear responses. Rather, this procedure emphasized the importance of attention and individual preferences when investigating familiar and unfamiliar objects, which has implications for cognitive research and enrichment use.

  14. Real-Time Gaze Tracking for Public Displays

    NASA Astrophysics Data System (ADS)

    Sippl, Andreas; Holzmann, Clemens; Zachhuber, Doris; Ferscha, Alois

    In this paper, we explore the real-time tracking of human gazes in front of large public displays. The aim of our work is to estimate at which area of a display one or more people are looking at a time, independently of the distance and angle to the display as well as the height of the tracked people. Gaze tracking is relevant for a variety of purposes, including the automatic recognition of the user's focus of attention, or the control of interactive applications with gaze gestures. The scope of the present paper is on the former, and we show how gaze tracking can be used for implicit interaction in the pervasive advertising domain. We have developed a prototype for this purpose, which (i) uses an overhead mounted camera to distinguish four gaze areas on a large display, (ii) works for a wide range of positions in front of the display, and (iii) provides an estimation of the currently gazed quarters in real time. A detailed description of the prototype as well as the results of a user study with 12 participants, which show the recognition accuracy for different positions in front of the display, are presented.
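
    The final quantization step of such a four-area estimator can be sketched as a simple mapping from an estimated on-screen gaze point to a display quarter. The function name and screen dimensions below are illustrative, and the hard part of the prototype (estimating the gaze point from an overhead camera) is not shown.

```python
def gaze_quarter(x, y, width, height):
    """Map an estimated on-screen gaze point (in pixels) to one of the
    four display quarters; returns None if the point is off-screen."""
    if not (0 <= x < width and 0 <= y < height):
        return None
    col = 0 if x < width / 2 else 1
    row = 0 if y < height / 2 else 1
    return ("top-left", "top-right", "bottom-left", "bottom-right")[2 * row + col]

print(gaze_quarter(100, 100, 1920, 1080))   # top-left
print(gaze_quarter(1500, 900, 1920, 1080))  # bottom-right
```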

  15. Measure and Analysis of a Gaze Position Using Infrared Light Technique

    DTIC Science & Technology

    2001-10-25

    MEASURE AND ANALYSIS OF A GAZE POSITION USING INFRARED LIGHT TECHNIQUE. Z. Ramdane-Cherif, A. Naït-Ali, J. F. Motsch, M. O. Krebs (INSERM E 01-17…). …also proposes a method to correct head movements. Keywords: eye movement, gaze tracking, visual scan path, spatial mapping. INTRODUCTION: Eye gaze tracking has been used for clinical purposes to detect illnesses, such as nystagmus, unusual eye movements and many others [1][2][3]. It is also used

  16. Using gaze patterns to predict task intent in collaboration.

    PubMed

    Huang, Chien-Ming; Andrist, Sean; Sauppé, Allison; Mutlu, Bilge

    2015-01-01

    In everyday interactions, humans naturally exhibit behavioral cues, such as gaze and head movements, that signal their intentions while interpreting the behavioral cues of others to predict their intentions. Such intention prediction enables each partner to adapt their behaviors to the intent of others, serving a critical role in joint action where parties work together to achieve a common goal. Among behavioral cues, eye gaze is particularly important in understanding a person's attention and intention. In this work, we seek to quantify how gaze patterns may indicate a person's intention. Our investigation was contextualized in a dyadic sandwich-making scenario in which a "worker" prepared a sandwich by adding ingredients requested by a "customer." In this context, we investigated the extent to which the customers' gaze cues serve as predictors of which ingredients they intend to request. Predictive features were derived to represent characteristics of the customers' gaze patterns. We developed a support vector machine-based (SVM-based) model that achieved 76% accuracy in predicting the customers' intended requests based solely on gaze features. Moreover, the predictor made correct predictions approximately 1.8 s before the spoken request from the customer. We further analyzed several episodes of interactions from our data to develop a deeper understanding of the scenarios where our predictor succeeded and failed in making correct predictions. These analyses revealed additional gaze patterns that may be leveraged to improve intention prediction. This work highlights gaze cues as a significant resource for understanding human intentions and informs the design of real-time recognizers of user intention for intelligent systems, such as assistive robots and ubiquitous devices, that may enable more complex capabilities and improved user experience.
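
    As a rough illustration of this prediction setup (not the authors' implementation), the sketch below trains a classifier on synthetic dwell-time features in which the intended item draws extra dwell time. A nearest-centroid classifier stands in for the paper's SVM, and all feature definitions and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_items = 300, 3

# Synthetic gaze features: dwell time (in seconds) on each of three
# candidate ingredients; the intended ingredient draws extra dwell time.
labels = rng.integers(0, n_items, size=n)
X = rng.gamma(2.0, 0.3, size=(n, n_items))           # baseline dwell
X[np.arange(n), labels] += rng.gamma(3.0, 0.5, n)    # intent-driven dwell

# Train on the first 200 trials, test on the remaining 100.
train = np.arange(n) < 200
test = ~train
centroids = np.stack([X[train & (labels == k)].mean(axis=0)
                      for k in range(n_items)])
pred = np.argmin(((X[test][:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = float((pred == labels[test]).mean())
print(f"held-out accuracy: {accuracy:.2f}")  # well above chance (~0.33)
```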

  17. A Non-Verbal Turing Test: Differentiating Mind from Machine in Gaze-Based Social Interaction

    PubMed Central

    Pfeiffer, Ulrich J.; Timmermans, Bert; Bente, Gary; Vogeley, Kai; Schilbach, Leonhard

    2011-01-01

    In social interaction, gaze behavior provides important signals that have a significant impact on our perception of others. Previous investigations, however, have relied on paradigms in which participants are passive observers of other persons’ gazes and do not adjust their gaze behavior as is the case in real-life social encounters. We used an interactive eye-tracking paradigm that allows participants to interact with an anthropomorphic virtual character whose gaze behavior is responsive to where the participant looks on the stimulus screen in real time. The character’s gaze reactions were systematically varied along a continuum from a maximal probability of gaze aversion to a maximal probability of gaze-following during brief interactions, thereby varying contingency and congruency of the reactions. We investigated how these variations influenced whether participants believed that the character was controlled by another person (i.e., a confederate) or a computer program. In a series of experiments, the human confederate was either introduced as naïve to the task, cooperative, or competitive. Results demonstrate that the ascription of humanness increases with higher congruency of gaze reactions when participants are interacting with a naïve partner. In contrast, humanness ascription is driven by the degree of contingency irrespective of congruency when the confederate was introduced as cooperative. Conversely, during interaction with a competitive confederate, judgments were neither based on congruency nor on contingency. These results offer important insights into what renders the experience of an interaction truly social: Humans appear to have a default expectation of reciprocation that can be influenced drastically by the presumed disposition of the interactor to either cooperate or compete. PMID:22096599

  18. An amodal shared resource model of language-mediated visual attention

    PubMed Central

    Smith, Alastair C.; Monaghan, Padraic; Huettig, Falk

    2013-01-01

    Language-mediated visual attention describes the interaction of two fundamental components of the human cognitive system, language and vision. Within this paper we present an amodal shared resource model of language-mediated visual attention that offers a description of the information and processes involved in this complex multimodal behavior and a potential explanation for how this ability is acquired. We demonstrate that the model is not only sufficient to account for the experimental effects of Visual World Paradigm studies but also that these effects are emergent properties of the architecture of the model itself, rather than requiring separate information processing channels or modular processing systems. The model provides an explicit description of the connection between the modality-specific input from language and vision and the distribution of eye gaze in language-mediated visual attention. The paper concludes by discussing future applications for the model, specifically its potential for investigating the factors driving observed individual differences in language-mediated eye gaze. PMID:23966967

  19. A progressive model for teaching children with autism to follow gaze shift.

    PubMed

    Gunby, Kristin V; Rapp, John T; Bottoni, Melissa M

    2018-06-06

    Gunby, Rapp, Bottoni, Marchese and Wu () taught three children with autism spectrum disorder to follow an instructor's gaze shift to select a specific item; however, Gunby et al. used different types of prompts with each participant. To address this limitation, we used a progressive training model for increasing gaze shift for three children with autism spectrum disorder. Results show that each participant learned to follow an adult's shift in gaze to make a correct selection. In addition, two participants displayed the skill in response to a parent's gaze shift and with only social consequences; however, the third participant required verbal instruction and tangible reinforcement to demonstrate the skill outside of training sessions. © 2018 Society for the Experimental Analysis of Behavior.

  20. Computations underlying the visuomotor transformation for smooth pursuit eye movements

    PubMed Central

    Murdison, T. Scott; Leclercq, Guillaume; Lefèvre, Philippe

    2014-01-01

    Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit. PMID:25475344
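
    The simplest instance of the eye-orientation-dependent transformation described above, the planar head-roll/ocular-counterroll case, can be sketched directly: rotating the retinal motion vector by the eye's torsional angle recovers the spatially correct motion. This is a hand-written illustration of the geometry, not the trained network from the paper; the function name and values are invented.

```python
import numpy as np

def retinal_to_spatial(vel_2d, eye_torsion_deg):
    """Rotate a 2D retinal motion vector by the eye's torsional angle to
    recover spatially correct motion -- the planar (head-roll / ocular
    counterroll) special case of the visuomotor transformation."""
    a = np.deg2rad(eye_torsion_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])
    return rot @ np.asarray(vel_2d, dtype=float)

# A target moves rightward in space while the eye is rolled 30 degrees:
# the retinal image of the motion is rotated, and applying the inverse
# rotation recovers the true spatial motion.
spatial = np.array([10.0, 0.0])                # deg/s, rightward in space
retinal = retinal_to_spatial(spatial, -30.0)   # motion as seen on the retina
recovered = retinal_to_spatial(retinal, 30.0)
print(np.allclose(recovered, spatial))  # True
```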

  1. Overview of Nonelectronic Eye-Gaze Communication Techniques.

    ERIC Educational Resources Information Center

    Goossens, Carol A.; Crain, Sharon S.

    1987-01-01

    The article discusses currently used eye gaze communication techniques with the severely physically disabled (eye-gaze vest, laptray, transparent display, and mirror/prism communicator), presents information regarding the types of message displays used to depict encoded material, and discusses the advantages of implementing nonelectronic eye-gaze…

  2. Incubation environment impacts the social cognition of adult lizards.

    PubMed

    Siviter, Harry; Deeming, D Charles; van Giezen, M F T; Wilkinson, Anna

    2017-11-01

    Recent work exploring the relationship between early environmental conditions and cognition has shown that incubation environment can influence both brain anatomy and performance in simple operant tasks in young lizards. It is currently unknown how it impacts other, potentially more sophisticated, cognitive processes. Social-cognitive abilities, such as gaze following and social learning, are thought to be highly adaptive as they provide a short-cut to acquiring new information. Here, we investigated whether egg incubation temperature influenced two aspects of social cognition, gaze following and social learning in adult reptiles ( Pogona vitticeps ). Incubation temperature did not influence the gaze following ability of the bearded dragons; however, lizards incubated at colder temperatures were quicker at learning a social task and faster at completing that task. These results are the first to show that egg incubation temperature influences the social cognitive abilities of an oviparous reptile species and that it does so differentially depending on the task. Further, the results show that the effect of incubation environment was not ephemeral but lasted long into adulthood. It could thus have potential long-term effects on fitness.

  3. Dog Breed Differences in Visual Communication with Humans.

    PubMed

    Konno, Akitsugu; Romero, Teresa; Inoue-Murayama, Miho; Saito, Atsuko; Hasegawa, Toshikazu

    2016-01-01

    Domestic dogs (Canis familiaris) have developed a close relationship with humans through the process of domestication. In human-dog interactions, eye contact is a key element of relationship initiation and maintenance. Previous studies have suggested that canine ability to produce human-directed communicative signals is influenced by domestication history, from wolves to dogs, as well as by recent breed selection for particular working purposes. To test the genetic basis for such abilities in purebred dogs, we examined gazing behavior towards humans using two types of behavioral experiments: the 'visual contact task' and the 'unsolvable task'. A total of 125 dogs participated in the study. Based on the genetic relatedness among breeds, subjects were classified into five breed groups (Ancient, Herding, Hunting, Retriever-Mastiff, and Working). We found that Ancient breeds took longer to make eye contact with humans, and that they gazed at humans for shorter periods of time than any other breed group in the unsolvable situation. Our findings suggest that spontaneous gaze behavior towards humans is associated with genetic similarity to wolves rather than with recent selective pressure to create particular working breeds.

  4. Incubation environment impacts the social cognition of adult lizards

    PubMed Central

    van Giezen, M. F. T.

    2017-01-01

    Recent work exploring the relationship between early environmental conditions and cognition has shown that incubation environment can influence both brain anatomy and performance in simple operant tasks in young lizards. It is currently unknown how it impacts other, potentially more sophisticated, cognitive processes. Social-cognitive abilities, such as gaze following and social learning, are thought to be highly adaptive as they provide a short-cut to acquiring new information. Here, we investigated whether egg incubation temperature influenced two aspects of social cognition, gaze following and social learning in adult reptiles (Pogona vitticeps). Incubation temperature did not influence the gaze following ability of the bearded dragons; however, lizards incubated at colder temperatures were quicker at learning a social task and faster at completing that task. These results are the first to show that egg incubation temperature influences the social cognitive abilities of an oviparous reptile species and that it does so differentially depending on the task. Further, the results show that the effect of incubation environment was not ephemeral but lasted long into adulthood. It could thus have potential long-term effects on fitness. PMID:29291066

  5. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, indicating that it did not depend on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  6. A noninvasive brain computer interface using visually-induced near-infrared spectroscopy responses.

    PubMed

    Chen, Cheng-Hsuan; Ho, Ming-Shan; Shyu, Kuo-Kai; Hsu, Kou-Cheng; Wang, Kuo-Wei; Lee, Po-Lei

    2014-09-19

    Visually-induced near-infrared spectroscopy (NIRS) response was utilized to design a brain computer interface (BCI) system. Four circular checkerboards driven by distinct flickering sequences were displayed on a LCD screen as visual stimuli to induce subjects' NIRS responses. Each flickering sequence was a concatenated sequence of alternative flickering segments and resting segments. The flickering segment was designed with fixed duration of 3s whereas the resting segment was chosen randomly within 15-20s to create the mutual independencies among different flickering sequences. Six subjects were recruited in this study and subjects were requested to gaze at the four visual stimuli one-after-one in a random order. Since visual responses in human brain are time-locked to the onsets of visual stimuli and the flicker sequences of distinct visual stimuli were designed mutually independent, the NIRS responses induced by user's gazed targets can be discerned from non-gazed targets by applying a simple averaging process. The accuracies for the six subjects were higher than 90% after 10 or more epochs being averaged. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
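
    The core of the decoding scheme above is that, because the flicker sequences are mutually independent, averaging NIRS epochs time-locked to one stimulus's onsets reinforces only the response to that stimulus, while activity locked to the other sequences averages toward zero. A minimal sketch of this idea, using entirely synthetic data and an invented response template (not the authors' actual signal processing):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10                     # samples per second (illustrative)
epoch_len = 3 * fs          # 3-s flickering segment
n_epochs = 10

# Invented stand-in for a canonical NIRS response to one flicker segment.
template = np.sin(np.linspace(0, np.pi, epoch_len))

def average_epochs(signal, onsets, epoch_len):
    """Average signal segments aligned to the given onset samples."""
    return np.mean([signal[t:t + epoch_len] for t in onsets], axis=0)

# Random 15-20 s resting segments make the two sequences independent.
onsets_a = np.cumsum(rng.integers(15 * fs, 20 * fs, n_epochs))
onsets_b = np.cumsum(rng.integers(15 * fs, 20 * fs, n_epochs))
signal = rng.normal(0, 0.5, onsets_a[-1] + onsets_b[-1] + epoch_len)
for t in onsets_a:
    signal[t:t + epoch_len] += template   # responses locked to target A only

# The gazed target yields the larger template-correlated averaged response.
score_a = average_epochs(signal, onsets_a, epoch_len) @ template
score_b = average_epochs(signal, onsets_b, epoch_len) @ template
print(score_a > score_b)
```

With more epochs averaged, the non-gazed scores shrink further, which is consistent with the reported accuracy rising above 90% after 10 or more averaged epochs.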

  7. Eye gaze tracking using correlation filters

    NASA Astrophysics Data System (ADS)

    Karakaya, Mahmut; Bolme, David; Boehnen, Chris

    2014-03-01

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by the correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
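
    The reported correspondence between angular accuracy and on-screen error follows from simple geometry. A back-of-envelope check, assuming an arm's-length viewing distance of roughly 24 inches (the abstract does not state the exact distance):

```python
import math

# An angular error cone of theta degrees at viewing distance d subtends a
# circle of diameter 2 * d * tan(theta / 2) on the screen.
def error_circle_diameter(distance_in, error_deg):
    return 2 * distance_in * math.tan(math.radians(error_deg / 2))

d = error_circle_diameter(24, 4.5)  # mid-range of the 4-5 degree estimate
print(round(d, 2))                  # close to the stated 2-inch diameter
```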

  8. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on the "time of recognition" of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior remains quite stable with or without mouse interaction. © The Author(s) 2015.

  9. Gaze Stabilization During Locomotion Requires Full Body Coordination

    NASA Technical Reports Server (NTRS)

    Mulavara, A. P.; Miller, C. A.; Houser, J.; Richards, J. T.; Bloomberg, J. J.

    2001-01-01

    Maintaining gaze stabilization during locomotion places substantial demands on multiple sensorimotor subsystems for precise coordination. Gaze stabilization during locomotion requires eye-head-trunk coordination (Bloomberg, et al., 1997) as well as the regulation of energy flow or shock-wave transmission through the body at high impact phases with the support surface (McDonald, et al., 1997). Allowing these excessive transmissions of energy to reach the head may compromise gaze stability. Impairments in these mechanisms may lead to the oscillopsia and decreased dynamic visual acuity seen in crewmembers returning from short and long duration spaceflight, as well as in patients with vestibular disorders (Hillman, et al., 1999). Thus, we hypothesize that stabilized gaze during locomotion results from full-body coordination of the eye-head-trunk system combined with the lower limb apparatus. The goal of this study was to determine how multiple, interdependent full-body sensorimotor subsystems aiding gaze stabilization during locomotion are functionally coordinated, and how they adaptively respond to spaceflight.

  10. Face age modulates gaze following in young adults.

    PubMed

    Ciardo, Francesca; Marino, Barbara F M; Actis-Grosso, Rossana; Rossetti, Angela; Ricciardelli, Paola

    2014-04-22

    Gaze-following behaviour is considered crucial for social interactions which are influenced by social similarity. We investigated whether the degree of similarity, as indicated by the perceived age of another person, can modulate gaze following. Participants of three different age-groups (18-25; 35-45; over 65) performed an eye movement (a saccade) towards an instructed target while ignoring the gaze-shift of distracters of different age-ranges (6-10; 18-25; 35-45; over 70). The results show that gaze following was modulated by the distracter face age only for young adults. Particularly, the over 70 year-old distracters exerted the least interference effect. The distracters of a similar age-range as the young adults (18-25; 35-45) had the most effect, indicating a blurred own-age bias (OAB) only for the young age group. These findings suggest that face age can modulate gaze following, but this modulation could be due to factors other than just OAB (e.g., familiarity).

  11. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user’s gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of the experiments performed, together with tests of usability and on-screen keyboard use, show that the proposed method is better than previous methods. PMID:28420114

  12. 3D ocular ultrasound using gaze tracking on the contralateral eye: a feasibility study.

    PubMed

    Afsham, Narges; Najafi, Mohammad; Abolmaesumi, Purang; Rohling, Robert

    2011-01-01

    A gaze-deviated examination of the eye with a 2D ultrasound transducer is a common and informative ophthalmic test; however, the complex task of estimating the pose of the ultrasound images relative to the eye complicates 3D interpretation. To tackle this challenge, a novel system for 3D image reconstruction based on gaze tracking of the contralateral eye has been proposed. The gaze fixates on several target points and, for each fixation, the pose of the examined eye is inferred from the gaze tracking. A single camera system has been developed for pose estimation combined with subject-specific parameter identification. The ultrasound images are then transformed to the coordinate system of the examined eye to create a 3D volume. Accuracy of the proposed gaze tracking system and the pose estimation of the eye have been validated in a set of experiments. Overall system errors, including pose estimation and calibration, are 3.12 mm and 4.68 degrees.
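
    The reconstruction step described above amounts to rigidly transforming each ultrasound sample into a common eye-centered frame using the pose inferred from gaze tracking. A minimal sketch with a hypothetical per-fixation pose (rotation R, translation t); the actual pose-estimation pipeline is not reproduced here:

```python
import numpy as np

def to_eye_frame(points_probe, R, t):
    """Rigidly transform Nx3 points from the probe frame to the eye frame."""
    return points_probe @ R.T + t

# Hypothetical pose for one fixation: 30-degree rotation about z, plus a
# translation in millimeters.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])
t = np.array([5.0, 0.0, 2.0])

# Two 2D ultrasound pixels lifted into the probe frame (z = 0 plane).
pixels = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0]])
print(to_eye_frame(pixels, R, t))
```

Accumulating the transformed points from all fixations in the eye frame yields the 3D volume.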

  13. Visual perception during mirror-gazing at one's own face in patients with depression.

    PubMed

    Caputo, Giovanni B; Bortolomasi, Marco; Ferrucci, Roberta; Giacopuzzi, Mario; Priori, Alberto; Zago, Stefano

    2014-01-01

    In normal observers, gazing at one's own face in the mirror for a few minutes, at a low illumination level, produces the apparition of strange faces. Observers see distortions of their own faces, but they often see hallucinations like monsters, archetypical faces, faces of relatives and deceased, and animals. In this research, patients with depression were compared to healthy controls with respect to strange-face apparitions. The experiment was a 7-minute mirror-gazing test (MGT) under low illumination. When the MGT ended, the experimenter assessed patients and controls with a specifically designed questionnaire and interviewed them, asking them to describe strange-face apparitions. Apparitions of strange faces in the mirror were greatly reduced in depression patients compared to healthy controls. Depression patients, compared to healthy controls, showed shorter duration of apparitions; fewer strange faces; lower self-evaluation ratings of apparition strength; and lower self-evaluation ratings of provoked emotion. These decreases in depression may be produced by deficits of facial expression and facial recognition of emotions, which are involved in the relationship between the patient (or the patient's ego) and his face image (or the patient's bodily self) that is reflected in the mirror.

  14. Gaze-Contingent Music Reward Therapy for Social Anxiety Disorder: A Randomized Controlled Trial.

    PubMed

    Lazarov, Amit; Pine, Daniel S; Bar-Haim, Yair

    2017-07-01

    Patients with social anxiety disorder exhibit increased attentional dwelling on social threats, providing a viable target for therapeutics. This randomized controlled trial examined the efficacy of a novel gaze-contingent music reward therapy for social anxiety disorder designed to reduce attention dwelling on threats. Forty patients with social anxiety disorder were randomly assigned to eight sessions of either gaze-contingent music reward therapy, designed to divert patients' gaze toward neutral stimuli rather than threat stimuli, or to a control condition. Clinician and self-report measures of social anxiety were acquired pretreatment, posttreatment, and at 3-month follow-up. Dwell time on socially threatening faces was assessed during the training sessions and at pre- and posttreatment. Gaze-contingent music reward therapy yielded greater reductions of symptoms of social anxiety disorder than the control condition on both clinician-rated and self-reported measures. Therapeutic effects were maintained at follow-up. Gaze-contingent music reward therapy, but not the control condition, also reduced dwell time on threat, which partially mediated clinical effects. Finally, gaze-contingent music reward therapy, but not the control condition, also altered dwell time on socially threatening faces not used in training, reflecting near-transfer training generalization. This is the first randomized controlled trial to examine a gaze-contingent intervention in social anxiety disorder. The results demonstrate target engagement and clinical effects. This study sets the stage for larger randomized controlled trials and testing in other emotional disorders.

  15. Maternal Oxytocin Response Predicts Mother-to-Infant Gaze

    PubMed Central

    Kim, Sohye; Fonagy, Peter; Koos, Orsolya; Dorsett, Kimberly; Strathearn, Lane

    2014-01-01

    The neuropeptide oxytocin is importantly implicated in the emergence and maintenance of maternal behavior that forms the basis of the mother-infant bond. However, no research has yet examined the specific association between maternal oxytocin and maternal gaze, a key modality through which the mother makes social contact and engages with her infant. Furthermore, prior oxytocin studies have assessed maternal engagement primarily during episodes free of infant distress, while maternal engagement during infant distress is considered to be uniquely relevant to the formation of secure mother-infant attachment. Two patterns of maternal gaze, maternal gaze toward and gaze shifts away from the infant, were micro-coded while 50 mothers interacted with their 7-month-old infants during a modified still-face procedure. Maternal oxytocin response was defined as a change in the mother’s plasma oxytocin level following interaction with her infant as compared to baseline. The mother’s oxytocin response was positively associated with the duration of time her gaze was directed toward her infant, while negatively associated with the frequency with which her gaze shifted away from her infant. Importantly, mothers who showed low/average oxytocin response demonstrated a significant decrease in their gaze toward their infants during periods of infant distress, while such change was not observed in mothers with high oxytocin response. The findings underscore the involvement of oxytocin in regulating the mother’s responsive engagement with her infant, particularly in times when the infant’s need for access to the mother is greatest. PMID:24184574

  16. Attentional deployment is not necessary for successful emotion regulation via cognitive reappraisal or expressive suppression.

    PubMed

    Bebko, Genna M; Franconeri, Steven L; Ochsner, Kevin N; Chiao, Joan Y

    2014-06-01

    According to appraisal theories of emotion, cognitive reappraisal is a successful emotion regulation strategy because it involves cognitively changing our thoughts, which, in turn, change our emotions. However, recent evidence has challenged the importance of cognitive change and, instead, has suggested that attentional deployment may at least partly explain the emotion regulation success of cognitive reappraisal. The purpose of the current study was to examine the causal relationship between attentional deployment and emotion regulation success. We examined two commonly used emotion regulation strategies, cognitive reappraisal and expressive suppression, because both depend on attention but have divergent behavioral, experiential, and physiological outcomes. Participants were either instructed to regulate emotions during free-viewing (unrestricted image viewing) or gaze-controlled (restricted image viewing) conditions and to self-report negative emotional experience. For both emotion regulation strategies, emotion regulation success was not altered by changes in participant control over the (a) direction of attention (free-viewing vs. gaze-controlled) during image viewing and (b) valence (negative vs. neutral) of visual stimuli viewed when gaze was controlled. Taken together, these findings provide convergent evidence that attentional deployment does not alter subjective negative emotional experience during either cognitive reappraisal or expressive suppression, suggesting that strategy-specific processes, such as cognitive appraisal and response modulation, respectively, may have a greater impact on emotion regulation success than processes common to both strategies, such as attention.

  17. Hierarchical Encoding of Social Cues in Primate Inferior Temporal Cortex.

    PubMed

    Morin, Elyse L; Hadj-Bouziane, Fadila; Stokes, Mark; Ungerleider, Leslie G; Bell, Andrew H

    2015-09-01

    Faces convey information about identity and emotional state, both of which are important for our social interactions. Models of face processing propose that changeable versus invariant aspects of a face, specifically facial expression/gaze direction versus facial identity, are coded by distinct neural pathways and yet neurophysiological data supporting this separation are incomplete. We recorded activity from neurons along the inferior bank of the superior temporal sulcus (STS), while monkeys viewed images of conspecific faces and non-face control stimuli. Eight monkey identities were used, each presented with 3 different facial expressions (neutral, fear grin, and threat). All facial expressions were displayed with both a direct and averted gaze. In the posterior STS, we found that about one-quarter of face-responsive neurons are sensitive to social cues, the majority of which being sensitive to only one of these cues. In contrast, in anterior STS, not only did the proportion of neurons sensitive to social cues increase, but so too did the proportion of neurons sensitive to conjunctions of identity with either gaze direction or expression. These data support a convergence of signals related to faces as one moves anteriorly along the inferior bank of the STS, which forms a fundamental part of the face-processing network. Published by Oxford University Press 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  18. Adaptation of social and non-social cues to direction in adults with autism spectrum disorder and neurotypical adults with autistic traits.

    PubMed

    Lawson, Rebecca P; Aylward, Jessica; Roiser, Jonathan P; Rees, Geraint

    2018-01-01

    Perceptual constancy strongly relies on adaptive gain control mechanisms, which shift perception as a function of recent sensory history. Here we examined the extent to which individual differences in magnitude of adaptation aftereffects for social and non-social directional cues are related to autistic traits and sensory sensitivity in healthy participants (Experiment 1); and also whether adaptation for social and non-social directional cues is differentially impacted in adults with Autism Spectrum Disorder (ASD) relative to neurotypical (NT) controls (Experiment 2). In Experiment 1, individuals with lower susceptibility to adaptation aftereffects, i.e. more 'veridical' perception, showed higher levels of autistic traits across social and non-social stimuli. Furthermore, adaptation aftereffects were predictive of sensory sensitivity. In Experiment 2, only adaptation to eye-gaze was diminished in adults with ASD, and this was related to difficulties categorizing eye-gaze direction at baseline. Autism Diagnostic Observation Schedule (ADOS) scores negatively predicted lower adaptation for social (head and eye-gaze direction) but not non-social (chair) stimuli. These results suggest that the relationship between adaptation and the broad socio-cognitive processing style captured by 'autistic traits' may be relatively domain-general, but in adults with ASD diminished adaptation is only apparent where processing is most severely impacted, such as the perception of social attention cues. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Culture and Listeners' Gaze Responses to Stuttering

    ERIC Educational Resources Information Center

    Zhang, Jianliang; Kalinowski, Joseph

    2012-01-01

    Background: It is frequently observed that listeners demonstrate gaze aversion to stuttering. This response may have profound social/communicative implications for both fluent and stuttering individuals. However, there is a lack of empirical examination of listeners' eye gaze responses to stuttering, and it is unclear whether cultural background…

  20. Stabilization of gaze during circular locomotion in light. I. Compensatory head and eye nystagmus in the running monkey

    NASA Technical Reports Server (NTRS)

    Solomon, D.; Cohen, B.

    1992-01-01

    1. A rhesus and cynomolgus monkey were trained to run around the perimeter of a circular platform in light. We call this "circular locomotion" because forward motion had an angular component. Head and body velocity in space were recorded with angular rate sensors and eye movements with electrooculography (EOG). From these measurements we derived signals related to the angular velocity of the eyes in the head (Eh), of the head on the body (Hb), of gaze on the body (Gb), of the body in space (Bs), of gaze in space (Gs), and of the gain of gaze (Gb/Bs). 2. The monkeys had continuous compensatory nystagmus of the head and eyes while running, which stabilized Gs during the slow phases. The eyes established and maintained compensatory gaze velocities at the beginning and end of the slow phases. The head contributed to gaze velocity during the middle of the slow phases. Slow phase Gb was as high as 250 degrees/s, and targets were fixed for gaze angles as large as 90-140 degrees. 3. Properties of the visual surround affected both the gain and strategy of gaze compensation in the one monkey tested. Gains of Eh ranged from 0.3 to 1.1 during compensatory gaze nystagmus. Gains of Hb varied around 0.3 (0.2-0.7), building to a maximum as Eh dropped while running past sectors of interest. Consistent with predictions, gaze gains varied from below to above unity, when translational and angular body movements with regard to the target were in opposite or the same directions, respectively. 4. Gaze moved in saccadic shifts in the direction of running during quick phases. Most head quick phases were small, and at times the head only paused during an eye quick phase. Eye quick phases were larger, ranging up to 60 degrees. This is larger than quick phases during passive rotation or saccades made with the head fixed. 5. These data indicate that head and eye nystagmus are natural phenomena that support gaze compensation during locomotion. 
Despite differential utilization of the head and eyes in various conditions, Gb compensated for Bs. There are various frames of reference in which an estimate of angular velocity that drives the head and eyes could be based. We infer that body in space velocity (Bs) is likely to be represented centrally to provide this signal.
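
    The derived signals in this abstract follow simple velocity bookkeeping. A sketch under an assumed sign convention (angular velocities sum outward from eye to space; the abstract does not spell this out):

```python
# Eh: eye-in-head, Hb: head-on-body, Bs: body-in-space angular velocities.
def gaze_signals(eh, hb, bs):
    gb = eh + hb      # gaze velocity on the body (Gb)
    gs = gb + bs      # gaze velocity in space (Gs)
    gain = gb / bs    # "gain of gaze" (Gb/Bs)
    return gb, gs, gain

# Perfect compensation: eye and head together cancel body rotation, so gaze
# is stable in space (Gs = 0) and the gaze gain has magnitude 1.
gb, gs, gain = gaze_signals(eh=-150.0, hb=-100.0, bs=250.0)  # deg/s
print(gs, gain)  # 0.0 and -1.0
```

In this convention, slow-phase compensation corresponds to Gb opposing Bs; gains above or below unity then reflect the additional translational geometry the abstract describes.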

  1. Is Word Shape Still in Poor Shape for the Race to the Lexicon?

    ERIC Educational Resources Information Center

    Hill, Jessica C.

    2010-01-01

    Current models of normal reading behavior emphasize not only the recognition and processing of the word being fixated (n) but also processing of the upcoming parafoveal word (n + 1). Gaze contingent displays employing the boundary paradigm often mask words in order to understand how much and what type of processing is completed on the parafoveal…

  2. Use of an augmented-vision device for visual search by patients with tunnel vision.

    PubMed

    Luo, Gang; Peli, Eli

    2006-09-01

    To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on the visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF, 8 degrees -11 degrees wide) carried out the search over a 90 degrees x 74 degrees area, and nine subjects (VF, 7 degrees -16 degrees wide) carried out the search over a 66 degrees x 52 degrees area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and the smaller area searches. When using the device, a significant reduction in search time (approximately 28%-74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10 degrees in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.
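
    The abstract does not define its "directness of search path" measure; one common formalization (assumed here, not necessarily the authors') is the ratio of straight-line distance from search start to target to the actual scan-path length, so 1.0 is a perfectly direct search and lower values mean more wandering:

```python
import math

def directness(path):
    """path: list of (x, y) gaze samples from search start to target."""
    straight = math.dist(path[0], path[-1])
    travelled = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    return straight / travelled if travelled else 1.0

direct = directness([(0, 0), (10, 0)])          # straight to the target
wander = directness([(0, 0), (5, 5), (10, 0)])  # detour on the way
print(direct, wander)
```

Under this definition the cue conditions would raise directness toward 1.0, consistent with the reported improvements.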

  3. Parafoveal Processing Affects Outgoing Saccade Length during the Reading of Chinese

    ERIC Educational Resources Information Center

    Liu, Yanping; Reichle, Erik D.; Li, Xingshan

    2015-01-01

    Participants' eye movements were measured while reading Chinese sentences in which target-word frequency and the availability of parafoveal processing were manipulated using a gaze-contingent boundary paradigm. The results of this study indicate that preview availability and its interaction with word frequency modulated the length of the saccades…

  4. An Analysis of the Time Course of Lexical Processing during Reading

    ERIC Educational Resources Information Center

    Sheridan, Heather; Reichle, Erik D.

    2016-01-01

    Reingold, Reichle, Glaholt, and Sheridan (2012) reported a gaze-contingent eye-movement experiment in which survival-curve analyses were used to examine the effects of word frequency, the availability of parafoveal preview, and initial fixation location on the time course of lexical processing. The key results of these analyses suggest that…

  5. Variants of Independence in the Perception of Facial Identity and Expression

    ERIC Educational Resources Information Center

    Fitousi, Daniel; Wenger, Michael J.

    2013-01-01

    A prominent theory in the face perception literature--the parallel-route hypothesis (Bruce & Young, 1986)--assumes a dedicated channel for the processing of identity that is separate and independent from the channel(s) in which nonidentity information is processed (e.g., expression, eye gaze). The current work subjected this assumption to…

  6. Reading Faces: Differential Lateral Gaze Bias in Processing Canine and Human Facial Expressions in Dogs and 4-Year-Old Children

    PubMed Central

    Racca, Anaïs; Guo, Kun; Meints, Kerstin; Mills, Daniel S.

    2012-01-01

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was also run with 4-year-old children, who showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions. PMID:22558335

  7. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    PubMed

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing is categorical in nature, and could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Evaluative Processing of Food Images: A Conditional Role for Viewing in Preference Formation

    PubMed Central

    Wolf, Alexandra; Ounjai, Kajornvut; Takahashi, Muneyoshi; Kobayashi, Shunsuke; Matsuda, Tetsuya; Lauwereyns, Johan

    2018-01-01

    Previous research suggested a role of gaze in preference formation, not merely as an expression of preference, but also as a causal influence. According to the gaze cascade hypothesis, the longer subjects look at an item, the more likely they are to develop a preference for it. However, to date the connection between viewing and liking has been investigated predominately with self-paced viewing conditions in which the subjects were required to select certain items from simultaneously presented stimuli on the basis of perceived visual attractiveness. Such conditions might promote a default, but non-mandatory connection between viewing and liking. To explore whether the connection is separable, we examined the evaluative processing of single naturalistic food images in a 2 × 2 design, conducted completely within subjects, in which we varied both the type of exposure (self-paced versus time-controlled) and the type of evaluation (non-exclusive versus exclusive). In the self-paced exclusive evaluation, longer viewing was associated with a higher likelihood of a positive evaluation. However, in the self-paced non-exclusive evaluation, the trend reversed such that longer viewing durations were associated with lesser ratings. Furthermore, in the time-controlled tasks, both with non-exclusive and exclusive evaluation, there was no significant relationship between the viewing duration and the evaluation. The overall pattern of results was consistent for viewing times measured in terms of exposure duration (i.e., the duration of stimulus presentation on the screen) and in terms of actual gaze duration (i.e., the amount of time the subject effectively gazed at the stimulus on the screen). The data indicated that viewing does not intrinsically lead to a higher evaluation when evaluating single food images; instead, the relationship between viewing duration and evaluation depends on the type of task. We suggest that self-determination of exposure duration may be a prerequisite for any influence from viewing time on evaluative processing, regardless of whether the influence is facilitative. Moreover, the purported facilitative link between viewing and liking appears to be limited to exclusive evaluation, when only a restricted number of items can be included in a chosen set. PMID:29942273

  9. Evaluative Processing of Food Images: A Conditional Role for Viewing in Preference Formation.

    PubMed

    Wolf, Alexandra; Ounjai, Kajornvut; Takahashi, Muneyoshi; Kobayashi, Shunsuke; Matsuda, Tetsuya; Lauwereyns, Johan

    2018-01-01

    Previous research suggested a role of gaze in preference formation, not merely as an expression of preference, but also as a causal influence. According to the gaze cascade hypothesis, the longer subjects look at an item, the more likely they are to develop a preference for it. However, to date the connection between viewing and liking has been investigated predominately with self-paced viewing conditions in which the subjects were required to select certain items from simultaneously presented stimuli on the basis of perceived visual attractiveness. Such conditions might promote a default, but non-mandatory connection between viewing and liking. To explore whether the connection is separable, we examined the evaluative processing of single naturalistic food images in a 2 × 2 design, conducted completely within subjects, in which we varied both the type of exposure (self-paced versus time-controlled) and the type of evaluation (non-exclusive versus exclusive). In the self-paced exclusive evaluation, longer viewing was associated with a higher likelihood of a positive evaluation. However, in the self-paced non-exclusive evaluation, the trend reversed such that longer viewing durations were associated with lesser ratings. Furthermore, in the time-controlled tasks, both with non-exclusive and exclusive evaluation, there was no significant relationship between the viewing duration and the evaluation. The overall pattern of results was consistent for viewing times measured in terms of exposure duration (i.e., the duration of stimulus presentation on the screen) and in terms of actual gaze duration (i.e., the amount of time the subject effectively gazed at the stimulus on the screen). The data indicated that viewing does not intrinsically lead to a higher evaluation when evaluating single food images; instead, the relationship between viewing duration and evaluation depends on the type of task. We suggest that self-determination of exposure duration may be a prerequisite for any influence from viewing time on evaluative processing, regardless of whether the influence is facilitative. Moreover, the purported facilitative link between viewing and liking appears to be limited to exclusive evaluation, when only a restricted number of items can be included in a chosen set.

  10. The Development of Mentalistic Gaze Understanding

    ERIC Educational Resources Information Center

    Doherty, Martin J.

    2006-01-01

    Very young infants are sensitive to and follow other people's gaze. By 18 months children, like chimpanzees, apparently represent the spatial relationship between viewer and object viewed: they can follow eye-direction alone, and react appropriately if the other's gaze is blocked by occluding barriers. This paper assesses when children represent…

  11. Impairment of Unconscious, but Not Conscious, Gaze-Triggered Attention Orienting in Asperger's Disorder

    ERIC Educational Resources Information Center

    Sato, Wataru; Uono, Shota; Okada, Takashi; Toichi, Motomi

    2010-01-01

    Impairment of joint attention represents the core clinical features of pervasive developmental disorders (PDDs), including autism and Asperger's disorder. However, experimental studies reported intact gaze-triggered attentional orienting in PDD. Since all previous studies employed supraliminal presentation of gaze stimuli, we hypothesized that…

  12. "Beloved" as an Oppositional Gaze

    ERIC Educational Resources Information Center

    Mao, Weiqiang; Zhang, Mingquan

    2009-01-01

    This paper studies the strategy Morrison adopts in "Beloved" to give voice to black Americans long silenced by the dominant white American culture. Instead of being objects passively accepting their aphasia, black Americans become speaking subjects that are able to cast an oppositional gaze to avert the objectifying gaze of white…

  13. Gaze-Based Assistive Technology - Usefulness in Clinical Assessments.

    PubMed

    Wandin, Helena

    2017-01-01

    Gaze-based assistive technology was used in informal clinical assessments. Excerpts of medical journals were analyzed by directed content analysis using a model of communicative competence. The results of this pilot study indicate that gaze-based assistive technology is a useful tool in communication assessments that can generate clinically relevant information.

  14. Anxiety and Sensitivity to Eye Gaze in Emotional Faces

    ERIC Educational Resources Information Center

    Holmes, Amanda; Richards, Anne; Green, Simon

    2006-01-01

    This paper reports three studies in which stronger orienting to perceived eye gaze direction was revealed when observers viewed faces showing fearful or angry, compared with happy or neutral, emotional expressions. Gaze-related spatial cueing effects to laterally presented fearful faces and centrally presented angry faces were also modulated by…

  15. Implicit social learning in relation to autistic-like traits.

    PubMed

    Hudson, Matthew; Nijboer, Tanja C W; Jellema, Tjeerd

    2012-12-01

    We investigated if variation in autistic traits in the typically-developed population (using the Autism-spectrum Quotient, AQ) influenced implicit learning of social information. In the learning phase, participants repeatedly observed two identities whose gaze and expression conveyed either a pro- or antisocial disposition. These identities were then employed in a gaze-cueing paradigm. Participants made speeded responses to a peripheral target that was spatially pre-cued by a non-predictive gaze direction. The low AQ group (n = 50) showed a smaller gaze-cueing effect for the antisocial than for the prosocial identity. The high AQ group (n = 48) showed equivalent gaze-cueing for both identities. Others' intentions/dispositions can be learned implicitly and affect subsequent responses to their behavior. This ability is impaired with increasing levels of autistic traits.

  16. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  17. Fortunes and misfortunes of political leaders reflected in the eyes of their electors.

    PubMed

    Porciello, Giuseppina; Liuzza, Marco Tullio; Minio-Paluello, Ilaria; Caprara, Gian Vittorio; Aglioti, Salvatore Maria

    2016-03-01

    Gaze-following is a pivotal social behaviour that, although largely automatic, is permeable to high-order variables like political affiliation. A few years ago we reported that the gaze of Italian right-wing voters was selectively captured by the gaze of their leader Silvio Berlusconi. This effect was particularly evident in voters who saw themselves as similar to Berlusconi. Two years later, we were able to run the present follow-up study because Berlusconi's popularity had drastically dropped due to sex and political scandals, and he resigned from office. In a representative subsample of our original group, we investigated whether perceived similarity and gaze-following reflected Berlusconi's loss in popularity. We were also able to test the same hypothesis in an independent group of right-wing voters when their leader, Renata Polverini, resigned as Governor of 'Regione Lazio' due to political scandals. Our results show that the leaders' fall in popularity paralleled the reduction of their gaze's attracting power, as well as the decrease in similarity perceived by their voters. The less similar right-wing voters felt to their leader, the less they followed his/her gaze. Thus, the present experimental findings suggest that gaze-following can be modulated by complex situational and dispositional factors such as leader's popularity and voter-leader perceived similarity.

  18. Gaze as a biometric

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2014-03-01

    Two people may analyze a visual scene in two completely different ways. Our study sought to determine whether human gaze may be used to establish the identity of an individual. To accomplish this objective we investigated the gaze pattern of twelve individuals viewing still images with different spatial relationships. Specifically, we created 5 visual "dot-pattern" tests to be shown on a standard computer monitor. These tests challenged the viewer's capacity to distinguish proximity, alignment, and perceptual organization. Each test included 50 images of varying difficulty (total of 250 images). Eye-tracking data were collected from each individual while taking the tests. The eye-tracking data were converted into gaze velocities and analyzed with Hidden Markov Models to develop personalized gaze profiles. Using leave-one-out cross-validation, we observed that these personalized profiles could differentiate among the 12 users with classification accuracy ranging between 53% and 76%, depending on the test. This was statistically significantly better than random guessing (i.e., 8.3% or 1 out of 12). Classification accuracy was higher for the tests where the users' average gaze velocity per case was lower. The study findings support the feasibility of using gaze as a biometric or personalized biomarker. These findings could have implications in Radiology training and the development of personalized e-learning environments.
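    The feature-extraction step described above, converting raw eye-tracking samples into gaze velocities, can be sketched as follows. This is an illustrative reconstruction under stated assumptions (uniform sampling rate, 2-D gaze positions in degrees); the function name is hypothetical, and the subsequent Hidden Markov Model training (e.g., with a library such as hmmlearn) is omitted.

    ```python
    # Illustrative sketch (not the study's code): turn a sequence of gaze samples
    # into instantaneous gaze speeds, the feature stream a per-user HMM would model.
    import math

    def gaze_velocities(samples, sampling_rate_hz):
        """samples: list of (x, y) gaze positions in degrees; returns deg/s per step."""
        dt = 1.0 / sampling_rate_hz
        return [math.dist(a, b) / dt for a, b in zip(samples, samples[1:])]

    # 60 Hz recording: a 0.5-degree shift between consecutive samples is ~30 deg/s,
    # while a repeated position (a fixation) yields 0 deg/s.
    v = gaze_velocities([(0.0, 0.0), (0.5, 0.0), (0.5, 0.0)], 60)
    ```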

  19. Design of a Gaze-Sensitive Virtual Social Interactive System for Children With Autism

    PubMed Central

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child’s dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented that examine the acceptability and usefulness of VIGART. The results in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART indicate the potential of this novel technology. PMID:21609889

  20. Decline of vertical gaze and convergence with aging.

    PubMed

    Oguro, Hiroaki; Okada, Kazunori; Suyama, Nobuo; Yamashita, Kazuya; Yamaguchi, Shuhei; Kobayashi, Shotai

    2004-01-01

    Disturbance of vertical eye movement and ocular convergence is often observed in elderly people, but little is known about its frequency. The purpose of this study was to investigate age-associated changes in vertical eye movement and convergence in healthy elderly people, using a digital video camera system. We analyzed vertical eye movements and convergence in 113 neurologically normal elderly subjects (mean age 70 years) in comparison with 20 healthy young controls (mean age 32 years). The range of vertical eye movement was analyzed quantitatively and convergence was analyzed qualitatively. In the elderly subjects, the angle of vertical gaze decreased with advancing age and it was significantly smaller than that of the younger subjects. The mean angle of upward gaze was significantly smaller than that of downward gaze for both young and elderly subjects. Upward gaze impairment became apparent in subjects in their 70s, and downward gaze impairment in subjects in their 60s. Disturbance in convergence also increased with advancing age, and was found in 40.7% of the elderly subjects. These findings indicate that the mechanisms of age-related change are different for upward and downward vertical gaze. Digital video camera monitoring was useful for assessing and monitoring eye movements. Copyright 2004 S. Karger AG, Basel

  1. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans.
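    The core geometric idea of the gaze tracking algorithm described above, composing the eye-in-head direction with the head orientation to obtain a world-referenced gaze direction, can be sketched as below. This is a simplified illustration, not the paper's implementation: it assumes yaw/pitch angles only (no head roll), an up-positive pitch convention, and hypothetical names.

    ```python
    # Illustrative sketch: a world-frame gaze vector is the eye-in-head direction
    # rotated by the head orientation. Conventions (assumed, not from the paper):
    # x right, y up, z forward; yaw about y, pitch about x, angles in degrees.
    import math

    def gaze_direction(head_yaw, head_pitch, eye_yaw, eye_pitch):
        """Return a unit gaze vector (x, y, z) in the world frame."""
        ey, ep = math.radians(eye_yaw), math.radians(eye_pitch)
        # Eye-in-head unit vector.
        x = math.cos(ep) * math.sin(ey)
        y = math.sin(ep)
        z = math.cos(ep) * math.cos(ey)
        hy, hp = math.radians(head_yaw), math.radians(head_pitch)
        # Rotate by head pitch (about x, up-positive), then by head yaw (about y).
        y, z = y * math.cos(hp) + z * math.sin(hp), -y * math.sin(hp) + z * math.cos(hp)
        x, z = x * math.cos(hy) + z * math.sin(hy), -x * math.sin(hy) + z * math.cos(hy)
        return (x, y, z)

    # Head turned 30 deg and eyes turned a further 30 deg combine to a 60 deg gaze.
    g = gaze_direction(30, 0, 30, 0)
    ```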

  2. Design of a gaze-sensitive virtual social interactive system for children with autism.

    PubMed

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2011-08-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions and thus monitoring eye-gaze can be valuable to design intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child's dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented that examine the acceptability and usefulness of VIGART. The results in terms of improvement in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART indicate the potential of this novel technology. © 2011 IEEE

  3. Investigation of visually induced motion sickness in dynamic 3D contents based on subjective judgment, heart rate variability, and depth gaze behavior.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2014-01-01

    Visually induced motion sickness (VIMS) is an important safety issue in stereoscopic 3D technology. Accompanying subjective judgment of VIMS with objective measurement is useful to identify not only biomedical effects of dynamic 3D contents, but also provoking scenes that induce VIMS, duration of VIMS, and user behavior during VIMS. Heart rate variability and depth gaze behavior are appropriate physiological indicators for such objective observation. However, there is no information about relationship between subjective judgment of VIMS, heart rate variability, and depth gaze behavior. In this paper, we present a novel investigation of VIMS based on simulator sickness questionnaire (SSQ), electrocardiography (ECG), and 3D gaze tracking. Statistical analysis on SSQ data shows that nausea and disorientation symptoms increase as amount of dynamic motions increases (nausea: p < 0.005; disorientation: p < 0.05). To reduce VIMS, SSQ and ECG data suggest that users should perform voluntary gaze fixation at one point when experiencing vertical motion (up or down) and horizontal motion (turn left and right) in dynamic 3D contents. Observation of 3D gaze tracking data reveals that users who experienced VIMS tended to have more unstable depth gaze than those who did not experience VIMS.

  4. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    PubMed Central

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather than reversed) visual field locations, they were discriminated better by humans and could be decoded with higher accuracy from brain activity patterns in the right occipital face area. This suggests that brain representations of face features do not cover the visual field uniformly. It may help us understand the well-known face-inversion effect and conditions affecting gaze behavior toward faces, such as prosopagnosia and autism spectrum disorders. PMID:27605606

  5. Visual attentional bias for food in adolescents with binge-eating disorder.

    PubMed

    Schmidt, Ricarda; Lüthold, Patrick; Kittel, Rebekka; Tetzlaff, Anne; Hilbert, Anja

    2016-09-01

    Evidence suggests that adults with binge-eating disorder (BED) are prone to having their attention interfered by food cues, and that food-related attentional biases are associated with calorie intake and eating disorder psychopathology. For adolescents with BED experimental evidence on attentional processing of food cues is lacking. Using eye-tracking and a visual search task, the present study examined visual orienting and disengagement processes of food in youth with BED. Eye-movement data and reaction times were recorded in 25 adolescents (12-20 years) with BED and 25 controls (CG) individually matched for sex, age, body mass index, and socio-economic status. During a free exploration paradigm, the BED group showed a greater gaze duration bias for food images than the CG. Groups did not differ in gaze direction biases. In a visual search task, the BED group showed a greater detection bias for food targets than the CG. Group differences were more pronounced for personally attractive than unattractive food images. Regarding clinical associations, only in the BED group was the gaze duration bias for food associated with increased hunger and lower body mass index, and the detection bias for food targets associated with greater reward sensitivity. The study provided first evidence of an attentional bias to food in adolescents with BED. However, more research is needed for further specifying disengagement and orienting processes in adolescent BED, including overt and covert attention, and their prospective associations with binge-eating behaviors and associated psychopathology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Semantic parafoveal-on-foveal effects and preview benefits in reading: Evidence from Fixation Related Potentials.

    PubMed

    López-Peréz, P J; Dampuré, J; Hernández-Cabrera, J A; Barber, H A

    2016-11-01

    During reading parafoveal information can affect the processing of the word currently fixated (parafovea-on-fovea effect) and words perceived parafoveally can facilitate their subsequent processing when they are fixated on (preview effect). We investigated parafoveal processing by simultaneously recording eye movements and EEG measures. Participants read word pairs that could be semantically associated or not. Additionally, the boundary paradigm allowed us to carry out the same manipulation on parafoveal previews that were displayed until the reader's gaze moved to the target words. Event Related Potentials time-locked to the prime-preview presentation showed a parafoveal-on-foveal N400 effect. Fixation Related Potentials time-locked to the saccade offset showed an N400 effect related to the prime-target relationship. Furthermore, this latter effect interacted with the semantic manipulation of the previews, supporting a semantic preview benefit. These results demonstrate that at least under optimal conditions foveal and parafoveal information can be simultaneously processed and integrated. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. High autistic trait individuals do not modulate gaze behaviour in response to social presence but look away more when actively engaged in an interaction.

    PubMed

    von dem Hagen, Elisabeth A H; Bright, Naomi

    2017-02-01

    Autism is characterised by difficulties in social functioning, notably in interactions with other people. Yet, most studies addressing social difficulties have used static images or, at best, videos of social stimuli, with no scope for real interaction. Here, we study one crucial aspect of social interactions-gaze behaviour-in an interactive setting. First, typical individuals were shown videos of an experimenter and, by means of a deception procedure, were either led to believe that the experimenter was present via a live video-feed or was pre-recorded. Participants' eye movements revealed that when passively viewing an experimenter they believed to be "live," they looked less at that person than when they believed the experimenter video was pre-recorded. Interestingly, this reduction in viewing behaviour in response to the believed "live" presence of the experimenter was absent in individuals high in autistic traits, suggesting a relative insensitivity to social presence alone. When participants were asked to actively engage in a real-time interaction with the experimenter, however, high autistic trait individuals looked significantly less at the experimenter relative to low autistic trait individuals. The results reinforce findings of atypical gaze behaviour in individuals high in autistic traits, but suggest that active engagement in a social interaction may be important in eliciting reduced looking. We propose that difficulties with the spatio-temporal dynamics associated with real social interactions rather than underlying difficulties processing the social stimulus itself may drive these effects. The results underline the importance of developing ecologically valid methods to investigate social cognition. Autism Res 2017, 10: 359-368. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.

  8. Effects of Facial Symmetry and Gaze Direction on Perception of Social Attributes: A Study in Experimental Art History.

    PubMed

    Folgerø, Per O; Hodne, Lasse; Johansson, Christer; Andresen, Alf E; Sætren, Lill C; Specht, Karsten; Skaar, Øystein O; Reber, Rolf

    2016-01-01

    This article explores the possibility of testing hypotheses about art production in the past by collecting data in the present. We call this enterprise "experimental art history". Why did medieval artists prefer to paint Christ with his face directed towards the beholder, while profane faces were noticeably more often painted in different degrees of profile? Is a preference for frontal faces motivated by deeper evolutionary and biological considerations? Head and gaze direction is a significant factor for detecting the intentions of others, and accurate detection of gaze direction depends on strong contrast between a dark iris and a bright sclera, a combination found only in humans among the primates. One uniquely human capacity is language acquisition, where the detection of shared or joint attention, for example through detection of gaze direction, contributes significantly to the ease of acquisition. The perceived face and gaze direction is also related to fundamental emotional reactions such as fear, aggression, empathy and sympathy. The fast-track modulator model presents a related fast and unconscious subcortical route that involves many central brain areas. Activity in this pathway mediates the affective valence of the stimulus. In particular, different sub-regions of the amygdala show specific activation as response to gaze direction, head orientation and the valence of facial expression. We present three experiments on the effects of face orientation and gaze direction on the judgments of social attributes. We observed that frontal faces with direct gaze were more highly associated with positive adjectives. Does this help to associate positive values to the Holy Face in a Western context? The formal result indicates that the Holy Face is perceived more positively than profiles with both direct and averted gaze. Two control studies, using a Brazilian and a Dutch database of photographs, showed a similar but weaker effect with a larger contrast between the gaze directions for profiles. Our findings indicate that many factors affect the impression of a face, and that eye contact in combination with face direction reinforce the general impression of portraits, rather than determine it.

  9. Virtual social interactions in social anxiety: the impact of sex, gaze, and interpersonal distance.

    PubMed

    Wieser, Matthias J; Pauli, Paul; Grosseibl, Miriam; Molzow, Ina; Mühlberger, Andreas

    2010-10-01

    In social interactions, interpersonal distance between interaction partners plays an important role in determining the status of the relationship. Interpersonal distance is an important nonverbal behavior, and is used to regulate personal space in a complex interplay with other nonverbal behaviors such as eye gaze. In social anxiety, studies regarding the impact of interpersonal distance on within-situation avoidance behavior remain rare. Thus, the present study aimed to scrutinize the relationship between gaze direction, sex, interpersonal distance, and social anxiety in social interactions. Social interactions were modeled in a virtual-reality (VR) environment, where 20 low and 19 high socially anxious women were confronted with approaching male and female characters, who stopped in front of the participant, either some distance away or close to them, and displayed either a direct or an averted gaze. Gaze and head movements, as well as heart rate, were measured as indices of avoidance behavior and fear reactions. High socially anxious participants showed a complex pattern of avoidance behavior: when the avatar was standing farther away, high socially anxious women avoided gaze contact with male avatars showing a direct gaze. Furthermore, they showed avoidance behavior (backward head movements) in response to male avatars showing a direct gaze, regardless of the interpersonal distance. Overall, the current study showed that VR social interactions can be a very useful tool for investigating avoidance behavior of socially anxious individuals in highly controlled situations. This might also be the first step in using VR social interactions in clinical protocols for the therapy of social anxiety disorder.

  10. Gaze stability, dynamic balance and participation deficits in people with multiple sclerosis at fall-risk.

    PubMed

    Garg, Hina; Dibble, Leland E; Schubert, Michael C; Sibthorp, Jim; Foreman, K Bo; Gappmaier, Eduard

    2018-05-05

    Dizziness is a common complaint in persons with Multiple Sclerosis (PwMS), and demyelination of afferent or efferent pathways to and from the vestibular nuclei may adversely affect the angular Vestibulo-Ocular Reflex (aVOR) and vestibulo-spinal function; nevertheless, few studies have examined gaze and dynamic balance function in PwMS. The aims were to 1) determine the differences in gaze stability, dynamic balance and participation measures between PwMS and controls, and 2) examine the relationships between gaze stability, dynamic balance and participation. Nineteen ambulatory PwMS at fall-risk and 14 age-matched controls were recruited. Outcomes included (a) gaze stability [aVOR gain (ratio of eye to head velocity); number of Compensatory Saccades (CS) per head rotation; CS latency; gaze position error; Coefficient of Variation (CV) of aVOR gain], (b) dynamic balance [Functional Gait Assessment (FGA); four square step test], and (c) participation [dizziness handicap inventory; activities-specific balance confidence scale]. Separate independent t-tests and Pearson's correlations were calculated. PwMS were aged 53 ± 11.7 years and reported 4.2 ± 3.3 falls/year. PwMS demonstrated significant (p < 0.05) impairments in gaze stability, dynamic balance and participation measures compared to controls. CV of aVOR gain and CS latency were significantly correlated with FGA. Deficits and correlations across a spectrum of disability measures highlight the relevance of gaze and dynamic balance assessment in PwMS. This article is protected by copyright. All rights reserved. © 2018 Wiley Periodicals, Inc.
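    The two gaze-stability metrics named in this record have simple definitions: aVOR gain is the ratio of eye velocity to head velocity for a head rotation, and its Coefficient of Variation (CV) is the standard deviation of the gains across rotations divided by their mean. A minimal sketch of that arithmetic (the velocity values below are invented for illustration, not study data):

```python
from statistics import mean, stdev

def avor_gain(eye_velocity, head_velocity):
    """aVOR gain: ratio of eye to head velocity for one head rotation."""
    return eye_velocity / head_velocity

# Invented per-rotation peak velocities (deg/s), illustration only.
trials = [(148.0, 160.0), (120.0, 150.0), (155.0, 155.0)]
gains = [avor_gain(eye, head) for eye, head in trials]

# CV of aVOR gain: a higher CV indicates a less consistent
# reflex response across head rotations.
cv = stdev(gains) / mean(gains)
```

A gain near 1.0 means the eyes counter-rotate at nearly the same speed as the head; the CV captures trial-to-trial variability rather than average performance, which is why it can correlate with functional measures such as the FGA even when mean gain does not.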

  11. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers.

    PubMed

    Marschner, Linda; Pannasch, Sebastian; Schulz, Johannes; Graupner, Sven-Thomas

    2015-08-01

    In social communication, the gaze direction of other persons provides important information to perceive and interpret their emotional response. Previous research investigated the influence of gaze by manipulating mutual eye contact. In doing so, gaze and body direction were changed as a whole, resulting in only congruent gaze and body directions (averted or directed) of another person. Here, we aimed to disentangle these effects by using short animated sequences of virtual agents posing with either direct or averted body or gaze. Attention allocation by means of eye movements, facial muscle response, and emotional experience to agents of different gender and facial expressions were investigated. Eye movement data revealed longer fixation durations, i.e., a stronger allocation of attention, when gaze and body direction were not congruent with each other or when both were directed towards the observer. This suggests that direct interaction as well as incongruous signals increase the demands on attentional resources in the observer. For the facial muscle response, only the zygomaticus major muscle revealed an effect of body direction, expressed by stronger activity in response to happy expressions for direct compared to averted gaze when the virtual character's body was directed towards the observer. Finally, body direction also influenced the emotional experience ratings towards happy expressions. While earlier findings suggested that mutual eye contact is the main source for increased emotional responding and attentional allocation, the present results indicate that the direction of the virtual agent's body and head also plays a minor but significant role. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. The effect of arousal and eye gaze direction on trust evaluations of stranger's faces: A potential pathway to paranoid thinking.

    PubMed

    Abbott, Jennie; Middlemiss, Megan; Bruce, Vicki; Smailes, David; Dudley, Robert

    2018-09-01

    When asked to evaluate faces of strangers, people with paranoia show a tendency to rate others as less trustworthy. The present study investigated the impact of arousal on this interpersonal bias, and whether this bias was specific to evaluations of trust or additionally affected other trait judgements. The study also examined the impact of eye gaze direction, as direct eye gaze has been shown to heighten arousal. In two experiments, non-clinical participants completed face rating tasks before and after either an arousal manipulation or control manipulation. Experiment one examined the effects of heightened arousal on judgements of trustworthiness. Experiment two examined the specificity of the bias, and the impact of gaze direction. Experiment one indicated that the arousal manipulation led to lower trustworthiness ratings. Experiment two showed that heightened arousal reduced trust evaluations of trustworthy faces, particularly trustworthy faces with averted gaze. The control group rated trustworthy faces with direct gaze as more trustworthy post-manipulation. There was some evidence that attractiveness ratings were affected similarly to the trust judgements, whereas judgements of intelligence were not affected by higher arousal. In both studies, participants reported low levels of arousal even after the manipulation and the use of a non-clinical sample limits the generalisability to clinical samples. There is a complex interplay between arousal, evaluations of trustworthiness and gaze direction. Heightened arousal influences judgements of trustworthiness, but within the context of face type and gaze direction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Facial Expressions Modulate the Ontogenetic Trajectory of Gaze-Following among Monkeys

    ERIC Educational Resources Information Center

    Teufel, Christoph; Gutmann, Anke; Pirow, Ralph; Fischer, Julia

    2010-01-01

    Gaze-following, the tendency to direct one's attention to locations looked at by others, is a crucial aspect of social cognition in human and nonhuman primates. Whereas the development of gaze-following has been intensely studied in human infants, its early ontogeny in nonhuman primates has received little attention. Combining longitudinal and…

  14. Revisiting Patterson's Paradigm: Gaze Behaviors in Deaf Communication.

    ERIC Educational Resources Information Center

    Luciano, Jason M.

    2001-01-01

    This article explains a sequential model of eye gaze and eye contact behaviors researched among hearing populations and explores these behaviors in people with deafness. It is found that characterizations of eye contact and eye gaze behavior applied to hearing populations are not completely applicable to those with deafness. (Contains references.)…

  15. Gaze Strategies in Skateboard Trick Jumps: Spatiotemporal Constraints in Complex Locomotion

    ERIC Educational Resources Information Center

    Klostermann, André; Küng, Philip

    2017-01-01

    Purpose: This study aimed to further the knowledge on gaze behavior in locomotion by studying gaze strategies in skateboard jumps of different difficulty that had to be performed either with or without an obstacle. Method: Nine experienced skateboarders performed "Ollie" and "Kickflip" jumps either over an obstacle or over a…

  16. Affective Evaluations of Objects Are Influenced by Observed Gaze Direction and Emotional Expression

    ERIC Educational Resources Information Center

    Bayliss, Andrew P.; Frischen, Alexandra; Fenske, Mark J.; Tipper, Steven P.

    2007-01-01

    Gaze direction signals another person's focus of interest. Facial expressions convey information about their mental state. Appropriate responses to these signals should reflect their combined influence, yet current evidence suggests that gaze-cueing effects for objects near an observed face are not modulated by its emotional expression. Here, we…

  17. It Takes Time and Experience to Learn How to Interpret Gaze in Mentalistic Terms

    ERIC Educational Resources Information Center

    Leavens, David A.

    2006-01-01

    What capabilities are required for an organism to evince an "explicit" understanding of gaze as a mentalistic phenomenon? One possibility is that mentalistic interpretations of gaze, like concepts of unseen, supernatural beings, are culturally-specific concepts, acquired through cultural learning. These abstract concepts may either require a…

  18. Eye Gaze Tracking using Correlation Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Boehnen, Chris Bensing; Bolme, David S

    In this paper, we studied a method for eye gaze tracking that provides gaze estimation from a standard webcam with a zoom lens and reduces the setup and calibration requirements for new users. Specifically, we have developed a gaze estimation method based on the relative locations of points on the top of the eyelid and the eye corners. The gaze estimation method in this paper is based on the distances between the top point of the eyelid and the eye corners detected by correlation filters. Advanced correlation filters were found to provide facial landmark detections that are accurate enough to determine the subject's gaze direction to within approximately 4-5 degrees, although calibration errors often produce a larger overall shift in the estimates. This is approximately a circle of diameter 2 inches for a screen that is at arm's length from the subject. At this accuracy it is possible to determine which regions of text or images the subject is looking at, but it falls short of being able to determine which word the subject has looked at.
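    The geometry behind this record's distance-based estimate can be sketched in a few lines. Everything below is an assumption for illustration: the landmark coordinates, the linear calibration, and the function names are hypothetical, not the authors' implementation, and the correlation-filter landmark detection is replaced by hard-coded points.

```python
def gaze_ratio(inner_corner, outer_corner, eyelid_top):
    """Position of the upper-eyelid apex as a fraction (0..1) of eye width."""
    (x_i, _), (x_o, _), (x_t, _) = inner_corner, outer_corner, eyelid_top
    width = x_o - x_i
    if width == 0:
        raise ValueError("degenerate eye landmarks")
    return (x_t - x_i) / width

def estimate_gaze_deg(ratio, center=0.5, deg_per_unit=60.0):
    """Map the apex ratio to a horizontal gaze angle via a linear calibration.

    `center` and `deg_per_unit` would come from a per-user calibration;
    the defaults here are illustrative placeholders.
    """
    return (ratio - center) * deg_per_unit

# Hard-coded (x, y) pixel landmarks standing in for correlation-filter output.
r = gaze_ratio(inner_corner=(100, 50), outer_corner=(140, 50), eyelid_top=(122, 42))
angle = estimate_gaze_deg(r)
```

With these example points the apex sits at 0.55 of the eye width, giving a small rightward angle; the record's point about calibration error corresponds to uncertainty in `center`, which shifts every estimate by a constant offset.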

  19. Is social attention impaired in schizophrenia? Gaze, but not pointing gestures, is associated with spatial attention deficits.

    PubMed

    Dalmaso, Mario; Galfano, Giovanni; Tarqui, Luana; Forti, Bruno; Castelli, Luigi

    2013-09-01

    The nature of possible impairments in orienting attention to social signals in schizophrenia is controversial. The present research was aimed at addressing this issue further by comparing gaze and arrow cues. Unlike previous studies, we also included pointing gestures as social cues, with the goal of assessing whether any impairment in the attentional response was specific to gaze signals or reflected a more general deficit in dealing with social stimuli. Patients with schizophrenia or schizoaffective disorder and matched controls performed a spatial-cuing paradigm in which task-irrelevant centrally displayed gaze, pointing finger, and arrow cues oriented rightward or leftward, preceded a lateralized target requiring a simple detection response. Healthy controls responded faster to spatially congruent targets than to spatially incongruent targets, irrespective of cue type. In contrast, patients with schizophrenia responded faster to spatially congruent targets than to spatially incongruent targets only for arrow and pointing finger cues. No cuing effect emerged for gaze cues. The results support the notion that gaze cuing is impaired in schizophrenia, and suggest that this deficit may not extend to all social cues.

  20. The appeal of the devil's eye: social evaluation affects social attention.

    PubMed

    Carraro, Luciana; Dalmaso, Mario; Castelli, Luigi; Galfano, Giovanni; Bobbio, Andrea; Mantovani, Gabriele

    2017-02-01

    Humans typically exhibit a tendency to follow the gaze of conspecifics, a social attention behaviour known as gaze cueing. Here, we addressed whether episodically learned social knowledge about the behaviours performed by the individual bearing the gaze can influence this phenomenon. In a learning phase, different faces were systematically associated with either positive or negative behaviours. The same faces were then used as stimuli in a gaze-cueing task. The results showed that faces associated with antisocial norm-violating behaviours triggered stronger gaze-cueing effects as compared to faces associated with sociable behaviours. Importantly, this was especially evident for participants who perceived the presented norm-violating behaviours as far more negative than the positive behaviours. These findings suggest that reflexive attentional responses can be affected by our appraisal of the valence of the behaviours of individuals around us.
