Sample records for visual world eye-tracking

  1. Tracking with the mind's eye

    NASA Technical Reports Server (NTRS)

    Krauzlis, R. J.; Stone, L. S.

    1999-01-01

    The two components of voluntary tracking eye-movements in primates, pursuit and saccades, are generally viewed as relatively independent oculomotor subsystems that move the eyes in different ways using independent visual information. Although saccades have long been known to be guided by visual processes related to perception and cognition, only recently have psychophysical and physiological studies provided compelling evidence that pursuit is also guided by such higher-order visual processes, rather than by the raw retinal stimulus. Pursuit and saccades also do not appear to be entirely independent anatomical systems, but involve overlapping neural mechanisms that might be important for coordinating these two types of eye movement during the tracking of a selected visual object. Given that the recovery of objects from real-world images is inherently ambiguous, guiding both pursuit and saccades with perception could represent an explicit strategy for ensuring that these two motor actions are driven by a single visual interpretation.

  2. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    PubMed

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  3. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma.

    PubMed

    Kasneci, Enkelejda; Black, Alex A; Wood, Joanne M

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. Finally, we discuss novel approaches for studying the visual search behavior of individuals with glaucoma, which could assist them through personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior.

  4. Eye-Tracking as a Tool to Evaluate Functional Ability in Everyday Tasks in Glaucoma

    PubMed Central

    Black, Alex A.

    2017-01-01

    To date, few studies have investigated the eye movement patterns of individuals with glaucoma while they undertake everyday tasks in real-world settings. While some of these studies have reported possible compensatory gaze patterns in those with glaucoma who demonstrated good task performance despite their visual field loss, little is known about the complex interaction between field loss and visual scanning strategies and the impact on task performance and, consequently, on quality of life. We review existing approaches that have quantified the effect of glaucomatous visual field defects on the ability to undertake everyday activities through the use of eye movement analysis. Furthermore, we discuss current developments in eye-tracking technology and the potential for combining eye-tracking with virtual reality and advanced analytical approaches. Recent technological developments suggest that systems based on eye-tracking have the potential to assist individuals with glaucomatous loss to maintain or even improve their performance on everyday tasks and hence enhance their long-term quality of life. Finally, we discuss novel approaches for studying the visual search behavior of individuals with glaucoma, which could assist them through personalized programs that take into consideration the individual characteristics of their remaining visual field and visual search behavior. PMID:28293433

  5. Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.

    PubMed

    Souto, David; Kerzel, Dirk

    2013-02-06

    Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular combination of rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects whose rotational and translational motion was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition with no rotational component, we essentially obtained benefits of congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired over a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.

  6. Experience and Distribution of Attention: Pet Exposure and Infants' Scanning of Animal Images

    ERIC Educational Resources Information Center

    Hurley, Karinna B.; Oakes, Lisa M.

    2015-01-01

    Although infants' cognitions about the world must be influenced by experience, little research has directly assessed the relation between everyday experience and infants' visual cognition in the laboratory. Eye-tracking procedures were used to measure 4-month-old infants' eye movements as they visually investigated a series of…

  7. Event processing in the visual world: Projected motion paths during spoken sentence comprehension.

    PubMed

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-05-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the movement of an agent to a goal while viewing visual scenes depicting the agent, goal, and empty space in between. Crucially, verbs suggested either upward (e.g., jump) or downward (e.g., crawl) paths. We found that in the rare event of fixating the empty space between the agent and goal, visual attention was biased upward or downward in line with the verb. In Experiment 2, visual scenes depicted a central obstruction, which imposed further constraints on the paths and increased the likelihood of fixating the empty space between the agent and goal. The results from this experiment corroborated and refined the previous findings. Specifically, eye-movement effects started immediately after hearing the verb and were in line with data from an additional mouse-tracking task that encouraged a more explicit spatial reenactment of the motion event. In revealing how event comprehension operates in the visual world, these findings suggest a mental simulation process whereby spatial details of motion events are mapped onto the world through visual attention. The strength and detectability of such effects in overt eye-movements is constrained by the visual world and the fact that perceivers rarely fixate regions of empty space.

  8. Availability of Alternatives and the Processing of Scalar Implicatures: A Visual World Eye-tracking Study

    ERIC Educational Resources Information Center

    Degen, Judith; Tanenhaus, Michael K.

    2016-01-01

    Two visual world experiments investigated the processing of the implicature associated with "some" using a "gumball paradigm." On each trial, participants saw an image of a gumball machine with an upper chamber with orange and blue gumballs and an empty lower chamber. Gumballs dropped to the lower chamber, creating a contrast…

  9. Event Processing in the Visual World: Projected Motion Paths during Spoken Sentence Comprehension

    ERIC Educational Resources Information Center

    Kamide, Yuki; Lindsay, Shane; Scheepers, Christoph; Kukona, Anuenue

    2016-01-01

    Motion events in language describe the movement of an entity to another location along a path. In 2 eye-tracking experiments, we found that comprehension of motion events involves the online construction of a spatial mental model that integrates language with the visual world. In Experiment 1, participants listened to sentences describing the…

  10. Prediction in the Processing of Repair Disfluencies: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Lowder, Matthew W.; Ferreira, Fernanda

    2016-01-01

    Two visual-world eye-tracking experiments investigated the role of prediction in the processing of repair disfluencies (e.g., "The chef reached for some salt uh I mean some ketchup ..."). Experiment 1 showed that listeners were more likely to fixate a critical distractor item (e.g., "pepper") during the processing of repair…

  11. Gradiency and Visual Context in Syntactic Garden-Paths

    ERIC Educational Resources Information Center

    Farmer, Thomas A.; Anderson, Sarah E.; Spivey, Michael J.

    2007-01-01

    Through recording the streaming x- and y-coordinates of computer-mouse movements, we report evidence that visual context provides an immediate constraint on the resolution of syntactic ambiguity in the visual-world paradigm. This finding converges with previous eye-tracking results that support a constraint-based account of sentence processing, in…

  12. Where to Look for American Sign Language (ASL) Sublexical Structure in the Visual World: Reply to Salverda (2016)

    ERIC Educational Resources Information Center

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2016-01-01

    In this reply to Salverda (2016), we address a critique of the claims made in our recent study of real-time processing of American Sign Language (ASL) signs using a novel visual world eye-tracking paradigm (Lieberman, Borovsky, Hatrak, & Mayberry, 2015). Salverda asserts that our data do not support our conclusion that native signers and…

  13. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Much of the technical groundwork for this transition has been laid, but how the third dimension interacts with human viewers is not yet clear. Previous work has shown that any increased load on the visual system, such as prolonged TV watching, computer work, or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension by exploiting the characteristics of binocular vision. In this work we evaluate and compare the visual fatigue induced by watching 2D and S3D content, showing differences in how fatigue accumulates and can be assessed for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations of their fatigue. We found that watching stereoscopic 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained through eye-tracking were investigated with regard to their relation to visual fatigue.

  14. Contributions of Head-Mounted Cameras to Studying the Visual Environments of Infants and Young Children

    ERIC Educational Resources Information Center

    Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.

    2015-01-01

    Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…

  15. Spatial Language, Visual Attention, and Perceptual Simulation

    ERIC Educational Resources Information Center

    Coventry, Kenny R.; Lynott, Dermot; Cangelosi, Angelo; Monrouxe, Lynn; Joyce, Dan; Richardson, Daniel C.

    2010-01-01

    Spatial language descriptions, such as "The bottle is over the glass", direct the attention of the hearer to particular aspects of the visual world. This paper asks how they do so, and what brain mechanisms underlie this process. In two experiments employing behavioural and eye tracking methodologies we examined the effects of spatial language on…

  16. Analysis of eye-tracking experiments performed on a Tobii T60

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banks, David C

    2008-01-01

    Commercial eye-gaze trackers have the potential to be an important tool for quantifying the benefits of new visualization techniques. The expense of such trackers has made their use relatively infrequent in visualization studies. As such, it is difficult for researchers to compare multiple devices: obtaining several demonstration models is impractical in cost and time, and quantitative measures from real-world use are not readily available. In this paper, we present a sample protocol to determine the accuracy of a gaze-tracking device.
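An accuracy protocol of this kind can be approximated offline: given the known screen position of each calibration target and the gaze samples recorded while it was fixated, accuracy is the mean angular error between target and gaze. The sketch below is an illustration only, not the authors' protocol; the viewing distance, pixel pitch, and sample data are hypothetical:

```python
import math

def angular_error_deg(target_px, gaze_px, px_per_cm=37.8, viewing_cm=60.0):
    """Angular error (degrees) between a target and a gaze sample,
    assuming a flat screen viewed head-on from `viewing_cm` centimetres."""
    dx = (gaze_px[0] - target_px[0]) / px_per_cm
    dy = (gaze_px[1] - target_px[1]) / px_per_cm
    return math.degrees(math.atan2(math.hypot(dx, dy), viewing_cm))

def accuracy_deg(targets, gaze_samples):
    """Mean angular error over paired (target, gaze) points."""
    errors = [angular_error_deg(t, g) for t, g in zip(targets, gaze_samples)]
    return sum(errors) / len(errors)

# Hypothetical data: gaze lands ~19 px (about half a centimetre) off target.
targets = [(100, 100), (860, 100), (480, 540)]
gaze = [(119, 100), (860, 119), (480, 521)]
acc = accuracy_deg(targets, gaze)   # roughly half a degree under this geometry
```

In practice the per-target gaze samples would be averaged over a fixation window before computing the error.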

  17. Before your very eyes: the value and limitations of eye tracking in medical education.

    PubMed

    Kok, Ellen M; Jarodzka, Halszka

    2017-01-01

    Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation.

  18. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    ERIC Educational Resources Information Center

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  19. How Visual Search Relates to Visual Diagnostic Performance: A Narrative Systematic Review of Eye-Tracking Research in Radiology

    ERIC Educational Resources Information Center

    van der Gijp, A.; Ravesloot, C. J.; Jarodzka, H.; van der Schaaf, M. F.; van der Schaaf, I. C.; van Schaik, J. P.; ten Cate, Th. J.

    2017-01-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology…

  20. Eye-Hand Synergy and Intermittent Behaviors during Target-Directed Tracking with Visual and Non-visual Information

    PubMed Central

    Huang, Chien-Ting; Hwang, Ing-Shiou

    2012-01-01

    Visual feedback and non-visual information play different roles in tracking of an external target. This study explored the respective roles of the visual and non-visual information in eleven healthy volunteers who coupled the manual cursor to a rhythmically moving target of 0.5 Hz under three sensorimotor conditions: eye-alone tracking (EA), eye-hand tracking with visual feedback of manual outputs (EH tracking), and the same tracking without such feedback (EHM tracking). Tracking error, kinematic variables, and movement intermittency (saccade and speed pulse) were contrasted among tracking conditions. The results showed that EHM tracking exhibited larger pursuit gain, less tracking error, and less movement intermittency for the ocular plant than EA tracking. With the vision of manual cursor, EH tracking achieved superior tracking congruency of the ocular and manual effectors with smaller movement intermittency than EHM tracking, except that the rate precision of manual action was similar for both types of tracking. The present study demonstrated that visibility of manual consequences altered mutual relationships between movement intermittency and tracking error. The speed pulse metrics of manual output were linked to ocular tracking error, and saccade events were time-locked to the positional error of manual tracking during EH tracking. In conclusion, peripheral non-visual information is critical to smooth pursuit characteristics and rate control of rhythmic manual tracking. Visual information adds to eye-hand synchrony, underlying improved amplitude control and elaborate error interpretation during oculo-manual tracking. PMID:23236498
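Pursuit gain, one of the measures contrasted above, is commonly computed as eye velocity divided by target velocity over matched intervals; the abstract does not give the exact computation, so the following is a minimal sketch under that common definition, using synthetic sinusoidal data at the study's 0.5 Hz target frequency:

```python
import math

def pursuit_gain(eye_pos, target_pos, dt):
    """Pursuit gain as the ratio of mean eye speed to mean target speed
    (a common definition; the paper's exact computation is not given)."""
    def mean_speed(p):
        return sum(abs(p[i + 1] - p[i]) for i in range(len(p) - 1)) / ((len(p) - 1) * dt)
    return mean_speed(eye_pos) / mean_speed(target_pos)

# Synthetic two-second, 0.5 Hz sinusoidal target and a slightly sluggish eye.
dt = 0.01
t = [i * dt for i in range(200)]
target = [10 * math.sin(2 * math.pi * 0.5 * x) for x in t]   # degrees
eye = [9 * math.sin(2 * math.pi * 0.5 * x) for x in t]       # 90% of target amplitude
gain = pursuit_gain(eye, target, dt)
```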

  1. Effects of aging on eye movements in the real world

    PubMed Central

    Dowiasch, Stefan; Marx, Svenja; Einhäuser, Wolfgang; Bremmer, Frank

    2015-01-01

    The effects of aging on eye movements are well studied in the laboratory. Increased saccade latencies or decreased smooth-pursuit gain are well-established findings. The question remains whether these findings are influenced by the rather untypical environment of a laboratory; that is, whether or not they transfer to the real world. We measured 34 healthy participants between the ages of 25 and 85 during two everyday tasks in the real world: (I) walking down a hallway with free gaze, (II) visual tracking of an earth-fixed object while walking straight ahead. Eye movements were recorded with a mobile, lightweight eye tracker, the EyeSeeCam (ESC). We find that age significantly influences saccade parameters. With increasing age, saccade frequency, amplitude, peak velocity, and mean velocity are reduced, and the velocity/amplitude distribution as well as the velocity profile become less skewed. In contrast to laboratory results on smooth pursuit, we did not find a significant effect of age on tracking eye movements in the real world. Taken together, age-related eye-movement changes as measured in the laboratory only partly resemble those in the real world. It is conceivable that in the real world additional sensory cues, such as head-movement or vestibular signals, may partially compensate for age-related effects, which, according to this view, would be specific to early motion processing. In any case, our results highlight the importance of validity in natural situations when studying the impact of aging on real-life performance. PMID:25713524
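Saccade parameters like those reported (frequency, amplitude, peak and mean velocity) are typically extracted with a velocity-threshold event detector. A minimal one-dimensional sketch, assuming a simple fixed threshold rather than the EyeSeeCam's actual detection algorithm, with a synthetic gaze trace:

```python
def detect_saccades(positions_deg, dt, vel_thresh=30.0):
    """Label samples whose absolute velocity (deg/s) exceeds a threshold as
    saccadic, merge consecutive samples into events, and report amplitude,
    peak velocity, and mean velocity per event."""
    vel = [(positions_deg[i + 1] - positions_deg[i]) / dt
           for i in range(len(positions_deg) - 1)]
    events, start = [], None
    for i, v in enumerate(vel + [0.0]):          # 0.0 sentinel closes a trailing event
        if abs(v) > vel_thresh and start is None:
            start = i
        elif abs(v) <= vel_thresh and start is not None:
            seg = vel[start:i]
            events.append({
                "amplitude": abs(positions_deg[i] - positions_deg[start]),
                "peak_velocity": max(abs(s) for s in seg),
                "mean_velocity": sum(abs(s) for s in seg) / len(seg),
            })
            start = None
    return events

# Synthetic 1-D trace sampled at 500 Hz: fixation, a 2-degree saccade, fixation.
dt = 0.002
trace = [0.0] * 50 + [0.4, 1.0, 1.6, 2.0] + [2.0] * 50
saccades = detect_saccades(trace, dt)
```

Real pipelines additionally smooth the velocity signal and use adaptive thresholds; this fixed-threshold version only illustrates the principle.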

  2. Effects of aging on eye movements in the real world.

    PubMed

    Dowiasch, Stefan; Marx, Svenja; Einhäuser, Wolfgang; Bremmer, Frank

    2015-01-01

    The effects of aging on eye movements are well studied in the laboratory. Increased saccade latencies or decreased smooth-pursuit gain are well-established findings. The question remains whether these findings are influenced by the rather untypical environment of a laboratory; that is, whether or not they transfer to the real world. We measured 34 healthy participants between the ages of 25 and 85 during two everyday tasks in the real world: (I) walking down a hallway with free gaze, (II) visual tracking of an earth-fixed object while walking straight ahead. Eye movements were recorded with a mobile, lightweight eye tracker, the EyeSeeCam (ESC). We find that age significantly influences saccade parameters. With increasing age, saccade frequency, amplitude, peak velocity, and mean velocity are reduced, and the velocity/amplitude distribution as well as the velocity profile become less skewed. In contrast to laboratory results on smooth pursuit, we did not find a significant effect of age on tracking eye movements in the real world. Taken together, age-related eye-movement changes as measured in the laboratory only partly resemble those in the real world. It is conceivable that in the real world additional sensory cues, such as head-movement or vestibular signals, may partially compensate for age-related effects, which, according to this view, would be specific to early motion processing. In any case, our results highlight the importance of validity in natural situations when studying the impact of aging on real-life performance.

  3. Are forward models enough to explain self-monitoring? Insights from patients and eye movements.

    PubMed

    Hartsuiker, Robert J

    2013-08-01

    At the core of Pickering & Garrod's (P&G's) theory is a monitor that uses forward models. I argue that this account is challenged by neuropsychological findings and visual world eye-tracking data and that it has two conceptual problems. I propose that conflict monitoring avoids these issues and should be considered a promising alternative to perceptual loop and forward modeling theories.

  4. Emerging applications of eye-tracking technology in dermatology.

    PubMed

    John, Kevin K; Jensen, Jakob D; King, Andy J; Pokharel, Manusheela; Grossman, Douglas

    2018-04-06

    Eye-tracking technology has been used in a multitude of disciplines to provide data linking eye movements to the visual processing of various stimuli (e.g., x-rays, situational positioning, printed information, and warnings). Despite the benefits eye-tracking provides in allowing visual attention to be identified and quantified, the discipline of dermatology has yet to see broad application of the technology. Notwithstanding dermatologists' heavy reliance upon visual patterns and cues to discriminate between benign and atypical nevi, literature that applies eye-tracking to the study of dermatology is sparse, and literature specific to patient-initiated behaviors, such as skin self-examination (SSE), is largely non-existent. The current article provides a review of eye-tracking research in various medical fields, culminating in a discussion of current applications and advantages of eye-tracking for dermatology research.

  5. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    ERIC Educational Resources Information Center

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  6. An eye tracking system for monitoring face scanning patterns reveals the enhancing effect of oxytocin on eye contact in common marmosets.

    PubMed

    Kotani, Manato; Shimono, Kohei; Yoneyama, Toshihiro; Nakako, Tomokazu; Matsumoto, Kenji; Ogi, Yuji; Konoike, Naho; Nakamura, Katsuki; Ikeda, Kazuhito

    2017-09-01

    Eye tracking systems are used to investigate eye position and gaze patterns presumed to reflect eye contact in humans. Eye contact is a useful biomarker of social communication and is known to be deficient in patients with autism spectrum disorders (ASDs). Interestingly, the same eye tracking systems have been used to directly compare face-scanning patterns in some non-human primates to those in humans. Thus, eye tracking is expected to be a useful translational technique for investigating not only social attention and visual interest, but also the effects of psychiatric drugs, such as oxytocin, a neuropeptide that regulates social behavior. In this study, we report a newly established method for eye tracking in common marmosets, unique New World primates that, like humans, use eye contact as a means of communication. Our investigation aimed to characterize these primates' face-scanning patterns and to evaluate the effects of oxytocin on their eye contact behavior. We found that normal common marmosets spent more time viewing the eye region of a picture of a common marmoset than the mouth region or a scrambled picture. In the oxytocin experiment, the change in the eyes/face ratio was significantly greater in the oxytocin group than in the vehicle group. Moreover, the oxytocin-induced increase in the eyes/face ratio was completely blocked by the oxytocin receptor antagonist L-368,899. These results indicate that eye tracking in common marmosets may be useful for evaluating drug candidates targeting psychiatric conditions, especially ASDs.
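The eyes/face ratio reported above is plausibly the dwell time on the eye region divided by the dwell time on the whole face; the abstract does not define it precisely, so the sketch below assumes that definition, with hypothetical rectangular AOIs and fixation data:

```python
def dwell_ratio(fixations, eyes_aoi, face_aoi):
    """Eyes/face ratio: total fixation duration on the eye region divided by
    total duration on the whole face. Each fixation is (x, y, duration_ms);
    each AOI is an (xmin, ymin, xmax, ymax) rectangle."""
    def inside(x, y, box):
        return box[0] <= x <= box[2] and box[1] <= y <= box[3]

    eyes = sum(d for x, y, d in fixations if inside(x, y, eyes_aoi))
    face = sum(d for x, y, d in fixations if inside(x, y, face_aoi))
    return eyes / face if face else 0.0

# Hypothetical AOIs (pixels) and fixations (x, y, duration in ms).
face_aoi = (100, 100, 300, 400)   # whole face
eyes_aoi = (130, 150, 270, 210)   # eye region, nested inside the face
fixations = [(200, 180, 300),     # on the eyes
             (200, 320, 200),     # on the mouth
             (500, 500, 150)]     # off the face entirely
ratio = dwell_ratio(fixations, eyes_aoi, face_aoi)
```

A drug effect such as the one reported would then be tested on the change in this ratio between baseline and post-treatment sessions.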

  7. Measuring vigilance decrement using computer vision assisted eye tracking in dynamic naturalistic environments.

    PubMed

    Bodala, Indu P; Abbasi, Nida I; Yu Sun; Bezerianos, Anastasios; Al-Nashash, Hasan; Thakor, Nitish V

    2017-07-01

    Eye tracking offers a practical solution for monitoring cognitive performance in real-world tasks. However, eye tracking in dynamic environments is difficult due to the high spatial and temporal variation of stimuli, and needs further investigation. In this paper, we study the possibility of developing a novel computer-vision-assisted eye-tracking analysis based on fixations. Eye movement data were obtained from a long-duration naturalistic driving experiment. The scale-invariant feature transform (SIFT) algorithm was implemented using the VLFeat toolbox to identify multiple areas of interest (AOIs). A new measure called 'fixation score' was defined to characterize the dynamics of fixation position between the target AOI and the non-target AOIs. The fixation score is maximal when subjects focus on the target AOI and diminishes when they gaze at the non-target AOIs. A statistically significant negative correlation was found between the fixation score and reaction time data (r = -0.2253, p < 0.05). This implies that with vigilance decrement, the fixation score decreases as visual attention shifts away from the target objects, resulting in an increase in reaction time.
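The abstract does not give the formula for the fixation score, so the sketch below assumes one plausible formalization: the duration-weighted proportion of AOI fixations that land on the target AOI, which is maximal when gaze stays on the target and shrinks as fixations shift to non-target AOIs. The AOI coordinates and fixation data are hypothetical:

```python
def fixation_score(fixations, target_aoi, non_target_aois):
    """Assumed formalization of a 'fixation score': each fixation counts
    toward the score if it lands in the target AOI, counts against it if it
    lands in any non-target AOI, and is ignored otherwise; the score is the
    duration-weighted proportion on the target."""
    def inside(x, y, box):
        return box[0] <= x <= box[2] and box[1] <= y <= box[3]

    on_target, total = 0.0, 0.0
    for x, y, dur in fixations:
        if inside(x, y, target_aoi):
            on_target += dur
            total += dur
        elif any(inside(x, y, box) for box in non_target_aois):
            total += dur
    return on_target / total if total else 0.0

# Hypothetical AOIs for a driving scene (pixel rectangles).
road = (300, 200, 700, 500)                            # target AOI
mirrors = [(0, 0, 150, 120), (850, 0, 1000, 120)]      # non-target AOIs
fixations = [(500, 350, 400), (80, 60, 100), (900, 60, 100)]  # (x, y, ms)
score = fixation_score(fixations, road, mirrors)
```

The reported negative correlation with reaction time would then be computed over per-epoch scores across the driving session.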

  8. The Mental Lexicon Is Fully Specified: Evidence from Eye-Tracking

    ERIC Educational Resources Information Center

    Mitterer, Holger

    2011-01-01

    Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input…

  9. Test-retest reliability of eye tracking in the visual world paradigm for the study of real-time spoken word recognition.

    PubMed

    Farris-Trimble, Ashley; McMurray, Bob

    2013-08-01

    Researchers have begun to use eye tracking in the visual world paradigm (VWP) to study clinical differences in language processing, but the reliability of such laboratory tests has rarely been assessed. In this article, the authors assess the test-retest reliability of the VWP for spoken word recognition. Participants performed an auditory VWP task in repeated sessions and a visual-only VWP task in a third session. The authors performed correlation and regression analyses on several parameters to determine which reflect reliable behavior and which are predictive of behavior in later sessions. Results showed that the fixation parameters most closely related to the timing and degree of fixations were moderately to strongly correlated across days, whereas the parameters related to the rate of increase or decrease of fixations to particular items were less strongly correlated. Moreover, when factors derived from the visual-only task were included, the performance of the regression model was at least moderately correlated with Day 2 performance on all parameters (R > .30). The VWP is stable enough (with some caveats) to serve as an individual measure. These findings suggest guidelines for future use of the paradigm and areas of improvement in both methodology and analysis.
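Test-retest reliability of a fixation parameter can be estimated as the Pearson correlation of per-participant values across sessions. A self-contained sketch with hypothetical Day 1/Day 2 values (not the study's data):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-participant values of one fixation parameter
# (e.g. peak target-fixation proportion) on Day 1 and Day 2.
day1 = [0.62, 0.71, 0.55, 0.80, 0.67, 0.74]
day2 = [0.60, 0.69, 0.58, 0.77, 0.70, 0.72]
r = pearson_r(day1, day2)   # high r = parameter is stable across sessions
```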

  10. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images and tracked their eye movements during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  11. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also, when these data are coupled with flight technical performance, to track specific behaviors. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, in particular the data reduction algorithms and logic used to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods used to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation

  12. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon.

    PubMed

    Puckett, Yana; Baronia, Benedicto C

    2016-09-20

    With the recent advances in eye tracking technology, it is now possible to track surgeons' eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. More studies with additional subjects are needed to verify our results and to support further data analysis.

  13. Comparison of Predictable Smooth Ocular and Combined Eye-Head Tracking Behaviour in Patients with Lesions Affecting the Brainstem and Cerebellum

    NASA Technical Reports Server (NTRS)

    Grant, Michael P.; Leigh, R. John; Seidman, Scott H.; Riley, David E.; Hanna, Joseph P.

    1992-01-01

    We compared the ability of eight normal subjects and 15 patients with brainstem or cerebellar disease to follow a moving visual stimulus smoothly with either the eyes alone or with combined eye-head tracking. The visual stimulus was either a laser spot (horizontal and vertical planes) or a large rotating disc (torsional plane), which moved at one sinusoidal frequency for each subject. The visually enhanced Vestibulo-Ocular Reflex (VOR) was also measured in each plane. In the horizontal and vertical planes, we found that if tracking gain (gaze velocity/target velocity) for smooth pursuit was close to 1, the gain of combined eye-head tracking was similar. If the tracking gain during smooth pursuit was less than about 0.7, combined eye-head tracking was usually superior. Most patients, irrespective of diagnosis, showed combined eye-head tracking that was superior to smooth pursuit; only two patients showed the converse. In the torsional plane, in which optokinetic responses were weak, combined eye-head tracking was much superior, and this was the case in both subjects and patients. We found that a linear model, in which an internal ocular tracking signal cancelled the VOR, could account for our findings in most normal subjects in the horizontal and vertical planes, but not in the torsional plane. The model failed to account for tracking behaviour in most patients in any plane, and suggested that the brain may use additional mechanisms to reduce the internal gain of the VOR during combined eye-head tracking. Our results confirm that certain patients who show impairment of smooth-pursuit eye movements preserve their ability to smoothly track a moving target with combined eye-head tracking.

  14. From basic to applied research to improve outcomes for individuals who require augmentative and alternative communication: potential contributions of eye tracking research methods.

    PubMed

    Light, Janice; McNaughton, David

    2014-06-01

    In order to improve outcomes for individuals who require AAC, there is an urgent need for research across the full spectrum--from basic research to investigate fundamental language and communication processes, to applied clinical research to test applications of this new knowledge in the real world. To date, there has been a notable lack of basic research in the AAC field to investigate the underlying cognitive, sensory perceptual, linguistic, and motor processes of individuals with complex communication needs. Eye tracking research technology provides a promising method for researchers to investigate some of the visual cognitive processes that underlie interaction via AAC. The eye tracking research technology automatically records the latency, duration, and sequence of visual fixations, providing key information on what elements attract the individual's attention (and which ones do not), for how long, and in what sequence. As illustrated by the papers in this special issue, this information can be used to improve the design of AAC systems, assessments, and interventions to better meet the needs of individuals with developmental and acquired disabilities who require AAC (e.g., individuals with autism spectrum disorders, Down syndrome, intellectual disabilities of unknown origin, aphasia).

  15. Active eye-tracking improves LASIK results.

    PubMed

    Lee, Yuan-Chieh

    2007-06-01

    To study the advantage of active eye-tracking for photorefractive surgery. In a prospective, double-masked study, LASIK for myopia and myopic astigmatism was performed in 50 patients using the ALLEGRETTO WAVE version 1007. All patients received LASIK with full comprehension of the importance of fixation during the procedure. All surgical procedures were performed by a single surgeon. The eye-tracker was turned off in one group (n = 25) and kept on in another group (n = 25). Preoperatively and 3 months postoperatively, patients underwent a standard ophthalmic examination, which included corneal topography. In the patients treated with the eye-tracker off, all had uncorrected visual acuity (UCVA) of 20/40 or better and 64% had 20/20 or better. Compared with the patients treated with the eye-tracker on, they had higher residual cylindrical astigmatism (P < .05). Those treated with the eye-tracker on achieved better UCVA and best spectacle-corrected visual acuity (P < .05). Spherical error and potential visual acuity (TMS-II) were not significantly different between the groups. The flying-spot system can achieve a fair result without active eye-tracking, but active eye-tracking helps improve the visual outcome and reduces postoperative cylindrical astigmatism.

  16. Technical Report of Successful Deployment of Tandem Visual Tracking During Live Laparoscopic Cholecystectomy Between Novice and Expert Surgeon

    PubMed Central

    Baronia, Benedicto C

    2016-01-01

    With the recent advances in eye tracking technology, it is now possible to track surgeons’ eye movements while engaged in a surgical task or when surgical residents practice their surgical skills. Several studies have compared eye movements of surgical experts and novices and developed techniques to assess surgical skill on the basis of eye movement utilizing simulators and live surgery. None have evaluated simultaneous visual tracking between an expert and a novice during live surgery. Here, we describe a successful simultaneous deployment of visual tracking of an expert and a novice during live laparoscopic cholecystectomy. One expert surgeon and one chief surgical resident at an accredited surgical program in Lubbock, TX, USA performed a live laparoscopic cholecystectomy while simultaneously wearing the visual tracking devices. Their visual attitudes and movements were monitored via video recordings. The recordings were then analyzed for correlation between the expert and the novice. The visual attitudes and movements correlated approximately 85% between the expert surgeon and the chief surgical resident. The surgery was carried out uneventfully, and the data were abstracted with ease. We conclude that simultaneous deployment of visual tracking during live laparoscopic surgery is feasible. More studies with additional subjects are needed to verify our results and to support further data analysis. PMID:27774359

  17. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. 
Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  18. The Interplay of Implicit Causality, Structural Heuristics, and Anaphor Type in Ambiguous Pronoun Resolution

    ERIC Educational Resources Information Center

    Järvikivi, Juhani; van Gompel, Roger P. G.; Hyönä, Jukka

    2017-01-01

    Two visual-world eye-tracking experiments investigating pronoun resolution in Finnish examined the time course of implicit causality information relative to both grammatical role and order-of-mention information. Experiment 1 showed an effect of implicit causality that appeared at the same time as the first-mention preference. Furthermore, when we…

  19. Maternal Socioeconomic Status Influences the Range of Expectations during Language Comprehension in Adulthood

    ERIC Educational Resources Information Center

    Troyer, Melissa; Borovsky, Arielle

    2017-01-01

    In infancy, maternal socioeconomic status (SES) is associated with real-time language processing skills, but whether or not (and if so, how) this relationship carries into adulthood is unknown. We explored the effects of maternal SES in college-aged adults on eye-tracked, spoken sentence comprehension tasks using the visual world paradigm. When…

  20. Real-time recording and classification of eye movements in an immersive virtual environment.

    PubMed

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
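    One of the quantities the primer above provides algorithms for, the angular distance between the gaze and a relevant virtual object, can be sketched as the angle between the gaze vector and the eye-to-object vector. This is a minimal generic illustration, not the toolkit's implementation; the 3-D coordinates below are hypothetical:

```python
import numpy as np

def angular_distance(eye_pos, gaze_point, obj_pos):
    """Angle (degrees) between the gaze vector and the eye-to-object
    vector, all positions given as 3-D points in one world frame."""
    g = np.asarray(gaze_point, float) - np.asarray(eye_pos, float)
    o = np.asarray(obj_pos, float) - np.asarray(eye_pos, float)
    cos = np.dot(g, o) / (np.linalg.norm(g) * np.linalg.norm(o))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Eye at the origin, gazing at a point ahead; the object sits slightly
# to the side at the same depth.
theta = angular_distance([0, 0, 0], [0, 0.5, -0.3], [0.1, 0.5, -0.3])
```

    Comparing this angle against a tolerance is one way to decide whether a virtual object was being looked at on a given frame.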

  1. Real-time recording and classification of eye movements in an immersive virtual environment

    PubMed Central

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-01-01

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements. PMID:24113087

  2. Visual Attention for Solving Multiple-Choice Science Problem: An Eye-Tracking Analysis

    ERIC Educational Resources Information Center

    Tsai, Meng-Jung; Hou, Huei-Tse; Lai, Meng-Lung; Liu, Wan-Yi; Yang, Fang-Ying

    2012-01-01

    This study employed an eye-tracking technique to examine students' visual attention when solving a multiple-choice science problem. Six university students participated in a problem-solving task to predict occurrences of landslide hazards from four images representing four combinations of four factors. Participants' responses and visual attention…

  3. A geometric method for computing ocular kinematics and classifying gaze events using monocular remote eye tracking in a robotic environment.

    PubMed

    Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M

    2016-01-26

    Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. 
Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
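    The velocity-threshold style of gaze-event identification described above can be sketched with a simple I-VT-like classifier: compute angular velocity between successive samples and label fast samples as saccades. The 30 deg/s threshold, the 60 Hz sampling rate, and the toy gaze trace are illustrative assumptions, not values from the paper:

```python
import numpy as np

def classify_ivt(gaze_deg, t, saccade_thresh=30.0):
    """Label each sample 'fix' or 'sac' via a velocity threshold.
    gaze_deg: (N, 2) gaze angles in degrees; t: timestamps in seconds;
    saccade_thresh: angular velocity threshold in deg/s (assumed)."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    t = np.asarray(t, dtype=float)
    step = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)  # deg/sample
    v = step / np.diff(t)                                     # deg/s
    labels = np.where(v > saccade_thresh, "sac", "fix")
    return np.append(labels, labels[-1])  # pad back to N samples

# 60 Hz toy trace: steady fixation, one fast 8-degree jump, fixation again.
t = np.arange(10) / 60.0
gaze = np.array([[0, 0]] * 5 + [[8, 0]] * 5, dtype=float)
labels = classify_ivt(gaze, t)
```

    A production classifier would additionally merge brief events and, as the paper notes, account for vergence when stimulus depth varies; this sketch shows only the core thresholding step.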

  4. World knowledge affects prediction as quickly as selectional restrictions: Evidence from the visual world paradigm.

    PubMed

    Milburn, Evelyn; Warren, Tessa; Dickey, Michael Walsh

    There has been considerable debate regarding the question of whether linguistic knowledge and world knowledge are separable and used differently during processing (Hagoort, Hald, Bastiaansen, & Petersson, 2004; Matsuki et al., 2011; Paczynski & Kuperberg, 2012; Warren & McConnell, 2007; Warren, McConnell, & Rayner, 2008). Previous investigations into this question have provided mixed evidence as to whether violations of selectional restrictions are detected earlier than violations of world knowledge. We report a visual-world eye-tracking study comparing the timing of facilitation contributed by selectional restrictions versus world knowledge. College-aged adults (n=36) viewed photographs of natural scenes while listening to sentences. Participants anticipated upcoming direct objects similarly regardless of whether facilitation was provided by only world knowledge or a combination of selectional restrictions and world knowledge. These results suggest that selectional restrictions are not available earlier in comprehension than world knowledge.

  5. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  6. Investigating the Impact of Cognitive Style on Multimedia Learners' Understanding and Visual Search Patterns: An Eye-Tracking Approach

    ERIC Educational Resources Information Center

    Liu, Han-Chin

    2018-01-01

    Multimedia students' dependence on information from the outside world can have an impact on their ability to identify and locate information from multiple resources in learning environments and thereby affect the construction of mental models. Field dependence-independence has been used to assess the ability to extract essential information from…

  7. MR-Compatible Integrated Eye Tracking System

    DTIC Science & Technology

    2016-03-10

    This instrumentation grant was used to purchase a state-of-the-art, high-resolution video eye tracker for an MR-compatible integrated eye tracking system. Keywords: video eye tracking, eye movements, visual search, camouflage-breaking

  8. Impulse processing: A dynamical systems model of incremental eye movements in the visual world paradigm

    PubMed Central

    Kukona, Anuenue; Tabor, Whitney

    2011-01-01

    The visual world paradigm presents listeners with a challenging problem: they must integrate two disparate signals, the spoken language and the visual context, in support of action (e.g., complex movements of the eyes across a scene). We present Impulse Processing, a dynamical systems approach to incremental eye movements in the visual world that suggests a framework for integrating language, vision, and action generally. Our approach assumes that impulses driven by the language and the visual context impinge minutely on a dynamical landscape of attractors corresponding to the potential eye-movement behaviors of the system. We test three unique predictions of our approach in an empirical study in the visual world paradigm, and describe an implementation in an artificial neural network. We discuss the Impulse Processing framework in relation to other models of the visual world paradigm. PMID:21609355

  9. Children Do Not Overcome Lexical Biases Where Adults Do: The Role of the Referential Scene in Garden-Path Recovery

    ERIC Educational Resources Information Center

    Kidd, Evan; Stewart, Andrew J.; Serratrice, Ludovica

    2011-01-01

    In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it…

  10. Use of Cognitive and Metacognitive Strategies in Online Search: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Zhou, Mingming; Ren, Jing

    2016-01-01

    This study used eye-tracking technology to track students' eye movements while searching information on the web. The research question guiding this study was "Do students with different search performance levels have different visual attention distributions while searching information online? If yes, what are the patterns for high and low…

  11. Attentional bias to betel quid cues: An eye tracking study.

    PubMed

    Shen, Bin; Chiu, Meng-Chun; Li, Shuo-Heng; Huang, Guo-Joe; Liu, Ling-Jun; Ho, Ming-Chou

    2016-09-01

    The World Health Organization regards betel quid as a human carcinogen, and DSM-IV and ICD-10 dependence symptoms may develop with heavy use. This study, conducted in central Taiwan, investigated whether betel quid chewers can exhibit overt orienting to selectively respond to the betel quid cues. Twenty-four male chewers' and 23 male nonchewers' eye movements to betel-quid-related pictures and matched pictures were assessed during a visual probe task. The eye movement index showed that betel quid chewers were more likely to initially direct their gaze to the betel quid cues, t(23) = 3.70, p < .01, d = .75, and spent more time, F(1, 23) = 4.58, p < .05, η² = .17, and were more fixated, F(1, 23) = 5.18, p < .05, η² = .18, on them. The visual probe index (response time) failed to detect the chewers' attentional bias. The current study provided the first eye movement evidence of betel quid chewers' attentional bias. The results demonstrated that the betel quid chewers (but not the nonchewers) were more likely to initially direct their gaze to the betel quid cues, and spent more time and were more fixated on them. These findings suggested that when attention is directly measured through the eye tracking technique, this methodology may be more sensitive to detecting attentional biases in betel quid chewers.

  12. Storyline Visualizations of Eye Tracking of Movie Viewing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balint, John T.; Arendt, Dustin L.; Blaha, Leslie M.

    Storyline visualizations offer an approach that promises to capture the spatio-temporal characteristics of individual observers and simultaneously illustrate emerging group behaviors. We develop a visual analytics approach to parsing, aligning, and clustering fixation sequences from eye tracking data. Visualization of the results captures the similarities and differences across a group of observers performing a common task. We apply our storyline approach to visualize gaze patterns of people watching dynamic movie clips. Storylines mitigate some of the shortcomings of existent spatio-temporal visualization techniques and, importantly, continue to highlight individual observer behavioral dynamics.
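    One simple way to align and compare fixation sequences before grouping observers into storylines is an edit distance over area-of-interest (AOI) labels. This is a generic sketch of scanpath comparison, not the authors' pipeline, and the AOI sequences are hypothetical:

```python
def aoi_edit_distance(seq_a, seq_b):
    """Levenshtein distance between two AOI fixation sequences; small
    distances indicate similar scanpaths suitable for clustering."""
    m, n = len(seq_a), len(seq_b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# Two viewers' fixation sequences over labeled screen regions (AOIs).
dist = aoi_edit_distance(list("ABBC"), list("ABC"))
```

    A pairwise distance matrix built this way can then feed any standard clustering routine to group observers whose gaze followed similar paths.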

  13. High-resolution eye tracking using V1 neuron activity

    PubMed Central

    McFarland, James M.; Bondy, Adrian G.; Cumming, Bruce G.; Butts, Daniel A.

    2014-01-01

    Studies of high-acuity visual cortical processing have been limited by the inability to track eye position with sufficient accuracy to precisely reconstruct the visual stimulus on the retina. As a result, studies on primary visual cortex (V1) have been performed almost entirely on neurons outside the high-resolution central portion of the visual field (the fovea). Here we describe a procedure for inferring eye position using multi-electrode array recordings from V1 coupled with nonlinear stimulus processing models. We show that this method can be used to infer eye position with one arc-minute accuracy – significantly better than conventional techniques. This allows for analysis of foveal stimulus processing, and provides a means to correct for eye-movement induced biases present even outside the fovea. This method could thus reveal critical insights into the role of eye movements in cortical coding, as well as their contribution to measures of cortical variability. PMID:25197783

  14. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions including the frontal eye fields (FEF), dorsal-lateral prefrontal cortex (DLPFC), and Anterior Cingulate Cortex (ACC) are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task of visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were simultaneously recorded. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain during the visible phase was higher than during the occlusion phase. We found bilateral FEF activity associated with eye movements whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in DLPFC, where right DLPFC showed significantly increased responses over left when pursuing occluded moving targets. Correlation results revealed that DLPFC, the right DLPFC in particular, communicates more with FEF during tracking of occluded moving targets (from memory). The ACC modulates FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that DLPFC and ACC modulate FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  15. Alteration of travel patterns with vision loss from glaucoma and macular degeneration.

    PubMed

    Curriero, Frank C; Pinchoff, Jessie; van Landingham, Suzanne W; Ferrucci, Luigi; Friedman, David S; Ramulu, Pradeep Y

    2013-11-01

    The distance patients can travel outside the home influences how much of the world they can sample and to what extent they can live independently. Recent technological advances have allowed travel outside the home to be directly measured in patients' real-world routines. To determine whether decreased visual acuity (VA) from age-related macular degeneration (AMD) and visual field (VF) loss from glaucoma are associated with restricted travel patterns in older adults. Cross-sectional study. Patients were recruited from an eye clinic, while travel patterns were recorded during their real-world routines using a cellular tracking device. Sixty-one control subjects with normal vision, 84 subjects with glaucoma with bilateral VF loss, and 65 subjects with AMD with bilateral or severe unilateral loss of VA had their location tracked every 15 minutes between 7 am and 11 pm for 7 days using a tracking device. Average daily excursion size (defined as maximum distance away from home) and average daily excursion span (defined as maximum span of travel) were defined for each individual. The effects of vision loss on travel patterns were evaluated after controlling for individual and geographic factors. In multivariable models comparing subjects with AMD and control subjects, average excursion size and span decreased by approximately one-quarter mile for each line of better-eye VA loss (P ≤ .03 for both). Similar but not statistically significant associations were observed between average daily excursion size and span for severity of better-eye VF loss in subjects with glaucoma and control subjects. Being married or living with someone and younger age were associated with more distant travel, while less-distant travel was noted for older individuals, African Americans, and those living in more densely populated regions. Age-related macular degeneration-related loss of VA, but not glaucoma-related loss of VF, is associated with restriction of travel to more nearby locations. 
This constriction of life space may impact quality of life and restrict access to services.
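
    The two travel measures described above can be sketched from raw location fixes. Below is a minimal illustration, assuming each fix is a (latitude, longitude) pair and using a great-circle (haversine) distance; the function names and input format are hypothetical, not taken from the study.

    ```python
    import math

    def haversine_miles(lat1, lon1, lat2, lon2):
        """Great-circle distance in miles between two (lat, lon) points."""
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def excursion_size(home, fixes):
        """Excursion size: maximum distance (miles) of any fix from home."""
        return max(haversine_miles(*home, *f) for f in fixes)

    def excursion_span(fixes):
        """Excursion span: maximum pairwise distance (miles) between fixes."""
        return max(haversine_miles(*a, *b) for a in fixes for b in fixes)
    ```

    Note that excursion size needs a known home location while excursion span is home-independent; averaging each daily value over the 7 recorded days would yield the per-subject measures entered into the models.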

  16. Accounting for direction and speed of eye motion in planning visually guided manual tracking.

    PubMed

    Leclercq, Guillaume; Blohm, Gunnar; Lefèvre, Philippe

    2013-10-01

    Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
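
    The transformation at issue can be illustrated with a simple vector sum: the spatial velocity of the target equals its retinal velocity plus the eye's velocity, and a compensation gain captures how much of the eye velocity the motor plan accounts for. A hedged sketch follows; the 2-D representation and function names are illustrative only, not the study's model.

    ```python
    import math

    def spatial_from_retinal(retinal_v, eye_v, gain=1.0):
        """Reconstruct spatial target velocity from its retinal projection.
        gain is the fraction of eye velocity the visuomotor transform
        accounts for (the study estimates 75-102%): gain=0 leaves the
        plan in retinal coordinates, gain=1 is full spatial compensation."""
        return (retinal_v[0] + gain * eye_v[0],
                retinal_v[1] + gain * eye_v[1])

    def direction_deg(v):
        """Direction of a 2-D velocity vector, in degrees."""
        return math.degrees(math.atan2(v[1], v[0]))
    ```

    For example, with upward retinal target motion and rightward eye velocity of equal speed, a gain of 1 yields a 45° spatial movement plan, whereas a gain of 0 leaves the plan at the 90° retinal direction; the reported 75-102% estimates correspond to gains near 1.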

  17. Quantifying Novice and Expert Differences in Visual Diagnostic Reasoning in Veterinary Pathology Using Eye-Tracking Technology.

    PubMed

    Warren, Amy L; Donnon, Tyrone L; Wagg, Catherine R; Priest, Heather; Fernandez, Nicole J

    2018-01-18

    Visual diagnostic reasoning is the cognitive process by which pathologists reach a diagnosis based on visual stimuli (cytologic, histopathologic, or gross imagery). Currently, there is little to no literature examining visual reasoning in veterinary pathology. The objective of the study was to use eye tracking to establish baseline quantitative and qualitative differences between the visual reasoning processes of novice and expert veterinary pathologists viewing cytology specimens. Novice and expert participants were each shown 10 cytology images and asked to formulate a diagnosis while wearing eye-tracking equipment (10 slides) and while concurrently verbalizing their thought processes using the think-aloud protocol (5 slides). Compared to novices, experts demonstrated significantly higher diagnostic accuracy (p<.017), shorter time to diagnosis (p<.017), and a higher percentage of time spent viewing areas of diagnostic interest (p<.017). Experts elicited more key diagnostic features in the think-aloud protocol and had more efficient patterns of eye movement. These findings suggest that experts' fast time to diagnosis, efficient eye-movement patterns, and preference for viewing areas of interest supports system 1 (pattern-recognition) reasoning and script-inductive knowledge structures with system 2 (analytic) reasoning to verify their diagnosis.

  18. Visual Processing of Faces in Individuals with Fragile X Syndrome: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Farzin, Faraz; Rivera, Susan M.; Hessl, David

    2009-01-01

    Gaze avoidance is a hallmark behavioral feature of fragile X syndrome (FXS), but little is known about whether abnormalities in the visual processing of faces, including disrupted autonomic reactivity, may underlie this behavior. Eye tracking was used to record fixations and pupil diameter while adolescents and young adults with FXS and sex- and…

  19. The Influences of Static and Interactive Dynamic Facial Stimuli on Visual Strategies in Persons with Asperger Syndrome

    ERIC Educational Resources Information Center

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Several studies, using eye tracking methodology, suggest that different visual strategies in persons with autism spectrum conditions, compared with controls, are applied when viewing facial stimuli. Most eye tracking studies are, however, made in laboratory settings with either static (photos) or non-interactive dynamic stimuli, such as video…

  20. What triggers catch-up saccades during visual tracking?

    PubMed

    de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2002-03-01

    When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could contribute significantly to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (TXE). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for TXE between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when TXE becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
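
    The trigger rule can be made concrete. If the position error PE closes at the rate of the retinal slip RS (target velocity minus eye velocity), the predicted crossing occurs at TXE = -PE/RS, and a catch-up saccade is triggered whenever this prediction falls outside the 40-180 ms window. A sketch under those assumptions (the variable names are ours, not the paper's):

    ```python
    def eye_crossing_time(pe, rs):
        """Predicted time (s) until the eye trajectory crosses the target.
        pe: position error (target position - eye position, deg)
        rs: retinal slip (target velocity - eye velocity, deg/s)
        The error evolves as PE(t) = pe + rs*t, so crossing is at -pe/rs."""
        if rs == 0:
            return float('inf')  # error never closes smoothly
        return -pe / rs

    def triggers_saccade(pe, rs, lo=0.040, hi=0.180):
        """A catch-up saccade is triggered unless TXE is in the smooth zone."""
        txe = eye_crossing_time(pe, rs)
        return not (lo <= txe <= hi)
    ```

    A negative TXE means the error is growing rather than closing; it also falls outside the smooth zone, so a saccade is triggered.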

  1. Eye movement-invariant representations in the human visual system.

    PubMed

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
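
    The similarity ratio described can be sketched for a single voxel's time course, using Pearson correlation as the similarity measure (the study's actual computation may differ in detail; the function names are ours):

    ```python
    import numpy as np

    def pearson(a, b):
        """Pearson correlation between two response time courses."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(np.corrcoef(a, b)[0, 1])

    def invariance_index(fix_rep1, fix_rep2, free_rep):
        """Ratio of between-condition to within-condition similarity for
        one voxel's responses to repeated presentations of the same movie.
        Values near 1 suggest the response is invariant to eye movements;
        lower values suggest sensitivity to them."""
        within = pearson(fix_rep1, fix_rep2)   # fixation vs fixation
        between = pearson(fix_rep1, free_rep)  # fixation vs free viewing
        return between / within
    ```

    Normalizing by the within-condition similarity discounts measurement noise: a voxel whose fixation repeats barely correlate with each other cannot be expected to correlate across conditions either.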

  2. Context effects on smooth pursuit and manual interception of a disappearing target.

    PubMed

    Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam

    2017-07-01

    In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. 
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.

  3. Visual attention on a respiratory function monitor during simulated neonatal resuscitation: an eye-tracking study.

    PubMed

    Katz, Trixie A; Weinberg, Danielle D; Fishman, Claire E; Nadkarni, Vinay; Tremoulet, Patrice; Te Pas, Arjan B; Sarcevic, Aleksandra; Foglia, Elizabeth E

    2018-06-14

    A respiratory function monitor (RFM) may improve positive pressure ventilation (PPV) technique, but many providers do not use RFM data appropriately during delivery room resuscitation. We sought to use eye-tracking technology to identify RFM parameters that neonatal providers view most commonly during simulated PPV. Mixed methods study. Neonatal providers performed RFM-guided PPV on a neonatal manikin while wearing eye-tracking glasses to quantify visual attention on displayed RFM parameters (ie, exhaled tidal volume, flow, leak). Participants subsequently provided qualitative feedback on the eye-tracking glasses. Level 3 academic neonatal intensive care unit. Twenty neonatal resuscitation providers. Visual attention: overall gaze sample percentage; total gaze duration, visit count and average visit duration for each displayed RFM parameter. Qualitative feedback: willingness to wear eye-tracking glasses during clinical resuscitation. Twenty providers participated in this study. The mean gaze sample captured was 93% (SD 4%). Exhaled tidal volume waveform was the RFM parameter with the highest total gaze duration (median 23%, IQR 13-51%), highest visit count (median 5.17 per 10 s, IQR 2.82-6.16) and longest visit duration (median 0.48 s, IQR 0.38-0.81 s). All participants were willing to wear the glasses during clinical resuscitation. Wearable eye-tracking technology is feasible to identify gaze fixation on the RFM display and is well accepted by providers. Neonatal providers look at exhaled tidal volume more than any other RFM parameter. Future applications of eye-tracking technology include use during clinical resuscitation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
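
    The three visual-attention measures reported (total gaze duration, visit count, average visit duration) can be computed from a fixation sequence once each fixation is labeled with an area of interest (AOI). A minimal sketch, assuming consecutive fixations on the same AOI merge into one visit; the input format and function name are hypothetical:

    ```python
    from itertools import groupby

    def rfm_gaze_metrics(fixations, session_s):
        """fixations: list of (aoi, duration_s) tuples in temporal order.
        Returns, per AOI: percentage of total gaze time, visits per 10 s
        of session, and average visit duration. Consecutive fixations on
        the same AOI count as a single visit."""
        total = sum(d for _, d in fixations)
        visits = {}  # aoi -> list of visit durations
        for aoi, run in groupby(fixations, key=lambda f: f[0]):
            visits.setdefault(aoi, []).append(sum(d for _, d in run))
        return {
            aoi: {
                "gaze_pct": 100.0 * sum(v) / total,
                "visits_per_10s": 10.0 * len(v) / session_s,
                "avg_visit_s": sum(v) / len(v),
            }
            for aoi, v in visits.items()
        }
    ```

    Merging consecutive same-AOI fixations is what distinguishes a "visit" from a raw fixation count; without it, every small corrective eye movement within the tidal volume waveform would inflate the visit count.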

  4. Expertise Differences in the Comprehension of Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Lehtinen, Erno; Saljo, Roger

    2011-01-01

    This meta-analysis integrates 296 effect sizes reported in eye-tracking research on expertise differences in the comprehension of visualizations. Three theories were evaluated: Ericsson and Kintsch's ("Psychol Rev" 102:211-245, 1995) theory of long-term working memory, Haider and Frensch's ("J Exp Psychol Learn Mem Cognit" 25:172-190, 1999)…

  5. Preferential Inspection of Recent Real-World Events Over Future Events: Evidence from Eye Tracking during Spoken Sentence Comprehension

    PubMed Central

    Knoeferle, Pia; Carminati, Maria Nella; Abashidze, Dato; Essig, Kai

    2011-01-01

    Eye-tracking findings suggest people prefer to ground their spoken language comprehension by focusing on recently seen events more than anticipating future events: When the verb in NP1-VERB-ADV-NP2 sentences was referentially ambiguous between a recently depicted and an equally plausible future clipart action, listeners fixated the target of the recent action more often at the verb than the object that hadn’t yet been acted upon. We examined whether this inspection preference generalizes to real-world events, and whether it is (vs. isn’t) modulated by how often people see recent and future events acted out. In a first eye-tracking study, the experimenter performed an action (e.g., sugaring pancakes), and then a spoken sentence either referred to that action or to an equally plausible future action (e.g., sugaring strawberries). At the verb, people more often inspected the pancakes (the recent target) than the strawberries (the future target), thus replicating the recent-event preference with these real-world actions. Adverb tense, indicating a future versus past event, had no effect on participants’ visual attention. In a second study we increased the frequency of future actions such that participants saw 50/50 future and recent actions. During the verb people mostly inspected the recent action target, but subsequently they began to rely on tense, and anticipated the future target more often for future than past tense adverbs. A corpus study showed that the verbs and adverbs indicating past versus future actions were equally frequent, suggesting long-term frequency biases did not cause the recent-event preference. Thus, (a) recent real-world actions can rapidly influence comprehension (as indexed by eye gaze to objects), and (b) people prefer to first inspect a recent action target (vs. an object that will soon be acted upon), even when past and future actions occur with equal frequency. 
A simple frequency-of-experience account cannot accommodate these findings. PMID:22207858

  6. Your Child's Vision

    MedlinePlus

    ... 3½, kids should have eye health screenings and visual acuity tests (tests that measure sharpness of vision) ... eye rubbing extreme light sensitivity poor focusing poor visual tracking (following an object) abnormal alignment or movement ...

  7. Comparison of smooth pursuit and combined eye-head tracking in human subjects with deficient labyrinthine function

    NASA Technical Reports Server (NTRS)

    Leigh, R. J.; Thurston, S. E.; Sharpe, J. A.; Ranalli, P. J.; Hamid, M. A.

    1987-01-01

    The effects of deficient labyrinthine function on smooth visual tracking with the eyes and head were investigated, using ten patients with bilateral peripheral vestibular disease and ten normal controls. Active, combined eye-head tracking (EHT) was significantly better in patients than smooth pursuit with the eyes alone, whereas normal subjects pursued equally well in both cases. Compensatory eye movements during active head rotation in darkness were always less in patients than in normal subjects. These data were used to examine current hypotheses that postulate central cancellation of the vestibulo-ocular reflex (VOR) during EHT. A model that proposes summation of an integral smooth pursuit command and VOR/compensatory eye movements is consistent with the findings. Observation of passive EHT (visual fixation of a head-fixed target during en bloc rotation) appears to indicate that in this mode parametric gain changes contribute to modulation of the VOR.

  8. Getting Inside the Expert's Head: An Analysis of Physician Cognitive Processes During Trauma Resuscitations.

    PubMed

    White, Matthew R; Braund, Heather; Howes, Daniel; Egan, Rylan; Gegenfurtner, Andreas; van Merrienboer, Jeroen J G; Szulewski, Adam

    2018-04-23

    Crisis resource management skills are integral to leading the resuscitation of a critically ill patient. Despite their importance, crisis resource management skills (and their associated cognitive processes) have traditionally been difficult to study in the real world. The objective of this study was to derive key cognitive processes underpinning expert performance in resuscitation medicine, using a new eye-tracking-based video capture method during clinical cases. During an 18-month period, a sample of 10 trauma resuscitations led by 4 expert trauma team leaders was analyzed. The physician team leaders were outfitted with mobile eye-tracking glasses for each case. After each resuscitation, participants were debriefed with a modified cognitive task analysis, based on a cued-recall protocol, augmented by viewing their own first-person perspective eye-tracking video from the clinical encounter. Eye-tracking technology was successfully applied as a tool to aid in the qualitative analysis of expert performance in a clinical setting. All participants stated that using these methods helped uncover previously unconscious aspects of their cognition. Overall, 5 major themes were derived from the interviews: logistic awareness, managing uncertainty, visual fixation behaviors, selective attendance to information, and anticipatory behaviors. The novel approach of cognitive task analysis augmented by eye tracking allowed the derivation of 5 unique cognitive processes underpinning expert performance in leading a resuscitation. An understanding of these cognitive processes has the potential to enhance educational methods and to create new assessment modalities of these previously tacit aspects of expertise in this field. Copyright © 2018 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  9. Eye movements and attention in reading, scene perception, and visual search.

    PubMed

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  10. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  11. Eye-Tracking Provides a Sensitive Measure of Exploration Deficits After Acute Right MCA Stroke

    PubMed Central

    Delazer, Margarete; Sojer, Martin; Ellmerer, Philipp; Boehme, Christian; Benke, Thomas

    2018-01-01

    The eye-tracking study aimed at assessing spatial biases in visual exploration in patients after acute right MCA (middle cerebral artery) stroke. Patients affected by unilateral neglect show less functional recovery and experience severe difficulties in everyday life. Thus, accurate diagnosis is essential, and specific treatment is required. Early assessment is of high importance as rehabilitative interventions are more effective when applied soon after stroke. Previous research has shown that deficits may be overlooked when classical paper-and-pencil tasks are used for diagnosis. Conversely, eye-tracking allows direct monitoring of visual exploration patterns. We hypothesized that the analysis of eye-tracking provides more sensitive measures for spatial exploration deficits after right middle cerebral artery stroke. Twenty-two patients with right MCA stroke (median 5 days after stroke) and 28 healthy controls were included. Lesions were confirmed by MRI/CCT. Groups performed comparably in the Mini-Mental State Examination (patients and controls median 29) and in a screening of executive functions. Eleven patients scored at ceiling in neglect screening tasks, 11 showed minimal to severe signs of unilateral visual neglect. An overlap plot based on MRI and CCT imaging showed lesions in the temporo-parieto-frontal cortex, basal ganglia, and adjacent white matter tracts. Visual exploration was evaluated in two eye-tracking tasks, one assessing free visual exploration of photographs, the other visual search using symbols and letters. An index of fixation asymmetries proved to be a sensitive measure of spatial exploration deficits. Both patient groups showed a marked exploration bias to the right when looking at complex photographs. A single case analysis confirmed that most of those patients who showed no neglect in screening tasks also performed outside the range of controls in free exploration. 
    Patients scoring at ceiling in neglect screening tasks are of special interest, as their deficits may be overlooked and thus remain untreated. Our findings are in line with other studies suggesting considerable limitations of laboratory screening procedures in fully capturing the occurrence of neglect symptoms. Future investigations are needed to explore the predictive value of the eye-tracking index and its validity in everyday situations.
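
    The fixation-asymmetry index is not formally defined in the abstract; a common laterality form is (R - L)/(R + L) over fixation counts on either side of the display midline, which is what this sketch assumes:

    ```python
    def fixation_asymmetry(fixation_x, midline=0.0):
        """Lateral asymmetry of fixations: (R - L) / (R + L), counting
        fixations right vs left of the display midline.
        0 = balanced exploration; +1 = all fixations right of midline
        (the rightward bias typical of left-sided neglect); -1 = all left."""
        right = sum(1 for x in fixation_x if x > midline)
        left = sum(1 for x in fixation_x if x < midline)
        if right + left == 0:
            return 0.0
        return (right - left) / (right + left)
    ```

    A duration-weighted variant (summing fixation durations rather than counts on each side) would work the same way; comparing each patient's index against the distribution of control indices is what flags the subclinical cases described above.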

  12. Tracking without perceiving: a dissociation between eye movements and motion perception.

    PubMed

    Spering, Miriam; Pomplun, Marc; Carrasco, Marisa

    2011-02-01

    Can people react to objects in their visual field that they do not consciously perceive? We investigated how visual perception and motor action respond to moving objects whose visibility is reduced, and we found a dissociation between motion processing for perception and for action. We compared motion perception and eye movements evoked by two orthogonally drifting gratings, each presented separately to a different eye. The strength of each monocular grating was manipulated by inducing adaptation to one grating prior to the presentation of both gratings. Reflexive eye movements tracked the vector average of both gratings (pattern motion) even though perceptual responses followed one motion direction exclusively (component motion). Observers almost never perceived pattern motion. This dissociation implies the existence of visual-motion signals that guide eye movements in the absence of a corresponding conscious percept.

  14. Optimizations and Applications in Head-Mounted Video-Based Eye Tracking

    ERIC Educational Resources Information Center

    Li, Feng

    2011-01-01

    Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…

  15. Predictors of verb-mediated anticipatory eye movements in the visual world.

    PubMed

    Hintz, Florian; Meyer, Antje S; Huettig, Falk

    2017-09-01

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we investigated the influence of 5 potential predictors of this behavior: functional associations and general associations between verb and target object, as well as the listeners' production fluency, receptive vocabulary knowledge, and nonverbal intelligence. In 3 eye-tracking experiments, participants looked at sets of 4 objects and listened to sentences where the final word was predictable or not predictable (e.g., "The man peels/draws an apple"). On predictable trials only the target object, but not the distractors, were functionally and associatively related to the verb. In Experiments 1 and 2, objects were presented before the verb was heard. In Experiment 3, participants were given a short preview of the display after the verb was heard. Functional associations and receptive vocabulary were found to be important predictors of verb-mediated anticipatory eye gaze independent of the amount of contextual visual input. General word associations did not and nonverbal intelligence was only a very weak predictor of anticipatory eye movements. Participants' production fluency correlated positively with the likelihood of anticipatory eye movements when participants were given the long but not the short visual display preview. These findings fit best with a pluralistic approach to predictive language processing in which multiple mechanisms, mediating factors, and situational context dynamically interact. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Ciliary muscle contraction force and trapezius muscle activity during manual tracking of a moving visual target.

    PubMed

    Domkin, Dmitry; Forsman, Mikael; Richter, Hans O

    2016-06-01

    Previous studies have shown an association between visual demands during near work and increased activity of the trapezius muscle. Those studies were conducted under stationary postural conditions with fixed gaze and artificial visual load. The present study investigated the relationship between ciliary muscle contraction force and trapezius muscle activity across individuals during performance of a natural dynamic motor task under free gaze conditions. Participants (N = 11) tracked a moving visual target with a digital pen on a computer screen. Tracking performance, eye refraction, and trapezius muscle activity were continuously measured. Ciliary muscle contraction force was computed from the eye's accommodative response. There was a significant Pearson correlation between ciliary muscle contraction force and trapezius muscle activity on the tracking side (0.78, p < 0.01) and the passive side (0.64, p < 0.05). The study supports the hypothesis that high visual demands, leading to increased ciliary muscle contraction during continuous eye-hand coordination, may increase trapezius muscle tension and thus contribute to the development of musculoskeletal complaints in the neck-shoulder area. Further experimental studies are required to clarify whether the relationship holds within each individual or instead reflects a general personal trait, whereby individuals with a higher accommodative response tend to have higher trapezius muscle activity. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most engaging content formats. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that 3D can generate are still not precisely known; visual fatigue and visual discomfort, for example, are among the symptoms an observer may feel. In this paper, we propose an investigation of the visual fatigue generated by watching 3D video, with the help of eye tracking. On one side, a questionnaire covering the symptoms most frequently linked with 3D is used to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades), and eye blinking have been explored thanks to data provided by the eye tracker. The statistical analysis showed an important link between blinking duration and number of saccades on the one hand and visual fatigue on the other, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  18. Prior Knowledge and Online Inquiry-Based Science Reading: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Ho, Hsin Ning Jessie; Tsai, Meng-Jung; Wang, Ching-Yeh; Tsai, Chin-Chung

    2014-01-01

    This study employed eye-tracking technology to examine how students with different levels of prior knowledge process text and data diagrams when reading a web-based scientific report. Students' visual behaviors were tracked and recorded when they read a report demonstrating the relationship between the greenhouse effect and global climate…

  19. Evaluating Silent Reading Performance with an Eye Tracking System in Patients with Glaucoma

    PubMed Central

    Murata, Noriaki; Fukuchi, Takeo

    2017-01-01

    Objective To investigate the relationship between silent reading performance and visual field defects in patients with glaucoma using an eye tracking system. Methods Fifty glaucoma patients (Group G; mean age, 52.2 years, standard deviation: 11.4 years) and 20 normal controls (Group N; mean age, 46.9 years; standard deviation: 17.2 years) were included in the study. All participants in Group G had early to advanced glaucomatous visual field defects but better than 20/20 visual acuity in both eyes. Participants silently read Japanese articles written horizontally while the eye tracking system monitored and calculated reading duration per 100 characters, number of fixations per 100 characters, and mean fixation duration, which were compared with mean deviation and visual field index values from Humphrey visual field testing (24–2 and 10–2 Swedish interactive threshold algorithm standard) of the right versus left eye and the better versus worse eye. Results There was a statistically significant difference between Groups G and N in mean fixation duration (G, 233.4 msec; N, 215.7 msec; P = 0.010). Within Group G, significant correlations were observed between reading duration and 24–2 right mean deviation (rs = -0.280, P = 0.049), 24–2 right visual field index (rs = -0.306, P = 0.030), 24–2 worse visual field index (rs = -0.304, P = 0.032), and 10–2 worse mean deviation (rs = -0.326, P = 0.025). Significant correlations were observed between mean fixation duration and 10–2 left mean deviation (rs = -0.294, P = 0.045) and 10–2 worse mean deviation (rs = -0.306, P = 0.037), respectively. Conclusions The severity of visual field defects may influence some aspects of reading performance. At least concerning silent reading, the visual field of the worse eye is an essential element of smoothness of reading. PMID:28095478

  20. Using Eye Tracking as a Tool to Teach Informatics Students the Importance of User Centered Design

    ERIC Educational Resources Information Center

    Gelderblom, Helene; Adebesin, Funmi; Brosens, Jacques; Kruger, Rendani

    2017-01-01

    In this article the authors describe how they incorporate eye tracking in a human-computer interaction (HCI) course that forms part of a postgraduate Informatics degree. The focus is on an eye tracking assignment that involves student groups performing usability evaluation studies for real world clients. Over the past three years the authors have…

  1. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.

    PubMed

    Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
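
    The AGT-based mapping described above can be sketched in a few lines: each aircraft's dynamic AOI is represented as a rectangle for the current frame, inflated by the AOI gap tolerance margin, and a fixation is attributed to every AOI whose inflated rectangle contains it. The rectangle representation and function names are illustrative assumptions, not the authors' implementation.

```python
def inflate(aoi, agt):
    """Grow a rectangular AOI (x_min, y_min, x_max, y_max) by the AOI
    gap tolerance (AGT) margin, absorbing eye-tracker visual angle error."""
    x0, y0, x1, y1 = aoi
    return (x0 - agt, y0 - agt, x1 + agt, y1 + agt)

def contains(aoi, point):
    x0, y0, x1, y1 = aoi
    x, y = point
    return x0 <= x <= x1 and y0 <= y <= y1

def map_fixation(fixation, aois, agt):
    """Map one fixation to the dynamic AOIs of the current frame.
    aois: dict of object id -> rectangle at this frame. Because inflated
    AOIs may overlap, a fixation can map to more than one object."""
    return [oid for oid, rect in aois.items()
            if contains(inflate(rect, agt), fixation)]
```

    Choosing the AGT value then becomes the trade-off the record describes: a larger margin recovers fixations displaced by tracker error, but it also increases ambiguous overlaps between neighbouring aircraft.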

  2. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830

  3. 3D Visual Tracking of an Articulated Robot in Precision Automated Tasks

    PubMed Central

    Alzarok, Hamza; Fletcher, Simon; Longstaff, Andrew P.

    2017-01-01

    The most compelling requirements for visual tracking systems are a high detection accuracy and an adequate processing speed. However, combining the two requirements in real-world applications is very challenging, because more accurate tracking tasks often require longer processing times, while quicker responses from the tracking system are more prone to errors; a trade-off between accuracy and speed is therefore required. This paper aims to meet both requirements together by implementing an accurate and time-efficient tracking system. An eye-to-hand visual system that has the ability to automatically track a moving target is introduced. An enhanced Circular Hough Transform (CHT) is employed for estimating the trajectory of a spherical target in three dimensions. The colour feature of the target was carefully selected using a new colour selection process, which relies on a colour segmentation method (Delta E) combined with the CHT algorithm to find the proper colour for the tracked target. The target was attached to the end-effector of a six-degree-of-freedom (DOF) robot performing a pick-and-place task. Two cooperating eye-to-hand cameras, each with an image averaging filter, are used to obtain clear and steady images. This paper also examines a new technique for generating and controlling the observation search window in order to increase the computational speed of the tracking system; the technique is named Controllable Region of interest based on Circular Hough Transform (CRCHT). Moreover, a new mathematical formula is introduced for updating the depth information of the vision system during the object tracking process. For more reliable and accurate tracking, a simplex optimization technique was employed to calculate the parameters of the camera-to-robot transformation matrix. The results obtained show the applicability of the proposed approach to tracking the moving robot, with an overall tracking error of 0.25 mm, and demonstrate the effectiveness of the CRCHT technique in saving up to 60% of the overall time required for image processing. PMID:28067860
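
    The CRCHT search-window idea lends itself to a brief sketch: after the Circular Hough Transform locates the sphere in one frame, the next frame is searched only within a window centred on the last detection and sized relative to the detected radius. The scale factor and clamping below are illustrative assumptions, not the paper's exact formulation.

```python
def crcht_window(cx, cy, r, frame_w, frame_h, scale=3.0):
    """Return the region of interest (x0, y0, x1, y1) for the next frame,
    centred on the last detected circle (cx, cy) with radius r and
    clamped to the frame bounds, so the Circular Hough Transform runs on
    a small controllable window instead of the whole image."""
    half = scale * r
    x0 = max(0, int(cx - half))
    y0 = max(0, int(cy - half))
    x1 = min(frame_w, int(cx + half))
    y1 = min(frame_h, int(cy + half))
    return x0, y0, x1, y1
```

    Restricting the transform to this window is what produces the reported processing-time savings; the window must be re-derived on every frame as the target moves.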

  4. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  5. Evaluation of helmet-mounted display targeting symbology based on eye tracking technology

    NASA Astrophysics Data System (ADS)

    Wang, Lijing; Wen, Fuzhen; Ma, Caixin; Zhao, Shengchu; Liu, Xiaodong

    2014-06-01

    The purpose of this paper is to determine which Target Locator Lines (TLLs) perform best, through a comparative experiment based on three kinds of TLLs for a fighter helmet-mounted display (HMD). Ten male university students, aged 21 to 23, with corrected visual acuity of 1.5, participated in the experiment. Head movement data were obtained by TrackIR. The geometric relationship between the coordinates of the real world and the coordinates of the visual display was obtained by calculating the distance from the viewpoint to the midpoint of both eyes, together with the head movement data. A virtual helmet system simulation environment was created by drawing the TLLs of the fighter HMD in the flight simulator visual scene. In the experiment, an eye tracker was used to record completion time and saccade trajectory, and the results were evaluated on these two measures. The results showed that the "locator line with digital vector length indication" symbol cost the most time and produced the longest saccade trajectory; it is the least effective and least acceptable design. The "locator line with extending head vector length" symbol cost less time and produced a shorter saccade trajectory; it is effective and acceptable. The "locator line with reflected vector length" symbol cost the least time and produced the shortest saccade trajectory; it is the most effective and most acceptable design, and thus performs best overall. The results provide a reference for future research on TLLs.

  6. Do the Eyes Have It? Using Eye Tracking to Assess Students Cognitive Dimensions

    ERIC Educational Resources Information Center

    Nisiforou, Efi A.; Laghos, Andrew

    2013-01-01

    Field dependence/independence (FD/FI) is a significant dimension of cognitive styles. The paper presents results of a study that seeks to identify individuals' level of field independence during visual stimulus tasks processing. Specifically, it examined the relationship between the Hidden Figure Test (HFT) scores and the eye tracking metrics.…

  7. Influence of social presence on eye movements in visual search tasks.

    PubMed

    Liu, Na; Yu, Ruifeng

    2017-12-01

    This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
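
    Summary statistics of the kind compared across conditions here (fixation count, mean fixation duration, scan path length) follow directly from a fixation list; a minimal sketch, assuming fixations are given as (x, y, duration_ms) tuples:

```python
from math import hypot

def eye_metrics(fixations):
    """Summarise one trial from fixations given as (x, y, duration_ms).
    Scan path length is the summed Euclidean distance between
    consecutive fixation centres, a common proxy for saccadic travel."""
    n = len(fixations)
    mean_dur = sum(d for _, _, d in fixations) / n
    path = sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:]))
    return {"fixation_count": n,
            "mean_fixation_ms": mean_dur,
            "scan_path_px": path}
```

    Comparing these numbers between alone and audience conditions (or simple and complex tasks) is the kind of analysis the study reports; saccade amplitude and pupil diameter would need additional fields per sample.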

  8. Surface ablation with iris recognition and dynamic rotational eye tracking-based tissue saving treatment with the Technolas 217z excimer laser.

    PubMed

    Prakash, Gaurav; Agarwal, Amar; Kumar, Dhivya Ashok; Jacob, Soosan; Agarwal, Athiya; Maity, Amrita

    2011-03-01

    To evaluate the visual and refractive outcomes and expected benefits of Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking. This prospective, interventional case series comprised 122 eyes (70 patients). Pre- and postoperative assessment included uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), refraction, and higher order aberrations. All patients underwent Tissue Saving Treatment algorithm-guided surface ablation with iris recognition and dynamic rotational eye tracking using the Technolas 217z 100-Hz excimer platform (Technolas Perfect Vision GmbH). Follow-up was performed up to 6 months postoperatively. Theoretical benefit analysis was performed to evaluate the algorithm's outcomes compared to others. Preoperative spherocylindrical power was sphere -3.62 ± 1.60 diopters (D) (range: 0 to -6.75 D), cylinder -1.15 ± 1.00 D (range: 0 to -3.50 D), and spherical equivalent -4.19 ± 1.60 D (range: -7.75 to -2.00 D). At 6 months, 91% (111/122) of eyes were within ± 0.50 D of attempted correction. Postoperative UDVA was comparable to preoperative CDVA at 1 month (P=.47) and progressively improved at 6 months (P=.004). Two eyes lost one line of CDVA at 6 months. Theoretical benefit analysis revealed that of 101 eyes with astigmatism, 29 would have had cyclotorsion-induced astigmatism of ≥ 10% if iris recognition and dynamic rotational eye tracking were not used. Furthermore, the mean percentage decrease in maximum depth of ablation by using the Tissue Saving Treatment was 11.8 ± 2.9% over Aspheric, 17.8 ± 6.2% over Personalized, and 18.2 ± 2.8% over Planoscan algorithms. Tissue saving surface ablation with iris recognition and dynamic rotational eye tracking was safe and effective in this series of eyes. Copyright 2011, SLACK Incorporated.

  9. Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.

    PubMed

    Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard

    2018-01-01

    The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.

  10. Nerve Fiber Flux Analysis Using Wide-Field Swept-Source Optical Coherence Tomography.

    PubMed

    Tan, Ou; Liu, Liang; Liu, Li; Huang, David

    2018-02-01

    To devise a method to quantify nerve fibers over their arcuate courses over an extended peripapillary area using optical coherence tomography (OCT). Participants were imaged with 8 × 8-mm volumetric OCT scans centered at the optic disc. A new quantity, nerve fiber flux (NFF), represents the cross-sectional area transected perpendicular to the nerve fibers. The peripapillary area was divided into 64 tracks with equal flux. An iterative algorithm traced the trajectory of the tracks assuming that the relative distribution of the NFF was conserved, with compensation for fiber connections to ganglion cells on the macular side. The average trajectory was computed from the normal eyes and used to calculate the NFF maps for glaucomatous eyes. The NFF maps were divided into eight sectors that correspond to visual field regions. Twenty-four healthy and 10 glaucomatous eyes were enrolled. The algorithm converged on similar patterns of NFL tracks for all healthy eyes. In glaucomatous eyes, NFF correlated with visual field sensitivity in the arcuate sectors (Spearman ρ = 0.53-0.62). Focal nerve fiber loss in glaucomatous eyes appeared as uniform tracks of NFF defects that followed the expected arcuate fiber trajectory. Using an algorithm based on the conservation of flux, we derived nerve fiber trajectories in the peripapillary area. The NFF map is useful for the visualization of focal defects and quantification of sector nerve fiber loss from wide-area volumetric OCT scans. NFF provides a cumulative measure of volumetric loss along nerve fiber tracks and could improve the detection of focal glaucoma damage.
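
    The division of the peripapillary circumference into 64 tracks of equal flux is, at its core, a cumulative-sum partition; a minimal sketch, assuming NFF is sampled at equally spaced angular positions around the disc (the discrete sampling scheme is an illustrative assumption):

```python
def equal_flux_boundaries(samples, n_tracks=64):
    """Given nerve fiber flux sampled at equally spaced angular
    positions around the optic disc, return the sample indices at which
    the cumulative flux crosses each of the n_tracks - 1 equal-flux
    quantiles, i.e. the track boundaries."""
    total = sum(samples)
    boundaries, acc, k = [], 0.0, 1
    for i, s in enumerate(samples):
        acc += s
        while k < n_tracks and acc >= k * total / n_tracks:
            boundaries.append(i)
            k += 1
    return boundaries
```

    Tracks defined this way are narrow where flux is dense (the arcuate bundles) and wide where it is sparse, which is what lets focal loss show up as uniform defect tracks in the NFF map.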

  11. Self-Monitoring of Gaze in High Functioning Autism

    ERIC Educational Resources Information Center

    Grynszpan, Ouriel; Nadel, Jacqueline; Martin, Jean-Claude; Simonin, Jerome; Bailleul, Pauline; Wang, Yun; Gepner, Daniel; Le Barillier, Florence; Constant, Jacques

    2012-01-01

    Atypical visual behaviour has been recently proposed to account for much of social misunderstanding in autism. Using an eye-tracking system and a gaze-contingent lens display, the present study explores self-monitoring of eye motion in two conditions: free visual exploration and guided exploration via blurring the visual field except for the focal…

  12. Toward Collaboration Sensing

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2014-01-01

    We describe preliminary applications of network analysis techniques to eye-tracking data collected during a collaborative learning activity. This paper makes three contributions: first, we visualize collaborative eye-tracking data as networks, where the nodes of the graph represent fixations and edges represent saccades. We found that those…

  13. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments, without any hardware for head or eye tracking.
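
    The fusion of bottom-up saliency with top-down context can be sketched as a weighted score per candidate object; the simple linear weighting below is an illustrative assumption, not the paper's exact model:

```python
def most_attended(objects, w_bottom_up=0.5, w_top_down=0.5):
    """Pick the most plausibly attended object. `objects` maps an object
    id to a (bottom_up_saliency, top_down_context) pair, each already
    normalised to [0, 1]; the winner maximises the weighted sum of the
    two cues, so strong goal-directed context can override raw saliency."""
    def score(cues):
        bottom_up, top_down = cues
        return w_bottom_up * bottom_up + w_top_down * top_down
    return max(objects, key=lambda oid: score(objects[oid]))
```

    A visually striking but task-irrelevant object can thus lose to a duller object that the user's recent behavior makes relevant, which is the effect the evaluation attributes the accuracy gain to.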

  14. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker.

    PubMed

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2014-06-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
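
    The sparse-pixel idea can be caricatured with a linear model standing in for the paper's neural network: gaze is estimated from a small fixed subset of pixel intensities, so only those pixels ever need to be read from the sensor. The pixel subset and weights below are illustrative; in the paper both are learned jointly under the sparsity-inducing penalty.

```python
def sparse_gaze(frame, pixel_idx, wx, wy, bx=0.0, by=0.0):
    """Estimate (gaze_x, gaze_y) from only the pixels listed in
    pixel_idx. frame is a flat list of intensities; wx and wy are
    per-pixel weights of the same length as pixel_idx. Reading a handful
    of pixels instead of the full frame is the source of the power and
    computation savings."""
    vals = [frame[i] for i in pixel_idx]
    gx = bx + sum(w * v for w, v in zip(wx, vals))
    gy = by + sum(w * v for w, v in zip(wy, vals))
    return gx, gy
```

    With a random-access image sensor, each call touches len(pixel_idx) pixels rather than the full resolution, which is how the prototype sustains 30 Hz at tens of milliwatts.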

  15. iShadow: Design of a Wearable, Real-Time Mobile Gaze Tracker

    PubMed Central

    Mayberry, Addison; Hu, Pan; Marlin, Benjamin; Salthouse, Christopher; Ganesan, Deepak

    2015-01-01

    Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees. PMID:26539565

  16. The influence of clutter on real-world scene search: evidence from search efficiency and eye movements.

    PubMed

    Henderson, John M; Chanceaux, Myriam; Smith, Tim J

    2009-01-23

    We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
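
    Of the three clutter indices, edge density is the simplest to state: the fraction of pixels whose local intensity change exceeds a threshold. A minimal grayscale sketch (the neighbour comparison and threshold value are illustrative assumptions):

```python
def edge_density(img, threshold=32):
    """img is a 2-D list of grayscale values. A pixel counts as an edge
    if the absolute difference to its right or lower neighbour exceeds
    the threshold; edge density is the edge fraction over all pixels."""
    h, w = len(img), len(img[0])
    edges = 0
    for y in range(h):
        for x in range(w):
            right = x + 1 < w and abs(img[y][x + 1] - img[y][x]) > threshold
            down = y + 1 < h and abs(img[y + 1][x] - img[y][x]) > threshold
            edges += right or down
    return edges / (h * w)
```

    Under the set-size interpretation above, a scene with higher edge density behaves like a search display with more items, predicting longer response times and more fixations.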

  17. Measuring advertising effectiveness in Travel 2.0 websites through eye-tracking technology.

    PubMed

    Muñoz-Leiva, Francisco; Hernández-Méndez, Janet; Gómez-Carmona, Diego

    2018-03-06

    The advent of Web 2.0 is changing tourists' behaviors, prompting them to take on a more active role in preparing their travel plans. It is also leading tourism companies to have to adapt their marketing strategies to different online social media. The present study analyzes advertising effectiveness in social media in terms of customers' visual attention and self-reported memory (recall). Data were collected through a within-subjects and between-groups design based on eye-tracking technology, followed by a self-administered questionnaire. Participants were instructed to visit three Travel 2.0 websites (T2W), including a hotel's blog, social network profile (Facebook), and virtual community profile (Tripadvisor). Overall, the results revealed greater advertising effectiveness in the case of the hotel social network; and visual attention measures based on eye-tracking data differed from measures of self-reported recall. Visual attention to the ad banner was paid at a low level of awareness, which explains why the associations with the ad did not activate its subsequent recall. The paper offers a pioneering attempt in the application of eye-tracking technology, and examines the possible impact of visual marketing stimuli on user T2W-related behavior. The practical implications identified in this research, along with its limitations and future research opportunities, are of interest both for further theoretical development and practical application. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Eye Tracking Outcomes in Tobacco Control Regulation and Communication: A Systematic Review.

    PubMed

    Meernik, Clare; Jarman, Kristen; Wright, Sarah Towner; Klein, Elizabeth G; Goldstein, Adam O; Ranney, Leah

    2016-10-01

    In this paper we synthesize the evidence from eye tracking research in tobacco control to inform tobacco regulatory strategies and tobacco communication campaigns. We systematically searched 11 databases for studies that reported eye tracking outcomes in regards to tobacco regulation and communication. Two coders independently reviewed studies for inclusion and abstracted study characteristics and findings. Eighteen studies met full criteria for inclusion. Eye tracking studies on health warnings consistently showed these warnings often were ignored, though eye tracking demonstrated that novel warnings, graphic warnings, and plain packaging can increase attention toward warnings. Eye tracking also revealed that greater visual attention to warnings on advertisements and packages consistently was associated with cognitive processing as measured by warning recall. Eye tracking is a valid indicator of attention, cognitive processing, and memory. The use of this technology in tobacco control research complements existing methods in tobacco regulatory and communication science; it also can be used to examine the effects of health warnings and other tobacco product communications on consumer behavior in experimental settings prior to the implementation of novel health communication policies. However, the utility of eye tracking will be enhanced by the standardization of methodology and reporting metrics.

  19. Eye Tracking Outcomes in Tobacco Control Regulation and Communication: A Systematic Review

    PubMed Central

    Meernik, Clare; Jarman, Kristen; Wright, Sarah Towner; Klein, Elizabeth G.; Goldstein, Adam O.; Ranney, Leah

    2016-01-01

    Objective In this paper we synthesize the evidence from eye tracking research in tobacco control to inform tobacco regulatory strategies and tobacco communication campaigns. Methods We systematically searched 11 databases for studies that reported eye tracking outcomes in regards to tobacco regulation and communication. Two coders independently reviewed studies for inclusion and abstracted study characteristics and findings. Results Eighteen studies met full criteria for inclusion. Eye tracking studies on health warnings consistently showed these warnings often were ignored, though eye tracking demonstrated that novel warnings, graphic warnings, and plain packaging can increase attention toward warnings. Eye tracking also revealed that greater visual attention to warnings on advertisements and packages consistently was associated with cognitive processing as measured by warning recall. Conclusions Eye tracking is a valid indicator of attention, cognitive processing, and memory. The use of this technology in tobacco control research complements existing methods in tobacco regulatory and communication science; it also can be used to examine the effects of health warnings and other tobacco product communications on consumer behavior in experimental settings prior to the implementation of novel health communication policies. However, the utility of eye tracking will be enhanced by the standardization of methodology and reporting metrics. PMID:27668270

  20. A MATLAB-based eye tracking control system using non-invasive helmet head restraint in the macaque.

    PubMed

    De Luna, Paolo; Mohamed Mustafar, Mohamed Faiz Bin; Rainer, Gregor

    2014-09-30

    Tracking eye position is vital for behavioral and neurophysiological investigations in systems and cognitive neuroscience. Infrared camera systems which are now available can be used for eye tracking without the need to surgically implant magnetic search coils. These systems are generally employed using rigid head fixation in monkeys, which maintains the eye in a constant position and facilitates eye tracking. We investigate the use of non-rigid head fixation using a helmet that constrains only general head orientation and allows some freedom of movement. We present a MATLAB software solution to gather and process eye position data, present visual stimuli, interact with various devices, provide experimenter feedback and store data for offline analysis. Our software solution achieves excellent timing performance due to the use of data streaming, instead of the traditionally employed data storage mode for processing analog eye position data. We present behavioral data from two monkeys, demonstrating that adequate performance levels can be achieved on a simple fixation paradigm, and show how performance depends on parameters such as fixation window size. Our findings suggest that non-rigid head restraint can be employed for behavioral training and testing on a variety of gaze-dependent visual paradigms, reducing the need for rigid head restraint systems for some applications. Although developed for the macaque monkey, our system can of course work equally well for human eye-tracking applications where head constraint is undesirable. Copyright © 2014. Published by Elsevier B.V.
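
    The dependence of performance on fixation window size follows from the basic fixation test: reward is delivered only if gaze stays inside a window around the target for the full hold period. A minimal sketch, assuming gaze samples arrive as (t_ms, x_deg, y_deg) tuples (the square-window shape is an illustrative assumption; circular windows are equally common):

```python
def fixation_held(samples, target, window_deg, hold_ms):
    """Return True if every gaze sample within the final hold_ms stays
    inside a square window of +/- window_deg around the target.
    samples: time-ordered list of (t_ms, x_deg, y_deg)."""
    t_end = samples[-1][0]
    inside = [abs(x - target[0]) <= window_deg and
              abs(y - target[1]) <= window_deg
              for t, x, y in samples if t >= t_end - hold_ms]
    return len(inside) > 0 and all(inside)
```

    With non-rigid restraint, residual head motion adds to the measured gaze scatter, so the window must be widened relative to a rigidly fixed setup, which is exactly the trade-off the behavioral data characterize.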

  1. Exploring virtual worlds with head-mounted displays

    NASA Astrophysics Data System (ADS)

    Chung, James C.; Harris, Mark R.; Brooks, F. P.; Fuchs, Henry; Kelley, Michael T.

    1989-02-01

    Research has been conducted in the use of simple head-mounted displays in real world applications. Such units provide the user with non-holographic true 3-D information, since the kinetic depth effect, stereoscopy, and other visual cues combine to immerse the user in a virtual world which behaves like the real world in some respects. UNC's head-mounted display was built inexpensively from commercially available off-the-shelf components. Tracking of the user's head position and orientation is performed by a Polhemus Navigation Sciences' 3SPACE tracker. The host computer uses the tracking information to generate updated images corresponding to the user's new left eye and right eye views. The images are broadcast to two liquid crystal television screens (220x320 pixels) mounted on a horizontal shelf at the user's forehead. The user views these color screens through half-silvered mirrors, enabling the computer generated image to be superimposed upon the user's real physical environment. The head-mounted display was incorporated into existing molecular and architectural applications being developed at UNC. In molecular structure studies, chemists are presented with a room-sized molecule with which they can interact in a manner more intuitive than that provided by conventional 2-D displays and dial boxes. Walking around and through the large molecule may provide quicker understanding of its structure, and such problems as drug-enzyme docking may be approached with greater insight.

  2. Acting without seeing: eye movements reveal visual processing without awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2015-04-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Acting without seeing: Eye movements reveal visual processing without awareness.

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2015-01-01

    Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. We review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movements. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. PMID:25765322

  4. Remote vs. head-mounted eye-tracking: a comparison using radiologists reading mammograms

    NASA Astrophysics Data System (ADS)

    Mello-Thoms, Claudia; Gur, David

    2007-03-01

    Eye position monitoring has been used for decades in Radiology in order to determine how radiologists interpret medical images. Using these devices several discoveries about the perception/decision making process have been made, such as the importance of comparisons of perceived abnormalities with selected areas of the background, the likelihood that a true lesion will attract visual attention early in the reading process, and the finding that most misses attract prolonged visual dwell, often comparable to dwell in the location of reported lesions. However, eye position tracking is a cumbersome process, which often requires the observer to wear a helmet gear which contains the eye tracker per se and a magnetic head tracker, which allows for the computation of head position. Observers tend to complain of fatigue after wearing the gear for a prolonged time. Recently, with the advances made to remote eye-tracking, the use of head-mounted systems seemed destined to become a thing of the past. In this study we evaluated a remote eye tracking system, and compared it to a head-mounted system, as radiologists read a case set of one-view mammograms on a high-resolution display. We compared visual search parameters between the two systems, such as time to hit the location of the lesion for the first time, amount of dwell time in the location of the lesion, total time analyzing the image, etc. We also evaluated the observers' impressions of both systems, and what their perceptions were of the restrictions of each system.

  5. Exploring Eye Movements of Experienced and Novice Readers of Medical Texts Concerning the Cardiovascular System in Making a Diagnosis

    ERIC Educational Resources Information Center

    Vilppu, Henna; Mikkilä-Erdmann, Mirjamaija; Södervik, Ilona; Österholm-Matikainen, Erika

    2017-01-01

    This study used the eye-tracking method to explore how the level of expertise influences reading, and solving, two written patient cases on cardiac failure and pulmonary embolus. Eye-tracking is a fairly commonly used method in medical education research, but it has been primarily applied to studies analyzing the processing of visualizations, such…

  6. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

    Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. The interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated practice permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
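    The model builds on standard dynamic neural field (Amari-type) equations. A minimal 1-D sketch of such a field, in which a peak of activity forms at the stimulated location and could serve as the target-location estimate, is given below; all parameter values are illustrative and not taken from the paper:

    ```python
    import numpy as np

    # 1-D dynamic neural field (Amari-type): tau * du/dt = -u + h + S + w * f(u).
    # Illustrative parameters, not the paper's: field size N, resting level h < 0.
    N, dt, tau, h = 100, 0.1, 1.0, -2.0
    x = np.arange(N)

    # Mexican-hat interaction kernel on a ring: local excitation, broader inhibition.
    d = np.minimum(np.abs(x[:, None] - x[None, :]), N - np.abs(x[:, None] - x[None, :]))
    w = 4.0 * np.exp(-d**2 / (2 * 3.0**2)) - 1.5 * np.exp(-d**2 / (2 * 9.0**2))

    u = np.full(N, h)                                      # field starts at rest
    stim = 6.0 * np.exp(-(x - 60) ** 2 / (2 * 4.0 ** 2))   # visual target at position 60

    for _ in range(200):                                   # Euler integration
        f = 1.0 / (1.0 + np.exp(-u))                       # sigmoid firing rate
        u += dt / tau * (-u + h + stim + w @ f / N)

    peak = int(np.argmax(u))   # decoded target location: the peak of activity
    ```

    In the full model, this peak estimate would drive the eye movement; fixation, saccades, and pursuit then emerge from how the peak builds, jumps, and stabilises over time.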

  7. Alcohol and disorientation-responses. VI, Effects of alcohol on eye movements and tracking performance during laboratory angular accelerations about the yaw and pitch axes.

    DOT National Transportation Integrated Search

    1972-12-01

    Alcohol ingestion interferes with visual control of vestibular eye movements and thereby produces significant decrements in performance at a compensatory tracking task during oscillation about the yaw axis; significant or consistent decrements in per...

  8. A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.

    PubMed

    Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro

    2017-08-01

    This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and the clustering of gaze points, registered with eye-tracking technology, were studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements, complemented by a computational model to simulate group differences, was chosen to overcome methodological problems related to the experimenters' subjective expectations about the informative content of the image. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of large-amplitude saccades compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
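    The Lévy-flight characterization above rests on saccade amplitudes having a heavy, power-law tail. One standard way to quantify this is a maximum-likelihood (Hill) estimate of the tail exponent; a sketch with invented amplitude data:

    ```python
    import math

    def pareto_mle_alpha(amplitudes, x_min):
        """MLE (Hill estimator) of alpha for P(x) ~ x^(-alpha), x >= x_min:
        alpha = 1 + n / sum(ln(x_i / x_min))."""
        tail = [a for a in amplitudes if a >= x_min]
        return 1 + len(tail) / sum(math.log(a / x_min) for a in tail)

    # Hypothetical saccade amplitudes in degrees (illustrative, not real data).
    amps = [1.2, 1.5, 1.1, 2.0, 1.3, 3.5, 1.8, 6.0, 1.4, 2.6, 9.5, 1.7]
    alpha = pareto_mle_alpha(amps, x_min=1.0)
    ```

    Exponents in the Lévy regime (1 < alpha < 3) indicate scale-free search; a higher share of long saccades, as reported for the ASD group, pulls the estimate down.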

  9. Instruction-based clinical eye-tracking study on the visual interpretation of divergence: How do students look at vector field plots?

    NASA Astrophysics Data System (ADS)

    Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.

    2018-06-01

    Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux concept. We test the effectiveness of both strategies in an instruction-based eye-tracking study with N =41 physics majors. We found that students' performance improved when both strategies were introduced (74% correct) instead of only one strategy (64% correct), and students performed best when they were free to choose between the two strategies (88% correct). This finding supports the idea of introducing multiple representations of a physical concept to foster student understanding. Relevant eye-tracking measures demonstrate that both strategies imply different visual processing of the vector field plots, therefore reflecting conceptual differences between the strategies. Advanced analysis methods further reveal significant differences in eye movements between the best and worst performing students. For instance, the best students performed predominantly horizontal and vertical saccades, indicating correct interpretation of partial derivatives. They also focused on smaller regions when they balanced positive and negative flux. This mixed-method research leads to new insights into student visual processing of vector field representations, highlights the advantages and limitations of eye-tracking methodologies in this context, and discusses implications for teaching and for future research. The introduction of saccadic direction analysis expands traditional methods, and shows the potential to discover new insights into student understanding and learning difficulties.

  10. Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.

    PubMed

    Brielmann, Aenne A; Spering, Miriam

    2015-08-01

    Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective. (c) 2015 APA, all rights reserved.

  11. Computer vision enhances mobile eye-tracking to expose expert cognition in natural-scene visual-search tasks

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Cahill, Nathan D.; Tarduno, John A.; Jacobs, Robert A.; Pelz, Jeff B.

    2014-02-01

    Mobile eye-tracking provides a rare opportunity to record and elucidate cognition in action. In our research, we are searching for patterns in, and distinctions between, the visual-search performance of experts and novices in the geo-sciences. Traveling to regions resulting from various geological processes as part of an introductory field studies course in geology, we record the prima facie gaze patterns of experts and novices when they are asked to determine the modes of geological activity that have formed the scene-view presented to them. Recording eye video and scene video in natural settings generates complex imagery that requires advanced applications of computer vision research to generate registrations and mappings between the views of separate observers. By developing such mappings, we can place many observers into a single mathematical space where we can spatio-temporally analyze inter- and intra-subject fixations, saccades, and head motions. While working towards perfecting these mappings, we developed an updated experiment setup that allowed us to statistically analyze intra-subject eye-movement events without the need for a common domain. Through such analyses we are finding statistical differences between novices and experts in these visual-search tasks. In the course of this research we have developed a unified, open-source, software framework for processing, visualization, and interaction of mobile eye-tracking and high-resolution panoramic imagery.

  12. Do you see what I see? Mobile eye-tracker contextual analysis and inter-rater reliability.

    PubMed

    Stuart, S; Hunt, D; Nell, J; Godfrey, A; Hausdorff, J M; Rochester, L; Alcock, L

    2018-02-01

    Mobile eye-trackers are currently used during real-world tasks (e.g. gait) to monitor visual and cognitive processes, particularly in ageing and Parkinson's disease (PD). However, contextual analysis involving fixation locations during such tasks is rarely performed due to its complexity. This study adapted a validated algorithm and developed a classification method to semi-automate contextual analysis of mobile eye-tracking data. We further assessed inter-rater reliability of the proposed classification method. A mobile eye-tracker recorded eye-movements during walking in five healthy older adult controls (HC) and five people with PD. Fixations were identified using a previously validated algorithm, which was adapted to provide still images of fixation locations (n = 116). The fixation location was manually identified by two raters (DH, JN), who classified the locations. Cohen's kappa correlation coefficients determined the inter-rater reliability. The algorithm successfully provided still images for each fixation, allowing manual contextual analysis to be performed. The inter-rater reliability for classifying the fixation location was high for both PD (kappa = 0.80, 95% agreement) and HC groups (kappa = 0.80, 91% agreement), which indicated a reliable classification method. This study developed a reliable semi-automated contextual analysis method for gait studies in HC and PD. Future studies could adapt this methodology for various gait-related eye-tracking studies.
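    The agreement statistic reported above, Cohen's kappa, corrects raw percent agreement for chance. A minimal sketch with hypothetical fixation-location labels (the category names are invented, not the study's coding scheme):

    ```python
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Cohen's kappa for two raters labelling the same items."""
        n = len(rater_a)
        # Observed agreement: proportion of items given identical labels.
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        # Chance agreement expected from each rater's marginal label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical fixation-location labels for 10 fixation still images.
    a = ["path", "path", "person", "object", "path", "person", "path", "object", "path", "person"]
    b = ["path", "path", "person", "object", "path", "path", "path", "object", "path", "person"]
    kappa = cohens_kappa(a, b)   # 9/10 raw agreement
    ```

    Values around 0.8, as in both the PD and HC groups here, are conventionally read as strong agreement.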

  13. Alcohol and disorientation-related responses. IV, Effects of different alcohol dosages and display illumination on tracking performance during vestibular stimulation.

    DOT National Transportation Integrated Search

    1971-07-01

    A previous CAMI laboratory investigation showed that alcohol impairs the ability of men to suppress vestibular nystagmus while visually fixating on a cockpit instrument, thus degrading visual tracking performance (eye-hand coordination) during angula...

  14. Language-Mediated Eye Movements in the Absence of a Visual World: The "Blank Screen Paradigm"

    ERIC Educational Resources Information Center

    Altmann, Gerry T. M.

    2004-01-01

    The "visual world paradigm" typically involves presenting participants with a visual scene and recording eye movements as they either hear an instruction to manipulate objects in the scene or as they listen to a description of what may happen to those objects. In this study, participants heard each target sentence only after the corresponding…

  15. Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?

    PubMed Central

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114

  16. Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.

    2017-01-01

    In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073

  17. Contribution of malocclusion and female facial attractiveness to smile esthetics evaluated by eye tracking.

    PubMed

    Richards, Michael R; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Walther, Dirk B; Rosenstiel, Stephen; Sacksteder, James M

    2015-04-01

    There is disagreement in the literature concerning the importance of the mouth in overall facial attractiveness. Eye tracking provides an objective method to evaluate what people see. The objective of this study was to determine whether dental and facial attractiveness alters viewers' visual attention in terms of which area of the face (eyes, nose, mouth, chin, ears, or other) is viewed first, viewed the greatest number of times, and viewed for the greatest total time (duration) using eye tracking. Seventy-six viewers underwent 1 eye tracking session. Of these, 53 were white (49% female, 51% male). Their ages ranged from 18 to 29 years, with a mean of 19.8 years, and none were dental professionals. After being positioned and calibrated, they were shown 24 unique female composite images, each image shown twice for reliability. These images reflected a repaired unilateral cleft lip or 3 grades of dental attractiveness similar to those of grades 1 (near ideal), 7 (borderline treatment need), and 10 (definite treatment need) as assessed in the aesthetic component of the Index of Orthodontic Treatment Need (AC-IOTN). The images were then embedded in faces of 3 levels of attractiveness: attractive, average, and unattractive. During viewing, data were collected for the first location, frequency, and duration of each viewer's gaze. Observer reliability ranged from 0.58 to 0.92 (intraclass correlation coefficients) but was less than 0.07 (interrater) for the chin, which was eliminated from the study. Likewise, reliability for the area of first fixation was kappa less than 0.10 for both intrarater and interrater reliabilities; the area of first fixation was also removed from the data analysis. Repeated-measures analysis of variance showed a significant effect (P <0.001) for level of attractiveness by malocclusion by area of the face. 
For both number of fixations and duration of fixations, the eyes were overwhelmingly the most salient, with the mouth receiving the second most visual attention. At times, the mouth and the eyes were statistically indistinguishable in viewers' fixation frequency and duration. As the dental attractiveness decreased, the visual attention increased on the mouth, approaching that of the eyes. AC-IOTN grade 10 gained the most attention, followed by both AC-IOTN grade 7 and the cleft. AC-IOTN grade 1 received the least amount of visual attention. Also, lower dental attractiveness (AC-IOTN 7 and AC-IOTN 10) received more visual attention as facial attractiveness increased. Eye tracking indicates that dental attractiveness can alter the level of visual attention depending on the female models' facial attractiveness when viewed by laypersons. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  18. Searching for unity: Real-world versus item-based visual search in age-related eye disease.

    PubMed

    Crabb, David P; Taylor, Deanna J

    2017-01-01

    When studying visual search, item-based approaches using synthetic targets and distractors limit the real-world applicability of results. Everyday visual search can be impaired in patients with common eye diseases like glaucoma and age-related macular degeneration. We highlight some results in the literature that suggest assessment of real-world search tasks in these patients could be clinically useful.

  19. Reproducibility of retinal nerve fiber layer thickness measures using eye tracking in children with nonglaucomatous optic neuropathy.

    PubMed

    Rajjoub, Raneem D; Trimboli-Heidler, Carmelina; Packer, Roger J; Avery, Robert A

    2015-01-01

    To determine the intra- and intervisit reproducibility of circumpapillary retinal nerve fiber layer (RNFL) thickness measures using eye tracking-assisted spectral-domain optical coherence tomography (SD OCT) in children with nonglaucomatous optic neuropathy. Prospective longitudinal study. Circumpapillary RNFL thickness measures were acquired with SD OCT using the eye-tracking feature at 2 separate study visits. Children with normal and abnormal vision (visual acuity ≥ 0.2 logMAR above normal and/or visual field loss) who demonstrated clinical and radiographic stability were enrolled. Intra- and intervisit reproducibility was calculated for the global average and 9 anatomic sectors by calculating the coefficient of variation and intraclass correlation coefficient. Forty-two subjects (median age 8.6 years, range 3.9-18.2 years) met inclusion criteria and contributed 62 study eyes. Both the abnormal and normal vision cohort demonstrated the lowest intravisit coefficient of variation for the global RNFL thickness. Intervisit reproducibility remained good for those with normal and abnormal vision, although small but statistically significant increases in the coefficient of variation were observed for multiple anatomic sectors in both cohorts. The magnitude of visual acuity loss was significantly associated with the global (β = 0.026, P < .01) and temporal sector coefficient of variation (β = 0.099, P < .01). SD OCT with eye tracking demonstrates highly reproducible RNFL thickness measures. Subjects with vision loss demonstrate greater intra- and intervisit variability than those with normal vision. Copyright © 2015 Elsevier Inc. All rights reserved.
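    The two reproducibility statistics used above can be computed directly. A sketch with hypothetical RNFL thickness values (two visits per eye); the ICC here is the one-way random-effects form, ICC(1,1), which may differ from the exact variant the authors used:

    ```python
    import statistics

    def coefficient_of_variation(values):
        """Within-subject CV: SD as a fraction of the mean."""
        return statistics.stdev(values) / statistics.mean(values)

    def icc_1_1(measurements):
        """One-way random-effects ICC(1,1); `measurements` is a list of
        per-subject lists, each holding k repeated measures."""
        n, k = len(measurements), len(measurements[0])
        grand = statistics.mean(v for row in measurements for v in row)
        subject_means = [statistics.mean(row) for row in measurements]
        # Between- and within-subject mean squares from a one-way ANOVA.
        ms_between = k * sum((m - grand) ** 2 for m in subject_means) / (n - 1)
        ms_within = sum((v - statistics.mean(row)) ** 2
                        for row in measurements for v in row) / (n * (k - 1))
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

    # Hypothetical global RNFL thickness (micrometres), two visits per eye.
    repeats = [[96.0, 95.5], [84.0, 84.6], [101.2, 100.8], [77.5, 78.1]]
    cvs = [coefficient_of_variation(r) for r in repeats]
    icc = icc_1_1(repeats)
    ```

    Low CVs (well under 1% here) together with a high ICC indicate that between-eye differences dominate visit-to-visit noise, which is the pattern the study reports.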

  20. Video-based eye tracking for neuropsychiatric assessment.

    PubMed

    Adhikari, Sam; Stark, David E

    2017-01-01

    This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.

  1. Pilots' visual scan patterns and situation awareness in flight operations.

    PubMed

    Yu, Chung-San; Wang, Eric Min-Yang; Li, Wen-Chin; Braithwaite, Graham

    2014-07-01

    Situation awareness (SA) is considered an essential prerequisite for safe flying. If the impact of visual scanning patterns on a pilot's situation awareness could be identified in flight operations, then eye-tracking tools could be integrated with flight simulators to improve training efficiency. Participating in this research were 18 qualified, mission-ready fighter pilots. The equipment included high-fidelity and fixed-base type flight simulators and mobile head-mounted eye-tracking devices to record a subject's eye movements and SA while performing air-to-surface tasks. There were significant differences in pilots' percentage of fixation in three operating phases: preparation (M = 46.09, SD = 14.79), aiming (M = 24.24, SD = 11.03), and release and break-away (M = 33.98, SD = 14.46). Also, there were significant differences in pilots' pupil sizes, which were largest in the aiming phase (M = 27,621, SD = 6390.8), followed by release and break-away (M = 27,173, SD = 5830.46), then preparation (M = 25,710, SD = 6078.79), which was the smallest. Furthermore, pilots with better SA performance showed lower perceived workload (M = 30.60, SD = 17.86), and pilots with poor SA performance showed higher perceived workload (M = 60.77, SD = 12.72). Pilots' percentage of fixation and average fixation duration among five different areas of interest showed significant differences as well. Eye-tracking devices can aid in capturing pilots' visual scan patterns and SA performance in a way that traditional flight simulators cannot. Therefore, integrating eye-tracking devices into the simulator may be a useful method for promoting SA training in flight operations, and can provide in-depth understanding of the mechanism of visual scan patterns and information processing to improve training effectiveness in aviation.

  2. Effect of glaucoma on eye movement patterns and laboratory-based hazard detection ability

    PubMed Central

    Black, Alex A.; Wood, Joanne M.

    2017-01-01

    Purpose The mechanisms underlying the elevated crash rates of older drivers with glaucoma are poorly understood. A key driving skill is timely detection of hazards; however, the hazard detection ability of drivers with glaucoma has been largely unexplored. This study assessed the eye movement patterns and visual predictors of performance on a laboratory-based hazard detection task in older drivers with glaucoma. Methods Participants included 30 older drivers with glaucoma (71±7 years; average better-eye mean deviation (MD) = −3.1±3.2 dB; average worse-eye MD = −11.9±6.2 dB) and 25 age-matched controls (72±7 years). Visual acuity, contrast sensitivity, visual fields, useful field of view (UFoV; processing speeds), and motion sensitivity were assessed. Participants completed a computerised Hazard Perception Test (HPT) while their eye movements were recorded using a desk-mounted Tobii TX300 eye-tracking system. The HPT comprises a series of real-world traffic videos recorded from the driver’s perspective; participants responded to road hazards appearing in the videos, and hazard response times were determined. Results Participants with glaucoma exhibited an average of 0.42 seconds delay in hazard response time (p = 0.001), smaller saccades (p = 0.010), and delayed first fixation on hazards (p<0.001) compared to controls. Importantly, larger saccades were associated with faster hazard responses in the glaucoma group (p = 0.004), but not in the control group (p = 0.19). Across both groups, significant visual predictors of hazard response times included motion sensitivity, UFoV, and worse-eye MD (p<0.05). Conclusions Older drivers with glaucoma had delayed hazard response times compared to controls, with associated changes in eye movement patterns. The association between larger saccades and faster hazard response time in the glaucoma group may represent a compensatory behaviour to facilitate improved performance. PMID:28570621

  3. Covert enaction at work: Recording the continuous movements of visuospatial attention to visible or imagined targets by means of Steady-State Visual Evoked Potentials (SSVEPs).

    PubMed

    Gregori Grgič, Regina; Calore, Enrico; de'Sperati, Claudio

    2016-01-01

    Whereas overt visuospatial attention is customarily measured with eye tracking, covert attention is assessed by various methods. Here we exploited Steady-State Visual Evoked Potentials (SSVEPs) - the oscillatory responses of the visual cortex to incoming flickering stimuli - to record the movements of covert visuospatial attention in a way operatively similar to eye tracking (attention tracking), which allowed us to compare motion observation and motion extrapolation with and without eye movements. Observers fixated a central dot and covertly tracked a target oscillating horizontally and sinusoidally. In the background, the left and the right halves of the screen flickered at two different frequencies, generating two SSVEPs in occipital regions whose size varied reciprocally as observers attended to the moving target. The two signals were combined into a single quantity that was modulated at the target frequency in a quasi-sinusoidal way, often clearly visible in single trials. The modulation continued almost unchanged when the target was switched off and observers mentally extrapolated its motion in imagery, and also when observers pointed their finger at the moving target during covert tracking, or imagined doing so. The amplitude of modulation during covert tracking was ∼25-30% of that measured when observers followed the target with their eyes. We used 4 electrodes in parieto-occipital areas, but similar results were achieved with a single electrode in Oz. In a second experiment we tested ramp and step motion. During overt tracking, SSVEPs were remarkably accurate, showing both saccadic-like and smooth pursuit-like modulations of cortical responsiveness, although during covert tracking the modulation deteriorated. Covert tracking was better with sinusoidal motion than ramp motion, and better with moving targets than stationary ones. 
The clear modulation of cortical responsiveness recorded during both overt and covert tracking, identical for motion observation and motion extrapolation, suggests that covert attention movements should be incorporated into enactive theories of mental imagery. Copyright © 2015 Elsevier Ltd. All rights reserved.
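
    The signal combination described above can be illustrated with a minimal frequency-tagging sketch. This is not the authors' code: the single-bin Fourier amplitude estimate, the example flicker frequencies, and the (L - R)/(L + R) combination rule are assumptions standing in for whatever exact combination the study used.

```python
import math

# Illustrative reconstruction (not the published analysis): estimate the
# amplitude of each flicker-tagged SSVEP with a single-frequency discrete
# Fourier coefficient, then combine the two amplitudes into one index.
def ssvep_amplitude(signal, freq, fs):
    """Amplitude of the component at `freq` Hz in `signal` sampled at `fs` Hz."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return 2.0 * math.hypot(re, im) / n

def attention_index(signal, f_left, f_right, fs):
    """(L - R) / (L + R): positive when the left half-field's flicker dominates
    the occipital response, negative when the right one does."""
    left = ssvep_amplitude(signal, f_left, fs)
    right = ssvep_amplitude(signal, f_right, fs)
    return (left - right) / (left + right)
```

    Attending to the left half-field strengthens the SSVEP at that half-field's flicker frequency and drives the index positive; as covert attention follows the oscillating target, such an index is modulated quasi-sinusoidally at the target frequency.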

  4. Eye Movements Reveal the Dynamic Simulation of Speed in Language

    ERIC Educational Resources Information Center

    Speed, Laura J.; Vigliocco, Gabriella

    2014-01-01

    This study investigates how speed of motion is processed in language. In three eye-tracking experiments, participants were presented with visual scenes and spoken sentences describing fast or slow events (e.g., "The lion ambled/dashed to the balloon"). Results showed that looking time to relevant objects in the visual scene was affected…

  5. A software module for implementing auditory and visual feedback on a video-based eye tracking system

    NASA Astrophysics Data System (ADS)

    Rosanlall, Bharat; Gertner, Izidor; Geri, George A.; Arrington, Karl F.

    2016-05-01

    We describe here the design and implementation of a software module that provides both auditory and visual feedback of the eye position measured by a commercially available eye tracking system. The present audio-visual feedback module (AVFM) serves as an extension to the Arrington Research ViewPoint EyeTracker, but it can be easily modified for use with other similar systems. Two modes of audio feedback and one mode of visual feedback are provided in reference to a circular area-of-interest (AOI). Auditory feedback can be either a click tone emitted when the user's gaze point enters or leaves the AOI, or a sinusoidal waveform with frequency inversely proportional to the distance from the gaze point to the center of the AOI. Visual feedback is in the form of a small circular light patch that is presented whenever the gaze-point is within the AOI. The AVFM processes data that are sent to a dynamic-link library by the EyeTracker. The AVFM's multithreaded implementation also allows real-time data collection (1 kHz sampling rate) and graphics processing that allow display of the current/past gaze-points as well as the AOI. The feedback provided by the AVFM described here has applications in military target acquisition and personnel training, as well as in visual experimentation, clinical research, marketing research, and sports training.
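
    The feedback mapping described above can be sketched in a few lines. This is an illustrative reconstruction, not the AVFM itself (the real module is a dynamic-link-library extension of the ViewPoint EyeTracker); the class name, the 880 Hz base tone, and the 1/(1 + d) falloff are assumptions.

```python
import math

# Hypothetical sketch of the audio-visual feedback logic; names and constants
# are illustrative, not taken from the ViewPoint EyeTracker API.
class AudioVisualFeedback:
    def __init__(self, aoi_center, aoi_radius, base_freq=880.0):
        self.cx, self.cy = aoi_center   # AOI center in screen coordinates
        self.radius = aoi_radius        # radius of the circular area-of-interest
        self.base_freq = base_freq      # tone frequency at the AOI center (Hz)
        self.inside = False             # tracks enter/leave events for click tones

    def gaze_in_aoi(self, x, y):
        """Visual-feedback condition: light patch shown while gaze is in the AOI."""
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

    def tone_frequency(self, x, y):
        """Sinusoidal mode: frequency inversely proportional to the distance
        between the gaze point and the AOI center."""
        d = math.hypot(x - self.cx, y - self.cy)
        return self.base_freq / (1.0 + d)

    def update(self, x, y):
        """Click-tone mode: returns 'click' only when the gaze point crosses
        the AOI boundary (entering or leaving), else None."""
        now_inside = self.gaze_in_aoi(x, y)
        event = "click" if now_inside != self.inside else None
        self.inside = now_inside
        return event
```

    In the actual module, something like `tone_frequency` would drive a continuously updated sinusoidal waveform and `update` would trigger the click tone, evaluated at the tracker's 1 kHz sampling rate.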

  6. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  7. Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements

    ERIC Educational Resources Information Center

    Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen

    2009-01-01

    Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…

  8. Using eye tracking to identify faking attempts during penile plethysmography assessment.

    PubMed

    Trottier, Dominique; Rouleau, Joanne-Lucine; Renaud, Patrice; Goyette, Mathieu

    2014-01-01

    Penile plethysmography (PPG) is considered the most rigorous method for sexual interest assessment. Nevertheless, it is subject to faking attempts by participants, which compromises the internal validity of the instrument. To date, various attempts have been made to limit voluntary control of sexual response during PPG assessments, without satisfactory results. This exploratory research examined eye-tracking technologies' ability to identify the presence of cognitive strategies responsible for erectile inhibition during PPG assessment. Eye movements and penile responses for 20 subjects were recorded while exploring animated human-like computer-generated stimuli in a virtual environment under three distinct viewing conditions: (a) the free visual exploration of a preferred sexual stimulus without erectile inhibition; (b) the viewing of a preferred sexual stimulus with erectile inhibition; and (c) the free visual exploration of a non-preferred sexual stimulus. Results suggest that attempts to control erectile responses generate specific eye-movement variations, characterized by a general deceleration of the exploration process and limited exploration of the erogenous zone. Findings indicate that recording eye movements can provide significant information on the presence of competing covert processes responsible for erectile inhibition. The use of eye-tracking technologies during PPG could therefore lead to improved internal validity of the plethysmographic procedure.

  9. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  10. Rett syndrome: basic features of visual processing-a pilot study of eye-tracking.

    PubMed

    Djukic, Aleksandra; Valicenti McDermott, Maria; Mavrommatis, Kathleen; Martins, Cristina L

    2012-07-01

    The consistently observed "strong eye gaze" of girls with Rett syndrome has not been validated as a means of communication; these girls are ubiquitously affected by apraxia and unable to reply either verbally or manually to questions during formal psychological assessment. We examined nonverbal cognitive abilities and basic features of visual processing (visual discrimination, attention/memory) by analyzing patterns of visual fixation in 44 girls with Rett syndrome, compared with typical control subjects. To determine features of visual fixation patterns, multiple pictures (with the location of the salient stimulus and the presence/absence of novel stimuli as variables) were presented on the screen of a TS120 eye-tracker. Of the 44 girls, 35 (80%) calibrated and exhibited meaningful patterns of visual fixation. They looked longer at salient stimuli (cartoon, 2.8 ± 2 seconds S.D., vs shape, 0.9 ± 1.2 seconds S.D.; P = 0.02), regardless of their position on the screen. They recognized novel stimuli, decreasing the fixation time on the central image when another image appeared on the periphery of the slide (2.7 ± 1 seconds S.D. vs 1.8 ± 1 seconds S.D., P = 0.002). Eye-tracking provides a feasible method for cognitive assessment and new insights into the "hidden" abilities of individuals with Rett syndrome. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Cultural and Species Differences in Gazing Patterns for Marked and Decorated Objects: A Comparative Eye-Tracking Study

    PubMed Central

    Mühlenbeck, Cordelia; Jacobsen, Thomas; Pritsch, Carla; Liebal, Katja

    2017-01-01

    Objects from the Middle Paleolithic period colored with ochre and marked with incisions represent the beginning of non-utilitarian object manipulation in different species of the Homo genus. To investigate the visual effects caused by these markings, we compared humans who have different cultural backgrounds (Namibian hunter–gatherers and German city dwellers) to one species of non-human great apes (orangutans) with respect to their perceptions of markings on objects. We used eye-tracking to analyze their fixation patterns and the durations of their fixations on marked and unmarked stones and sticks. In an additional test, humans evaluated the objects regarding their aesthetic preferences. Our hypotheses were that colorful markings help an individual to structure the surrounding world by making certain features of the environment salient, and that aesthetic appreciation should be associated with this structuring. Our results showed that humans fixated on the marked objects longer and used them in the structural processing of the objects and their background, but did not consistently report finding them more beautiful. Orangutans, in contrast, did not distinguish between object and background in their visual processing and did not clearly fixate longer on the markings. Our results suggest that marking behavior is characteristic of humans and evolved as an attention-directing rather than an aesthetic benefit. PMID:28167923

  12. Exploring What's Missing: What Do Target Absent Trials Reveal about Autism Search Superiority?

    ERIC Educational Resources Information Center

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of…

  13. Peer Assessment of Webpage Design: Behavioral Sequential Analysis Based on Eye-Tracking Evidence

    ERIC Educational Resources Information Center

    Hsu, Ting-Chia; Chang, Shao-Chen; Liu, Nan-Cen

    2018-01-01

    This study employed an eye-tracking machine to record the process of peer assessment. Each web page was divided into several regions of interest (ROIs) based on the frame design and content. A total of 49 undergraduate students with a visual learning style participated in the experiment. This study investigated the peer assessment attitudes of the…

  14. Procedural Learning and Associative Memory Mechanisms Contribute to Contextual Cueing: Evidence from fMRI and Eye-Tracking

    ERIC Educational Resources Information Center

    Manelis, Anna; Reder, Lynne M.

    2012-01-01

    Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate…

  15. Measuring and tracking eye movements of a behaving archer fish by real-time stereo vision.

    PubMed

    Ben-Simon, Avi; Ben-Shahar, Ohad; Segev, Ronen

    2009-11-15

    The archer fish (Toxotes chatareus) exhibits unique visual behavior: it aims at and shoots down, with a squirt of water, insects resting on the foliage above the water level, and then feeds on them. This extreme behavior requires excellent visual acuity, learning, and tight synchronization between the visual system and body motion. This behavior also raises many important questions, such as the fish's ability to compensate for air-water refraction and the neural mechanisms underlying target acquisition. While many such questions remain open, significant insights towards solving them can be obtained by tracking the eye and body movements of freely behaving fish. Unfortunately, existing tracking methods suffer from either a high level of invasiveness or low resolution. Here, we present a video-based eye tracking method for accurately and remotely measuring the eye and body movements of a freely moving behaving fish. Based on a stereo vision system and a unique triangulation method that corrects for air-glass-water refraction, we are able to measure a full three-dimensional pose of the fish eye and body with high temporal and spatial resolution. Our method, being generic, can be applied to studying the behavior of marine animals in general. We demonstrate how data collected by our method may be used to show that the hunting behavior of the archer fish is composed of surfacing concomitant with rotating the body around the direction of the fish's fixed gaze towards the target, until the snout reaches the correct shooting position at water level.
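
    The air-water correction at the core of such a triangulation can be illustrated with Snell's law. This is a minimal sketch under assumed refractive indices, ignoring the small lateral ray offset contributed by the thin glass wall of the tank; it is not the authors' implementation.

```python
import math

# Assumed refractive indices; a flat, thin glass pane between parallel
# air and water interfaces only offsets the ray, so the dominant effect
# is the air-to-water refraction: n_air*sin(a) = n_water*sin(w).
N_AIR, N_WATER = 1.000, 1.333

def refracted_angle(theta_air_deg):
    """Underwater ray angle (degrees from the surface normal) for a camera
    ray that meets the water surface at theta_air_deg."""
    s = (N_AIR / N_WATER) * math.sin(math.radians(theta_air_deg))
    return math.degrees(math.asin(s))

def apparent_depth(true_depth):
    """Near-normal viewing: an object at true_depth appears raised to about
    true_depth * n_air / n_water, which is why a triangulation without the
    refraction correction would misplace the fish's eye."""
    return true_depth * (N_AIR / N_WATER)
```

    A refraction-aware triangulator would trace each camera ray to the surface, bend it with `refracted_angle`, and intersect the bent rays underwater instead of the straight ones.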

  16. Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Krauzlis, Rich; Stone, Leland; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    When viewing objects, primates use a combination of saccadic and pursuit eye movements to stabilize the retinal image of the object of regard within the high-acuity region near the fovea. Although these movements involve widespread regions of the nervous system, they mix seamlessly in normal behavior. Saccades are discrete movements that quickly direct the eyes toward a visual target, thereby translating the image of the target from an eccentric retinal location to the fovea. In contrast, pursuit is a continuous movement that slowly rotates the eyes to compensate for the motion of the visual target, minimizing the blur that can compromise visual acuity. While other mammalian species can generate smooth optokinetic eye movements - which track the motion of the entire visual surround - only primates can smoothly pursue a single small element within a complex visual scene, regardless of the motion elsewhere on the retina. This ability likely reflects the greater ability of primates to segment the visual scene, to identify individual visual objects, and to select a target of interest.

  17. Measure and Analysis of a Gaze Position Using Infrared Light Technique

    DTIC Science & Technology

    2001-10-25

    Z. Ramdane-Cherif, A. Naït-Ali, J. F. Motsch, M. O. Krebs, INSERM E 01-17 ... also proposes a method to correct head movements. Keywords: eye movement, gaze tracking, visual scan path, spatial mapping. INTRODUCTION: The eye gaze ... tracking has been used for clinical purposes to detect illnesses, such as nystagmus, unusual eye movements and many others [1][2][3]. It is also used

  18. Control of gaze in natural environments: effects of rewards and costs, uncertainty and memory in target selection.

    PubMed

    Hayhoe, Mary M; Matthis, Jonathan Samir

    2018-08-06

    The development of better eye and body tracking systems, and more flexible virtual environments have allowed more systematic exploration of natural vision and contributed a number of insights. In natural visually guided behaviour, humans make continuous sequences of sensory-motor decisions to satisfy current goals, and the role of vision is to provide the relevant information in order to achieve those goals. This paper reviews the factors that control gaze in natural visually guided actions such as locomotion, including the rewards and costs associated with the immediate behavioural goals, uncertainty about the state of the world and prior knowledge of the environment. These general features of human gaze control may inform the development of artificial systems.

  19. Eye movements to audiovisual scenes reveal expectations of a just world.

    PubMed

    Callan, Mitchell J; Ferguson, Heather J; Bindemann, Markus

    2013-02-01

    When confronted with bad things happening to good people, observers often engage reactive strategies, such as victim derogation, to maintain a belief in a just world. Although such reasoning is usually made retrospectively, we investigated the extent to which knowledge of another person's good or bad behavior can also bias people's online expectations for subsequent good or bad outcomes. Using a fully crossed design, participants listened to auditory scenarios that varied in terms of whether the characters engaged in morally good or bad behavior while their eye movements were tracked around concurrent visual scenes depicting good and bad outcomes. We found that the good (bad) behavior of the characters influenced gaze preferences for good (bad) outcomes just prior to the actual outcomes being revealed. These findings suggest that beliefs about a person's moral worth encourage observers to foresee a preferred deserved outcome as the event unfolds. We include evidence to show that this effect cannot be explained in terms of affective priming or matching strategies. © 2013 APA, all rights reserved.

  20. The Potential Utility of Eye Movements in the Detection and Characterization of Everyday Functional Difficulties in Mild Cognitive Impairment.

    PubMed

    Seligman, Sarah C; Giovannetti, Tania

    2015-06-01

    Mild cognitive impairment (MCI) refers to the intermediate period between the typical cognitive decline of normal aging and more severe decline associated with dementia, and it is associated with greater risk for progression to dementia. Research has suggested that functional abilities are compromised in MCI, but the degree of impairment and underlying mechanisms remain poorly understood. The development of sensitive measures to assess subtle functional decline poses a major challenge for characterizing functional limitations in MCI. Eye-tracking methodology has been used to describe visual processes in everyday, naturalistic action among healthy older adults as well as several case studies of severely impaired individuals, and it has successfully differentiated healthy older adults from those with MCI on specific visual tasks. These studies highlight the promise of eye-tracking technology as a method to characterize subtle functional decline in MCI. However, to date no studies have examined visual behaviors during completion of naturalistic tasks in MCI. This review describes the current understanding of functional ability in MCI, summarizes findings of eye-tracking studies in healthy individuals, severe impairment, and MCI, and presents future research directions to aid with early identification and prevention of functional decline in disorders of aging.

  1. Integrated framework for developing search and discrimination metrics

    NASA Astrophysics Data System (ADS)

    Copeland, Anthony C.; Trivedi, Mohan M.

    1997-06-01

    This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.

  2. Measuring social attention and motivation in autism spectrum disorder using eye-tracking: Stimulus type matters.

    PubMed

    Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T

    2015-10-01

    Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on one task only-the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Keeping an eye on pain: investigating visual attention biases in individuals with chronic pain using eye-tracking methodology

    PubMed Central

    Fashler, Samantha R; Katz, Joel

    2016-01-01

    Attentional biases to painful stimuli are evident in individuals with chronic pain, although the directional tendency of these biases (ie, toward or away from threat-related stimuli) remains unclear. This study used eye-tracking technology, a measure of visual attention, to evaluate the attentional patterns of individuals with and without chronic pain during exposure to injury-related and neutral pictures. Individuals with (N=51) and without chronic pain (N=62) completed a dot-probe task using injury-related and neutral pictures while their eye movements were recorded. Mixed-design analysis of variance evaluated the interaction between group (chronic pain, pain-free) and picture type (injury-related, neutral). Reaction time results showed that regardless of chronic pain status, participants responded faster to trials with neutral stimuli in comparison to trials that included injury-related pictures. Eye-tracking measures showed within-group differences whereby injury-related pictures received more frequent fixations and visits, as well as longer average visit durations. Between-group differences showed that individuals with chronic pain had fewer fixations and shorter average visit durations for all stimuli. An examination of how biases change over the time-course of stimulus presentation showed that during the late phase of attention, individuals with chronic pain had longer average gaze durations on injury pictures relative to pain-free individuals. The results show the advantage of incorporating eye-tracking methodology when examining attentional biases, and suggest future avenues of research. PMID:27570461

  4. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  5. Visualization of Data Regarding Infections Using Eye Tracking Techniques

    PubMed Central

    Yoon, Sunmoo; Cohen, Bevin; Cato, Kenrick D.; Liu, Jianfang; Larson, Elaine L.

    2016-01-01

    Objective: To evaluate ease of use and usefulness for nurses of visualizations of infectious disease transmission in a hospital. Design: An observational study was used to evaluate perceptions of several visualizations of data extracted from electronic health records designed using a participatory approach. Twelve nurses in the master’s program in an urban research-intensive nursing school participated in May 2015. Methods: A convergent parallel mixed method was used to evaluate nurses’ perceptions of the ease of use and usefulness of five visualizations conveying trends in hospital infection transmission, applying think-aloud, interview, and eye-tracking techniques. Findings: Subjective data from the interview and think-aloud techniques indicated that participants preferred the traditional line graphs for simple data representation due to their familiarity, clarity, and ease of reading. An objective quantitative measure of eye movement analysis (444,421 gaze events) identified a high degree of participant attention to the infographics in all three scenarios. All participants responded with the correct answer within 1 min in comprehension tests. Conclusions: A user-centric approach was effective in developing and evaluating visualizations for hospital infection transmission. For the visualizations designed by the users, the participants were easily able to comprehend the infection visualizations on both line graphs and infographics for simple visualization. The findings from the objective comprehension test, eye movements, and subjective attitudes support the feasibility of integrating user-centric visualization designs into electronic health records, which may inspire clinicians to be mindful of hospital infection transmission. Future studies are needed to investigate visualizations and motivation, and the effectiveness of visualization on infection rate. Clinical Relevance: This study designed visualization images using clinical data from electronic health records applying a user-centric approach. The design insights can be applied to visualizing patient data in electronic health records. PMID:27061619

  7. Different Visual Preference Patterns in Response to Simple and Complex Dynamic Social Stimuli in Preschool-Aged Children with Autism Spectrum Disorders

    PubMed Central

    Shi, Lijuan; Zhou, Yuanyue; Ou, Jianjun; Gong, Jingbo; Wang, Suhong; Cui, Xilong; Lyu, Hailong; Zhao, Jingping; Luo, Xuerong

    2015-01-01

    Eye-tracking studies in young children with autism spectrum disorder (ASD) have shown a visual attention preference for geometric patterns when viewing paired dynamic social images (DSIs) and dynamic geometric images (DGIs). In the present study, eye-tracking of two different paired presentations of DSIs and DGIs was monitored in a group of 13 children aged 4 to 6 years with ASD and 20 chronologically age-matched typically developing children (TDC). The results indicated that compared with the control group, children with ASD attended significantly less to DSIs showing two or more children playing than to similar DSIs showing a single child. Visual attention preference in 4- to 6-year-old children with ASDs, therefore, appears to be modulated by the type of visual stimuli. PMID:25781170

  8. Vestibulo-Cervico-Ocular Responses and Tracking Eye Movements after Prolonged Exposure to Microgravity

    NASA Technical Reports Server (NTRS)

    Kornilova, L. N.; Naumov, I. A.; Azarov, K. A.; Sagalovitch, S. V.; Reschke, Millard F.; Kozlovskaya, I. B.

    2007-01-01

    The vestibular function and tracking eye movements were investigated in 12 Russian crew members of ISS missions on days 1(2), 4(5-6), and 8(9-10) after prolonged exposure to microgravity (126 to 195 days). The spontaneous oculomotor activity, static torsional otolith-cervico-ocular reflex, dynamic vestibulo-cervico-ocular responses, vestibular reactivity, tracking eye movements, and gaze-holding were studied using videooculography (VOG) and electrooculography (EOG) for parallel eye movement recording. On post-flight days 1-2 (R+1-2) some cosmonauts demonstrated: - increased spontaneous oculomotor activity (floating eye movements, spontaneous nystagmus of typical and atypical form, square wave jerks, gaze nystagmus) with the head held in the vertical position; - suppressed otolith function (absent, or reduced by half in amplitude, torsional compensatory eye counter-rolling) with the head inclined statically 30° right- or leftward; - increased vestibular reactivity (lowered threshold and increased intensity of vestibular nystagmus) during head turns around the longitudinal body axis at 0.125 Hz; - a significant change in the accuracy, velocity, and temporal characteristics of eye tracking. The pattern, depth, dynamics, and velocity of recovery of the vestibular function and tracking eye movements varied across individual participants. However, there were also regular responses during readaptation to normal gravity: - suppression of otolith function was typically accompanied by exaggerated vestibular reactivity; - the structure of visual tracking (the accuracy of fixational eye rotations, smooth tracking, and gaze-holding) was disturbed (the appearance of correcting saccades, the transition of smooth tracking to saccadic tracking) only in those cosmonauts who, in parallel with increased reactivity of the vestibular input, also had central changes in the oculomotor system (spontaneous nystagmus, gaze nystagmus).

  9. Predictors of Verb-Mediated Anticipatory Eye Movements in the Visual World

    ERIC Educational Resources Information Center

    Hintz, Florian; Meyer, Antje S.; Huettig, Falk

    2017-01-01

    Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we…

  10. Binocular eye movement control and motion perception: what is being tracked?

    PubMed

    van der Steen, Johannes; Dits, Joyce

    2012-10-19

    We investigated under what conditions humans can make independent slow-phase eye movements. The ability to move the two eyes independently is generally attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans can also move the eyes in different directions. To maintain binocular retinal correspondence, independent slow-phase movements of each eye are produced. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and also resulted in a conjugate oblique motion of the eyes. We propose that the ability to make independent slow-phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are both used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information drives tracking and whether independent slow-phase movements of each eye are produced.

  11. Binocular Eye Movement Control and Motion Perception: What Is Being Tracked?

    PubMed Central

    van der Steen, Johannes; Dits, Joyce

    2012-01-01

    Purpose. We investigated under what conditions humans can make independent slow-phase eye movements. The ability to move the two eyes independently is generally attributed to a few specialized lateral-eyed animal species, for example chameleons. In our study, we showed that humans can also move the eyes in different directions. To maintain binocular retinal correspondence, independent slow-phase movements of each eye are produced. Methods. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal directions. Results. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion for the independence of the movements of the two eyes was investigated with anti-correlated stimuli. The global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion and also resulted in a conjugate oblique motion of the eyes. Conclusions. We propose that the ability to make independent slow-phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are both used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information drives tracking and whether independent slow-phase movements of each eye are produced. PMID:22997286

  12. Sixteen-month-olds can use language to update their expectations about the visual world.

    PubMed

    Ganea, Patricia A; Fitch, Allison; Harris, Paul L; Kaldy, Zsuzsa

    2016-11-01

    The capacity to use language to form new representations and to revise existing knowledge is a crucial aspect of human cognition. Here we examined whether infants can use language to adjust their representation of a recently encoded scene. Using an eye-tracking paradigm, we asked whether 16-month-old infants (N=26; mean age=16;0 [months;days], range=14;15-17;15) can use language about an occluded event to inform their expectation about what the world will look like when the occluder is removed. We compared looking time to outcome scenes that matched the language input with looking time to those that did not. Infants looked significantly longer at the event outcome when the outcome did not match the language input, suggesting that they generated an expectation of the outcome based on that input alone. This effect was unrelated to infants' vocabulary size. Thus, using language to adjust expectations about the visual world is present at an early developmental stage even when language skills are rudimentary. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Eye-movements and Voice as Interface Modalities to Computer Systems

    NASA Astrophysics Data System (ADS)

    Farid, Mohsen M.; Murtagh, Fionn D.

    2003-03-01

    We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interfaces as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as an interface modality. While voice commands as a means of interacting with computers have been around for a number of years, integrating the vocal interface with a visual interface, in terms of detecting the user's eye movements through an eye-tracking device, is novel and promises to open horizons for new applications where a hand-mouse interface provides little or no apparent support for the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.

  14. Effects of speaker emotional facial expression and listener age on incremental sentence processing.

    PubMed

    Carminati, Maria Nella; Knoeferle, Pia

    2013-01-01

    We report two visual-world eye-tracking experiments that investigated how and with which time course emotional information from a speaker's face affects younger (N = 32, mean age = 23) and older (N = 32, mean age = 64) listeners' visual attention and language comprehension as they processed emotional sentences in a visual context. The age manipulation tested predictions by socio-emotional selectivity theory of a positivity effect in older adults. After viewing the emotional face of a speaker (happy or sad) on a computer display, participants were presented simultaneously with two pictures depicting opposite-valence events (positive and negative; IAPS database) while they listened to a sentence referring to one of the events. Participants' eye fixations on the pictures while processing the sentence were increased when the speaker's face was (vs. wasn't) emotionally congruent with the sentence. The enhancement occurred from the early stages of referential disambiguation and was modulated by age. For the older adults it was more pronounced with positive faces, and for the younger ones with negative faces. These findings demonstrate for the first time that emotional facial expressions, similarly to previously-studied speaker cues such as eye gaze and gestures, are rapidly integrated into sentence processing. They also provide new evidence for positivity effects in older adults during situated sentence processing.

  15. What Visual Information Do Children and Adults Consider while Switching between Tasks? Eye-Tracking Investigation of Cognitive Flexibility Development

    ERIC Educational Resources Information Center

    Chevalier, Nicolas; Blaye, Agnes; Dufau, Stephane; Lucenet, Joanna

    2010-01-01

    This study investigated the visual information that children and adults consider while switching or maintaining object-matching rules. Eye movements of 5- and 6-year-old children and adults were collected with two versions of the Advanced Dimensional Change Card Sort, which requires switching between shape- and color-matching rules. In addition to…

  16. Visual and non-visual motion information processing during pursuit eye tracking in schizophrenia and bipolar disorder.

    PubMed

    Trillenberg, Peter; Sprenger, Andreas; Talamo, Silke; Herold, Kirsten; Helmchen, Christoph; Verleger, Rolf; Lencer, Rebekka

    2017-04-01

    Despite many reports on visual processing deficits in psychotic disorders, studies are needed on the integration of visual and non-visual components of eye movement control to improve the understanding of sensorimotor information processing in these disorders. Non-visual inputs to eye movement control include prediction of future target velocity from extrapolation of past visual target movement and anticipation of future target movements. It is unclear whether non-visual input is impaired in patients with schizophrenia. We recorded smooth pursuit eye movements in 21 patients with schizophrenia spectrum disorder, 22 patients with bipolar disorder, and 24 controls. In a foveo-fugal ramp task, the target was either continuously visible or was blanked during movement. We determined peak gain (measuring overall performance), initial eye acceleration (measuring visually driven pursuit), deceleration after target extinction (measuring prediction), eye velocity drifts before onset of target visibility (measuring anticipation), and residual gain during blanking intervals (measuring anticipation and prediction). In both patient groups, initial eye acceleration was decreased and the ability to adjust eye acceleration to increasing target acceleration was impaired. In contrast, neither deceleration nor eye drift velocity was reduced in patients, implying unimpaired non-visual contributions to pursuit drive. Disturbances of eye movement control in psychotic disorders appear to be a consequence of deficits in sensorimotor transformation rather than a pure failure in adding cognitive contributions to pursuit drive in higher-order cortical circuits. More generally, this deficit might reflect a fundamental imbalance between processing external input and acting according to internal preferences.

  17. Is there an age-related positivity effect in visual attention? A comparison of two methodologies.

    PubMed

    Isaacowitz, Derek M; Wadlinger, Heather A; Goren, Deborah; Wilson, Hugh R

    2006-08-01

    Research suggests a positivity effect in older adults' memory for emotional material, but the evidence from the attentional domain is mixed. The present study combined 2 methodologies for studying preferences in visual attention, eye tracking, and dot-probe, as younger and older adults viewed synthetic emotional faces. Eye tracking most consistently revealed a positivity effect in older adults' attention, so that older adults showed preferential looking toward happy faces and away from sad faces. Dot-probe results were less robust, but in the same direction. Methodological and theoretical implications for the study of socioemotional aging are discussed. (c) 2006 APA, all rights reserved

  18. Eye tracking reveals the cost of switching between self and other perspectives in a visual perspective-taking task.

    PubMed

    Ferguson, Heather J; Apperly, Ian; Cane, James E

    2017-08-01

    Previous studies have shown that while people can rapidly and accurately compute their own and other people's visual perspectives, they experience difficulty ignoring the irrelevant perspective when the two perspectives differ. We used the "avatar" perspective-taking task to examine the mechanisms that underlie these egocentric (i.e., interference from their own perspective) and altercentric (i.e., interference from the other person's perspective) tendencies. Participants were eye-tracked as they verified the number of discs in a visual scene according to either their own or an on-screen avatar's perspective. Crucially in some trials the two perspectives were inconsistent (i.e., each saw a different number of discs), while in others they were consistent. To examine the effect of perspective switching, performance was compared for trials that were preceded with the same versus a different perspective cue. We found that altercentric interference can be reduced or eliminated when participants stick with their own perspective across consecutive trials. Our eye-tracking analyses revealed distinct fixation patterns for self and other perspective taking, suggesting that consistency effects in this paradigm are driven by implicit mentalizing of what others can see, and not automatic directional cues from the avatar.

  19. Effects of phencyclidine, secobarbital and diazepam on eye tracking in rhesus monkeys.

    PubMed

    Ando, K; Johanson, C E; Levy, D L; Yasillo, N J; Holzman, P S; Schuster, C R

    1983-01-01

    Rhesus monkeys were trained to track a moving disk using a procedure in which responses on a lever were reinforced with water delivery only when the disk, oscillating in a horizontal plane on a screen at a frequency of 0.4 Hz in a visual angle of 20 degrees, dimmed for a brief period. Pursuit eye movements were recorded by electrooculography (EOG). IM phencyclidine, secobarbital, and diazepam injections decreased the number of reinforced lever presses in a dose-related manner. Both secobarbital and diazepam produced episodic jerky-pursuit eye movements, while phencyclidine had no consistent effects on eye movements. Lever pressing was disrupted at doses which had little effect on the quality of smooth-pursuit eye movements in some monkeys. This separation was particularly pronounced with diazepam. The similarities of the drug effects on smooth-pursuit eye movements between the present study and human studies indicate that the present method using rhesus monkeys may be useful for predicting drug effects on eye tracking and oculomotor function in humans.
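The core measure in pursuit studies of this kind is pursuit gain, the ratio of eye velocity to target velocity for a sinusoidal target (0.4 Hz over a 20-degree visual angle here). A minimal sketch on synthetic position traces; the 500 Hz sampling rate, the 80% tracking velocity, and all names are illustrative assumptions, not taken from the study:

```python
import math

def pursuit_gain(eye_pos, target_pos, fs):
    """Estimate smooth-pursuit gain as the ratio of peak eye velocity
    to peak target velocity, from position traces (deg) sampled at fs (Hz)."""
    def peak_velocity(pos):
        # central-difference velocity in deg/s, then take the peak magnitude
        vel = [(pos[i + 1] - pos[i - 1]) * fs / 2.0 for i in range(1, len(pos) - 1)]
        return max(abs(v) for v in vel)
    return peak_velocity(eye_pos) / peak_velocity(target_pos)

# Synthetic example: 0.4 Hz target over +/-10 deg (a 20-deg visual angle),
# with the eye tracking the target at 80% of its velocity and a small lag.
fs = 500
t = [i / fs for i in range(2 * fs)]  # 2 s of samples
target = [10 * math.sin(2 * math.pi * 0.4 * x) for x in t]
eye = [0.8 * 10 * math.sin(2 * math.pi * 0.4 * (x - 0.05)) for x in t]
print(round(pursuit_gain(eye, target, fs), 2))  # ~0.8
```

A gain near 1.0 indicates accurate smooth pursuit; episodic jerky pursuit of the kind produced by secobarbital and diazepam shows up as reduced gain with interspersed catch-up saccades.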

  20. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    PubMed

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
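The two statistics reported in this record, a two-rater kappa for inter-rater agreement and Pearson's r between the two cameras' novelty-preference scores, are straightforward to compute. A minimal sketch (the classic two-rater, chance-corrected kappa; the example data are made up, not the study's):

```python
def kappa(r1, r2):
    """Two-rater agreement corrected for chance agreement."""
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n                 # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical codings by two raters (0 = left image viewed, 1 = right)
# and novelty-preference scores from the two recording methods.
print(round(kappa([0, 1, 0, 1, 1, 0], [0, 1, 1, 1, 1, 0]), 2))
print(round(pearson_r([0.62, 0.71, 0.55, 0.68], [0.60, 0.73, 0.57, 0.65]), 2))
```

Kappa discounts the agreement two raters would reach by guessing from their marginal frequencies, which is why it is preferred over raw percent agreement for manual frame-by-frame scoring.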

  1. Visual Attention Modeling for Stereoscopic Video: A Benchmark and Computational Model.

    PubMed

    Fang, Yuming; Zhang, Chi; Li, Jing; Lei, Jianjun; Perreira Da Silva, Matthieu; Le Callet, Patrick

    2017-10-01

    In this paper, we investigate the visual attention modeling for stereoscopic video from the following two aspects. First, we build one large-scale eye tracking database as the benchmark of visual attention modeling for stereoscopic video. The database includes 47 video sequences and their corresponding eye fixation data. Second, we propose a novel computational model of visual attention for stereoscopic video based on Gestalt theory. In the proposed model, we extract the low-level features, including luminance, color, texture, and depth, from discrete cosine transform coefficients, which are used to calculate feature contrast for the spatial saliency computation. The temporal saliency is calculated by the motion contrast from the planar and depth motion features in the stereoscopic video sequences. The final saliency is estimated by fusing the spatial and temporal saliency with uncertainty weighting, which is estimated by the laws of proximity, continuity, and common fate in Gestalt theory. Experimental results show that the proposed method outperforms the state-of-the-art stereoscopic video saliency detection models on our built large-scale eye tracking database and one other database (DML-ITRACK-3D).
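The fusion step described above, combining spatial and temporal saliency maps with uncertainty weights, can be sketched generically. This is a simple inverse-uncertainty weighting for illustration, not the authors' exact Gestalt-derived estimate:

```python
import numpy as np

def fuse_saliency(spatial, temporal, u_spatial, u_temporal):
    """Fuse two saliency maps of equal shape, weighting each channel by the
    inverse of its uncertainty so the less uncertain channel contributes more."""
    w_s = 1.0 / u_spatial
    w_t = 1.0 / u_temporal
    fused = (w_s * spatial + w_t * temporal) / (w_s + w_t)
    # normalize to [0, 1] for display and cross-model comparison
    fused -= fused.min()
    if fused.max() > 0:
        fused /= fused.max()
    return fused

# Hypothetical 8x8 saliency maps; a very uncertain temporal channel
# leaves the fused map dominated by the spatial channel.
spatial = np.random.rand(8, 8)
temporal = np.random.rand(8, 8)
fused = fuse_saliency(spatial, temporal, u_spatial=1.0, u_temporal=50.0)
```

In the paper's model the per-channel uncertainties come from the Gestalt laws of proximity, continuity, and common fate; any uncertainty estimate in the same role can be dropped into the weights.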

  2. Observers' cognitive states modulate how visual inputs relate to gaze control.

    PubMed

    Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G

    2016-09-01

    Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
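The canonical correlation analysis used here, relating a block of eye-movement measures to a block of visual-feature measures at fixations, can be computed by whitening each block and taking singular values of the cross-covariance. A minimal numpy sketch under that textbook formulation, not the authors' pipeline (the small ridge `eps` is an added assumption for numerical stability):

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-8):
    """Return the canonical correlations between column sets X and Y
    (rows = observations), largest first."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])   # within-block covariances
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n                               # cross-covariance

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.clip(np.linalg.svd(M, compute_uv=False), 0.0, 1.0)
```

Task differences of the kind reported would appear as reliably larger leading canonical correlations for search than for memorization or aesthetic judgment.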

  3. A Novel Eye-Tracking Method to Assess Attention Allocation in Individuals with and without Aphasia Using a Dual-Task Paradigm

    PubMed Central

    Heuer, Sabine; Hallowell, Brooke

    2015-01-01

    Numerous authors report that people with aphasia have greater difficulty allocating attention than people without neurological disorders. Studying how attention deficits contribute to language deficits is important. However, existing methods for indexing attention allocation in people with aphasia pose serious methodological challenges. Eye-tracking methods have great potential to address such challenges. We developed and assessed the validity of a new dual-task method incorporating eye tracking to assess attention allocation. Twenty-six adults with aphasia and 33 control participants completed auditory sentence comprehension and visual search tasks. To test whether the new method validly indexes well-documented patterns in attention allocation, demands were manipulated by varying task complexity in single- and dual-task conditions. Differences in attention allocation were indexed via eye-tracking measures. For all participants significant increases in attention allocation demands were observed from single- to dual-task conditions and from simple to complex stimuli. Individuals with aphasia had greater difficulty allocating attention with greater task demands. Relationships between eye-tracking indices of comprehension during single and dual tasks and standardized testing were examined. Results support the validity of the novel eye-tracking method for assessing attention allocation in people with and without aphasia. Clinical and research implications are discussed. PMID:25913549

  4. Testing of visual field with virtual reality goggles in manual and visual grasp modes.

    PubMed

    Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas

    2014-01-01

    Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.

  5. Prediction in the Processing of Repair Disfluencies: Evidence from the Visual-World Paradigm

    PubMed Central

    Lowder, Matthew W.; Ferreira, Fernanda

    2016-01-01

    Two visual-world eye-tracking experiments investigated the role of prediction in the processing of repair disfluencies (e.g., The chef reached for some salt uh I mean some ketchup…). Experiment 1 showed that listeners were more likely to fixate a critical distractor item (e.g., pepper) during the processing of repair disfluencies compared to the processing of coordination structures (e.g., …some salt and also some ketchup…). Experiment 2 replicated the findings of Experiment 1 for disfluency versus coordination constructions and also showed that the pattern of fixations to the critical distractor for disfluency constructions was similar to the fixation patterns for sentences employing contrastive focus (e.g., …not some salt but rather some ketchup…). The results suggest that similar mechanisms underlie the processing of repair disfluencies and contrastive focus, with listeners generating sets of entities that stand in semantic contrast to the reparandum in the case of disfluencies or the negated entity in the case of contrastive focus. PMID:26866657

  6. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    NASA Astrophysics Data System (ADS)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is highly important because misjudging the situation can cause severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve the accuracy with which newcomers and less experienced professionals detect pathological lymph nodes, it is essential to understand how surgical experts solve the relevant visual and recognition tasks. Using eye tracking, and especially the newly-developed attention landscape visualizations, it could be determined whether visualization options, for example 3D models instead of CT data, help increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other methods, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons exhibited a higher concentration of attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.

  7. A Data Model and Task Space for Data of Interest (DOI) Eye-Tracking Analyses.

    PubMed

    Jianu, Radu; Alam, Sayeed Safayet

    2018-03-01

    Eye-tracking data is traditionally analyzed by looking at where on a visual stimulus subjects fixate, or, to facilitate more advanced analyses, by using area-of-interests (AOI) defined onto visual stimuli. Recently, there is increasing interest in methods that capture what users are looking at rather than where they are looking. By instrumenting visualization code that transforms a data model into visual content, gaze coordinates reported by an eye-tracker can be mapped directly to granular data shown on the screen, producing temporal sequences of data objects that subjects viewed in an experiment. Such data collection, which is called gaze to object mapping (GTOM) or data-of-interest analysis (DOI), can be done reliably with limited overhead and can facilitate research workflows not previously possible. Our paper contributes to establishing a foundation of DOI analyses by defining a DOI data model and highlighting its differences to AOI data in structure and scale; by defining and exemplifying a space of DOI enabled tasks; by describing three concrete examples of DOI experimentation in three different domains; and by discussing immediate research challenges in creating a framework of visual support for DOI experimentation and analysis.
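The gaze-to-object mapping (GTOM) idea described above, resolving each gaze sample to the data object drawn at that screen position, reduces in its simplest form to a hit test against the bounding boxes the instrumented visualization code registers. A minimal sketch; the object names and box format are hypothetical:

```python
def map_gaze_to_objects(gaze_samples, object_boxes):
    """Map (x, y) gaze samples to viewed data objects.

    object_boxes: dict of object_id -> (x0, y0, x1, y1) bounding box,
    as registered by the instrumented visualization code.
    Returns the temporal sequence of viewed object ids (None = no hit),
    with consecutive duplicates collapsed into single dwell events.
    """
    sequence = []
    for x, y in gaze_samples:
        hit = next((oid for oid, (x0, y0, x1, y1) in object_boxes.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if not sequence or sequence[-1] != hit:
            sequence.append(hit)
    return sequence

# Hypothetical layout: two node glyphs; three samples dwell on A, then B,
# then fall between them.
boxes = {"nodeA": (0, 0, 10, 10), "nodeB": (20, 0, 30, 10)}
print(map_gaze_to_objects([(5, 5), (6, 4), (25, 5), (15, 5)], boxes))
```

The resulting sequence is exactly the DOI data the paper contrasts with AOI data: it is defined over data objects rather than fixed stimulus regions, so it survives scrolling, zooming, and re-layout as long as the boxes are re-registered each frame.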

  8. When Art Moves the Eyes: A Behavioral and Eye-Tracking Study

    PubMed Central

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception. PMID:22624007

  9. When art moves the eyes: a behavioral and eye-tracking study.

    PubMed

    Massaro, Davide; Savazzi, Federica; Di Dio, Cinzia; Freedberg, David; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2012-01-01

    The aim of this study was to investigate, using eye-tracking technique, the influence of bottom-up and top-down processes on visual behavior while subjects, naïve to art criticism, were presented with representational paintings. Forty-two subjects viewed color and black and white paintings (Color) categorized as dynamic or static (Dynamism) (bottom-up processes). Half of the images represented natural environments and half human subjects (Content); all stimuli were displayed under aesthetic and movement judgment conditions (Task) (top-down processes). Results on gazing behavior showed that content-related top-down processes prevailed over low-level visually-driven bottom-up processes when a human subject is represented in the painting. On the contrary, bottom-up processes, mediated by low-level visual features, particularly affected gazing behavior when looking at nature-content images. We discuss our results proposing a reconsideration of the definition of content-related top-down processes in accordance with the concept of embodied simulation in art perception.

  10. Eye tracking measures of uncertainty during perceptual decision making.

    PubMed

    Brunyé, Tad T; Gardony, Aaron L

    2017-10-01

    Perceptual decision making involves gathering and interpreting sensory information to effectively categorize the world and inform behavior: for instance, a radiologist distinguishing the presence versus absence of a tumor, or a luggage screener categorizing objects as threatening or non-threatening. In many cases, sensory information is not sufficient to reliably disambiguate the nature of a stimulus, and the resulting decisions are made under conditions of uncertainty. The present study asked whether several oculomotor metrics might prove sensitive to transient states of uncertainty during perceptual decision making. Participants viewed images with varying visual clarity and were asked to categorize them as faces or houses and rate the certainty of their decisions, while we used eye tracking to monitor fixations, saccades, blinks, and pupil diameter. Results demonstrated that decision certainty influenced several oculomotor variables, including fixation frequency and duration; the frequency, peak velocity, and amplitude of saccades; and phasic pupil diameter. Whereas most measures tended to change linearly with decision certainty, pupil diameter revealed more nuanced and dynamic information about the time course of perceptual decision making. Together, the results demonstrate robust alterations in eye-movement behavior as a function of decision certainty and attention demands, and suggest that monitoring oculomotor variables during applied task performance may prove valuable for identifying and remediating transient states of uncertainty. Published by Elsevier B.V.

  11. Eye Tracking Metrics for Workload Estimation in Flight Deck Operation

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle; Schnell, Thomas

    2010-01-01

    Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye-tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements than in the guidance and manual flight conditions. The data set itself also provides a general model of human eye-movement behavior, and thus, ostensibly, of visual attention distribution in the cockpit for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.

  12. How Revisions to Mathematical Visuals Affect Cognition: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.; Michaelis, Joseph; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    Mathematics curricula are frequently rich with visuals, but these visuals are often not designed for optimal use of students' limited cognitive resources. The authors of this study revised the visuals in a mathematics lesson based on instructional design principles. The purpose of this study is to examine the effects of these revised visuals on…

  13. Eye-tracking novice and expert geologist groups in the field and laboratory

    NASA Astrophysics Data System (ADS)

    Cottrell, R. D.; Evans, K. M.; Jacobs, R. A.; May, B. B.; Pelz, J. B.; Rosen, M. R.; Tarduno, J. A.; Voronov, J.

    2010-12-01

    We are using an Active Vision approach to learn how novices and expert geologists acquire visual information in the field. The Active Vision approach emphasizes that visual perception is an active process wherein new information is acquired about a particular environment through exploratory eye movements. Eye movements are not only influenced by physical stimuli but are also strongly influenced by high-level perceptual and cognitive processes. Eye-tracking data were collected on ten novices (undergraduate geology students) and three experts during a 10-day field trip across California focused on neotectonics. In addition, high-resolution panoramic images were captured at each key locality for use in a semi-immersive laboratory environment. Examples of each data type will be presented. The number of observers will be increased in subsequent field trips, but expert/novice differences are already apparent in the first set of individual eye-tracking records, including gaze time, gaze pattern, and object recognition. We will review efforts to quantify these patterns and the development of semi-immersive environments to display geologic scenes. The research is a collaborative effort between Earth scientists, cognitive scientists, and imaging scientists at the University of Rochester and the Rochester Institute of Technology, with funding from the National Science Foundation.

  14. Sex differences in a chronometric mental rotation test with cube figures: a behavioral, electroencephalography, and eye-tracking pilot study.

    PubMed

    Scheer, Clara; Mattioni Maturana, Felipe; Jansen, Petra

    2018-05-07

    In chronometric mental rotation tasks, sex differences are widely discussed. Most studies find men to be more skilled in mental rotation than women, which can be explained by the holistic strategy that they use to rotate stimuli; women are believed to apply a piecemeal strategy. So far, there have been no studies investigating this phenomenon using eye-tracking methods in combination with electroencephalography (EEG) analysis: the present study compared behavioral responses, EEG activity, and eye movements of 15 men and 15 women while solving a three-dimensional chronometric mental rotation test. The behavioral analysis showed neither differences in reaction time nor in the accuracy rate between men and women. The EEG data showed a higher right activation on parietal electrodes for women, and the eye-tracking results indicated longer fixations in a higher number of areas of interest at 0° for women. Men and women are likely to possess different perceptual (visual search) and decision-making mechanisms, but similar mental rotation processes. Furthermore, men presented longer visual search processing, characterized by greater saccade latency of 0°-135°. Generally, this study can be considered a pilot study for investigating sex differences in mental rotation tasks while combining eye-tracking and EEG methods.

  15. Abstract conceptual feature ratings predict gaze within written word arrays: evidence from a Visual Wor(l)d paradigm

    PubMed Central

    Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J

    2016-01-01

    The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical ‘visual word paradigm’. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings, which are based on individual words, predicted eye-fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings, which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. PMID:26901571
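
    The distance computation this framework permits is straightforward: each word is a vector of feature ratings, and the Euclidean distance between two vectors approximates their semantic relatedness, so the array item closest to the probe is the predicted gaze target. The sketch below is purely illustrative; the dimension names and rating values are invented, not the ACF norms.

```python
# Illustrative ACF-style distance sketch: hypothetical ratings on three
# axes (perceptual, affective, encyclopedic); the predicted gaze target
# is the array word at the smallest Euclidean distance from the probe.
import math

ratings = {  # invented rating vectors, one per word
    "justice": [1.2, 6.8, 7.9],
    "law":     [1.5, 5.9, 8.1],
    "breeze":  [6.7, 4.2, 2.0],
    "sorrow":  [2.1, 8.5, 3.3],
}

def distance(w1, w2):
    """Euclidean distance between two words' rating vectors."""
    return math.dist(ratings[w1], ratings[w2])

def predicted_target(probe, array_words):
    """The array word semantically closest to the probe."""
    return min(array_words, key=lambda w: distance(probe, w))

print(predicted_target("justice", ["law", "breeze", "sorrow"]))  # -> law
```

    In the actual study the same idea operates over high-dimensional rating vectors collected from many raters; the prediction is that fixations accumulate on the minimum-distance item.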

  16. Tracking the Eye Movement of Four Years Old Children Learning Chinese Words.

    PubMed

    Lin, Dan; Chen, Guangyao; Liu, Yingyi; Liu, Jiaxin; Pan, Jue; Mo, Lei

    2018-02-01

    Storybook reading is the major source of literacy exposure for beginning readers. The present study tracked 4-year-old Chinese children's eye movements while they were reading simulated storybook pages. Their eye-movement patterns were examined in relation to their word-learning gains. The same reading list, consisting of 20 two-character Chinese words, was used in the pretest, the 5-min eye-tracking learning session, and the posttest. Additionally, visual spatial skill and phonological awareness were assessed in the pretest as cognitive controls. The results showed that the children's attention was quickly captured by the pictures, on which they focused most, spending only 13% of the time looking at the words. Moreover, significant learning gains in word reading were observed from pretest to posttest after the 5-min exposure to simulated storybook pages with the words, pictures, and pronunciations of the two-character words present. Furthermore, the children's attention to words significantly predicted posttest reading beyond socioeconomic status, age, visual spatial skill, phonological awareness, and pretest reading performance. This eye-movement evidence from children as young as four years reading a non-alphabetic script (i.e., Chinese) demonstrates that children can learn words effectively with minimal exposure and little instruction; these findings suggest that learning to read requires attention to the printed words themselves. The study contributes to our understanding of early reading acquisition with eye-movement evidence from beginning readers.

  17. Time Course of Visual Attention in Infant Categorization of Cats versus Dogs: Evidence for a Head Bias as Revealed through Eye Tracking

    ERIC Educational Resources Information Center

    Quinn, Paul C.; Doran, Matthew M.; Reiss, Jason E.; Hoffman, James E.

    2009-01-01

    Previous looking time studies have shown that infants use the heads of cat and dog images to form category representations for these animal classes. The present research used an eye-tracking procedure to determine the time course of attention to the head and whether it reflects a preexisting bias or online learning. Six- to 7-month-olds were…

  18. VisualEyes: a modular software system for oculomotor experimentation.

    PubMed

    Guo, Yi; Kim, Eun H; Alvarez, Tara L

    2011-03-25

    Eye movement studies have provided a strong foundation for understanding how the brain acquires visual information in both the normal and dysfunctional brain.(1) However, developing a platform to present stimuli and store eye movements can require substantial programming, time, and costs. Many systems do not offer the flexibility to program numerous stimuli for a variety of experimental needs. The VisualEyes System, however, has a flexible architecture, allowing the operator to choose any background and foreground stimulus, program one or two screens for tandem or opposing eye movements, and stimulate the left and right eye independently. This system can significantly reduce the programming development time needed to conduct an oculomotor study. The VisualEyes System will be discussed in three parts: 1) the oculomotor recording device used to acquire eye-movement responses; 2) the VisualEyes software, written in LabVIEW, to generate an array of stimuli and store responses as text files; and 3) offline data analysis. Eye movements can be recorded by several types of instrumentation, such as a limbus tracking system, a scleral search coil, or a video image system. Typical eye-movement stimuli such as saccadic steps, vergence ramps, and vergence steps, with the corresponding responses, will be shown. In this video report, we demonstrate the flexibility of a system to create numerous visual stimuli and record eye movements that can be utilized by basic scientists and clinicians to study healthy as well as clinical populations.

  19. Optimization of illumination schemes in a head-mounted display integrated with eye tracking capabilities

    NASA Astrophysics Data System (ADS)

    Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.

    2005-08-01

    Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.

  20. Objective assessment of the contribution of dental esthetics and facial attractiveness in men via eye tracking.

    PubMed

    Baker, Robin S; Fields, Henry W; Beck, F Michael; Firestone, Allen R; Rosenstiel, Stephen F

    2018-04-01

    Recently, greater emphasis has been placed on smile esthetics in dentistry. Eye tracking has been used to objectively evaluate attention to the dentition (mouth) in female models with different levels of dental esthetics quantified by the aesthetic component of the Index of Orthodontic Treatment Need (IOTN). This has not been accomplished in men. Our objective was to determine the visual attention to the mouth in men with different levels of dental esthetics (IOTN levels) and background facial attractiveness, for both male and female raters, using eye tracking. Facial images of men rated as unattractive, average, and attractive were digitally manipulated and paired with validated oral images, IOTN levels 1 (no treatment need), 7 (borderline treatment need), and 10 (definite treatment need). Sixty-four raters meeting the inclusion criteria were included in the data analysis. Each rater was calibrated in the eye tracker and randomly viewed the composite images for 3 seconds, twice for reliability. Reliability was good or excellent (intraclass correlation coefficients, 0.6-0.9). Significant interactions were observed with factorial repeated-measures analysis of variance and the Tukey-Kramer method for density and duration of fixations in the interactions of model facial attractiveness by area of the face (P <0.0001, P <0.0001, respectively), dental esthetics (IOTN) by area of the face (P <0.0001, P <0.0001, respectively), and rater sex by area of the face (P = 0.0166, P = 0.0290, respectively). For area by facial attractiveness, the hierarchy of visual attention in unattractive and attractive models was eye, mouth, and nose, but for men of average attractiveness, it was mouth, eye, and nose. For dental esthetics by area, at IOTN 7, the mouth had significantly more visual attention than it did at IOTN 1 and significantly more than the nose. At IOTN 10, the mouth received significantly more attention than at IOTN 7 and surpassed the nose and eye. 
These findings were irrespective of facial attractiveness levels. For rater sex by area in visual density, women showed significantly more attention to the eyes than did men, and only men showed significantly more attention to the mouth over the nose. Visual attention to the mouth was the greatest in men of average facial attractiveness, irrespective of dental esthetics. In borderline dental esthetics (IOTN 7), the eye and mouth were statistically indistinguishable, but in the most unesthetic dental attractiveness level (IOTN 10), the mouth exceeded the eye. The most unesthetic malocclusion significantly attracted visual attention in men. Male and female raters showed differences in their visual attention to male faces. Laypersons gave significant visual attention to poor dental esthetics in men, irrespective of background attractiveness; this was counter to what was seen in women. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  1. Eye-catching odors: olfaction elicits sustained gazing to faces and eyes in 4-month-old infants.

    PubMed

    Durand, Karine; Baudouin, Jean-Yves; Lewkowicz, David J; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars, but this spontaneous preference for faces was significantly enhanced in the presence of the odor. Also as expected, when looking at the face, the infants looked longer at the eyes than at any other facial region, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues.

  2. The Use of Eye Movements in the Study of Multimedia Learning

    ERIC Educational Resources Information Center

    Hyona, Jukka

    2010-01-01

    This commentary focuses on the use of the eye-tracking methodology to study cognitive processes during multimedia learning. First, some general remarks are made about how the method is applied to investigate visual information processing, followed by a reflection on the eye movement measures employed in the studies published in this special issue.…

  3. Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment

    DTIC Science & Technology

    2015-12-01

    the eye, and 3) purposeful eye movements to track targets that are resolved. Major Findings: Three major objective tests of vision were successfully developed and optimized to detect disease. These were 1) the pupil light reflex (either comparing the two eyes or independently evaluating each eye separately) for retina or optic nerve damage, 2) eye-movement-based analysis of target acquisition, fixation, and eccentric viewing as a means of

  4. Eye-tracking of visual attention in web-based assessment using the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Han, Jing; Chen, Li; Fu, Zhao; Fritchman, Joseph; Bao, Lei

    2017-07-01

    This study used eye-tracking technology to investigate students’ visual attention while taking the Force Concept Inventory (FCI) in a web-based interface. Eighty-nine university students were randomly selected into a pre-test group and a post-test group. Students took the 30-question FCI on a computer equipped with an eye-tracker. There were seven weeks of instruction between the pre- and post-test data collection. Students’ performance on the FCI improved significantly from pre-test to post-test. Meanwhile, the eye-tracking results reveal that the time students spent on taking the FCI test was not affected by student performance and did not change from pre-test to post-test. Analysis of students’ attention to answer choices shows that on the pre-test students primarily focused on the naïve choices and ignored the expert choices. On the post-test, although students had shifted their primary attention to the expert choices, they still kept a high level of attention to the naïve choices, indicating significant conceptual mixing and competition during problem solving. Outcomes of this study provide new insights on students’ conceptual development in learning physics.

  5. Semantic guidance of eye movements in real-world scenes

    PubMed Central

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
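
    The semantic saliency map described above amounts to scoring every labeled scene object by its similarity to the currently fixated object. A minimal sketch of that scoring step is shown below; the object labels and low-dimensional vectors are invented stand-ins, not the paper's LabelMe/LSA data, and cosine similarity is used as the similarity measure.

```python
# Sketch of a semantic saliency map: each scene object is scored by the
# cosine similarity between its (hypothetical) LSA vector and that of the
# currently fixated object, predicting likely gaze-transition targets.
import math

lsa = {  # invented low-dimensional LSA vectors for object labels
    "plate": [0.9, 0.1, 0.2],
    "fork":  [0.8, 0.2, 0.1],
    "sofa":  [0.1, 0.9, 0.3],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def semantic_saliency(fixated, scene_objects):
    """Score every other scene object by similarity to the fixated one."""
    return {o: cosine(lsa[fixated], lsa[o])
            for o in scene_objects if o != fixated}

sal = semantic_saliency("plate", ["plate", "fork", "sofa"])
# "fork" outscores "sofa": the model predicts a gaze transition to it
assert sal["fork"] > sal["sofa"]
```

    The study's ROC analysis then asks how well such per-object scores rank the objects that subjects actually transitioned to.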

  6. Semantic guidance of eye movements in real-world scenes.

    PubMed

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. A deep (learning) dive into visual search behaviour of breast radiologists

    NASA Astrophysics Data System (ADS)

    Mall, Suneeta; Brennan, Patrick C.; Mello-Thoms, Claudia

    2018-03-01

    Visual search, the process of detecting and identifying objects using eye movements (saccades) and foveal vision, has been studied to identify the root causes of errors in the interpretation of mammography. The aim of this study is to model the visual search behaviour of radiologists and their interpretation of mammograms using deep machine learning approaches. Our model is based on a deep convolutional neural network, a biologically inspired multilayer perceptron that simulates the visual cortex, and is reinforced with transfer learning techniques. Eye-tracking data obtained from 8 radiologists (of varying experience levels in reading mammograms) reviewing 120 two-view digital mammography cases (59 cancers) were used to train the model, which was pre-trained on the ImageNet dataset for transfer learning. Areas of the mammogram that received direct (foveally fixated), indirect (peripherally fixated) or no (never fixated) visual attention were extracted from the radiologists' visual search maps (obtained by a head-mounted eye-tracking device). These areas, along with the radiologists' assessment (including confidence) of suspected malignancy, were used to model: 1) the radiologists' decision; 2) the radiologists' confidence in that decision; and 3) the attentional level (i.e. foveal, peripheral or none) received by an area of the mammogram. Our results indicate high accuracy and low misclassification in modelling these behaviours.

  8. Visual attention to food cues in obesity: an eye-tracking study.

    PubMed

    Doolan, Katy J; Breslin, Gavin; Hanna, Donncha; Murphy, Kate; Gallagher, Alison M

    2014-12-01

    Based on the theory of incentive sensitization, the aim of this study was to investigate differences in attentional processing of food-related visual cues between normal-weight and overweight/obese males and females. Twenty-six normal-weight (14M, 12F) and 26 overweight/obese (14M, 12F) adults completed a visual probe task and an eye-tracking paradigm. Reaction times and eye movements to food and control images were collected during both a fasted and a fed condition in a counterbalanced design. Participants had greater visual attention towards high-energy-density food images compared to low-energy-density food images regardless of hunger condition. This was most pronounced in overweight/obese males, who had significantly greater maintained attention towards high-energy-density food images when compared with their normal-weight counterparts; however, no between-weight-group differences were observed for female participants. High-energy-density food images appear to capture visual attention more readily than low-energy-density food images. Results also suggest the possibility of an altered visual food-cue-associated reward system in overweight/obese males. Attentional processing of food cues may play a role in eating behaviors and thus should be taken into consideration as part of an integrated approach to curbing obesity. © 2014 The Obesity Society.

  9. Comparison of Threshold Saccadic Vector Optokinetic Perimetry (SVOP) and Standard Automated Perimetry (SAP) in Glaucoma. Part II: Patterns of Visual Field Loss and Acceptability.

    PubMed

    McTrusty, Alice D; Cameron, Lorraine A; Perperidis, Antonios; Brash, Harry M; Tatham, Andrew J; Agarwal, Pankaj K; Murray, Ian C; Fleck, Brian W; Minns, Robert A

    2017-09-01

    We compared patterns of visual field loss detected by standard automated perimetry (SAP) to those detected by saccadic vector optokinetic perimetry (SVOP) and examined patient perceptions of each test. A cross-sectional study was conducted of 58 healthy subjects and 103 with glaucoma, who were tested using SAP and two versions of SVOP (v1 and v2). Visual fields from both devices were categorized by masked graders as: 0, normal; 1, paracentral defect; 2, nasal step; 3, arcuate defect; 4, altitudinal; 5, biarcuate; and 6, end-stage field loss. SVOP and SAP classifications were cross-tabulated. Subjects completed a questionnaire on their opinions of each test. We analyzed 142 (v1) and 111 (v2) SVOP-SAP test pairs. SVOP v2 had a sensitivity of 97.7% and a specificity of 77.9% for identifying normal versus abnormal visual fields. SAP and SVOP v2 classifications showed complete agreement in 54% of glaucoma patients, with a further 23% disagreeing by one category. On repeat testing, 86% of SVOP v2 classifications agreed with the previous test, compared to 91% of SAP classifications; 71% of subjects preferred SVOP, compared to 20% who preferred SAP. Eye-tracking perimetry can be used to obtain threshold visual field sensitivity values in patients with glaucoma and to produce maps of visual field defects, with patterns exhibiting close agreement with SAP. Patients preferred eye-tracking perimetry to SAP. This first report of threshold eye-tracking perimetry shows good agreement with conventional automated perimetry and provides a benchmark for future iterations.
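
    The agreement figures reported above come from cross-tabulating the paired 0-6 classifications and counting exact matches and matches within one category. A hedged sketch of that computation follows; the grade lists are invented examples, not the study's data.

```python
# Agreement-analysis sketch: each visual field is graded 0-6 (0 = normal,
# 6 = end-stage loss) by masked graders; paired SVOP/SAP classifications
# are compared for exact agreement and agreement within one category.
sap  = [0, 3, 2, 6, 1, 4, 3, 0]   # hypothetical SAP grades per patient
svop = [0, 3, 1, 6, 2, 4, 2, 0]   # hypothetical SVOP v2 grades

exact      = sum(a == b for a, b in zip(sap, svop)) / len(sap)
within_one = sum(abs(a - b) <= 1 for a, b in zip(sap, svop)) / len(sap)
print(f"exact: {exact:.1%}, within one category: {within_one:.1%}")
# -> exact: 62.5%, within one category: 100.0%
```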

  10. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    PubMed

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, fewer eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  11. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    PubMed

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world-knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  12. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.

    PubMed

    Spering, Miriam; Schütz, Alexander C; Braun, Doris I; Gegenfurtner, Karl R

    2011-04-01

    Success of motor behavior often depends on the ability to predict the path of moving objects. Here we asked whether tracking a visual object with smooth pursuit eye movements helps to predict its motion direction. We developed a paradigm, "eye soccer," in which observers had to either track or fixate a visual target (ball) and judge whether it would have hit or missed a stationary vertical line segment (goal). Ball and goal were presented briefly for 100-500 ms and disappeared from the screen together before the perceptual judgment was prompted. In pursuit conditions, the ball moved towards the goal; in fixation conditions, the goal moved towards the stationary ball, resulting in similar retinal stimulation during pursuit and fixation. We also tested the condition in which the goal was fixated and the ball moved. Motion direction prediction was significantly better in pursuit than in fixation trials, regardless of whether ball or goal served as fixation target. In both fixation and pursuit trials, prediction performance was better when eye movements were accurate. Performance also increased with shorter ball-goal distance and longer presentation duration. A longer trajectory did not affect performance. During pursuit, an efference copy signal might provide additional motion information, leading to the advantage in motion prediction.

  13. Real-world use of ranibizumab for neovascular age-related macular degeneration in Taiwan.

    PubMed

    Chang, Yi-Sheng; Lee, Wan-Ju; Lim, Chen-Chee; Wang, Shih-Hao; Hsu, Sheng-Min; Chen, Yi-Chian; Cheng, Chia-Yi; Teng, Yu-Ti; Huang, Yi-Hsun; Lai, Chun-Chieh; Tseng, Sung-Huei

    2018-05-10

    This study investigated the "real-world" use of ranibizumab for neovascular age-related macular degeneration (nAMD) in Taiwan and assessed the visual outcome. We reviewed the medical records at National Cheng Kung University Hospital, Taiwan, during 2012-2014 for 264 consecutive eyes of 229 patients with nAMD, who applied for ranibizumab covered by national health insurance. A total of 194 eyes (73.5%) in 179 patients (65.5% men; mean ± standard deviation age 69.4 ± 10.7 years) were pre-approved for treatment. Applications for treatment increased year by year, but approval rates decreased during this time. The major causes of rejection for funding were diseases mimicking nAMD, including macular pucker/epiretinal membrane, macular scarring, dry-type AMD, and possible polypoidal choroidal vasculopathy. After completion of three injections in 147 eyes, visual acuity significantly improved, gaining ≥1 line in 51.8% of eyes and stabilising in 38.3% of 141 eyes in which visual acuity was measured. The 114 eyes approved with only one application had a better visual outcome than the 27 eyes approved after the second or third applications. In conclusion, ranibizumab is effective for nAMD; however, approval after the second or third application for national health insurance cover is a less favourable predictor of visual outcome.

  14. Eyes on the bodies: an eye tracking study on deployment of visual attention among females with body dissatisfaction.

    PubMed

    Gao, Xiao; Deng, Xiao; Yang, Jia; Liang, Shuang; Liu, Jie; Chen, Hong

    2014-12-01

    Visual attentional bias plays an important role in appearance-related social comparisons. However, owing to limitations of the experimental paradigms and analysis methods used in previous studies, the time course of attentional bias to thin and fat body images among women with body dissatisfaction (BD) has remained unclear. Using a free-viewing task combined with eye-movement tracking, and based on event-related analyses of the critical first eye-movement events as well as epoch-related analyses of gaze durations, the current study investigated different attentional bias components to body shape/part images during a 15-s presentation among 34 young women with high BD and 34 without BD. In comparison to the controls, women with BD showed sustained maintenance biases toward thin and fat body images during both the early automatic and the late strategic processing stages. This study highlights a clear need for research on the dynamics of attentional biases related to body image and eating disturbances. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury

    PubMed Central

    2017-01-01

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool that relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady, accurate visual attention on a moving object in their environment are likely unimpaired. However, if, after a potential mTBI event, subjects cannot keep attention on a moving object in the normal way demonstrated on their previous healthy baseline tests, this may indicate neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older. PMID:28630809

  16. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury.

    PubMed

    Kelly, Michael

    2017-05-15

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool that relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady, accurate visual attention on a moving object in their environment are likely unimpaired. However, if, after a potential mTBI event, subjects cannot keep attention on a moving object in the normal way demonstrated on their previous healthy baseline tests, this may indicate neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older.

  17. Using Eye Tracking to Explore Consumers' Visual Behavior According to Their Shopping Motivation in Mobile Environments.

    PubMed

    Hwang, Yoon Min; Lee, Kun Chang

    2017-07-01

    Despite the strong shift toward mobile shopping, many in-depth questions about mobile shoppers' visual behaviors in mobile shopping environments remain unaddressed. This study aims to answer two challenging research questions (RQs): (a) how much does shopping motivation, such as goal orientation and recreation, influence mobile shoppers' visual behavior toward displays of shopping information on a mobile shopping screen, and (b) how much does mobile shoppers' visual behavior influence their purchase intention for the products displayed on a mobile shopping screen? An eye-tracking approach is adopted to answer the RQs empirically. The experimental results showed that goal-oriented shoppers paid closer attention to products' information areas to meet their shopping goals. Their purchase intention was positively influenced by their visual attention to two areas of interest: product information and consumer opinions. In contrast, recreational shoppers tended to visually fixate on the promotion area, which positively influenced their purchase intention. The results contribute to understanding mobile shoppers' visual behaviors and shopping intentions from the perspective of mindset theory.

  18. Contrast and assimilation in motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2007-09-01

    The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
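    The two computations contrasted here can be made concrete with a toy calculation. In the sketch below, the equal-weight average for pursuit is an illustrative assumption; the study reports averaging but does not fix the weights:

```python
def perceived_velocity(target_v, context_v):
    # Motion contrast: context motion is subtracted from target motion.
    return target_v - context_v

def pursuit_velocity(target_v, context_v, w=0.5):
    # Motion assimilation: pursuit follows a weighted average of target
    # and context motion (w = context weight, an assumed value here).
    return (1 - w) * target_v + w * context_v

# Example: target briefly speeds up to 12 deg/s while the context
# moves at 2 deg/s in the same direction.
print(perceived_velocity(12.0, 2.0))  # contrast -> 10.0 (target looks slower than it is)
print(pursuit_velocity(12.0, 2.0))    # average  -> 7.0  (eye velocity lands in between)
```

    The same pair of input velocities thus yields different outputs for perception and for pursuit, which is the dissociation the study reports.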

  19. Mobile Eye Tracking Reveals Little Evidence for Age Differences in Attentional Selection for Mood Regulation

    PubMed Central

    Isaacowitz, Derek M.; Livingstone, Kimberly M.; Harris, Julia A.; Marcotte, Stacy L.

    2014-01-01

    We report two studies representing the first use of mobile eye tracking to study emotion regulation across adulthood. Past research on age differences in attentional deployment using stationary eye tracking has found older adults show relatively more positive looking, and seem to benefit more mood-wise from this looking pattern, compared to younger adults. However, these past studies have greatly constrained the stimuli participants can look at, despite real-world settings providing numerous possibilities for what to choose to look at. We therefore used mobile eye tracking to study age differences in attentional selection, as indicated by fixation patterns to stimuli of different valence freely chosen by the participant. In contrast to stationary eye tracking studies of attentional deployment, Study 1 showed that younger and older individuals generally selected similar proportions of valenced stimuli, and attentional selection had similar effects on mood across age groups. Study 2 replicated this pattern with an adult lifespan sample including middle-aged individuals. Emotion regulation-relevant attention may thus differ depending on whether stimuli are freely chosen or not. PMID:25527965

  20. Correspondence of presaccadic activity in the monkey primary visual cortex with saccadic eye movements

    PubMed Central

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.

    2004-01-01

    We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades, neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, in memory-guided saccades the strength of presaccadic activity correlates with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334

  1. Prediction and Production of Simple Mathematical Equations: Evidence from Visual World Eye-Tracking.

    PubMed

    Hintz, Florian; Meyer, Antje S

    2015-01-01

    The relationship between the production and the comprehension systems has recently become a topic of interest for many psycholinguists. It has been argued that these systems are tightly linked and in particular that listeners use the production system to predict upcoming content. In this study, we tested how similar production and prediction processes are in a novel version of the visual world paradigm. Dutch speaking participants (native speakers in Experiment 1; German-Dutch bilinguals in Experiment 2) listened to mathematical equations while looking at a clock face featuring the numbers 1 to 12. On alternating trials, they either heard a complete equation ("three plus eight is eleven") or they heard the first part ("three plus eight is") and had to produce the result ("eleven") themselves. Participants were encouraged to look at the relevant numbers throughout the trial. Their eye movements were recorded and analyzed. We found that the participants' eye movements in the two tasks were overall very similar. They fixated the first and second number of the equations shortly after they were mentioned, and fixated the result number well before they named it on production trials and well before the recorded speaker named it on comprehension trials. However, all fixation latencies were shorter on production than on comprehension trials. These findings suggest that the processes involved in planning to say a word and anticipating hearing a word are quite similar, but that people are more aroused or engaged when they intend to respond than when they merely listen to another person.

  2. Object-Based Visual Attention in 8-Month-Old Infants: Evidence from an Eye-Tracking Study

    ERIC Educational Resources Information Center

    Bulf, Hermann; Valenza, Eloisa

    2013-01-01

    Visual attention is one of the infant's primary tools for gathering relevant information from the environment for further processing and learning. The space-based component of visual attention in infants has been widely investigated; however, the object-based component of visual attention has received scarce interest. This scarcity is…

  3. Improving data retention in EEG research with children using child-centered eye tracking

    PubMed Central

    Maguire, Mandy J.; Magnon, Grant; Fitzhugh, Anna E.

    2014-01-01

    Background Event Related Potentials (ERPs) elicited by visual stimuli have increased our understanding of developmental disorders and adult cognitive abilities for decades; however, these studies are very difficult with populations who cannot sustain visual attention such as infants and young children. Current methods for studying such populations include requiring a button response, which may be impossible for some participants, and experimenter monitoring, which is subject to error, highly variable, and spatially imprecise. New Method We developed a child-centered methodology to integrate EEG data acquisition and eye-tracking technologies that uses “attention-getters” in which stimulus display is contingent upon the child’s gaze. The goal was to increase the number of trials retained. Additionally, we used the eye-tracker to categorize and analyze the EEG data based on gaze to specific areas of the visual display, compared to analyzing based on stimulus presentation. Results Compared with Existing Methods The number of trials retained was substantially improved using the child-centered methodology compared to a button-press response in 7–8 year olds. In contrast, analyzing the EEG based on eye gaze to specific points within the visual display as opposed to stimulus presentation provided too few trials for reliable interpretation. Conclusions By using the linked EEG-eye-tracker we significantly increased data retention. With this method, studies can be completed with fewer participants and a wider range of populations. However, caution should be used when epoching based on participants’ eye gaze because, in this case, this technique provided substantially fewer trials. PMID:25251555

  4. Retinotopic memory is more precise than spatiotopic memory.

    PubMed

    Golomb, Julie D; Kanwisher, Nancy

    2012-01-31

    Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.
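    The conversion at issue can be sketched schematically: a spatiotopic location is the retinotopic location plus the (estimated) gaze position, so any gaze-estimate error is inherited at each saccade, while raw retinotopic coordinates need no conversion. The names and Gaussian noise model below are illustrative assumptions, not the study's model:

```python
import random

def spatiotopic_from_retinotopic(retinotopic, gaze_estimate):
    # World-centered position = eye-centered position + where the eye is
    # (estimated to be) pointing. Error in the gaze estimate propagates
    # directly into the spatiotopic estimate.
    return retinotopic + gaze_estimate

random.seed(0)
true_location = 10.0              # degrees, world coordinates
gaze = 0.0
spatiotopic_error = 0.0
for saccade in range(3):
    gaze += 5.0                                 # each saccade shifts gaze
    noisy_gaze = gaze + random.gauss(0.0, 0.5)  # imperfect gaze estimate
    retinotopic = true_location - gaze          # exact retinal input
    est = spatiotopic_from_retinotopic(retinotopic, noisy_gaze)
    spatiotopic_error += abs(est - true_location)
# Retinotopic memory needs no conversion, so it accrues no such error,
# which is consistent with the accumulating spatiotopic error observed.
```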

  5. Testing of Visual Field with Virtual Reality Goggles in Manual and Visual Grasp Modes

    PubMed Central

    Wroblewski, Dariusz; Francis, Brian A.; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas

    2014-01-01

    Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4–6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode. PMID:25050326
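    The point-by-point comparison described, a systematic shift (mean difference) plus the standard deviation of the difference distribution, is simple to compute. The values below are illustrative only, not study data:

```python
import statistics

def bias_and_spread(sensitivities_a, sensitivities_b):
    """Point-by-point comparison of two perimetry maps: the mean
    difference captures any systematic shift between devices, and the
    standard deviation captures scatter. Inputs are matched lists of
    threshold sensitivities in dB."""
    diffs = [a - b for a, b in zip(sensitivities_a, sensitivities_b)]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical per-point sensitivities (dB) for the same eye:
hfa         = [30.0, 28.0, 25.0, 31.0, 29.0]
virtual_eye = [25.0, 23.0, 21.0, 26.0, 24.0]
print(bias_and_spread(hfa, virtual_eye))  # mean near 5 dB: a systematic shift
```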

  6. Video-Based Eye Tracking to Detect the Attention Shift: A Computer Classroom Context-Aware System

    ERIC Educational Resources Information Center

    Kuo, Yung-Lung; Lee, Jiann-Shu; Hsieh, Min-Chai

    2014-01-01

    Eye and head movements are evoked in response to obvious visual attention shifts. However, there has been little progress so far on the causes of absent-mindedness. This paper proposes an attention-awareness system that captures the conditions governing the interaction of eye gaze and head pose under various kinds of attentional switching in a computer classroom.…

  7. Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements

    ERIC Educational Resources Information Center

    Yu, Chen; Yurovsky, Daniel; Xu, Tian

    2012-01-01

    Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…

  8. Eye-tracking AFROC study of the influence of experience and training on chest x-ray interpretation

    NASA Astrophysics Data System (ADS)

    Manning, David; Ethell, Susan C.; Crawford, Trevor

    2003-05-01

    Four observer groups with different levels of expertise were tested in an investigation into the comparative nature of expert performance. The radiological task was the detection and localization of significant pulmonary nodules in postero-anterior views of the chest in adults. Three test banks of 40 images were used. The observer groups were 6 experienced radiographers before a six-month training program in chest image interpretation, the same radiographers after their training program, 6 first-year undergraduate radiography students, and a group of radiologists. Eye tracking was carried out on all observers to demonstrate differences in visual activity, and nodule detection performance was measured with an AFROC technique. Detection performances of the four groups showed that the radiologists and the radiographers after training were measurably superior at the task. The eye-tracking parameters saccadic length, number of fixations, visual coverage, and scrutiny time per film were measured for all subjects and compared. The missed nodules fixated and not fixated were also determined for the radiologist group. Results showed distinct stylistic differences in the visual scanning strategies of experienced and inexperienced observers that we believe can be generalized into a description of the characteristics of expert versus non-expert performance. The findings will be used in the educational program of image interpretation for non-radiology practitioners.

  9. Eye Tracking Dysfunction in Schizophrenia: Characterization and Pathophysiology

    PubMed Central

    Sereno, Anne B.; Gooding, Diane C.; O’Driscoll, Gillian A.

    2011-01-01

    Eye tracking dysfunction (ETD) is one of the most widely replicated behavioral deficits in schizophrenia and is over-represented in clinically unaffected first-degree relatives of schizophrenia patients. Here, we provide an overview of research relevant to the characterization and pathophysiology of this impairment. Deficits are most robust in the maintenance phase of pursuit, particularly during the tracking of predictable target movement. Impairments are also found in pursuit initiation and correlate with performance on tests of motion processing, implicating early sensory processing of motion signals. Taken together, the evidence suggests that ETD involves higher-order structures, including the frontal eye fields, which adjust the gain of the pursuit response to visual and anticipated target movement, as well as early parts of the pursuit pathway, including motion areas (the middle temporal area and the adjacent medial superior temporal area). Broader application of localizing behavioral paradigms in patient and family studies would be advantageous for refining the eye tracking phenotype for genetic studies. PMID:21312405

  10. A Novel Hybrid Mental Spelling Application Based on Eye Tracking and SSVEP-Based BCI

    PubMed Central

    Stawicki, Piotr; Gembler, Felix; Rezeika, Aya; Volosyak, Ivan

    2017-01-01

    Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), as well as eye-tracking devices, provide a pathway for re-establishing communication for people with severe disabilities. We fused these control techniques into a novel eye-tracking/SSVEP hybrid system, which uses eye tracking for initial rough selection and SSVEP technology for fine target activation. Based on our previous studies, only four stimuli were used for the SSVEP aspect, granting sufficient control for most BCI users. As eye-tracking data are not used for activation of letters, false positives due to inappropriate dwell times are avoided. This novel approach combines the high speed of eye-tracking systems with the high classification accuracies of low-target SSVEP-based BCIs, leading to an optimal combination of both methods. We evaluated the accuracy and speed of the proposed hybrid system with a 30-target spelling application implementing all three control approaches (pure eye tracking, SSVEP, and the hybrid system) with 32 participants. Although the highest information transfer rates (ITRs) were achieved with pure eye tracking, a considerable number of subjects were not able to gain sufficient control over the stand-alone eye-tracking device or the pure SSVEP system (78.13% and 75% of the participants reached reliable control, respectively). In this respect, the proposed hybrid was the most universal (over 90% of users achieved reliable control) and outperformed the pure SSVEP system in terms of speed and user friendliness. The presented hybrid system might offer communication to a wider range of users than the standard techniques. PMID:28379187
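    The information transfer rates compared here are conventionally computed with Wolpaw's formula from the number of selectable targets N, the selection accuracy P, and the time per selection. A sketch with illustrative parameter values (not the study's):

```python
import math

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate in bits per minute."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per selection
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# Illustrative: a 30-target speller at 90% accuracy, 4 s per selection.
print(round(itr_bits_per_min(30, 0.90, 4.0), 1))  # -> 59.3
```

    The formula makes the trade-off visible: more targets and faster selections raise the ITR, while misclassifications reduce the bits carried by each selection.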

  11. Clutter in electronic medical records: examining its performance and attentional costs using eye tracking.

    PubMed

    Moacdieh, Nadine; Sarter, Nadine

    2015-06-01

    The objective was to use eye tracking to trace the underlying changes in attention allocation associated with the performance effects of clutter, stress, and task difficulty in visual search and noticing tasks. Clutter can degrade performance in complex domains, yet more needs to be known about the associated changes in attention allocation, particularly in the presence of stress and for different tasks. Frequently used and relatively simple eye tracking metrics do not effectively capture the various effects of clutter, which is critical for comprehensively analyzing clutter and developing targeted, real-time countermeasures. Electronic medical records (EMRs) were chosen as the application domain for this research. Clutter, stress, and task difficulty were manipulated, and physicians' performance on search and noticing tasks was recorded. Several eye tracking metrics were used to trace attention allocation throughout those tasks, and subjective data were gathered via a debriefing questionnaire. Clutter degraded performance in terms of response time and noticing accuracy. These decrements were largely accentuated by high stress and task difficulty. Eye tracking revealed the underlying attentional mechanisms, and several display-independent metrics were shown to be significant indicators of the effects of clutter. Eye tracking provides a promising means to understand in detail (offline) and prevent (in real time) major performance breakdowns due to clutter. Display designers need to be aware of the risks of clutter in EMRs and other complex displays and can use the identified eye tracking metrics to evaluate and/or adjust their display. © 2015, Human Factors and Ergonomics Society.
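    Eye-tracking metrics such as fixation count and mean fixation duration are commonly derived from raw gaze samples with a dispersion-threshold (I-DT) algorithm; the simplified sketch below uses assumed thresholds and sample format, not the study's settings:

```python
def detect_fixations(samples, max_dispersion=1.0, min_samples=5):
    """Simplified dispersion-threshold (I-DT) fixation detection.
    `samples` is a list of (x, y) gaze points at a fixed sample rate;
    returns a list of (start_index, end_index) fixation windows."""
    fixations, start = [], 0
    while start < len(samples):
        end = start + 1
        while end < len(samples):
            window = samples[start:end + 1]
            xs = [x for x, _ in window]
            ys = [y for _, y in window]
            dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
            if dispersion > max_dispersion:
                break  # window no longer qualifies as one fixation
            end += 1
        if end - start >= min_samples:
            fixations.append((start, end - 1))
        start = end
    return fixations
```

    Fixation count, mean fixation duration, and saccade count (the transitions between successive fixation windows) then follow directly from the detected windows.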

  12. Prevalence and Causes of Visual Loss Among the Indigenous Peoples of the World: A Systematic Review.

    PubMed

    Foreman, Joshua; Keel, Stuart; van Wijngaarden, Peter; Bourne, Rupert A; Wormald, Richard; Crowston, Jonathan; Taylor, Hugh R; Dirani, Mohamed

    2018-05-01

    Studies have documented a higher disease burden in indigenous compared with nonindigenous populations, but no global data on the epidemiology of visual loss in indigenous peoples are available. A systematic review of literature on visual loss in the world's indigenous populations could identify major gaps and inform interventions to reduce their burden of visual loss. To conduct a systematic review on the prevalence and causes of visual loss among the world's indigenous populations. A search of databases and alternative sources identified literature on the prevalence and causes of visual loss (visual impairment and blindness) and eye diseases in indigenous populations. Studies from January 1, 1990, through August 1, 2017, that included clinical eye examinations of indigenous participants and, where possible, compared findings with those of nonindigenous populations were included. Methodologic quality of studies was evaluated to reveal gaps in the literature. Limited data were available worldwide. A total of 85 articles described 64 unique studies from 24 countries that examined 79 598 unique indigenous participants. Nineteen studies reported comparator data on 42 085 nonindigenous individuals. The prevalence of visual loss was reported in 13 countries, with visual impairment ranging from 0.6% in indigenous Australian children to 48.5% in native Tibetans 50 years or older. Uncorrected refractive error was the main cause of visual impairment (21.0%-65.1%) in 5 of 6 studies that measured presenting visual acuity. Cataract was the main cause of visual impairment in all 6 studies measuring best-corrected acuity (25.4%-72.2%). Cataract was the leading cause of blindness in 13 studies (32.0%-79.2%), followed by uncorrected refractive error in 2 studies (33.0% and 35.8%). Most countries with indigenous peoples do not have data on the burden of visual loss in these populations. Although existing studies vary in methodologic quality and reliability, they suggest that most visual loss in indigenous populations is avoidable. Improvements in quality and frequency of research into the eye health of indigenous communities appear to be required, and coordinated eye care programs should be implemented to specifically target the indigenous peoples of the world.

  13. Eye movement accuracy determines natural interception strategies.

    PubMed

    Fooken, Jolande; Yeo, Sang-Hoon; Pai, Dinesh K; Spering, Miriam

    2016-11-01

    Eye movements aid visual perception and guide actions such as reaching or grasping. Most previous work on eye-hand coordination has focused on saccadic eye movements. Here we show that smooth pursuit eye movement accuracy strongly predicts both interception accuracy and the strategy used to intercept a moving object. We developed a naturalistic task in which participants (n = 42 varsity baseball players) intercepted a moving dot (a "2D fly ball") with their index finger in a designated "hit zone." Participants were instructed to track the ball with their eyes, but were only shown its initial launch (100-300 ms). Better smooth pursuit resulted in more accurate interceptions and determined the strategy used for interception, i.e., whether interception was early or late in the hit zone. Even though early and late interceptors showed equally accurate interceptions, they may have relied on distinct tactics: early interceptors used cognitive heuristics, whereas late interceptors' performance was best predicted by pursuit accuracy. Late interception may be beneficial in real-world tasks as it provides more time for decision and adjustment. Supporting this view, baseball players who were more senior were more likely to be late interceptors. Our findings suggest that interception strategies are optimally adapted to the proficiency of the pursuit system.

  14. Love is in the gaze: an eye-tracking study of love and sexual desire.

    PubMed

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports.

  15. Receptive fields for smooth pursuit eye movements and motion perception.

    PubMed

    Debono, Kurt; Schütz, Alexander C; Spering, Miriam; Gegenfurtner, Karl R

    2010-12-01

    Humans use smooth pursuit eye movements to track moving objects of interest. In order to track an object accurately, motion signals from the target have to be integrated and segmented from motion signals in the visual context. Most studies on pursuit eye movements used small visual targets against a featureless background, disregarding the requirements of our natural visual environment. Here, we tested the ability of the pursuit and the perceptual system to integrate motion signals across larger areas of the visual field. Stimuli were random-dot kinematograms containing a horizontal motion signal, which was perturbed by a spatially localized, peripheral motion signal. Perturbations appeared in a gaze-contingent coordinate system and had a different direction than the main motion including a vertical component. We measured pursuit and perceptual direction discrimination decisions and found that both steady-state pursuit and perception were influenced most by perturbation angles close to that of the main motion signal and only in regions close to the center of gaze. The narrow direction bandwidth (26 angular degrees full width at half height) and small spatial extent (8 degrees of visual angle standard deviation) correspond closely to tuning parameters of neurons in the middle temporal area (MT).

  16. Bilateral phacoemulsification and intraocular lens implantation in a great horned owl.

    PubMed

    Carter, Renee T; Murphy, Christopher J; Stuhr, Charles M; Diehl, Kathryn A

    2007-02-15

    A great horned owl of estimated age < 1 year that was captured by wildlife rehabilitators was evaluated because of suspected cataracts. Nuclear and incomplete cortical cataracts were evident in both eyes. Ocular ultrasonography revealed no evidence of retinal detachment, and electroretinography revealed normal retinal function. For visual rehabilitation, cataract surgery was planned and intraocular lens design was determined on the basis of values obtained from the schematic eye, which is a mathematical model representing a normal eye for a species. Cataract surgery and intraocular lens placement were performed in both eyes. After surgery, refraction was within -0.75 diopters in the right eye and -0.25 diopters in the left eye. Visual rehabilitation was evident on the basis of improved tracking and feeding behavior, and the owl was eventually released into the wild. In raptors with substantial visual compromise, euthanasia or placement in a teaching facility is a typical outcome because release of such a bird is unacceptable. Successful intraocular lens implantation for visual rehabilitation and successful release into the wild are achievable.

  17. Human image tracking technique applied to remote collaborative environments

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Suzuki, Gen

    1993-10-01

    To support various kinds of collaborations over long distances by using visual telecommunication, it is necessary to transmit visual information related to the participants and topical materials. When people collaborate in the same workspace, they use visual cues such as facial expressions and eye movement. The realization of coexistence in a collaborative workspace requires the support of these visual cues. Therefore, it is important that the facial images be large enough to be useful. During collaborations, especially dynamic collaborative activities such as equipment operation or lectures, the participants often move within the workspace. When the people move frequently or over a wide area, the necessity for automatic human tracking increases. Using the movement area of the human being or the resolution of the extracted area, we have developed a memory tracking method and a camera tracking method for automatic human tracking. Experimental results using a real-time tracking system show that the extracted area closely follows the movement of the human head.

  18. Impaired Oculomotor Behavior of Children with Developmental Dyslexia in Antisaccades and Predictive Saccades Tasks

    PubMed Central

    Lukasova, Katerina; Silva, Isadora P.; Macedo, Elizeu C.

    2016-01-01

    Analysis of eye movement patterns during tracking tasks represents a potential way to identify differences in the cognitive processing and motor mechanisms underlying reading in dyslexic children before the occurrence of school failure. The current study aimed to evaluate the pattern of eye movements in antisaccades, predictive saccades and visually guided saccades in typical readers and readers with developmental dyslexia. The study included 30 children (age M = 11; SD = 1.67), 15 diagnosed with developmental dyslexia (DG) and 15 regular readers (CG), matched by age, gender and school grade. Cognitive assessment was performed prior to the eye-tracking task during which both eyes were registered using the Tobii® 1750 eye-tracking device. The results demonstrated a lower correct antisaccade rate in dyslexic children compared to the controls (p < 0.001, DG = 25%, CG = 37%). Dyslexic children also made fewer saccades with predictive latency (p < 0.001, DG = 34%, CG = 46%; predictive latency defined as −300 to 120 ms relative to target onset). No between-group difference was found for visually guided saccades. In this task, both groups showed shorter latency for right-side targets. The results indicated altered oculomotor behavior in dyslexic children, which has been reported in previous studies. We extend these findings by demonstrating impaired implicit learning of the target's time/position patterns in dyslexic children. PMID:27445945

  19. Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish

    PubMed Central

    Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian

    2011-01-01

    Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793

  20. The role of vision in odor-plume tracking by walking and flying insects.

    PubMed

    Willis, Mark A; Avondet, Jennifer L; Zheng, Elizabeth

    2011-12-15

    The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available.

  1. The role of vision in odor-plume tracking by walking and flying insects

    PubMed Central

    Willis, Mark A.; Avondet, Jennifer L.; Zheng, Elizabeth

    2011-01-01

    The walking paths of male cockroaches, Periplaneta americana, tracking point-source plumes of female pheromone often appear similar in structure to those observed from flying male moths. Flying moths use visual-flow-field feedback of their movements to control steering and speed over the ground and to detect the wind speed and direction while tracking plumes of odors. Walking insects are also known to use flow field cues to steer their trajectories. Can the upwind steering we observe in plume-tracking walking male cockroaches be explained by visual-flow-field feedback, as in flying moths? To answer this question, we experimentally occluded the compound eyes and ocelli of virgin P. americana males, separately and in combination, and challenged them with different wind and odor environments in our laboratory wind tunnel. They were observed responding to: (1) still air and no odor, (2) wind and no odor, (3) a wind-borne point-source pheromone plume and (4) a wide pheromone plume in wind. If walking cockroaches require visual cues to control their steering with respect to their environment, we would expect their tracks to be less directed and more variable if they cannot see. Instead, we found few statistically significant differences among behaviors exhibited by intact control cockroaches or those with their eyes occluded, under any of our environmental conditions. Working towards our goal of a comprehensive understanding of chemo-orientation in insects, we then challenged flying and walking male moths to track pheromone plumes with and without visual feedback. Neither walking nor flying moths performed as well as walking cockroaches when there was no visual information available. PMID:22116754

  2. Anticipatory Effects of Intonation: Eye Movements during Instructed Visual Search

    ERIC Educational Resources Information Center

    Ito, Kiwako; Speer, Shari R.

    2008-01-01

    Three eye-tracking experiments investigated the role of pitch accents during online discourse comprehension. Participants faced a grid with ornaments, and followed prerecorded instructions such as "Next, hang the blue ball" to decorate holiday trees. Experiment 1 demonstrated a processing advantage for felicitous as compared to infelicitous uses…

  3. ERPs and Eye Movements Reflect Atypical Visual Perception in Pervasive Developmental Disorder

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Engeland, Herman

    2006-01-01

    Many studies of eye tracking or event-related brain potentials (ERPs) in subjects with Pervasive Developmental Disorder (PDD) have yielded inconsistent results on attentional processing. However, recent studies have indicated that there are specific abnormalities in early processing that are probably related to perception. ERP amplitudes in…

  4. Gaze Toward Naturalistic Social Scenes by Individuals With Intellectual and Developmental Disabilities: Implications for Augmentative and Alternative Communication Designs.

    PubMed

    Liang, Jiali; Wilkinson, Krista

    2018-04-18

    A striking characteristic of the social communication deficits in individuals with autism is atypical patterns of eye contact during social interactions. We used eye-tracking technology to evaluate how the number of human figures depicted and the presence of sharing activity between the human figures in still photographs influenced visual attention by individuals with autism, typical development, or Down syndrome. We sought to examine visual attention to the contents of visual scene displays, a growing form of augmentative and alternative communication support. Eye-tracking technology recorded point-of-gaze while participants viewed 32 photographs in which either 2 or 3 human figures were depicted. Sharing activities between these human figures were either present or absent. The sampling rate was 60 Hz; that is, the technology gathered 60 samples of gaze behavior per second, per participant. Gaze behaviors, including latency to fixate and time spent fixating, were quantified. The overall gaze behaviors were quite similar across groups, regardless of the social content depicted. However, individuals with autism were significantly slower than the other groups in latency to first view the human figures, especially when there were 3 people depicted in the photographs (as compared with 2 people). When participants' own viewing pace was considered, individuals with autism resembled those with Down syndrome. The current study supports the inclusion of social content with various numbers of human figures and sharing activities between human figures in visual scene displays, regardless of the population served. Study design and reporting practices in eye-tracking literature as it relates to autism and Down syndrome are discussed. https://doi.org/10.23641/asha.6066545.

  5. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  6. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also existed in visual attention during a related listening task, and if so, whether the differences lay in attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation, and dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention.

  7. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  8. Eye-Catching Odors: Olfaction Elicits Sustained Gazing to Faces and Eyes in 4-Month-Old Infants

    PubMed Central

    Lewkowicz, David J.; Goubet, Nathalie; Schaal, Benoist

    2013-01-01

    This study investigated whether an odor can affect infants' attention to visually presented objects and whether it can selectively direct visual gaze at visual targets as a function of their meaning. Four-month-old infants (n = 48) were exposed to their mother's body odors while their visual exploration was recorded with an eye-movement tracking system. Two groups of infants, who were assigned to either an odor condition or a control condition, looked at a scene composed of still pictures of faces and cars. As expected, infants looked longer at the faces than at the cars but this spontaneous preference for faces was significantly enhanced in the presence of the odor. As expected also, when looking at the face, the infants looked longer at the eyes than at any other facial regions, but, again, they looked at the eyes significantly longer in the presence of the odor. Thus, 4-month-old infants are sensitive to the contextual effects of odors while looking at faces. This suggests that early social attention to faces is mediated by visual as well as non-visual cues. PMID:24015175

  9. Predicting Aggressive Tendencies by Visual Attention Bias Associated with Hostile Emotions

    PubMed Central

    Lin, Ping-I; Hsieh, Cheng-Da; Juan, Chi-Hung; Hossain, Md Monir; Erickson, Craig A.; Lee, Yang-Han; Su, Mu-Chun

    2016-01-01

    The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited in the eye-tracking study that measured various components in social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that the eye gaze duration on hostile characters was significantly inversely correlated with the AQ score and was associated with less eye contact with an angry face. The eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies. PMID:26901770

  10. Predicting Aggressive Tendencies by Visual Attention Bias Associated with Hostile Emotions.

    PubMed

    Lin, Ping-I; Hsieh, Cheng-Da; Juan, Chi-Hung; Hossain, Md Monir; Erickson, Craig A; Lee, Yang-Han; Su, Mu-Chun

    2016-01-01

    The goal of the current study is to clarify the relationship between social information processing (e.g., visual attention to cues of hostility, hostility attribution bias, and facial expression emotion labeling) and aggressive tendencies. Thirty adults were recruited in the eye-tracking study that measured various components in social information processing. Baseline aggressive tendencies were measured using the Buss-Perry Aggression Questionnaire (AQ). Visual attention towards hostile objects was measured as the proportion of eye gaze fixation duration on cues of hostility. Hostility attribution bias was measured with the rating results for emotions of characters in the images. The results show that the eye gaze duration on hostile characters was significantly inversely correlated with the AQ score and was associated with less eye contact with an angry face. The eye gaze duration on hostile objects was not significantly associated with hostility attribution bias, although hostility attribution bias was significantly positively associated with the AQ score. Our findings suggest that eye gaze fixation time towards non-hostile cues may predict aggressive tendencies.

  11. Free visual exploration of natural movies in schizophrenia.

    PubMed

    Silberg, Johanna Elisa; Agtzidis, Ioannis; Startsev, Mikhail; Fasshauer, Teresa; Silling, Karen; Sprenger, Andreas; Dorr, Michael; Lencer, Rebekka

    2018-01-05

    Eye tracking dysfunction (ETD) observed with standard pursuit stimuli represents a well-established biomarker for schizophrenia. How ETD may manifest during free visual exploration of real-life movies is unclear. Eye movements were recorded (EyeLink®1000) while 26 schizophrenia patients and 25 healthy age-matched controls freely explored nine uncut movies and nine pictures of real-life situations for 20 s each. Subsequently, participants were shown still shots of these scenes to decide whether they had explored them as movies or pictures. Participants were additionally assessed on standard eye-tracking tasks. Patients made smaller saccades (movies (p = 0.003), pictures (p = 0.002)) and had a stronger central bias (movies and pictures (p < 0.001)) than controls. In movies, patients' exploration behavior was less driven by image-defined, bottom-up stimulus saliency than controls (p < 0.05). Proportions of pursuit tracking on movies differed between groups depending on the individual movie (group*movie p = 0.011, movie p < 0.001). Eye velocity on standard pursuit stimuli was reduced in patients (p = 0.029) but did not correlate with pursuit behavior on movies. Additionally, patients were less accurate at identifying whether still shots came from movies or pictures (p = 0.046). Our results suggest a restricted, centrally focused visual exploration behavior in patients not only on pictures, but also on movies of real-life scenes. While ETD observed in the laboratory cannot be directly transferred to natural viewing conditions, these alterations support a model of impaired motion information processing in patients, resulting in a reduced ability to perceive moving objects and less saliency-driven exploration behavior, presumably contributing to alterations in the perception of the natural environment.

  12. Distinct eye movement patterns enhance dynamic visual acuity.

    PubMed

    Palidis, Dimitrios J; Wyder-Hodge, Pearson A; Fooken, Jolande; Spering, Miriam

    2017-01-01

    Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics (eye latency, acceleration, velocity gain, position error) and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns (minimizing eye position error, tracking smoothly, and inhibiting reverse saccades) were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA.

  13. Distinct eye movement patterns enhance dynamic visual acuity

    PubMed Central

    Palidis, Dimitrios J.; Wyder-Hodge, Pearson A.; Fooken, Jolande; Spering, Miriam

    2017-01-01

    Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements, yet, it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics—eye latency, acceleration, velocity gain, position error—and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns—minimizing eye position error, tracking smoothly, and inhibiting reverse saccades—were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA. PMID:28187157

  14. Human-like object tracking and gaze estimation with PKD android

    PubMed Central

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2018-01-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues by humans. PMID:29416193

  15. Human-like object tracking and gaze estimation with PKD android

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Miller, Haylie L.; Das, Sumit K.; Bugnariu, Nicoleta L.; Popa, Dan O.

    2016-05-01

    As the use of robots increases for tasks that require human-robot interactions, it is vital that robots exhibit and understand human-like cues for effective communication. In this paper, we describe the implementation of object tracking capability on the Philip K. Dick (PKD) android and a gaze tracking algorithm, both of which further robot capabilities with regard to human communication. PKD's ability to track objects with human-like head postures is achieved with visual feedback from a Kinect system and an eye camera. The goal of object tracking with human-like gestures is twofold: to facilitate better human-robot interactions and to enable PKD as a human gaze emulator for future studies. The gaze tracking system employs a mobile eye tracking system (ETG; SensoMotoric Instruments) and a motion capture system (Cortex; Motion Analysis Corp.) for tracking the head orientations. Objects to be tracked are displayed by a virtual reality system, the Computer Assisted Rehabilitation Environment (CAREN; MotekForce Link). The gaze tracking algorithm converts eye tracking data and head orientations to gaze information, facilitating two objectives: to evaluate the performance of the object tracking system for PKD and to use the gaze information to predict the intentions of the user, enabling the robot to understand physical cues from humans.

  16. Parsing Heterogeneity in Autism Spectrum Disorders: Visual Scanning of Dynamic Social Scenes in School-Aged Children

    ERIC Educational Resources Information Center

    Rice, Katherine; Moriuchi, Jennifer M.; Jones, Warren; Klin, Ami

    2012-01-01

    Objective: To examine patterns of variability in social visual engagement and their relationship to standardized measures of social disability in a heterogeneous sample of school-aged children with autism spectrum disorders (ASD). Method: Eye-tracking measures of visual fixation during free-viewing of dynamic social scenes were obtained for 109…

  17. Seeing and Knowing: Attention to Illustrations during Storybook Reading and Narrative Comprehension in 2-Year-Olds

    ERIC Educational Resources Information Center

    Kaefer, Tanya; Pinkham, Ashley M.; Neuman, Susan B.

    2017-01-01

    Research (Evans & Saint-Aubin, 2005) suggests systematic patterns in how young children visually attend to storybooks. However, these studies have not addressed whether visual attention is predictive of children's storybook comprehension. In the current study, we used eye-tracking methodology to examine two-year-olds' visual attention while…

  18. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  19. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

    Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precision infrared imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, the HVS involves at least three mechanisms: the contrast mechanism, visual attention and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to various drawbacks. A novel method combining all three HVS mechanisms is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target, named the attention point, in order to further enhance the dim small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating the eye movement of human beings. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
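
The final step of the pipeline described above, predicting the next frame's attention point with a PID controller, can be sketched per image axis as follows. The gains and the discrete update rule are illustrative assumptions; the abstract does not specify them.

```python
class PIDPredictor:
    """Discrete PID update on the tracking error along one image axis (a sketch)."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.05):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def correction(self, err):
        """err = detected target coordinate minus current attention coordinate."""
        self.integral += err          # accumulated (integral) error
        deriv = err - self.prev_err   # discrete derivative of the error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def predict_attention_point(pid_x, pid_y, attention, detection):
    """Shift the attention point toward the detected target for the next frame."""
    ax, ay = attention
    dx, dy = detection
    return (ax + pid_x.correction(dx - ax), ay + pid_y.correction(dy - ay))
```

With kp=1 and ki=kd=0 the predictor simply jumps to the last detection; nonzero ki and kd smooth the trajectory and anticipate target motion, which is what makes the attention point track a moving target across frames.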

  20. A new system for quantitative evaluation of infant gaze capabilities in a wide visual field.

    PubMed

    Pratesi, Andrea; Cecchi, Francesca; Beani, Elena; Sgandurra, Giuseppina; Cioni, Giovanni; Laschi, Cecilia; Dario, Paolo

    2015-09-07

    The visual assessment of infants poses specific challenges: many techniques used with adults rely on the patient's response and are not suitable for infants. Significant advances in eye tracking have made the assessment of infant visual capabilities easier; however, eye tracking still requires the subject's collaboration in most cases, which limits its application in infant research. Moreover, there is a lack of transferability to clinical practice, so a new tool is needed to administer the relevant paradigms and explore the most common visual competences in a wide visual field. This work presents the design, development and preliminary testing of a new system for measuring an infant's gaze in a wide visual field, called CareToy C: CareToy for Clinics. The system is based on a commercial eye tracker (SmartEye) with six cameras running at 60 Hz, suitable for measuring an infant's gaze. In order to stimulate the infant visually and audibly, a mechanical structure was designed to support five speakers and five screens at a specific distance (60 cm) and angle: one in the centre, two on the right-hand side and two on the left (at 30° and 60°, respectively). Different tasks were designed to evaluate the system's ability to assess the infant's gaze movements under different conditions (such as gap, overlap or audio-visual paradigms). Nine healthy infants aged 4-10 months were assessed as they performed the visual tasks in random order. We developed a system able to measure an infant's gaze across a wide visual field, covering a total visual range of ±60° from the centre with an intermediate evaluation at ±30°. Moreover, thanks to its integrated software, the same system was able to present different visual paradigms (gap, overlap and audio-visual), assessing and comparing different visual and multisensory sub-competencies. The proposed system integrates a commercial eye tracker into a purposive setup in a smart and innovative way. It is suitable for measuring and evaluating an infant's gaze capabilities in a wide visual field, providing quantitative data that can enrich the clinical assessment.

  1. Three-Dimensional Eye Tracking in a Surgical Scenario.

    PubMed

    Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin

    2015-10-01

    Eye tracking has been widely used in studying the eye behavior of surgeons in the past decade. Most eye-tracking data are reported in a 2-dimensional (2D) fashion, and data describing surgeons' stereoperception behaviors are often missing. With the introduction of stereoscopes in laparoscopic procedures, there is an increasing need for studying the depth perception of surgeons under 3D image-guided surgery. We developed a new algorithm for the computation of convergence points in stereovision by measuring surgeons' interpupillary distance, the distance to the view target, and the difference between the gaze locations of the 2 eyes. To test the feasibility of our new algorithm, we recruited 10 individuals to watch stereograms using binocular disparity and asked them to develop stereoperception using a cross-eyed viewing technique. Participants' eye motions were recorded by the Tobii eye tracker while they performed the trials. Convergence points between normal and stereo-viewing conditions were computed using the developed algorithm. All 10 participants were able to develop stereovision after a short period of training. During stereovision, participants' eye convergence points were 14 ± 1 cm in front of their eyes, which was significantly closer than the convergence points under the normal viewing condition (77 ± 20 cm). By applying our method of calculating convergence points using eye tracking, we were able to elicit the eye movement patterns of human operators between the normal and stereovision conditions. Knowledge from this study can be applied to the design of surgical visual systems, with the goal of improving surgical performance and patient safety. © The Author(s) 2015.
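
A plausible geometric core for the convergence-point computation described above, intersecting the two gaze rays by similar triangles from the interpupillary distance, viewing distance, and on-screen gaze separation, is sketched below. This is one natural formulation of the stated inputs, not necessarily the authors' exact algorithm.

```python
def convergence_distance(ipd, screen_dist, gaze_sep):
    """Distance from the eyes to the binocular convergence point.

    ipd:         interpupillary distance
    screen_dist: distance from the eyes to the display plane
    gaze_sep:    signed separation between the two eyes' gaze points on the
                 display (positive when the gaze lines are crossed)
    All lengths in the same unit. Intersecting the two gaze rays by similar
    triangles gives the distance below.
    """
    return ipd * screen_dist / (ipd + gaze_sep)
```

With gaze_sep = 0 both eyes fixate the same screen point and the convergence distance equals the viewing distance. With a typical ipd of 6.5 cm, a 60 cm viewing distance and a crossed separation of about 20 cm, the formula places the convergence point roughly 15 cm in front of the eyes, in the range the study reports for stereovision.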

  2. Electronic eye occluder with time-counting and reflection control

    NASA Astrophysics Data System (ADS)

    Karitans, V.; Ozolinsh, M.; Kuprisha, G.

    2008-09-01

    In pediatric ophthalmology, 2-3% of all children are affected by a visual pathology, amblyopia. It develops if a clear image is not presented to the retina during an early stage of the development of the visual system. A common way of treating this pathology is to cover the better-seeing eye to force the "lazy" eye to learn to see. However, children are often reluctant to wear such an occluder because they are ashamed or simply find it inconvenient. This makes it necessary to find a way to track the occlusion regime, because poor treatment results suggest that the actual occlusion regime is not the one the optometrist recommended. We designed an electronic eye occluder that allows the occlusion regime to be tracked. We employ a DS1302 real-time clock providing time information from seconds to years. Data are stored in the internal EEPROM of the microcontroller. The MCU (PIC16F676) switches on only if a mechanical switch is closed and the temperature has reached a satisfactory level. Occlusion is registered as the interval between the moments when the infrared signal appears and disappears.
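
The logging scheme described above, where an occlusion interval starts when the infrared signal appears and ends when it disappears, can be sketched in host-side analysis code as follows. The event representation is a hypothetical one for illustration, not the device firmware.

```python
def total_occlusion_seconds(events):
    """Sum worn time from a chronological list of (timestamp_s, ir_present) events."""
    total = 0.0
    started = None
    for t, ir_present in events:
        if ir_present and started is None:
            started = t           # occluder put on: IR signal appears
        elif not ir_present and started is not None:
            total += t - started  # occluder taken off: IR signal disappears
            started = None
    return total
```

Comparing the summed wear time against the prescribed occlusion regime is what lets the optometrist verify compliance.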

  3. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar to those for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Visual Routines for Extracting Magnitude Relations

    ERIC Educational Resources Information Center

    Michal, Audrey L.; Uttal, David; Shah, Priti; Franconeri, Steven L.

    2016-01-01

    Linking relations described in text with relations in visualizations is often difficult. We used eye tracking to measure the optimal way to extract such relations in graphs, college students, and young children (6- and 8-year-olds). Participants compared relational statements ("Are there more blueberries than oranges?") with simple…

  5. Eye Detection and Tracking for Intelligent Human Computer Interaction

    DTIC Science & Technology

    2006-02-01

    Meer, P. and Weiss, I., "Smoothed Differentiation Filters for Images", Journal of Visual Communication and Image Representation, 3(1):58-72, 1992.

  6. "What Are You Looking At?" An Eye Movement Exploration in Science Text Reading

    ERIC Educational Resources Information Center

    Hung, Yueh-Nu

    2014-01-01

    The main purpose of this research was to investigate how Taiwanese grade 6 readers selected and used information from different print (main text, headings, captions) and visual elements (decorational, representational, interpretational) to comprehend a science text through tracking their eye movement behaviors. Six grade 6 students read a double…

  7. Reading Polymorphemic Dutch Compounds: Toward a Multiple Route Model of Lexical Processing

    ERIC Educational Resources Information Center

    Kuperman, Victor; Schreuder, Robert; Bertram, Raymond; Baayen, R. Harald

    2009-01-01

    This article reports an eye-tracking experiment with 2,500 polymorphemic Dutch compounds presented in isolation for visual lexical decision while readers' eye movements were registered. The authors found evidence that both full forms of compounds ("dishwasher") and their constituent morphemes (e.g., "dish," "washer," "er") and morphological…

  8. The seam visual tracking method for large structures

    NASA Astrophysics Data System (ADS)

    Bi, Qilin; Jiang, Xiaomin; Liu, Xiaoguang; Cheng, Taobo; Zhu, Yulong

    2017-10-01

    In this paper, a compact and flexible weld seam visual tracking method is proposed. First, because a fixed tracking height can cause interference between the vision device and the work-piece to be welded, a weld vision system with a compact structure and an adjustable tracking height is developed. Second, by analyzing the relative spatial pose of the camera, the laser and the work-piece to be welded, and by applying the theory of relative geometric imaging, a mathematical model relating image feature parameters to the three-dimensional trajectory of the assembly gap to be welded is established. Third, the visual imaging parameters of the line-structured light are optimized through experiments on the weld structure. Fourth, the line-structured-light image is disturbed by scattering in bright areas of the metal and by bright surface scratches, and these disturbances seriously affect computational efficiency; an algorithm based on the human visual attention mechanism is therefore used to extract the weld features efficiently and stably. Finally, experiments verify that the proposed compact and flexible weld tracking method achieves a tracking accuracy of 0.5 mm when tracking large structural parts, giving it a wide range of industrial application prospects.

  9. Visual attention and stability

    PubMed Central

    Mathôt, Sebastiaan; Theeuwes, Jan

    2011-01-01

    In the present review, we address the relationship between attention and visual stability. Even though with each eye, head and body movement the retinal image changes dramatically, we perceive the world as stable and are able to perform visually guided actions. However, visual stability is not as complete as introspection would lead us to believe. We attend to only a few items at a time and stability is maintained only for those items. There appear to be two distinct mechanisms underlying visual stability. The first is a passive mechanism: the visual system assumes the world to be stable, unless there is a clear discrepancy between the pre- and post-saccadic image of the region surrounding the saccade target. This is related to the pre-saccadic shift of attention, which allows for an accurate preview of the saccade target. The second is an active mechanism: information about attended objects is remapped within retinotopic maps to compensate for eye movements. The locus of attention itself, which is also characterized by localized retinotopic activity, is remapped as well. We conclude that visual attention is crucial in our perception of a stable world. PMID:21242140

  10. Neural Correlates of Fixation Duration during Real-world Scene Viewing: Evidence from Fixation-related (FIRE) fMRI.

    PubMed

    Henderson, John M; Choi, Wonil

    2015-06-01

    During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.

  11. Hawk Eyes I: Diurnal Raptors Differ in Visual Fields and Degree of Eye Movement

    PubMed Central

    O'Rourke, Colleen T.; Hall, Margaret I.; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-01-01

    Background Different strategies to search and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. Methodology/Principal Findings We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but an intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and a high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax, whereas their low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. Conclusions We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. 
Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while perching. PMID:20877645

  12. Hawk eyes I: diurnal raptors differ in visual fields and degree of eye movement.

    PubMed

    O'Rourke, Colleen T; Hall, Margaret I; Pitlik, Todd; Fernández-Juricic, Esteban

    2010-09-22

    Different strategies to search and detect prey may place specific demands on sensory modalities. We studied visual field configuration, degree of eye movement, and orbit orientation in three diurnal raptors belonging to the Accipitridae and Falconidae families. We used an ophthalmoscopic reflex technique and an integrated 3D digitizer system. We found inter-specific variation in visual field configuration and degree of eye movement, but not in orbit orientation. Red-tailed Hawks have relatively small binocular areas (∼33°) and wide blind areas (∼82°), but an intermediate degree of eye movement (∼5°), which underscores the importance of lateral vision rather than binocular vision to scan for distant prey in open areas. Cooper's Hawks have relatively wide binocular fields (∼36°), small blind areas (∼60°), and a high degree of eye movement (∼8°), which may increase visual coverage and enhance prey detection in closed habitats. Additionally, we found that Cooper's Hawks can visually inspect items held in the tip of the bill, which may facilitate food handling. American Kestrels have intermediate-sized binocular and lateral areas that may be used in prey detection at different distances through stereopsis and motion parallax, whereas their low degree of eye movement (∼1°) may help stabilize the image when hovering above prey before an attack. We conclude that: (a) there are between-species differences in visual field configuration in these diurnal raptors; (b) these differences are consistent with prey searching strategies and degree of visual obstruction in the environment (e.g., open and closed habitats); (c) variations in the degree of eye movement between species appear associated with foraging strategies; and (d) the size of the binocular and blind areas in hawks can vary substantially due to eye movements. Inter-specific variation in visual fields and eye movements can influence behavioral strategies to visually search for and track prey while perching.

  13. Language-driven anticipatory eye movements in virtual reality.

    PubMed

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  14. A comparison study of visually stimulated brain-computer and eye-tracking interfaces

    NASA Astrophysics Data System (ADS)

    Suefusa, Kaori; Tanaka, Toshihisa

    2017-06-01

    Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.
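
The information transfer rate used to compare the two interfaces is commonly computed with the standard Wolpaw formula; the sketch below assumes that definition, which the abstract does not spell out.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Information transfer rate in bits per minute (standard Wolpaw definition)."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)                  # perfect selection carries log2(N) bits
    elif p <= 1.0 / n:
        bits = 0.0                           # at or below chance, no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * (60.0 / trial_seconds)     # bits per selection x selections per minute
```

For the four-target task in this study, a perfectly accurate selection every 2 s would correspond to 60 bits/min, while chance-level accuracy (25%) yields 0 bits/min regardless of speed.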

  15. Infant viewing of social scenes is under genetic control and atypical in autism

    PubMed Central

    Constantino, John N.; Kennon-McGill, Stefanie; Weichselbaum, Claire; Marrus, Natasha; Haider, Alyzeh; Glowinski, Anne L.; Gillespie, Scott; Klaiman, Cheryl; Klin, Ami; Jones, Warren

    2017-01-01

    Long before infants reach, crawl, or walk, they explore the world by looking: they look to learn and to engage, giving preferential attention to social stimuli including faces, face-like stimuli, and biological motion. This capacity, social visual engagement, shapes typical infant development from birth and is pathognomonically impaired in children affected by autism. Here we show that variation in viewing of social scenes, including levels of preferential attention and the timing, direction, and targeting of individual eye movements, is strongly influenced by genetic factors, with effects directly traceable to the active seeking of social information. In a series of eye-tracking experiments conducted with 338 toddlers, including 166 epidemiologically-ascertained twins, 88 non-twins with autism, and 84 singleton controls, we find high monozygotic twin-twin concordance (0.91) and relatively low dizygotic concordance (0.35). Moreover, the measures that are most highly heritable, preferential attention to eye and mouth regions of the face, are also those that are differentially diminished in children with autism (χ² = 64.03, P < 0.0001). These results, which implicate social visual engagement as a neurodevelopmental endophenotype not only for autism but for population-wide variation in social-information-seeking, reveal a means of human biological niche construction, with phenotypic differences emerging from the interaction of individual genotypes with early life experience. PMID:28700580

  16. [A tracking function of human eye in microgravity and during readaptation to earth's gravity].

    PubMed

    Kornilova, L N

    2001-01-01

    The paper summarizes results of electro-oculography of all modes of visual tracking: fixation eye movements (saccades), smooth pursuit of linearly, pendulum-like and circularly moving point stimuli, and pursuit of vertically moving foveoretinal optokinetic stimuli, and presents values of thresholds and amplification coefficients of the optokinetic nystagmus during tracking of linear movement of foveoretinal optokinetic stimuli. Investigations were performed aboard the Salyut and Mir space stations with the participation of 31 cosmonauts, of whom 27 made long-term (76 to 438 day) and 4 made short-term (7 to 9 day) missions. It was shown that in space flight the saccadic structure within the tracking reaction does not change; yet corrective movements (additional microsaccades to achieve tracking) appeared in 47% of observations at the onset and in 76% of observations in months 3 to 6 of space flight. After landing, the structure of vertical saccades was found altered in half the cosmonauts. Both in and after flight, reverse nystagmus was present along with gaze nystagmus during static saccades in 22% of observations (7 cosmonauts). The amplitude of tracking of vertically, diagonally or circularly moving stimuli was significantly reduced as time on mission increased. Early in flight (40% of the cosmonauts) and shortly afterwards (21% of the cosmonauts) the structure of the smooth tracking reaction broke down entirely, that is, the eye followed the stimulus with micro- or macrosaccades. The structure of smooth eye tracking recovered on flight days 6-8 and on postflight days 3-4. However, in 46% of the cosmonauts on long-term missions the structure of smooth eye tracking was periodically disturbed, i.e. smooth tracking was replaced by saccadic tracking.

  17. How virtual reality works: illusions of vision in "real" and virtual environments

    NASA Astrophysics Data System (ADS)

    Stark, Lawrence W.

    1995-04-01

    Visual illusions abound in normal vision--illusions of clarity and completeness, of continuity in time and space, of presence and vivacity--and are part and parcel of the visual world in which we live. These illusions are discussed in terms of the human visual system, with its high-resolution fovea, moved from point to point in the visual scene by rapid saccadic eye movements (EMs). This sampling of visual information is supplemented by a low-resolution, wide peripheral field of view, especially sensitive to motion. Cognitive-spatial models controlling perception, imagery, and 'seeing' also control the EMs that shift the fovea in the Scanpath mode. These illusions provide for presence, the sense of being within an environment. They equally well lead to 'Telepresence,' the sense of being within a virtual display, especially if the operator is intensely interacting within an eye-hand and head-eye human-machine interface that provides for congruent visual and motor frames of reference. Interaction, immersion, and interest compel telepresence; intuitive functioning and engineered information flows can optimize human adaptation to the artificial new world of virtual reality, as virtual reality expands into entertainment, simulation, telerobotics, scientific visualization and other professional work.

  18. Unintentional Activation of Translation Equivalents in Bilinguals Leads to Attention Capture in a Cross-Modal Visual Task

    PubMed Central

    Singh, Niharika; Mishra, Ramesh Kumar

    2015-01-01

    Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High and low proficient Hindi-English speaking bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings was phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor than when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention, leading to interference in goal-directed action in the visual domain. PMID:25775184

  19. Eye movements and manual interception of ballistic trajectories: effects of law of motion perturbations and occlusions.

    PubMed

    Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco

    2015-02-01

    Manual interceptions are known to depend critically on the integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0 g) or hypergravity (2 g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of the amounts of time they gazed at different visual targets and the overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled oculomotor and interceptive behavior differently depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response.

  20. Clinical application of eye movement tasks as an aid to understanding Parkinson's disease pathophysiology.

    PubMed

    Fukushima, Kikuro; Fukushima, Junko; Barnes, Graham R

    2017-05-01

    Parkinson's disease (PD) is a progressive neurodegenerative disorder of the basal ganglia. Most PD patients suffer from somatomotor and oculomotor disorders. The oculomotor system facilitates obtaining accurate information from the visual world. If a target moves slowly in the fronto-parallel plane, tracking eye movements occur that consist primarily of smooth-pursuit interspersed with corrective saccades. Efficient smooth-pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Although pursuit impairment, e.g., prolonged latency or low gain (eye velocity/target velocity), is well known in PD, normal aging alone produces similar changes. In this article, we first briefly review some basic features of smooth-pursuit, then review recent results showing the specific nature of impaired pursuit in PD using a cue-dependent memory-based smooth-pursuit task. This task was initially used in monkeys to separate two major components of prediction (image-motion direction memory and movement preparation), and neural correlates were examined in major pursuit pathways. Most PD patients possessed normal cue-information memory, but extra-retinal mechanisms for pursuit preparation and execution were dysfunctional. A minority of PD patients had abnormal cue-information memory or difficulty in understanding the task. Some PD patients with normal cue-information memory changed strategy to initiate smooth tracking. Strategy changes were also observed to compensate for impaired pursuit during whole-body rotation while the target moved with the head. We discuss PD pathophysiology by comparing eye movement task results with neuropsychological and motor symptom evaluations of individual patients, and further with monkey results, and suggest possible neural circuits for these functions/dysfunctions.

  1. What Does the Eye See? Reading Online Primary Source Photographs in History

    ERIC Educational Resources Information Center

    Levesque, Stephane; Ng-A-Fook, Nicholas; Corrigan, Julie

    2014-01-01

    This exploratory study looks at how a sample of preservice teachers and historians read visuals in the context of school history. The participants used eye tracking technology and think-aloud protocol, as they examined a series of online primary source photographs from a virtual exhibit. Voluntary participants (6 students and 2 professional…

  2. Photorefractive keratectomy with a small spot laser and tracker.

    PubMed

    Pallikaris, I G; Koufala, K I; Siganos, D S; Papadaki, T G; Katsanevaki, V J; Tourtsan, V; McDonald, M B

    1999-01-01

    The Autonomous Technologies LADARVision excimer laser system utilizes an eye tracking mechanism and a small spot for photorefractive keratectomy. One hundred and two eyes of 102 patients were treated for -1.50 to -6.25 D of spherical myopia at the spectacle plane using a 6-mm diameter ablation zone. One year follow-up was available for 93 eyes (91%). Uncorrected visual acuity for eyes treated for distance vision was 20/40 or better in 99% (n = 90), and 20/20 or better in 70% (n = 64) of eyes at 12 months. Spectacle-corrected visual acuity was 20/25 or better in all 92 eyes reported; no eye lost more than 2 lines of spectacle-corrected visual acuity, and only 1 eye (1.0%) experienced a loss of 2 lines (20/12.5 to 20/20) at 1 year. The refractive result was within +/- 0.50 D of the desired correction in 75% (n = 70), and within +/- 1.00 D in 93% (n = 86) of eyes at 12 months. Refractive stability was achieved between 3 and 6 months. Corneal haze was graded as trace or less in 100% of the 93 eyes. No significant reductions were noted in contrast sensitivity or endothelial cell density. Patients treated with the Autonomous Technologies LADARVision excimer laser system for -1.50 to -6.25 D of spherical myopia with 1 year follow-up had uncorrected visual acuity of 20/20 or better in 70%, no significant loss of spectacle-corrected visual acuity, no reduction of endothelial cell density or contrast sensitivity, and low levels of corneal haze.

  3. Walking simulator for evaluation of ophthalmic devices

    NASA Astrophysics Data System (ADS)

    Barabas, James; Woods, Russell L.; Peli, Eli

    2005-03-01

    Simulating mobility tasks in a virtual environment reduces risk for research subjects, and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking towards stationary obstacles. The subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of the perceived safe passing distance and the variability of the responses. The system also simulates visual field defects using head and eye tracking, enabling better understanding of the impact of visual field loss. Technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation containing a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
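
    The psychometric-function analysis described above can be sketched in code. This is a minimal illustration, not the study's actual procedure: it assumes a logistic psychometric function fitted by Newton's method (IRLS), and the function names and synthetic data below are ours.

```python
import numpy as np

def fit_psychometric(x, y, iters=50):
    """Fit P(response) = logistic(a + b*x) by Newton's method (IRLS).
    x: stimulus values (e.g. passing distances); y: binary responses
    or response proportions in [0, 1]."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    X = np.column_stack([np.ones_like(x), x])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        weights = p * (1.0 - p) + 1e-9           # IRLS weights
        grad = X.T @ (y - p)                     # log-likelihood gradient
        hess = (X * weights[:, None]).T @ X      # Fisher information
        w = w + np.linalg.solve(hess, grad)
    intercept, slope = w
    threshold = -intercept / slope               # 50% point of the curve
    return threshold, slope

# Synthetic observer with a known 50% point at x = 1.0 and unit slope
x = np.linspace(-4.0, 6.0, 101)
y = 1.0 / (1.0 + np.exp(-(x - 1.0)))
threshold, slope = fit_psychometric(x, y)
```

    Fitting exact response probabilities recovers the generating parameters; with real binary yes/no responses the same maximum-likelihood fit yields the perceived safe passing distance (the 50% point) and, via the slope, the variability of the responses.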

  4. Measuring eye movements during locomotion: filtering techniques for obtaining velocity signals from a video-based eye monitor

    NASA Technical Reports Server (NTRS)

    Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.

    1996-01-01

    Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broadband noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did normal subjects.
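
    The combined filtering scheme evaluated in this record can be sketched as follows. This is a hedged illustration assuming a running median followed by a moving average; the window lengths and function names are our choices, not the paper's.

```python
import numpy as np

def median_filter(x, k):
    """Running median with odd window k; edges use reflection padding."""
    assert k % 2 == 1, "window length must be odd"
    pad = k // 2
    xp = np.pad(np.asarray(x, float), pad, mode="reflect")
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])

def moving_average(x, k):
    """Moving average with odd window k; edges use reflection padding."""
    assert k % 2 == 1, "window length must be odd"
    pad = k // 2
    xp = np.pad(np.asarray(x, float), pad, mode="reflect")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def denoise_velocity(v, k_median=5, k_mean=5):
    """Median pass first (removes impulsive spikes), then a moving
    average to attenuate the remaining broadband noise."""
    return moving_average(median_filter(v, k_median), k_mean)
```

    The median pass removes impulsive spikes that a purely linear filter would smear into the signal, which is why the combination can outperform a moving average alone on spiky, broadband eye-velocity noise.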

  5. Experimental tests of a superposition hypothesis to explain the relationship between the vestibuloocular reflex and smooth pursuit during horizontal combined eye-head tracking in humans

    NASA Technical Reports Server (NTRS)

    Huebner, W. P.; Leigh, R. J.; Seidman, S. H.; Thomas, C. W.; Billian, C.; DiScenna, A. O.; Dell'Osso, L. F.

    1992-01-01

    1. We used a modeling approach to test the hypothesis that, in humans, the smooth pursuit (SP) system provides the primary signal for cancelling the vestibuloocular reflex (VOR) during combined eye-head tracking (CEHT) of a target moving smoothly in the horizontal plane. Separate models for SP and the VOR were developed. The optimal values of parameters of the two models were calculated using measured responses of four subjects to trials of SP and the visually enhanced VOR. After optimal parameter values were specified, each model generated waveforms that accurately reflected the subjects' responses to SP and vestibular stimuli. The models were then combined into a CEHT model wherein the final eye movement command signal was generated as the linear summation of the signals from the SP and VOR pathways. 2. The SP-VOR superposition hypothesis was tested using two types of CEHT stimuli, both of which involved passive rotation of subjects in a vestibular chair. The first stimulus consisted of a "chair brake" or sudden stop of the subject's head during CEHT; the visual target continued to move. The second stimulus consisted of a sudden change from the visually enhanced VOR to CEHT ("delayed target onset" paradigm); as the vestibular chair rotated past the angular position of the stationary visual stimulus, the latter started to move in synchrony with the chair. Data collected during experiments that employed these stimuli were compared quantitatively with predictions made by the CEHT model. 3. During CEHT, when the chair was suddenly and unexpectedly stopped, the eye promptly began to move in the orbit to track the moving target. Initially, however, gaze velocity did not completely match target velocity; this finally occurred approximately 100 ms after the brake onset. The model did predict the prompt onset of eye-in-orbit motion after the brake, but it did not predict that gaze velocity would initially be only approximately 70% of target velocity. One possible explanation for this discrepancy is that VOR gain can be dynamically modulated and, during sustained CEHT, it may assume a lower value. Consequently, during CEHT, a smaller-amplitude SP signal would be needed to cancel the lower-gain VOR. This reduction of the SP signal could account for the attenuated tracking response observed immediately after the brake. We found evidence for the dynamic modulation of VOR gain by noting differences in responses to the onset and offset of head rotation in trials of the visually enhanced VOR. (ABSTRACT TRUNCATED AT 400 WORDS)
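
    The gain-modulation explanation in point 3 can be made concrete with toy arithmetic. This is an illustrative sketch of the linear-summation scheme only, with assumed names and numbers; the 0.7 gain below is chosen solely to reproduce the approximately 70% figure reported above.

```python
def gaze_velocity_after_brake(target_vel, head_vel, vor_gain):
    """Linear superposition: eye-in-head command = SP command + VOR command.

    During steady combined eye-head tracking the SP command is sized so
    that, summed with the (possibly reduced-gain) VOR, gaze velocity
    matches target velocity:
        sp_cmd - vor_gain * head_vel + head_vel = target_vel
    """
    sp_cmd = target_vel - (1.0 - vor_gain) * head_vel
    # At the brake, head velocity (and hence the VOR signal) vanishes
    # abruptly, so the lingering SP command alone sets the initial
    # gaze velocity.
    return sp_cmd
```

    With unity VOR gain the post-brake gaze velocity equals target velocity; with a dynamically reduced gain of 0.7 during CEHT (target and head moving together at 30 deg/s), the same scheme predicts an initial post-brake gaze velocity of 21 deg/s, i.e. 70% of target velocity.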

  6. Using eye tracking technology to compare the effectiveness of malignant hyperthermia cognitive aid design.

    PubMed

    King, Roderick; Hanhan, Jaber; Harrison, T Kyle; Kou, Alex; Howard, Steven K; Borg, Lindsay K; Shum, Cynthia; Udani, Ankeet D; Mariano, Edward R

    2018-05-15

    Malignant hyperthermia is a rare but potentially fatal complication of anesthesia, and several different cognitive aids designed to facilitate a timely and accurate response to this crisis currently exist. Eye tracking technology can measure voluntary and involuntary eye movements, gaze fixation within an area of interest, and speed of visual response and has been used to a limited extent in anesthesiology. With eye tracking technology, we compared the accessibility of five malignant hyperthermia cognitive aids by collecting gaze data from twelve volunteer participants. Recordings were reviewed and annotated to measure the time required for participants to locate objects on the cognitive aid to provide an answer; cumulative time to answer was the primary outcome. For the primary outcome, there were differences detected between cumulative time to answer survival curves (P < 0.001). Participants demonstrated the shortest cumulative time to answer when viewing the Society for Pediatric Anesthesia (SPA) cognitive aid compared to four other publicly available cognitive aids for malignant hyperthermia, and this outcome was not influenced by the anesthesiologists' years of experience. This is the first study to utilize eye tracking technology in a comparative evaluation of cognitive aid design, and our experience suggests that there may be additional applications of eye tracking technology in healthcare and medical education. Potentially advantageous design features of the SPA cognitive aid include a single page, linear layout, and simple typescript with minimal use of single color blocking.

  7. Eye movements reveal the time-course of anticipating behaviour based on complex, conflicting desires.

    PubMed

    Ferguson, Heather J; Breheny, Richard

    2011-05-01

    The time-course of representing others' perspectives is inconclusive across the currently available models of ToM processing. We report two visual-world studies investigating how knowledge about a character's basic preferences (e.g. Tom's favourite colour is pink) and higher-order desires (his wish to keep this preference secret) compete to influence online expectations about subsequent behaviour. Participants' eye movements around a visual scene were tracked while they listened to auditory narratives. While clear differences in anticipatory visual biases emerged between conditions in Experiment 1, post-hoc analyses testing the strength of the relevant biases suggested a discrepancy in the time-course of predicting appropriate referents within the different contexts. Specifically, predictions to the target emerged very early when there was no conflict between the character's basic preferences and higher-order desires, but appeared to be relatively delayed when comprehenders were provided with conflicting information about that character's desire to keep a secret. However, a second experiment demonstrated that this apparent 'cognitive cost' in inferring behaviour based on higher-order desires was in fact driven by low-level features between the context sentence and visual scene. Taken together, these results suggest that healthy adults are able to make complex higher-order ToM inferences without the need to call on costly cognitive processes. Results are discussed relative to previous accounts of ToM and language processing. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Buildup of spatial information over time and across eye-movements.

    PubMed

    Zimmermann, Eckart; Morrone, M Concetta; Burr, David C

    2014-12-15

    To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and also attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Eye Tracking of Occluded Self-Moved Targets: Role of Haptic Feedback and Hand-Target Dynamics.

    PubMed

    Danion, Frederic; Mathew, James; Flanagan, J Randall

    2017-01-01

    Previous studies on smooth pursuit eye movements have shown that humans can continue to track the position of their hand, or a target controlled by the hand, after it is occluded, thereby demonstrating that arm motor commands contribute to the prediction of target motion driving pursuit eye movements. Here, we investigated this predictive mechanism by manipulating both the complexity of the hand-target mapping and the provision of haptic feedback. Two hand-target mappings were used, either a rigid (simple) one in which hand and target motion matched perfectly or a nonrigid (complex) one in which the target behaved as a mass attached to the hand by means of a spring. Target animation was obtained by asking participants to oscillate a lightweight robotic device that provided (or not) haptic feedback consistent with the target dynamics. Results showed that as long as 7 s after target occlusion, smooth pursuit continued to be the main contributor to total eye displacement (∼60%). However, the accuracy of eye-tracking varied substantially across experimental conditions. In general, eye-tracking was less accurate under the nonrigid mapping, as reflected by higher positional and velocity errors. Interestingly, haptic feedback helped to reduce the detrimental effects of target occlusion when participants used the nonrigid mapping, but not when they used the rigid one. Overall, we conclude that the ability to maintain smooth pursuit in the absence of visual information can extend to complex hand-target mappings, but the provision of haptic feedback is critical for the maintenance of accurate eye-tracking performance.

  11. Attention and Recall of Point-of-sale Tobacco Marketing: A Mobile Eye-Tracking Pilot Study.

    PubMed

    Bansal-Travers, Maansi; Adkison, Sarah E; O'Connor, Richard J; Thrasher, James F

    2016-01-01

    As tobacco advertising restrictions have increased, the retail 'power wall' behind the counter has become increasingly valuable for marketing tobacco products. The primary objectives of this pilot study were 3-fold: (1) evaluate the attention paid/fixations on the area behind the cash register where tobacco advertising is concentrated and tobacco products are displayed in a real-world setting, (2) evaluate the duration (dwell-time) of these fixations, and (3) evaluate the recall of advertising displayed on the tobacco power wall. Data were collected from 13 smokers (S) and 12 susceptible or non-daily smokers (SS), aged 18-30, in a mobile eye-tracking study. Mobile eye-tracking technology records the orientation (fixation) and duration (dwell-time) of visual attention. Participants were randomized to one of three purchase tasks at a convenience store: Candy bar Only (CO; N = 10), Candy bar + Specified cigarette Brand (CSB; N = 6), and Candy bar + cigarette Brand of their Choice (CBC; N = 9). A post-session survey evaluated recall of tobacco marketing. Key outcomes were fixations and dwell-time on the cigarette displays at the point-of-sale. Participants spent a median time of 44 seconds during the standardized time evaluated, and nearly three-quarters (72%) fixated on the power wall during their purchase, regardless of smoking status (S: 77%, SS: 67%) or purchase task (CO: 44%, CSB: 71%, CBC: 100%). In the post-session survey, nearly all participants (96%) indicated they noticed a cigarette brand and 64% were able to describe a specific part of the tobacco wall or recall a promotional offer. Consumers are exposed to point-of-sale tobacco marketing regardless of smoking status. FDA should consider regulations that limit exposure to point-of-sale tobacco marketing among consumers.

  12. Eye gaze tracking based on the shape of pupil image

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng

    2018-01-01

    The eye tracker is an important instrument for research in psychology, widely used in studies of attention, visual perception, reading and other fields. Because of its potential role in human-computer interaction, eye gaze tracking has been a topic of research in many fields over the last decades. Nowadays, with the development of technology, non-intrusive methods are more and more welcomed. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of human eyes without any intrusive devices such as a hat or a pair of glasses. After applying an ellipse fitting algorithm to the pupil image, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is that using the shape of the pupil avoids much more complicated algorithms. The method needs only one camera, without infrared illumination, to detect the changes in the shape of the pupil that determine the direction of gaze; no additional equipment is required.
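
    The geometric idea, that a circular pupil viewed off-axis projects as an ellipse whose minor-to-major axis ratio equals the cosine of the gaze angle, can be sketched as follows. This is a minimal illustration using an algebraic least-squares conic fit; the function names and the simple foreshortening model are our assumptions, not the paper's algorithm.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,
    found as the unit vector minimizing the algebraic residual."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                     # coefficients (a, b, c, d, e, f)

def axis_ratio(coeffs):
    """Minor/major axis ratio from the conic's quadratic-form eigenvalues
    (the ratio of eigenvalues equals the ratio of squared axis lengths)."""
    a, b, c, _, _, _ = coeffs
    lam = np.abs(np.linalg.eigvalsh(np.array([[a, b / 2], [b / 2, c]])))
    return np.sqrt(lam.min() / lam.max())

def gaze_angle_deg(ratio):
    # Foreshortening model: a circular pupil viewed off-axis by angle
    # theta appears as an ellipse with minor/major = cos(theta).
    return np.degrees(np.arccos(np.clip(ratio, 0.0, 1.0)))
```

    In practice the contour points would come from a segmented pupil image; here the fit is exercised on synthetic boundary points, and the gaze direction (not just its magnitude) would additionally require the ellipse's orientation.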

  13. Looking for ideas: Eye behavior during goal-directed internally focused cognition

    PubMed Central

    Walcher, Sonja; Körner, Christof; Benedek, Mathias

    2017-01-01

    Humans have a highly developed visual system, yet we spend a high proportion of our time awake ignoring the visual world and attending to our own thoughts. The present study examined eye movement characteristics of goal-directed internally focused cognition. Deliberate internally focused cognition was induced by an idea generation task. A letter-by-letter reading task served as external task. Idea generation (vs. reading) was associated with more and longer blinks and fewer microsaccades indicating an attenuation of visual input. Idea generation was further associated with more and shorter fixations, more saccades and saccades with higher amplitudes as well as heightened stimulus-independent variation of eye vergence. The latter results suggest a coupling of eye behavior to internally generated information and associated cognitive processes, i.e. searching for ideas. Our results support eye behavior patterns as indicators of goal-directed internally focused cognition through mechanisms of attenuation of visual input and coupling of eye behavior to internally generated information. PMID:28689088

  14. Real-time lexical comprehension in young children learning American Sign Language.

    PubMed

    MacDonald, Kyle; LaMarr, Todd; Corina, David; Marchman, Virginia A; Fernald, Anne

    2018-04-16

    When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality. © 2018 John Wiley & Sons Ltd.

  15. The Impact of Salient Advertisements on Reading and Attention on Web Pages

    ERIC Educational Resources Information Center

    Simola, Jaana; Kuisma, Jarmo; Oorni, Anssi; Uusitalo, Liisa; Hyona, Jukka

    2011-01-01

    Human vision is sensitive to salient features such as motion. Therefore, animation and onset of advertisements on Websites may attract visual attention and disrupt reading. We conducted three eye tracking experiments with authentic Web pages to assess whether (a) ads are efficiently ignored, (b) ads attract overt visual attention and disrupt…

  16. The Interplay between Methodologies, Tasks and Visualisation Formats in the Study of Visual Expertise

    ERIC Educational Resources Information Center

    Boucheix, Jean-Michel

    2017-01-01

    This article introduces this special issue of "Frontline Learning Research." The first paper offers a methodological guide using Ericsson & Smith's (1991) "expert performance approach." This is followed by three papers that analyze the use of eye tracking in visual expertise models, and a paper reviewing the use of methods…

  17. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  18. Discourse intervention strategies in Alzheimer's disease: Eye-tracking and the effect of visual cues in conversation.

    PubMed

    Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth

    2014-01-01

    The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and the face of the listener. Results suggest that interventions using visual cues in the form of images and written information are useful to improve discourse informativeness in AD. This study demonstrated the potential of using images and short written messages as means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based in the use of these compensation strategies for AD patients and their family members and friends.

  19. Benefits of Motion in Animated Storybooks for Children’s Visual Attention and Story Comprehension. An Eye-Tracking Study

    PubMed Central

    Takacs, Zsofia K.; Bus, Adriana G.

    2016-01-01

    The present study provides experimental evidence regarding 4–6-year-old children’s visual processing of animated versus static illustrations in storybooks. Thirty-nine participants listened to an animated and a static book, each three times, while eye movements were registered with an eye-tracker. Outcomes corroborate the hypothesis that motion specifically is what attracts children’s attention while looking at illustrations. It is proposed that animated illustrations that are well matched to the text of the story guide children to those parts of the illustration that are important for understanding the story. This may explain why animated books resulted in better comprehension than static books. PMID:27790183

  20. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking require a tedious calibration process in which subjects must fixate on one or several specific points in space. Such cooperation is hard to obtain, especially from children and human infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is presented as the gaze feature. Then, Gaussian mixture models (GMM) are employed for gaze-behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
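    The abstract names PCCR-vector features and a GMM trained on labeled data. As a minimal sketch of that classification step, the snippet below fits one Gaussian per labeled gaze class (a one-component-per-class special case of a mixture) and classifies a new feature vector by maximum log-likelihood; the 2-D "PCCR vector" features here are synthetic stand-ins, not the paper's data.

```python
import numpy as np

def fit_class_gaussians(X, y):
    """Fit one Gaussian (mean, covariance) per labeled gaze-behavior class."""
    params = {}
    for label in np.unique(y):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def log_likelihood(x, mean, cov):
    """Log density of a multivariate Gaussian at x."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.inv(cov) @ d + logdet + len(x) * np.log(2 * np.pi))

def classify(x, params):
    """Assign x to the class with the highest Gaussian log-likelihood."""
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

# Hypothetical 2-D PCCR-vector features for two gaze behaviors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
               rng.normal([2, 2], 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
params = fit_class_gaussians(X, y)
print(classify(np.array([1.9, 2.1]), params))  # prints 1
```

    A full GMM (several components per class, fit by EM) generalizes this by replacing each single Gaussian with a weighted sum of Gaussians.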

  1. SET: a pupil detection method using sinusoidal approximation

    PubMed Central

    Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili

    2015-01-01

    Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the images these devices produce. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk).  PMID:25914641
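    The core of a sinusoidal-approximation pupil fit can be sketched as follows. This is not SET's actual implementation (which is the MATLAB toolkit above); it is a minimal illustration of the idea of parametrizing the pupil boundary as sinusoids of the polar angle, x = cx + r·cos θ and y = cy + r·sin θ, and solving for center and radius by linear least squares on synthetic boundary pixels.

```python
import numpy as np

def fit_pupil_circle(xs, ys):
    """Least-squares circle fit via a sinusoidal parametrization of the
    boundary: x = cx + r*cos(theta), y = cy + r*sin(theta)."""
    cx0, cy0 = xs.mean(), ys.mean()          # provisional center (centroid)
    theta = np.arctan2(ys - cy0, xs - cx0)   # polar angle of each boundary pixel
    # Two linear equations per point in the unknowns (cx, cy, r).
    n = len(xs)
    A = np.zeros((2 * n, 3))
    b = np.concatenate([xs, ys])
    A[:n, 0] = 1.0
    A[:n, 2] = np.cos(theta)
    A[n:, 1] = 1.0
    A[n:, 2] = np.sin(theta)
    cx, cy, r = np.linalg.lstsq(A, b, rcond=None)[0]
    return cx, cy, r

# Noisy synthetic pupil boundary (center (3, -1), radius 2).
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xs = 3.0 + 2.0 * np.cos(t) + rng.normal(0, 0.02, t.size)
ys = -1.0 + 2.0 * np.sin(t) + rng.normal(0, 0.02, t.size)
print(fit_pupil_circle(xs, ys))  # approximately (3.0, -1.0, 2.0)
```

    In a real pipeline the boundary pixels would come from thresholding and segmenting the eye image first; robustness to outdoor lighting is precisely what that segmentation stage must supply.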

  2. Updating visual memory across eye movements for ocular and arm motor control.

    PubMed

    Thompson, Aidan A; Henriques, Denise Y P

    2008-11-01

    Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.

  3. The Development of Expertise in Radiology: In Chest Radiograph Interpretation, "Expert" Search Pattern May Predate "Expert" Levels of Diagnostic Accuracy for Pneumothorax Identification.

    PubMed

    Kelly, Brendan S; Rainford, Louise A; Darcy, Sarah P; Kavanagh, Eoin C; Toomey, Rachel J

    2016-07-01

    Purpose To investigate the development of chest radiograph interpretation skill through medical training by measuring both diagnostic accuracy and eye movements during visual search. Materials and Methods An institutional exemption from full ethical review was granted for the study. Five consultant radiologists were deemed the reference expert group, and four radiology registrars, five senior house officers (SHOs), and six interns formed four clinician groups. Participants were shown 30 chest radiographs, 14 of which had a pneumothorax, and were asked to give their level of confidence as to whether a pneumothorax was present. Receiver operating characteristic (ROC) curve analysis was carried out on diagnostic decisions. Eye movements were recorded with a Tobii TX300 (Tobii Technology, Stockholm, Sweden) eye tracker. Four eye-tracking metrics were analyzed. Variables were compared to identify any differences between groups. All data were compared by using the Friedman nonparametric method. Results The average area under the ROC curve for the groups increased with experience (0.947 for consultants, 0.792 for registrars, 0.693 for SHOs, and 0.659 for interns; P = .009). A significant difference in diagnostic accuracy was found between consultants and registrars (P = .046). All four eye-tracking metrics decreased with experience, and there were significant differences between registrars and SHOs. Total reading time decreased with experience; it was significantly lower for registrars compared with SHOs (P = .046) and for SHOs compared with interns (P = .025). Conclusion Chest radiograph interpretation skill increased with experience, both in terms of diagnostic accuracy and visual search. The observed level of experience at which there was a significant difference was higher for diagnostic accuracy than for eye-tracking metrics. (©) RSNA, 2016 Online supplemental material is available for this article.
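    The ROC analysis above scores each reader's confidence ratings against ground truth. A minimal sketch of how an AUC is computed from such ratings (the Mann-Whitney formulation; the ratings below are hypothetical, not the study's data):

```python
import numpy as np

def auc_from_ratings(conf_present, conf_absent):
    """Area under the ROC curve from confidence ratings: the probability
    that an abnormal case is rated higher than a normal case
    (Mann-Whitney U statistic), counting ties as one half."""
    p = np.asarray(conf_present, float)[:, None]
    a = np.asarray(conf_absent, float)[None, :]
    return ((p > a).sum() + 0.5 * (p == a).sum()) / (p.size * a.size)

# Hypothetical 1-5 confidence ratings for 14 pneumothorax-present
# and 16 pneumothorax-absent radiographs.
abnormal = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3, 4, 5, 4, 3]
normal   = [1, 2, 1, 3, 2, 1, 2, 4, 1, 2, 3, 1, 2, 2, 1, 3]
print(round(auc_from_ratings(abnormal, normal), 3))  # prints 0.924
```

    An AUC of 0.5 is chance discrimination and 1.0 is perfect, which is the scale on which the group values (0.947 down to 0.659) are reported.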

  4. Learning style preferences and their influence on students' problem solving in kinematics observed by eye-tracking method

    NASA Astrophysics Data System (ADS)

    Kekule, Martina

    2017-01-01

    The article presents the eye-tracking method and its use for observing students as they solve kinematics problems, specifically multiple-choice items from the TUG-K test by Robert Beichner. In addition, students' preference for a visual way of learning is examined as a possible influential factor. The Learning Style Inventory by Dunn, Dunn & Price was administered to students in order to find out their preferences. More than 20 high school and college students, around 20 years old, took part in the research. A preference for visual learning, in contrast to the other modalities (auditory, tactile, kinesthetic), shows a very slight correlation with the total score of the test, no correlation with the average fixation duration, and slight correlations with the average fixation count per task and the average total visit duration per task.

  5. Disentangling the initiation from the response in joint attention: an eye-tracking study in toddlers with autism spectrum disorders.

    PubMed

    Billeci, L; Narzisi, A; Campatelli, G; Crifaci, G; Calderoni, S; Gagliano, A; Calzone, C; Colombi, C; Pioggia, G; Muratori, F

    2016-05-17

    Joint attention (JA), whose deficit is an early risk marker for autism spectrum disorder (ASD), has two dimensions: (1) responding to JA and (2) initiating JA. Eye-tracking technology has largely been used to investigate responding JA, but rarely to study initiating JA, especially in young children with ASD. The aim of this study was to describe the differences in the visual patterns of toddlers with ASD and those with typical development (TD) during both responding JA and initiating JA tasks. Eye-tracking technology was used to monitor the gaze of 17 children with ASD and 15 age-matched children with TD during the presentation of short video sequences involving one responding JA and two initiating JA tasks (initiating JA-1 and initiating JA-2). Gaze accuracy, transitions and fixations were analyzed. No differences were found in the responding JA task between children with ASD and those with TD, whereas, in the initiating JA tasks, different patterns of fixation and transitions were shown between the groups. These results suggest that children with ASD and those with TD show different visual patterns when they are expected to initiate joint attention but not when they respond to joint attention. We hypothesized that differences in transitions and fixations are linked to ASD impairments in visual disengagement from the face, in global scanning of the scene and in the ability to anticipate the object's action.

  6. Kinematics of Visually-Guided Eye Movements

    PubMed Central

    Hess, Bernhard J. M.; Thomassen, Jakob S.

    2014-01-01

    One of the hallmarks of an eye movement that follows Listing’s law is the half-angle rule, which says that the angular velocity of the eye tilts by half the angle of eccentricity of the line of sight relative to primary eye position. Since all visually-guided eye movements in the regime of far viewing follow Listing’s law (with the head still and upright), the question about its origin is of considerable importance. Here, we provide theoretical and experimental evidence that Listing’s law results from a unique motor strategy that allows minimizing ocular torsion while smoothly tracking objects of interest along any path in visual space. The strategy consists in compounding conventional ocular rotations in meridian planes, that is in horizontal, vertical and oblique directions (which are all torsion-free), with small linear displacements of the eye in the frontal plane. Such compound rotation-displacements of the eye can explain the kinematic paradox that the fixation point may rotate in one plane while the eye rotates in other planes. Its unique signature is the half-angle law in the position domain, which means that the rotation plane of the eye tilts by half the angle of gaze eccentricity. We show that this law does not readily generalize to the velocity domain of visually-guided eye movements because the angular eye velocity is the sum of two terms, one associated with rotations in meridian planes and one associated with displacements of the eye in the frontal plane. While the first term does not depend on eye position, the second term does. We show that compound rotation-displacements perfectly predict the average smooth kinematics of the eye during steady-state pursuit in both the position and velocity domain. PMID:24751602
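    The half-angle rule can be made concrete with standard rotation-vector kinematics (a textbook derivation sketch, not quoted from the paper). Writing the eye's orientation as a rotation vector, the angular velocity picks up a cross-product term that points out of Listing's plane:

```latex
% Rotation vector: \mathbf{r} = \tan(\rho/2)\,\hat{\mathbf{n}} for a rotation
% by angle \rho about axis \hat{\mathbf{n}} from primary position.
\boldsymbol{\omega} \;=\; \frac{2}{1+\lVert \mathbf{r}\rVert^{2}}
  \left(\dot{\mathbf{r}} \;+\; \mathbf{r}\times\dot{\mathbf{r}}\right)
```

    If \(\mathbf{r}\) stays in Listing's plane, \(\dot{\mathbf{r}}\) lies in that plane while \(\mathbf{r}\times\dot{\mathbf{r}}\) is orthogonal to it. For a movement with \(\dot{\mathbf{r}}\perp\mathbf{r}\) at gaze eccentricity \(\theta\), the ratio of out-of-plane to in-plane components is \(\lVert\mathbf{r}\rVert=\tan(\theta/2)\), so \(\boldsymbol{\omega}\) tilts out of Listing's plane by \(\theta/2\): the half-angle rule.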

  7. Neural mechanisms underlying visual attention to health warnings on branded and plain cigarette packs.

    PubMed

    Maynard, Olivia M; Brooks, Jonathan C W; Munafò, Marcus R; Leonards, Ute

    2017-04-01

    To (1) test if activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differ when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. Clinical Research and Imaging Centre, University of Bristol, UK. Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. Brain activity was assessed in whole brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. On-line eye-tracking during scanning recorded visual attention to health warnings. There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on 'plain' cigarette packs than branded packs. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.

  8. Real-Time Mutual Gaze Perception Enhances Collaborative Learning and Collaboration Quality

    ERIC Educational Resources Information Center

    Schneider, Bertrand; Pea, Roy

    2013-01-01

    In this paper we present the results of an eye-tracking study on collaborative problem-solving dyads. Dyads remotely collaborated to learn from contrasting cases involving basic concepts about how the human brain processes visual information. In one condition, dyads saw the eye gazes of their partner on the screen; in a control group, they did not…

  9. The Face Perception System becomes Species-Specific at 3 Months: An Eye-Tracking Study

    ERIC Educational Resources Information Center

    Di Giorgio, Elisa; Meary, David; Pascalis, Olivier; Simion, Francesca

    2013-01-01

    The current study aimed at investigating own- vs. other-species preferences in 3-month-old infants. The infants' eye movements were recorded during a visual preference paradigm to assess whether they show a preference for own-species faces when contrasted with other-species faces. Human and monkey faces, equated for all low-level perceptual…

  10. The Learning Benefits of Using Eye Trackers to Enhance the Geospatial Abilities of Elementary School Students

    ERIC Educational Resources Information Center

    Wang, Hsiao-shen; Chen, Yi-Ting; Lin, Chih-Hung

    2014-01-01

    In this study, we examined the spatial abilities of students using eye-movement tracking devices to identify and analyze their characteristics. For this research, 12 students aged 11-12 years participated as novices and 4 mathematics students participated as experts. A comparison of the visual-spatial abilities of each group showed key factors of…

  11. Brief Report: Patterns of Eye Movements in Face to Face Conversation are Associated with Autistic Traits: Evidence from a Student Sample.

    PubMed

    Vabalas, Andrius; Freeth, Megan

    2016-01-01

    The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking to the social partner overall, nor with reduced looking to the face. However, individuals who were high in autistic traits exhibited reduced visual exploration during the face-to-face interaction overall, as demonstrated by shorter and less frequent saccades. Visual exploration was not related to social anxiety. This study suggests that there are systematic individual differences in visual exploration during social interactions and these are related to amount of autistic traits.

  12. Head-mounted eye tracking: a new method to describe infant looking.

    PubMed

    Franchak, John M; Kretch, Kari S; Soska, Kasey C; Adolph, Karen E

    2011-01-01

    Despite hundreds of studies describing infants' visual exploration of experimental stimuli, researchers know little about where infants look during everyday interactions. The current study describes the first method for studying visual behavior during natural interactions in mobile infants. Six 14-month-old infants wore a head-mounted eye-tracker that recorded gaze during free play with mothers. Results revealed that infants' visual exploration is opportunistic and depends on the availability of information and the constraints of infants' own bodies. Looks to mothers' faces were rare following infant-directed utterances but more likely if mothers were sitting at infants' eye level. Gaze toward the destination of infants' hand movements was common during manual actions and crawling, but looks toward obstacles during leg movements were less frequent. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.

  13. Eye movements: The past 25 years

    PubMed Central

    Kowler, Eileen

    2011-01-01

    This article reviews the past 25 years of research on eye movements (1986–2011). Emphasis is on three oculomotor behaviors: gaze control, smooth pursuit and saccades, and on their interactions with vision. Focus over the past 25 years has remained on the fundamental and classical questions: What are the mechanisms that keep gaze stable with either stationary or moving targets? How does the motion of the image on the retina affect vision? Where do we look – and why – when performing a complex task? How can the world appear clear and stable despite continual movements of the eyes? The past 25 years of investigation of these questions has seen progress and transformations at all levels due to new approaches (behavioral, neural and theoretical) aimed at studying how eye movements cope with real-world visual and cognitive demands. The work has led to a better understanding of how prediction, learning and attention work with sensory signals to contribute to the effective operation of eye movements in visually rich environments. PMID:21237189

  14. Eye Tracking System for Enhanced Learning Experiences

    ERIC Educational Resources Information Center

    Sungkur, R. K.; Antoaroo, M. A.; Beeharry, A.

    2016-01-01

    Nowadays, we are living in a world where information is readily available and being able to provide the learner with the best suited situations and environment for his/her learning experiences is of utmost importance. In most learning environments, information is basically available in the form of written text. According to the eye-tracking…

  15. The Coordinated Interplay of Scene, Utterance, and World Knowledge: Evidence from Eye Tracking

    ERIC Educational Resources Information Center

    Knoeferle, Pia; Crocker, Matthew W.

    2006-01-01

    Two studies investigated the interaction between utterance and scene processing by monitoring eye movements in agent-action-patient events, while participants listened to related utterances. The aim of Experiment 1 was to determine if and when depicted events are used for thematic role assignment and structural disambiguation of temporarily…

  16. Effects of anger and sadness on attentional patterns in decision making: an eye-tracking study.

    PubMed

    Xing, Cai

    2014-02-01

    Past research examining the effect of anger and sadness on decision making has associated anger with a relatively more heuristic decision-making approach. However, it is unclear whether angry and sad individuals differ while attending to decision-relevant information. An eye-tracking experiment (N=87) was conducted to examine the role of attention in links between emotion and decision making. Angry individuals looked more and earlier toward heuristic cues while making decisions, whereas sad individuals did not show such bias. Implications for designing persuasive messages and studying motivated visual processing were discussed.

  17. Oculomotor Behavior Metrics Change According to Circadian Phase and Time Awake

    NASA Technical Reports Server (NTRS)

    Flynn-Evans, Erin E.; Tyson, Terence L.; Cravalho, Patrick; Feick, Nathan; Stone, Leland S.

    2017-01-01

    There is a need for non-invasive, objective measures to forecast performance impairment arising from sleep loss and circadian misalignment, particularly in safety-sensitive occupations. Eye-tracking devices have been used in some operational scenarios, but such devices typically focus on eyelid closures and slow rolling eye movements and are susceptible to the intrusion of head movement artifacts. We hypothesized that an expanded suite of oculomotor behavior metrics, collected during a visual tracking task, would change according to circadian phase and time awake, and could be used as a marker of performance impairment.

  18. Exploring the potential of analysing visual search behaviour data using FROC (free-response receiver operating characteristic) method: an initial study

    NASA Astrophysics Data System (ADS)

    Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.

    2017-03-01

    Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye-tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs which were chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out by using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed that there were significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) score showed higher values for the experienced group for the measurements of fixation and dwell time. Also, positive correlations were found between the AUC scores of the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can therefore act as a potential performance indicator for assessment in medical imaging interpretation and for evaluating training procedures. Such visual search data analyses lead to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
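    Unlike ROC analysis, FROC allows multiple marks per image and plots the lesion localization fraction (LLF) against non-lesion localizations per image (NLF). A minimal sketch of how such operating points are generated from scored marks (here the scores stand in for eye-movement-derived measures such as dwell time; the data are hypothetical):

```python
import numpy as np

def froc_points(marks, n_images, n_lesions):
    """FROC operating points from (score, hit_a_lesion) marks.
    Sweeping the score threshold from high to low yields the lesion
    localization fraction (LLF) vs. non-lesion localizations per
    image (NLF)."""
    marks = sorted(marks, key=lambda m: -m[0])
    llf, nlf = [], []
    hits = fps = 0
    for score, is_hit in marks:
        if is_hit:
            hits += 1
        else:
            fps += 1
        llf.append(hits / n_lesions)
        nlf.append(fps / n_images)
    return np.array(nlf), np.array(llf)

# Hypothetical marks: (fixation-derived score, whether the mark hit a lesion)
marks = [(0.9, True), (0.8, False), (0.7, True), (0.6, True),
         (0.5, False), (0.4, False), (0.3, True)]
nlf, llf = froc_points(marks, n_images=20, n_lesions=5)
print(llf[-1], nlf[-1])  # prints 0.8 0.15
```

    An AUC-style summary can then be obtained by integrating LLF over NLF up to a chosen false-positive rate, which is one way the group comparisons above could be scored.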

  19. Experience-induced interocular plasticity of vision in infancy.

    PubMed

    Tschetter, Wayne W; Douglas, Robert M; Prusky, Glen T

    2011-01-01

    Animal model studies of amblyopia have generally concluded that enduring effects of monocular deprivation (MD) on visual behavior (i.e., loss of visual acuity) are limited to the deprived eye, and are restricted to juvenile life. We have previously reported, however, that lasting effects of MD on visual function can be elicited in adulthood by stimulating visuomotor experience through the non-deprived eye. To test whether stimulating experience would also induce interocular plasticity of vision in infancy, we assessed in rats from eye-opening on postnatal day (P) 15, the effect of pairing MD with the daily experience of measuring thresholds for optokinetic tracking (OKT). MD with visuomotor experience from P15 to P25 led to a ~60% enhancement of the spatial frequency threshold for OKT through the non-deprived eye during the deprivation, which was followed by loss-of-function (~60% below normal) through both eyes when the deprived eye was opened. Reduced thresholds were maintained into adulthood with binocular OKT experience from P25 to P30. The ability to generate the plasticity and maintain lost function was dependent on visual cortex. Strictly limiting the period of deprivation to infancy by opening the deprived eye at P19 resulted in a comparable loss-of-function. Animals with reduced OKT responses also had significantly reduced visual acuity, measured independently in a discrimination task. Thus, experience-dependent cortical plasticity that can lead to amblyopia is present earlier in life than previously recognized.

  20. Visualization of Sliding and Deformation of Orbital Fat During Eye Rotation

    PubMed Central

    Hötte, Gijsbert J.; Schaafsma, Peter J.; Botha, Charl P.; Wielopolski, Piotr A.; Simonsz, Huibert J.

    2016-01-01

    Purpose Little is known about the way orbital fat slides and/or deforms during eye movements. We compared two deformation algorithms from a sequence of MRI volumes to visualize this complex behavior. Methods Time-dependent deformation data were derived from motion-MRI volumes using Lucas and Kanade Optical Flow (LK3D) and nonrigid registration (B-splines) deformation algorithms. We compared how these two algorithms performed regarding sliding and deformation in three critical areas: the sclera-fat interface, how the optic nerve moves through the fat, and how the fat is squeezed out under the tendon of a relaxing rectus muscle. The efficacy was validated using identified tissue markers such as the lens and blood vessels in the fat. Results Fat immediately behind the eye followed eye rotation by approximately one-half. This was best visualized using the B-splines technique as it showed less ripping of tissue and less distortion. Orbital fat flowed around the optic nerve during eye rotation. In this case, LK3D provided better visualization as it allowed orbital fat tissue to split. The resolution was insufficient to visualize fat being squeezed out between tendon and sclera. Conclusion B-splines performs better in tracking structures such as the lens, while LK3D allows fat tissue to split as should happen as the optic nerve slides through the fat. Orbital fat follows eye rotation by one-half and flows around the optic nerve during eye rotation. Translational Relevance Visualizing orbital fat deformation and sliding offers the opportunity to accurately locate a region of cicatrization and permit an individualized surgical plan. PMID:27540495

  1. Development of internal models and predictive abilities for visual tracking during childhood

    PubMed Central

    Ego, Caroline; Yüksel, Demet

    2015-01-01

    The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and the feedback control. In contrast, visual tracking of a temporarily invisible target gives specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5–19 yr) and in 10 adults, who were asked to pursue a visual target that is temporarily blanked. Results show that the youngest children (5–7 yr) have a general oculomotor behavior in this task, qualitatively similar to the one observed in adults. However, the overall performance of older subjects in terms of accuracy at target reappearance and variability in their behavior was much better than the youngest children. This late maturation of predictive mechanisms with age was reflected into the development of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems with age. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades that relies on internal models of the eye and target displacement is related to the continuous maturation of the cerebellum. PMID:26510757

  2. Development of internal models and predictive abilities for visual tracking during childhood.

    PubMed

    Ego, Caroline; Yüksel, Demet; Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2016-01-01

    The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and the feedback control. In contrast, visual tracking of a temporarily invisible target gives specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that is temporarily blanked. Results show that the youngest children (5-7 yr) have a general oculomotor behavior in this task, qualitatively similar to the one observed in adults. However, the overall performance of older subjects in terms of accuracy at target reappearance and variability in their behavior was much better than the youngest children. This late maturation of predictive mechanisms with age was reflected into the development of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems with age. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades that relies on internal models of the eye and target displacement is related to the continuous maturation of the cerebellum. Copyright © 2016 the American Physiological Society.

  3. The Pattern of Visual Fixation Eccentricity and Instability in Optic Neuropathy and Its Spatial Relationship to Retinal Ganglion Cell Layer Thickness.

    PubMed

    Mallery, Robert M; Poolman, Pieter; Thurtell, Matthew J; Wang, Jui-Kai; Garvin, Mona K; Ledolter, Johannes; Kardon, Randy H

    2016-07-01

    The purpose of this study was to assess whether clinically useful measures of fixation instability and eccentricity can be derived from retinal tracking data obtained during optical coherence tomography (OCT) in patients with optic neuropathy (ON) and to develop a method for relating fixation to the retinal ganglion cell complex (GCC) thickness. Twenty-nine patients with ON underwent macular volume OCT with 30 seconds of confocal scanning laser ophthalmoscope (cSLO)-based eye tracking during fixation. Kernel density estimation quantified fixation instability and fixation eccentricity from the distribution of fixation points on the retina. Preferred ganglion cell layer loci (PGCL) and their relationship to the GCC thickness map were derived, accounting for radial displacement of retinal ganglion cell soma from their corresponding cones. Fixation instability was increased in ON eyes (0.21 deg²) compared with normal eyes (0.06982 deg²; P < 0.001), and fixation eccentricity was increased in ON eyes (0.48°) compared with normal eyes (0.24°; P = 0.03). Fixation instability and eccentricity each correlated moderately with logMAR acuity and were highly predictive of central visual field loss. Twenty-six of 35 ON eyes had PGCL skewed toward local maxima of the GCC thickness map. Patients with bilateral dense central scotomas had PGCL in homonymous retinal locations with respect to the fovea. Fixation instability and eccentricity measures obtained during cSLO-OCT assess the function of perifoveal retinal elements and predict central visual field loss in patients with ON. A model relating fixation to the GCC thickness map offers a method to assess the structure-function relationship between fixation and areas of preserved GCC in patients with ON.
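    The study derives its instability measure from a kernel density estimate of the fixation-point distribution. As a simpler stand-in that yields values in the same units (deg²), the sketch below computes the bivariate contour ellipse area (BCEA), a widely used fixation-stability metric, together with eccentricity as the distance of the mean fixation locus from the fovea; the fixation samples are synthetic.

```python
import numpy as np

def bcea(x_deg, y_deg, p=0.68):
    """Bivariate contour ellipse area (deg^2): area of the ellipse expected
    to contain proportion p of fixation points, a common proxy for
    fixation instability."""
    k = -np.log(1 - p)                      # chi-square scaling for coverage p
    sx = np.std(x_deg, ddof=1)
    sy = np.std(y_deg, ddof=1)
    rho = np.corrcoef(x_deg, y_deg)[0, 1]   # horizontal/vertical correlation
    return 2 * np.pi * k * sx * sy * np.sqrt(1 - rho ** 2)

def fixation_eccentricity(x_deg, y_deg):
    """Distance (deg) of the mean fixation locus from the fovea (origin)."""
    return float(np.hypot(np.mean(x_deg), np.mean(y_deg)))

# Hypothetical fixation samples (deg), offset from the fovea as in ON eyes.
rng = np.random.default_rng(2)
x = rng.normal(0.4, 0.2, 500)
y = rng.normal(0.2, 0.2, 500)
print(round(bcea(x, y), 3), round(fixation_eccentricity(x, y), 2))
```

    A full KDE-based analysis, as in the paper, additionally recovers the shape of multimodal fixation distributions that a single ellipse cannot capture.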

  4. The Pattern of Visual Fixation Eccentricity and Instability in Optic Neuropathy and Its Spatial Relationship to Retinal Ganglion Cell Layer Thickness

    PubMed Central

    Mallery, Robert M.; Poolman, Pieter; Thurtell, Matthew J.; Wang, Jui-Kai; Garvin, Mona K.; Ledolter, Johannes; Kardon, Randy H.

    2016-01-01

    Purpose: The purpose of this study was to assess whether clinically useful measures of fixation instability and eccentricity can be derived from retinal tracking data obtained during optical coherence tomography (OCT) in patients with optic neuropathy (ON) and to develop a method for relating fixation to the retinal ganglion cell complex (GCC) thickness. Methods: Twenty-nine patients with ON underwent macular volume OCT with 30 seconds of confocal scanning laser ophthalmoscope (cSLO)-based eye tracking during fixation. Kernel density estimation quantified fixation instability and fixation eccentricity from the distribution of fixation points on the retina. Preferred ganglion cell layer loci (PGCL) and their relationship to the GCC thickness map were derived, accounting for radial displacement of retinal ganglion cell soma from their corresponding cones. Results: Fixation instability was increased in ON eyes (0.21 deg²) compared with normal eyes (0.06982 deg²; P < 0.001), and fixation eccentricity was increased in ON eyes (0.48°) compared with normal eyes (0.24°; P = 0.03). Fixation instability and eccentricity each correlated moderately with logMAR acuity and were highly predictive of central visual field loss. Twenty-six of 35 ON eyes had PGCL skewed toward local maxima of the GCC thickness map. Patients with bilateral dense central scotomas had PGCL in homonymous retinal locations with respect to the fovea. Conclusions: Fixation instability and eccentricity measures obtained during cSLO-OCT assess the function of perifoveal retinal elements and predict central visual field loss in patients with ON. A model relating fixation to the GCC thickness map offers a method to assess the structure–function relationship between fixation and areas of preserved GCC in patients with ON. PMID:27409502

  5. Influence of Interpretation Aids on Attentional Capture, Visual Processing, and Understanding of Front-of-Package Nutrition Labels.

    PubMed

    Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón

    2015-01-01

    To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, assessors answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
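    The logistic-regression step described above (modeling the probability of a correct answer from the label's design features) can be illustrated with a small sketch on hypothetical trial-level data. The variable names, effect sizes, and Newton-Raphson fitting are invented for illustration, not taken from the study.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Logistic regression fit by Newton-Raphson; an intercept column is added."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted P(correct answer)
        grad = X.T @ (y - p)               # gradient of the log-likelihood
        hess = X.T @ (X * (p * (1 - p))[:, None])
        w += np.linalg.solve(hess, grad)
    return w  # [intercept, beta_color_code, beta_text_descriptor]

# hypothetical data: each row is one question, 1 = answered correctly
rng = np.random.default_rng(2)
n = 400
color = rng.integers(0, 2, n)   # FOP label carries a color code
text = rng.integers(0, 2, n)    # FOP label carries a text descriptor
true_logit = -0.5 + 1.2 * color + 0.8 * text
correct = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)
beta = fit_logistic(np.column_stack([color, text]), correct)
```

    Positive coefficients on the two design features would correspond to the study's finding that color codes and text descriptors each raise the odds of a correct answer.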

  6. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.

    PubMed

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A; Wetzstein, Gordon

    2017-02-28

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  7. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    NASA Astrophysics Data System (ADS)

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Cooper, Emily A.; Wetzstein, Gordon

    2017-02-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

  8. Kinesthesis can make an invisible hand visible

    PubMed Central

    Dieter, Kevin C.; Hu, Bo; Knill, David C.; Blake, Randolph; Tadin, Duje

    2014-01-01

    Self-generated body movements have reliable visual consequences. This predictive association between vision and action likely underlies modulatory effects of action on visual processing. However, it is unknown if our own actions can have generative effects on visual perception. We asked whether, in total darkness, self-generated body movements are sufficient to evoke normally concomitant visual perceptions. Using a deceptive experimental design, we discovered that waving one’s own hand in front of one’s covered eyes can cause visual sensations of motion. Conjecturing that these visual sensations arise from multisensory connectivity, we showed that individuals with synesthesia experience substantially stronger kinesthesis-induced visual sensations. Finally, we found that the perceived vividness of kinesthesis-induced visual sensations predicted participants’ ability to smoothly eye-track self-generated hand movements in darkness, indicating that these sensations function like typical retinally-driven visual sensations. Evidently, even in the complete absence of external visual input, our brains predict visual consequences of our actions. PMID:24171930

  9. Eye-tracking-based assessment of cognitive function in low-resource settings.

    PubMed

    Forssman, Linda; Ashorn, Per; Ashorn, Ulla; Maleta, Kenneth; Matchado, Andrew; Kortekangas, Emma; Leppänen, Jukka M

    2017-04-01

    Early development of neurocognitive functions in infants can be compromised by poverty, malnutrition and lack of adequate stimulation. Optimal management of neurodevelopmental problems in infants requires assessment tools that can be used early in life, and are objective and applicable across economic, cultural and educational settings. The present study examined the feasibility of infrared eye tracking as a novel and highly automated technique for assessing visual-orienting and sequence-learning abilities as well as attention to facial expressions in young (9-month-old) infants. Techniques piloted in a high-resource laboratory setting in Finland (N=39) were subsequently field-tested in a community health centre in rural Malawi (N=40). Parents' perception of the acceptability of the method (Finland 95%, Malawi 92%) and percentages of infants completing the whole eye-tracking test (Finland 95%, Malawi 90%) were high, and percentages of valid test trials (Finland 69-85%, Malawi 68-73%) were satisfactory at both sites. Test completion rates were slightly higher for eye tracking (90%) than for traditional observational tests (87%) in Malawi. The predicted response pattern indicative of specific cognitive function was replicated in Malawi, but Malawian infants exhibited lower response rates and slower processing speed across tasks. High test completion rates and the replication of the predicted test patterns in a novel environment in Malawi support the feasibility of eye tracking as a technique for assessing infant development in low-resource settings. Further research is needed on the test-retest stability and predictive validity of eye-tracking scores in low-income settings. Published by the BMJ Publishing Group Limited.

  10. Visual Attention to Competing Social and Object Images by Preschool Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Sasson, Noah J.; Touchstone, Emily W.

    2014-01-01

    Eye tracking studies of young children with autism spectrum disorder (ASD) report a reduction in social attention and an increase in visual attention to non-social stimuli, including objects related to circumscribed interests (CI) (e.g., trains). In the current study, fifteen preschoolers with ASD and 15 typically developing controls matched on…

  11. The Role of Clarity and Blur in Guiding Visual Attention in Photographs

    ERIC Educational Resources Information Center

    Enns, James T.; MacDonald, Sarah C.

    2013-01-01

    Visual artists and photographers believe that a viewer's gaze can be guided by selective use of image clarity and blur, but there is little systematic research. In this study, participants performed several eye-tracking tasks with the same naturalistic photographs, including recognition memory for the entire photo, as well as recognition memory…

  12. Infants' Selective Attention to Reliable Visual Cues in the Presence of Salient Distractors

    ERIC Educational Resources Information Center

    Tummeltshammer, Kristen Swan; Mareschal, Denis; Kirkham, Natasha Z.

    2014-01-01

    With many features competing for attention in their visual environment, infants must learn to deploy attention toward informative cues while ignoring distractions. Three eye tracking experiments were conducted to investigate whether 6- and 8-month-olds (total N = 102) would shift attention away from a distractor stimulus to learn a cue-reward…

  13. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness.

    PubMed

    Spering, Miriam; Carrasco, Marisa

    2012-05-30

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.

  14. Identifying selective visual attention biases related to fear of pain by tracking eye movements within a dot-probe paradigm.

    PubMed

    Yang, Zhou; Jackson, Todd; Gao, Xiao; Chen, Hong

    2012-08-01

    This research examined selective biases in visual attention related to fear of pain by tracking eye movements (EM) toward pain-related stimuli among the pain-fearful. EM of 21 young adults scoring high on a fear of pain measure (H-FOP) and 20 lower-scoring (L-FOP) control participants were measured during a dot-probe task that featured sensory pain-neutral, health catastrophe-neutral and neutral-neutral word pairs. Analyses indicated that the H-FOP group was more likely to direct immediate visual attention toward sensory pain and health catastrophe words than was the L-FOP group. The H-FOP group also had comparatively shorter first fixation latencies toward sensory pain and health catastrophe words. Conversely, groups did not differ on EM indices of attentional maintenance (i.e., first fixation duration, gaze duration, and average fixation duration) or reaction times to dot probes. Finally, both groups showed a cycle of disengagement followed by re-engagement toward sensory pain words relative to other word types. In sum, this research is the first to reveal biases toward pain stimuli during very early stages of visual information processing among the highly pain-fearful and highlights the utility of EM tracking as a means to evaluate visual attention as a dynamic process in the context of FOP.
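    The early-attention indices used in studies like this one (first-fixation latency, gaze duration on an area of interest) can be computed from a list of timestamped fixations. A minimal sketch; the `Fixation` record layout and AOI names are hypothetical:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    onset_ms: float      # time from trial onset
    duration_ms: float
    aoi: str             # e.g. "pain_word", "neutral_word"

def first_fixation_latency(fixations: List[Fixation], aoi: str) -> Optional[float]:
    """Latency of the first fixation landing on `aoi`, or None if absent."""
    for f in fixations:
        if f.aoi == aoi:
            return f.onset_ms
    return None

def gaze_duration(fixations: List[Fixation], aoi: str) -> float:
    """Summed duration of consecutive fixations in the first visit to `aoi`."""
    total, in_visit = 0.0, False
    for f in fixations:
        if f.aoi == aoi:
            total += f.duration_ms
            in_visit = True
        elif in_visit:
            break
    return total

trial = [Fixation(180, 120, "neutral_word"),
         Fixation(320, 200, "pain_word"),
         Fixation(540, 150, "pain_word"),
         Fixation(710, 180, "neutral_word")]
print(first_fixation_latency(trial, "pain_word"))  # 320
print(gaze_duration(trial, "pain_word"))           # 350.0
```

    Group differences in the latency measure index early attentional capture, while gaze duration indexes the attentional-maintenance stage on which the two groups did not differ.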

  15. The effect of human image in B2C website design: an eye-tracking study

    NASA Astrophysics Data System (ADS)

    Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo

    2014-09-01

    On B2C shopping websites, effective visual designs can bring about consumers' positive emotional experience. From this perspective, this article developed a research model to explore the impact of human image as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human image first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human image and the level of image appeal. Specifically, human image significantly increased the level of image appeal when integrated in entertainment product pictures, whereas this relationship was not significant for utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human image significantly increased the pupil size of participants regardless of product type. For entertainment products, participants paid more attention to product pictures integrated with human image, whereas for utilitarian products more attention was paid to functional product information than to product pictures, regardless of whether a human image was present.

  16. Corneal Complications And Visual Impairment In Vernal Keratoconjunctivitis Patients.

    PubMed

    Arif, Abdus Salam; Aaqil, Bushra; Siddiqui, Afsheen; Nazneen, Zainab; Farooq, Umer

    2017-01-01

    Vernal kerato-conjunctivitis (VKC) is an infrequent but serious form of allergic conjunctivitis common in warm and humid areas where the air is rich in allergens. It affects both eyes asymmetrically. Although VKC is a self-limiting disease, vision-affecting corneal complications influence the quality of life of school children. The aim of this study was to list the corneal complications of this condition and to determine the extent of visual impairment among VKC patients. This cross-sectional study was conducted in the department of Ophthalmology, Benazir Bhutto Shaheed Hospital, on 290 eyes of diagnosed cases of VKC. The diagnosis of VKC was made on the basis of history and examination. Visual acuity was recorded using Snellen's notation and visual impairment was classified according to the World Health Organization classification for visual disabilities. The mean age of presentation was 10.83±6.13 years. There were 207 (71.4%) males and 83 (28.6%) females. Corneal scarring was observed in 59 (20.3%) eyes. Keratoconus was found in 17 (5.9%) eyes. Shield ulcer was detected in 9 (3.1%) eyes while 7 (2.4%) eyes had corneal neovascularization. The majority of patients with visual loss had corneal scarring, and the complication that led to severe visual loss in most eyes was keratoconus. Vernal kerato-conjunctivitis in the presence of corneal complications is a sight-threatening disease and can lead to severe visual impairment.

  17. MAINTENANCE OF GOOD VISUAL ACUITY IN BEST DISEASE ASSOCIATED WITH CHRONIC BILATERAL SEROUS MACULAR DETACHMENT.

    PubMed

    Gattoussi, Sarra; Boon, Camiel J F; Freund, K Bailey

    2017-08-10

    We describe the long-term follow-up of a patient with multifocal Best disease with chronic bilateral serous macular detachment and unusual peripheral findings associated with a novel mutation in the BEST1 gene. Case report. A 59-year-old white woman was referred for an evaluation of her macular findings in 1992. There was a family history of Best disease in the patient's mother and a male sibling. Her medical history was unremarkable. Best-corrected visual acuity was 20/20 in her right eye and 20/25 in her left eye. The anterior segment examination was normal in both eyes. Funduscopic examination showed multifocal hyperautofluorescent vitelliform deposits with areas of subretinal fibrosis in both eyes. An electrooculogram showed Arden ratios of 1.32 in the right eye and 1.97 in the left eye. Ultra-widefield color and fundus autofluorescence imaging showed degenerative retinal changes in areas throughout the entire fundus in both eyes. Optical coherence tomography, including annual eye-tracked scans from 2005 to 2016, showed persistent bilateral serous macular detachments. Despite chronic foveal detachment, visual acuity was 20/25 in her right eye and 20/40 in her left eye, 24 years after initial presentation. Genetic testing showed a novel c.238T>A (p.Phe80Ile) missense mutation in the BEST1 gene. Some patients with Best disease associated with chronic serous macular detachment can maintain good visual acuity over an extended follow-up. To our knowledge, this is the first report of Best disease associated with this mutation in the BEST1 gene.

  18. SacLab: A toolbox for saccade analysis to increase usability of eye tracking systems in clinical ophthalmology practice.

    PubMed

    Cercenelli, Laura; Tiberi, Guido; Corazza, Ivan; Giannaccare, Giuseppe; Fresina, Michela; Marcelli, Emanuela

    2017-01-01

    Many open source software packages have recently been developed to expand the usability of eye tracking systems for studying oculomotor behavior, but none of these is specifically designed to encompass all the main functions required for creating eye tracking tests and for providing automatic analysis of saccadic eye movements. The aim of this study is to introduce SacLab, an intuitive, freely available MATLAB toolbox based on Graphical User Interfaces (GUIs) that we have developed to increase the usability of the ViewPoint EyeTracker (Arrington Research, Scottsdale, AZ, USA) in clinical ophthalmology practice. SacLab consists of four processing modules that enable the user to easily create visual stimuli tests (Test Designer), record saccadic eye movements (Data Recorder), analyze the recorded data to automatically extract saccadic parameters of clinical interest (Data Analyzer) and provide an aggregate analysis from multiple eye movement recordings (Saccade Analyzer), without requiring any programming effort by the user. A demo application of SacLab to carry out eye tracking tests for the analysis of horizontal saccades is reported. We tested the usability of the SacLab toolbox with three ophthalmologists who had no programming experience; the ophthalmologists were briefly trained in the use of the SacLab GUIs and were asked to perform the demo application. The toolbox received enthusiastic feedback from all the clinicians in terms of intuitiveness, ease of use and flexibility. Test creation and data processing were accomplished in 52±21 s and 46±19 s, respectively, using the SacLab GUIs. SacLab may represent a useful tool to ease the application of the ViewPoint EyeTracker system in routine clinical ophthalmology.
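    Saccadic parameters of the kind SacLab extracts (latency, duration, amplitude, peak velocity) can be recovered from a position trace with a simple velocity criterion. This is a hedged sketch, not SacLab's algorithm; the 30 deg/s threshold and the trace layout are illustrative assumptions.

```python
import numpy as np

def saccade_parameters(t_ms, x_deg, vel_thresh=30.0):
    """Latency, duration, amplitude and peak velocity of the first
    horizontal saccade exceeding `vel_thresh` deg/s."""
    v = np.gradient(x_deg, t_ms) * 1000.0          # deg/s
    fast = np.abs(v) > vel_thresh
    if not fast.any():
        return None
    onset = int(np.argmax(fast))                   # first supra-threshold sample
    offset = onset
    while offset < len(fast) - 1 and fast[offset + 1]:
        offset += 1
    return {
        "latency_ms": t_ms[onset],
        "duration_ms": t_ms[offset] - t_ms[onset],
        "amplitude_deg": abs(x_deg[offset] - x_deg[onset]),
        "peak_velocity_deg_s": np.abs(v[onset:offset + 1]).max(),
    }

# synthetic 500 Hz trace: fixation, then a 10-deg saccade starting at 200 ms
t = np.arange(0, 400, 2.0)
x = np.zeros_like(t)
ramp = (t >= 200) & (t < 240)
x[ramp] = 10 * (t[ramp] - 200) / 40
x[t >= 240] = 10.0
params = saccade_parameters(t, x)
```

    On this noiseless synthetic trace the detected saccade has a 200 ms latency, 40 ms duration and 10° amplitude; real recordings would need filtering and noise-robust thresholds.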

  19. Simultaneous Recordings of Human Microsaccades and Drifts with a Contemporary Video Eye Tracker and the Search Coil Technique

    PubMed Central

    McCamy, Michael B.; Otero-Millan, Jorge; Leigh, R. John; King, Susan A.; Schneider, Rosalyn M.; Macknik, Stephen L.; Martinez-Conde, Susana

    2015-01-01

    Human eyes move continuously, even during visual fixation. These “fixational eye movements” (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the-art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs. PMID:26035820
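    The abstract does not spell out a detection algorithm, but microsaccades are commonly detected with a median-based velocity threshold (in the spirit of Engbert and Kliegl's approach). A minimal sketch; the sampling rate, λ = 6 multiplier, and 3-sample minimum duration are illustrative assumptions.

```python
import numpy as np

def detect_microsaccades(x, y, fs=500.0, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection on gaze traces (deg).
    Threshold = lam * a median-based velocity SD estimate per axis."""
    vx = np.gradient(x) * fs          # deg/s
    vy = np.gradient(y) * fs
    # robust (median-based) estimate of velocity standard deviation
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    fast = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0  # elliptic criterion
    events, start = [], None
    for i, f in enumerate(fast):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(fast) - start >= min_samples:
        events.append((start, len(fast) - 1))
    return events

# synthetic trace: slow drift plus one injected ~0.4-deg microsaccade
rng = np.random.default_rng(1)
n = 1000
x = np.cumsum(rng.normal(0, 0.002, n))
y = np.cumsum(rng.normal(0, 0.002, n))
x[500:505] += np.linspace(0, 0.4, 5)   # ramp of the injected event
x[505:] += 0.4                         # gaze stays at the new position
ms = detect_microsaccades(x, y)
```

    Running both systems' traces through the same detector, and then comparing peak velocities and main-sequence slopes of the matched events, is one way to reproduce the kind of agreement analysis the study reports.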

  20. Exploring responses to art in adolescence: a behavioral and eye-tracking study.

    PubMed

    Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2014-01-01

    Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception.

  1. Exploring Responses to Art in Adolescence: A Behavioral and Eye-Tracking Study

    PubMed Central

    Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella

    2014-01-01

    Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception. PMID:25048813

  2. Infants’ Looking to Surprising Events: When Eye-Tracking Reveals More than Looking Time

    PubMed Central

    Yeung, H. Henny; Denison, Stephanie; Johnson, Scott P.

    2016-01-01

    Research on infants’ reasoning abilities often relies on looking times, which are longer for surprising and unexpected visual scenes than for unsurprising and expected ones. Few researchers have examined more precise visual scanning patterns in these scenes, so here we recorded 8- to 11-month-olds’ gaze with an eye tracker as we presented a sampling event whose outcome was either surprising, neutral, or unsurprising: a red (or yellow) ball was drawn from one of three visible containers populated 0%, 50%, or 100% with identically colored balls. When measuring looking time to the whole scene, infants were insensitive to the likelihood of the sampling event, replicating failures in similar paradigms. Nevertheless, a new analysis of visual scanning showed that infants did spend more time fixating specific areas of interest as a function of the event likelihood. The drawn ball and its associated container attracted more looking than the other containers in the 0% condition, but this pattern was weaker in the 50% condition, and weaker still in the 100% condition. Results suggest that measuring where infants look may be more sensitive than simply how much looking there is to the whole scene. The advantages of eye tracking measures over traditional looking measures are discussed. PMID:27926920

  3. Immediate use of prosody and context in predicting a syntactic structure.

    PubMed

    Nakamura, Chie; Arai, Manabu; Mazuka, Reiko

    2012-11-01

    Numerous studies have reported an effect of prosodic information on parsing, but whether prosody can impact even the initial parsing decision is still not evident. In a visual world eye-tracking experiment, we investigated the influence of contrastive intonation and visual context on the processing of temporarily ambiguous relative clause sentences in Japanese. Our results showed that listeners used the prosodic cue to make a structural prediction before hearing disambiguating information. Importantly, the effect was limited to cases where the visual scene provided an appropriate context for the prosodic cue, thus eliminating the explanation that listeners had simply associated marked prosodic information with a less frequent structure. Furthermore, the influence of the prosodic information was also evident following disambiguating information, in a way that reflected the initial analysis. The current study demonstrates that prosody, when provided with an appropriate context, influences the initial syntactic analysis and also the subsequent cost at disambiguating information. The results also provide the first evidence of pre-head structural prediction driven by prosodic and contextual information in a head-final construction.

  4. The eyes have it: Using eye tracking to inform information processing strategies in multi-attributes choices.

    PubMed

    Ryan, Mandy; Krucien, Nicolas; Hermens, Frouke

    2018-04-01

    Although choice experiments (CEs) are widely applied in economics to study choice behaviour, understanding of how individuals process attribute information remains limited. We show how eye-tracking methods can provide insight into how decisions are made. Participants completed a CE while their eye movements were recorded. Results show that although the information presented guided participants' decisions, several processing biases were also at work. Evidence was found of (a) top-to-bottom, (b) left-to-right, and (c) first-to-last order biases. Experimental factors (whether attributes are defined as "best" or "worst", choice task complexity, and attribute ordering) also influence information processing. How individuals visually process attribute information was shown to be related to their choices. Implications for the design and analysis of CEs and future research are discussed.

  5. The functional consequences of social distraction: Attention and memory for complex scenes.

    PubMed

    Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia

    2017-01-01

    Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture by social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light on the privileged attentional status of social stimuli and its functional consequences for memory across individuals. Copyright © 2016. Published by Elsevier B.V.

  6. Workflows and individual differences during visually guided routine tasks in a road traffic management control room.

    PubMed

    Starke, Sandra D; Baber, Chris; Cooke, Neil J; Howes, Andrew

    2017-05-01

    Road traffic control rooms rely on human operators to monitor and interact with information presented on multiple displays. Past studies have found inconsistent use of available visual information sources in such settings across different domains. In this study, we aimed to broaden the understanding of observer behaviour in control rooms by analysing a case study in road traffic control. We conducted a field study in a live road traffic control room where five operators responded to incidents while wearing a mobile eye tracker. Using qualitative and quantitative approaches, we investigated the operators' workflow using ergonomics methods and quantified visual information sampling. We found that individuals showed differing preferences for viewing modalities and weighting of task components, with a strong coupling between eye and head movement. For the quantitative analysis of the eye tracking data, we propose a number of metrics which may prove useful for comparing visual sampling behaviour across domains in the future. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Vision restoration therapy does not benefit from costimulation: A pilot study.

    PubMed

    Kasten, Erich; Bunzenthal, Ulrike; Müller-Oehring, Eva M; Mueller, Iris; Sabel, Bernhard A

    2007-08-01

    Visual field deficits in patients have long been considered nontreatable, but in previous studies we have found an enlargement of the intact visual field following vision restoration therapy (VRT). In the present pilot study, we wished to determine whether a double-stimulation approach would facilitate visual field enlargements beyond those achieved by the single-stimulus paradigm used in standard VRT. This was motivated by findings that, following visual cortex injury in animals, the size of receptive fields could be enlarged by systematic costimulation, in which two stimuli were used to excite visual cortex neurons (Eysel, Eyding, & Schweigart, 1998). Patients (n = 23) with stable homonymous field deficits after trauma, cerebral ischemia, or hemorrhage (lesion age > 6 months) carried out either (a) standard VRT with a single stimulation (n = 9), or vision therapy with (b) a parallel costimulation (n = 7) or (c) a moving costimulation paradigm (n = 7). Training was carried out twice daily for 30 min over a 3-month period. Before and after therapy, visual fields were tested with 30 degrees and 90 degrees Tübinger automatic perimetry (TAP) and with high-resolution perimetry (HRP). Eye movements were recorded with an eye tracking system. When data from all three types of visual field training were pooled, we found significant improvements in stimulus detection in HRP (4.2%) and fewer misses within the central 30 degrees perimetrically (-3.7% right eye, OD, and -4.4% left eye, OS). However, the type of training did not make any difference: the three training groups profited equally. A more detailed analysis of trained versus untrained visual field areas in 16 patients revealed a superiority of the trained area of only 1.1% in HRP and between 3.5% (OS) and 4.4% (OD) in TAP. Spatial attention and alertness improved significantly in all three groups and correlated significantly with visual field enlargements. While vision training had no influence on patients' own reports concerning their visual abilities, the patients improved significantly in a practical paper-and-pencil number tracking task (Zahlen-Verbindungs-Test; ZVT). Visual field enlargement does not benefit from a double-stimulation paradigm, but visual attention seems to play an important role in vision restoration. The improvements in trained as well as untrained areas are explained by top-down attentional control mechanisms interacting with local visual cortex plasticity.

  8. Unsold is unseen … or is it? Examining the role of peripheral vision in the consumer choice process using eye-tracking methodology.

    PubMed

    Wästlund, Erik; Shams, Poja; Otterbring, Tobias

    2018-01-01

    In visual marketing, the truism that "unseen is unsold" means that products that are not noticed will not be sold. This truism rests on the idea that the consumer choice process is heavily influenced by visual search. However, given that the majority of available products are not seen by consumers, this article examines the role of peripheral vision in guiding attention during the consumer choice process. In two eye-tracking studies, one conducted in a lab facility and the other conducted in a supermarket, the authors investigate the role and limitations of peripheral vision. The results show that peripheral vision is used to direct visual attention when discriminating between target and non-target objects in an eye-tracking laboratory. Target and non-target similarity, as well as visual saliency of non-targets, constitute the boundary conditions for this effect, which generalizes from instruction-based laboratory tasks to preference-based choice tasks in a real supermarket setting. Thus, peripheral vision helps customers to devote a larger share of attention to relevant products during the consumer choice process. Taken together, the results show how the creation of consideration sets (sets of possible choice options) relies on both goal-directed attention and peripheral vision. These results could explain how visually similar packaging positively influences market leaders, while making novel brands almost invisible on supermarket shelves. The findings show that even though unsold products might be unseen, in the sense that they have not been directly observed, they might still have been evaluated and excluded by means of peripheral vision. This article is based on controlled lab experiments as well as a field study conducted in a complex retail environment. Thus, the findings are valid both under controlled and ecologically valid conditions. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Joint representation of translational and rotational components of optic flow in parietal cortex

    PubMed Central

    Sunkara, Adhira; DeAngelis, Gregory C.; Angelaki, Dora E.

    2016-01-01

    Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals. PMID:27095846
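The "multiplicatively separable" criterion in this abstract can be illustrated with a rank-1 test: a neuron whose joint heading × rotation tuning matrix is the outer product of two one-dimensional tunings puts all of its variance in the first singular value. A minimal sketch (the function name and example data are ours, not the study's):

```python
import numpy as np

def separability_index(tuning):
    """Fraction of variance in a heading x rotation tuning matrix captured
    by its best rank-1 (i.e., multiplicatively separable) approximation."""
    s = np.linalg.svd(tuning, compute_uv=False)  # singular values, descending
    return s[0] ** 2 / np.sum(s ** 2)

# A perfectly separable cell: response = heading tuning x rotation gain
heading_tuning = np.array([0.2, 0.8, 1.0, 0.6, 0.3])
rotation_gain = np.array([0.5, 1.0, 1.5])
separable_cell = np.outer(heading_tuning, rotation_gain)
print(round(separability_index(separable_cell), 3))  # 1.0 for a rank-1 matrix
```

An index near 1 is consistent with gain-field-like multiplicative separability; lower values indicate non-separable heading-rotation interactions.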

  10. Structural functional associations of the orbit in thyroid eye disease: Kalman filters to track extraocular rectus muscles

    NASA Astrophysics Data System (ADS)

    Chaganti, Shikha; Nelson, Katrina; Mundy, Kevin; Luo, Yifu; Harrigan, Robert L.; Damon, Steve; Fabbri, Daniel; Mawn, Louise; Landman, Bennett

    2016-03-01

    Pathologies of the optic nerve and orbit impact millions of Americans, and quantitative assessment of the orbital structures on 3-D imaging would provide objective markers to enhance diagnostic accuracy, improve timely intervention, and eventually preserve visual function. Recent studies have shown that the multi-atlas methodology is suitable for identifying orbital structures, but challenges arise in the identification of the individual extraocular rectus muscles that control eye movement. This is increasingly problematic in diseased eyes, where these muscles often appear to fuse at the back of the orbit (at the resolution of clinical computed tomography imaging) due to inflammation or crowding. We propose the use of Kalman filters to track the muscles in three dimensions to refine multi-atlas segmentation and resolve ambiguity due to imaging resolution, noise, and artifacts. The purpose of our study is to investigate a method of automatically generating orbital metrics from CT imaging and demonstrate the utility of the approach by correlating structural metrics of the eye orbit with clinical data and visual function measures in subjects with thyroid eye disease. The pilot study demonstrates that automatically calculated orbital metrics are strongly correlated with several clinical characteristics. Moreover, it is shown that the superior, inferior, medial and lateral rectus muscles obtained using Kalman filters are each correlated with different categories of functional deficit. These findings serve as a foundation for further investigation into the use of CT imaging in the study, analysis and diagnosis of ocular diseases, specifically thyroid eye disease.
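The slice-to-slice tracking idea can be sketched with a textbook constant-velocity Kalman filter run over a muscle's in-plane centroid; all names, state layout, and noise settings below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def track_centroid(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 2-D centroid detections taken
    slice by slice; smooths noisy or ambiguous detections.
    State: [x, y, vx, vy]; measurement: [x, y]."""
    F = np.eye(4); F[0, 2] = F[1, 3] = 1.0          # transition (unit slice step)
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0   # observe position only
    Q = q * np.eye(4)                               # process noise
    R = r * np.eye(2)                               # measurement noise
    x = np.array([*measurements[0], 0.0, 0.0])      # start at first detection
    P = np.eye(4)
    track = [x[:2].copy()]
    for z in measurements[1:]:
        x = F @ x; P = F @ P @ F.T + Q                    # predict next slice
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)               # update with detection
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

In the paper's setting the filter's slice-to-slice prediction would help disambiguate fused muscles at the orbital apex; here it simply smooths a 2-D centroid path.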

  11. A small-scale hyperacute compound eye featuring active eye tremor: application to visual stabilization, target tracking, and short-range odometry.

    PubMed

    Colonnier, Fabien; Manecy, Augustin; Juston, Raphaël; Mallot, Hanspeter; Leitel, Robert; Floreano, Dario; Viollet, Stéphane

    2015-02-25

    In this study, a miniature artificial compound eye (15 mm in diameter) called the curved artificial compound eye (CurvACE) was endowed for the first time with hyperacuity, using micro-movements similar to those occurring in the fly's compound eye. A periodic micro-scanning movement of only a few degrees enables the vibrating compound eye to locate contrasting objects with a 40-fold greater resolution than that imposed by the interommatidial angle. In this study, we developed a new algorithm merging the output of 35 local processing units consisting of adjacent pairs of artificial ommatidia. The local measurements performed by each pair are processed in parallel with very few computational resources, which makes it possible to reach a high refresh rate of 500 Hz. An aerial robotic platform with two degrees of freedom equipped with the active CurvACE placed over naturally textured panels was able to assess its linear position accurately with respect to the environment thanks to its efficient gaze stabilization system. The algorithm was found to perform robustly under different lighting conditions as well as distance variations relative to the ground, and featured small closed-loop positioning errors of the robot in the range of 45 mm. In addition, three tasks of interest were performed without having to change the algorithm: short-range odometry, visual stabilization, and tracking contrasting objects (hands) moving over a textured background.

  12. Eye-tracking Reveals Abnormal Visual Preference for Geometric Images as an Early Biomarker of an ASD Subtype Associated with Increased Symptom Severity

    PubMed Central

    Pierce, Karen; Marinero, Steven; Hazin, Roxana; McKenna, Benjamin; Barnes, Cynthia Carter; Malige, Ajith

    2015-01-01

    Background Clinically and biologically, ASD is heterogeneous. Unusual patterns of visual preference as indexed by eye-tracking are hallmarks, yet it is unclear whether they can be used to define an early biomarker of ASD as a whole, or leveraged to define a subtype. To begin to examine this issue, large cohorts are required. Methods A sample of 334 toddlers from 6 distinct groups (115 ASD, 20 ASD-Features, 57 DD, 53 Other, 64 TD, and 25 Typ SIB) participated. Toddlers watched a movie containing both geometric and social images. Fixation duration and number of saccades within each AOI, along with validation statistics for this independent sample, were computed. Next, to maximize power, data from our previous study (N=110) were added, totaling 444 subjects. A subset of toddlers repeated the eye-tracking procedure. Results As in the original study, a subset of toddlers with ASD fixated on geometric images more than 69% of the time. Using this cutoff, sensitivity for ASD was 21%, specificity 98%, and PPV 86%. Toddlers with ASD who strongly preferred geometric images had (a) worse cognitive, language, and social skills relative to toddlers with ASD who strongly preferred social images and (b) fewer saccades when viewing geometric images. Unaffected siblings of ASD probands did not show evidence of heightened preference for geometric images. Test-retest reliability was good. Examination of age effects suggests that this test may not be appropriate for children > 4 years. Conclusions Enhanced visual preference for geometric repetition may be an early developmental biomarker of an ASD subtype with more severe symptoms. PMID:25981170
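The screening statistics reported above follow directly from a fixation-percentage cutoff applied to diagnostic labels; a minimal sketch with made-up data (the function and sample values are ours, not the study's):

```python
def screening_stats(fixation_pct, has_asd, cutoff=69.0):
    """Sensitivity, specificity and PPV of a 'geometric preference' screen
    that flags toddlers fixating geometric images more than `cutoff` percent."""
    flagged = [p > cutoff for p in fixation_pct]
    tp = sum(f and d for f, d in zip(flagged, has_asd))      # true positives
    fp = sum(f and not d for f, d in zip(flagged, has_asd))  # false positives
    fn = sum(not f and d for f, d in zip(flagged, has_asd))  # missed cases
    tn = sum(not f and not d for f, d in zip(flagged, has_asd))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}

# Toy sample: percent fixation on geometric images, and ASD diagnosis
pct = [80.0, 50.0, 90.0, 30.0, 40.0, 75.0]
asd = [True, True, True, False, False, False]
stats = screening_stats(pct, asd)
```

The study's pattern (low sensitivity, very high specificity) is typical of a conservative cutoff: few ASD toddlers exceed it, but almost no non-ASD toddlers do.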

  13. Color impact in visual attention deployment considering emotional images

    NASA Astrophysics Data System (ADS)

    Chamaret, C.

    2012-03-01

    Color is a predominant factor in the human visual attention system. Even if it is not sufficient for a global or complete understanding of a scene, it may impact the deployment of visual attention. We propose to study the impact of color, as well as the emotional aspect of pictures, on the deployment of visual attention. An eye-tracking campaign was conducted in which twenty people watched half of the database's pictures in full color and the other half in greyscale. The eye fixations on color and black-and-white images were highly correlated, raising the question of whether such cues should be integrated into the design of visual attention models. Indeed, the predictions of two state-of-the-art computational models show similar results for the two color categories. Similarly, the study of saccade amplitude and fixation duration versus viewing time did not reveal any significant differences between the two categories. In addition, the spatial coordinates of eye fixations provide an interesting indicator for investigating differences in visual attention deployment over time and fixation number. The second factor, related to emotion categories, shows evidence of inter-category differences between color and grey eye fixations for passive and positive emotion. The particular aspect associated with this category induces a specific behavior, rather based on high frequencies, in which the color components influence the deployment of visual attention.

  14. Investigating an Application of Speech-to-Text Recognition: A Study on Visual Attention and Learning Behaviour

    ERIC Educational Resources Information Center

    Huang, Y-M.; Liu, C-J.; Shadiev, Rustam; Shen, M-H.; Hwang, W-Y.

    2015-01-01

    One major drawback of previous research on speech-to-text recognition (STR) is that most findings showing the effectiveness of STR for learning were based upon subjective evidence. Very few studies have used eye-tracking techniques to investigate visual attention of students on STR-generated text. Furthermore, not much attention was paid to…

  15. How Prior Knowledge and Colour Contrast Interfere Visual Search Processes in Novice Learners: An Eye Tracking Study

    ERIC Educational Resources Information Center

    Sonmez, Duygu; Altun, Arif; Mazman, Sacide Guzin

    2012-01-01

    This study investigates how prior content knowledge and prior exposure to microscope slides on the phases of mitosis effect students' visual search strategies and their ability to differentiate cells that are going through any phases of mitosis. Two different sets of microscope slide views were used for this purpose; with high and low colour…

  16. Instruction-Based Clinical Eye-Tracking Study on the Visual Interpretation of Divergence: How Do Students Look at Vector Field Plots?

    ERIC Educational Resources Information Center

    Klein, P.; Viiri, J.; Mozaffari, S.; Dengel, A.; Kuhn, J.

    2018-01-01

    Relating mathematical concepts to graphical representations is a challenging task for students. In this paper, we introduce two visual strategies to qualitatively interpret the divergence of graphical vector field representations. One strategy is based on the graphical interpretation of partial derivatives, while the other is based on the flux…

  17. Development of Visual Preference for Own- versus Other-Race Faces in Infancy

    ERIC Educational Resources Information Center

    Liu, Shaoying; Xiao, Wen Sara; Xiao, Naiqi G.; Quinn, Paul C.; Zhang, Yueyan; Chen, Hui; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Previous research has shown that 3-month-olds prefer own- over other-race faces. The current study used eye-tracking methodology to examine how this visual preference develops with age beyond 3 months and how infants differentially scan between own- and other-race faces when presented simultaneously. We showed own- versus other-race face pairs to…

  18. Objective eye-gaze behaviour during face-to-face communication with proficient alaryngeal speakers: a preliminary study.

    PubMed

    Evitts, Paul; Gallop, Robert

    2011-01-01

    There is a large body of research demonstrating the impact of visual information on speaker intelligibility in both normal and disordered speaker populations. However, there is minimal information on which specific visual features listeners find salient during conversational discourse. The aim was to investigate listeners' eye-gaze behaviour during face-to-face conversation with normal laryngeal and proficient alaryngeal speakers. Sixty participants individually participated in a 10-min conversation with one of four speakers (typical laryngeal, tracheoesophageal, oesophageal, electrolaryngeal; 15 participants randomly assigned to one mode of speech). All speakers were > 85% intelligible and were judged to be 'proficient' by two certified speech-language pathologists. Participants were fitted with a head-mounted eye-gaze tracking device (Mobile Eye, ASL) that calculated the region of interest and mean duration of eye-gaze. Self-reported gaze behaviour was also obtained following the conversation using a 10 cm visual analogue scale. While listening, participants viewed the lower facial region of the oesophageal speaker more than that of the normal or tracheoesophageal speaker. Results of non-hierarchical cluster analyses showed that while listening, the pattern of eye-gaze was predominantly directed at the lower face of the oesophageal and electrolaryngeal speakers and more evenly dispersed among the background, lower face, and eyes of the normal and tracheoesophageal speakers. Finally, results show a low correlation between self-reported eye-gaze behaviour and objective region-of-interest data. Overall, results suggest similar eye-gaze behaviour when healthy controls converse with normal and tracheoesophageal speakers, and that participants had significantly different eye-gaze patterns when conversing with an oesophageal speaker. Results are discussed in terms of existing eye-gaze data and their potential implications for auditory-visual speech perception. © 2011 Royal College of Speech & Language Therapists.

  19. EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search

    PubMed Central

    Brouwer, Anne-Marie; Hogervorst, Maarten A.; Oudejans, Bob; Ries, Anthony J.; Touryan, Jonathan

    2017-01-01

    EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ depending on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus, with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single-trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor which objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later. PMID:28559807

  20. Discourse intervention strategies in Alzheimer's disease: Eye-tracking and the effect of visual cues in conversation

    PubMed Central

    Brandão, Lenisa; Monção, Ana Maria; Andersson, Richard; Holmqvist, Kenneth

    2014-01-01

    Objective The goal of this study was to investigate whether on-topic visual cues can serve as aids for the maintenance of discourse coherence and informativeness in autobiographical narratives of persons with Alzheimer's disease (AD). Methods The experiment consisted of three randomized conversation conditions: one without prompts, showing a blank computer screen; an on-topic condition, showing a picture and a sentence about the conversation; and an off-topic condition, showing a picture and a sentence which were unrelated to the conversation. Speech was recorded while visual attention was examined using eye tracking to measure how long participants looked at cues and at the face of the listener. Results Results suggest that interventions using visual cues in the form of images and written information are useful for improving discourse informativeness in AD. Conclusion This study demonstrated the potential of using images and short written messages as a means of compensating for the cognitive deficits which underlie uninformative discourse in AD. Future studies should further investigate the efficacy of language interventions based on the use of these compensation strategies for AD patients and their family members and friends. PMID:29213914

  1. The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia.

    PubMed

    Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C; Wong, Agnes M F

    2016-04-01

    Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Participants with anisometropic amblyopia initiated smooth pursuit significantly later during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile of anisometropic amblyopia participants is similar to that of visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia, who exhibit reduced smooth pursuit velocity gains with more catch-up saccades.
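Pursuit latency in step-ramp paradigms of this kind is commonly estimated as the first time after target-motion onset that eye velocity stays above a small threshold for a sustained period; a hedged sketch (threshold, sampling rate, and duration criterion are illustrative defaults, not the authors' algorithm):

```python
def pursuit_latency(eye_velocity, onset_idx, fs=1000, thresh=3.0, min_ms=20):
    """Estimate smooth-pursuit latency (ms): first sample after target-motion
    onset where eye speed exceeds `thresh` deg/s for at least `min_ms` ms.
    All parameter values are illustrative, not from the study."""
    min_samples = int(min_ms * fs / 1000)
    run = 0
    for i in range(onset_idx, len(eye_velocity)):
        if abs(eye_velocity[i]) > thresh:
            run += 1
            if run >= min_samples:
                start = i - min_samples + 1          # first sample of the run
                return (start - onset_idx) * 1000.0 / fs
        else:
            run = 0
    return None  # no sustained pursuit found in the trace

# Toy trace sampled at 1 kHz: eye still for 200 ms, then moving at 15 deg/s
velocity = [0.0] * 200 + [15.0] * 100
latency = pursuit_latency(velocity, onset_idx=0)  # -> 200.0 ms
```

The sustained-duration criterion guards against counting a single noisy velocity sample as pursuit onset.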

  2. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants

    PubMed Central

    Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-01-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces and then their face recognition was tested with static face images. Eye tracking methodology was used to record eye movements during familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better was their face recognition, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. PMID:26010387

  3. Proposed New Vision Standards for the 1980’s and Beyond: Contrast Sensitivity

    DTIC Science & Technology

    1981-09-01

    spatial frequency, visual acuity, target acquisition, visual filters, spatial filtering, target detection, recognition identification, eye charts, workload...visual standards, as well as other performance criteria, are required to be shown relevant to "real-world" performance before acceptance. On the surface

  4. Obstacle Avoidance, Visual Detection Performance, and Eye-Scanning Behavior of Glaucoma Patients in a Driving Simulator: A Preliminary Study

    PubMed Central

    Prado Vega, Rocío; van Leeuwen, Peter M.; Rendón Vélez, Elizabeth; Lemij, Hans G.; de Winter, Joost C. F.

    2013-01-01

    The objective of this study was to evaluate differences in driving performance, visual detection performance, and eye-scanning behavior between glaucoma patients and control participants without glaucoma. Glaucoma patients (n = 23) and control participants (n = 12) completed four 5-min driving sessions in a simulator. The participants were instructed to maintain the car in the right lane of a two-lane highway while their speed was automatically maintained at 100 km/h. Additional tasks per session were: Session 1: none, Session 2: verbalization of projected letters, Session 3: avoidance of static obstacles, and Session 4: combined letter verbalization and avoidance of static obstacles. Eye-scanning behavior was recorded with an eye-tracker. Results showed no statistically significant differences between patients and control participants for lane keeping, obstacle avoidance, and eye-scanning behavior. Steering activity, number of missed letters, and letter reaction time were significantly higher for glaucoma patients than for control participants. In conclusion, glaucoma patients were able to avoid objects and maintain a nominal lane keeping performance, but applied more steering input than control participants, and were more likely than control participants to miss peripherally projected stimuli. The eye-tracking results suggest that glaucoma patients did not use extra visual search to compensate for their visual field loss. Limitations of the study, such as small sample size, are discussed. PMID:24146975

  5. Automation trust and attention allocation in multitasking workspace.

    PubMed

    Karpinsky, Nicole D; Chancey, Eric T; Palmer, Dakota B; Yamani, Yusuke

    2018-07-01

    Previous research suggests that operators with high workload can distrust and then poorly monitor automation, which has been generally inferred from automation dependence behaviors. To test automation monitoring more directly, the current study measured operators' visual attention allocation, workload, and trust toward imperfect automation in a dynamic multitasking environment. Participants concurrently performed a manual tracking task with two levels of difficulty and a system monitoring task assisted by an unreliable signaling system. Eye movement data indicate that operators allocate less visual attention to monitor automation when the tracking task is more difficult. Participants reported reduced levels of trust toward the signaling system when the tracking task demanded more focused visual attention. Analyses revealed that trust mediated the relationship between the load of the tracking task and attention allocation in Experiment 1, an effect that was not replicated in Experiment 2. Results imply a complex process underlying task load, visual attention allocation, and automation trust during multitasking. Automation designers should consider operators' task load in multitasking workspaces to avoid reduced automation monitoring and distrust toward imperfect signaling systems. Copyright © 2018. Published by Elsevier Ltd.

  6. Improving visual search in instruction manuals using pictograms.

    PubMed

    Kovačević, Dorotea; Brozović, Maja; Možina, Klementina

    2016-11-01

    Instruction manuals provide important messages about the proper use of a product. They should communicate in such a way that they facilitate users' searches for specific information. Despite the increasing research interest in visual search, there is a lack of empirical knowledge concerning the role of pictograms in search performance during the browsing of a manual's pages. This study investigates how the inclusion of pictograms improves the search for the target information. Furthermore, it examines whether this search process is influenced by the visual similarity between the pictograms and the searched-for information. On the basis of eye-tracking measurements, as objective indicators of the participants' visual attention, it was found that pictograms can be a useful element of search strategy. Another interesting finding was that boldface highlighting is a more effective method for improving user experience in information seeking than the similarity between the pictorial and adjacent textual information. Implications for designing effective user manuals are discussed. Practitioner Summary: Users often view instruction manuals with the aim of finding specific information. We used eye-tracking technology to examine different manual pages in order to improve the user's visual search for target information. The results indicate that the use of pictograms and bold highlighting of relevant information facilitate the search process.

  7. Eye Movements Affect Postural Control in Young and Older Females

    PubMed Central

    Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412

  8. Eye Movements Affect Postural Control in Young and Older Females.

    PubMed

    Thomas, Neil M; Bampouras, Theodoros M; Donovan, Tim; Dewhurst, Susan

    2016-01-01

    Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli shows how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions.

  9. [Slowing down the flow of facial information enhances facial scanning in children with autism spectrum disorders: A pilot eye tracking study].

    PubMed

    Charrier, A; Tardif, C; Gepner, B

    2017-02-01

    Face and gaze avoidance are among the most characteristic and salient symptoms of autism spectrum disorders (ASD). Studies using eye tracking highlighted early and lifelong ASD-specific abnormalities in attention to faces, such as decreased attention to internal facial features. These specificities could be partly explained by disorders in the perception and integration of rapid and complex information such as that conveyed by facial movements and, more broadly, by the biological and physical environment. We therefore tested whether slowing down facial dynamics may improve the way children with ASD attend to a face. We used an eye tracking method to examine gaze patterns of children with ASD aged 3 to 8 (n=23) and TD controls (n=29) while viewing the face of a speaker telling a story. The story was divided into 6 sequences that were randomly displayed at 3 different speeds, i.e. real-time speed (RT), a slow speed (S70=70% of RT speed), and a very slow speed (S50=50% of RT speed). S70 and S50 were displayed using software called Logiral™, which slows down visual and auditory stimuli simultaneously and without tone distortion. The visual scene was divided into four regions of interest (ROI): eyes region; mouth region; whole face region; outside the face region. The total time, number and mean duration of visual fixations on the whole visual scene and the four ROI were measured between and within the two groups. Compared to TD children, children with ASD spent significantly less time attending to the visual scenes and, when they looked at the scene, they spent less time scanning the speaker's face in general and her mouth in particular, and more time looking outside the facial area. Within the ASD group, mean duration of fixation increased on the whole scene, and particularly on the mouth area, in S50 compared to RT.
Children with mild autism spent more time looking at the face than the two other subgroups of children with ASD, and spent more time attending to the face and mouth, with longer mean durations of visual fixation on the mouth and eyes, at slow speeds (S50 and/or S70) than at RT. Slowing down facial dynamics thus enhances looking time on the face, and particularly on the mouth and/or eyes, in a group of 23 children with ASD, and particularly in a small subgroup with mild autism. Given the crucial role of reading the eyes for emotional processing and that of lip-reading for language processing, our present result and other converging ones could pave the way for novel socio-emotional and verbal rehabilitation methods for the autistic population. Further studies should investigate whether increased attention to the face, and particularly the eyes and mouth, is correlated with emotional/social and/or verbal/language improvements. Copyright © 2016 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  10. Selective Attention to a Talker's Mouth in Infancy: Role of Audiovisual Temporal Synchrony and Linguistic Experience

    ERIC Educational Resources Information Center

    Hillairet de Boisferon, Anne; Tift, Amy H.; Minar, Nicholas J.; Lewkowicz, David J.

    2017-01-01

    Previous studies have found that infants shift their attention from the eyes to the mouth of a talker when they enter the canonical babbling phase after 6 months of age. Here, we investigated whether this increased attentional focus on the mouth is mediated by audio-visual synchrony and linguistic experience. To do so, we tracked eye gaze in 4-,…

  11. Comprehensive Oculomotor Behavioral Response Assessment (COBRA)

    NASA Technical Reports Server (NTRS)

    Stone, Leland S. (Inventor); Liston, Dorion B. (Inventor)

    2017-01-01

    An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially-organized appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by one without a high degree of scientific sophistication or extensive training.
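
    Two of the oculometrics named above, pursuit latency and steady-state gain, can be estimated from the eye velocity trace of a single step-ramp trial roughly as follows. This is an illustrative sketch, not the patented analysis; the velocity threshold, analysis window, and trial parameters are assumed values.

    ```python
    import numpy as np

    def pursuit_metrics(t, eye_vel, target_vel, vel_thresh=2.0, ss_window=(0.4, 0.8)):
        """Pursuit latency and steady-state gain from one step-ramp trial
        (simplified sketch; threshold and window are assumed values)."""
        moving = np.abs(eye_vel) > vel_thresh
        latency = float(t[moving][0]) if moving.any() else None  # first supra-threshold sample
        ss = (t >= ss_window[0]) & (t < ss_window[1])            # steady-state interval
        gain = float(eye_vel[ss].mean() / target_vel)            # eye speed / target speed
        return latency, gain

    # Synthetic trial: 15 deg/s ramp; eye starts moving at 150 ms, tracks at gain 0.9
    t = np.arange(500) * 0.002                                   # 500 Hz sampling
    eye_vel = np.where(t < 0.15, 0.0, 13.5)                      # deg/s
    latency, gain = pursuit_metrics(t, eye_vel, target_vel=15.0)
    ```

    On this synthetic trace the estimated latency is 0.15 s and the gain is 0.9; real velocity traces would first need saccade removal and filtering.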

  12. Disappearance of the inversion effect during memory-guided tracking of scrambled biological motion.

    PubMed

    Jiang, Changhao; Yue, Guang H; Chen, Tingting; Ding, Jinhong

    2016-08-01

    The human visual system is highly sensitive to biological motion. Even when a point-light walker is temporarily occluded from view by other objects, our eyes are still able to maintain tracking continuity. To investigate how the visual system establishes a correspondence between the biological-motion stimuli visible before and after the disruption, we used the occlusion paradigm with biological-motion stimuli that were intact or scrambled. The results showed that during visually guided tracking, both the observers' predicted times and predictive smooth pursuit were more accurate for upright biological motion (intact and scrambled) than for inverted biological motion. During memory-guided tracking, however, the processing advantage for upright as compared with inverted biological motion was not found in the scrambled condition, but in the intact condition only. This suggests that spatial location information alone is not sufficient to build and maintain the representational continuity of the biological motion across the occlusion, and that the object identity may act as an important information source in visual tracking. The inversion effect disappeared when the scrambled biological motion was occluded, which indicates that when biological motion is temporarily occluded and there is a complete absence of visual feedback signals, an oculomotor prediction is executed to maintain the tracking continuity, which is established not only by updating the target's spatial location, but also by the retrieval of identity information stored in long-term memory.

  13. Oculometric Assessment of Dynamic Visual Processing

    NASA Technical Reports Server (NTRS)

    Liston, Dorion Bryce; Stone, Lee

    2014-01-01

    Eye movements are the most frequent (3 per second), shortest-latency (150-250 ms), and biomechanically simplest (1 joint, no inertial complexities) voluntary motor behavior in primates, providing a model system to assess sensorimotor disturbances arising from trauma, fatigue, aging, or disease states (e.g., Diefendorf and Dodge, 1908). We developed a 15-minute behavioral tracking protocol consisting of randomized step-ramp radial target motion to assess several aspects of the behavioral response to dynamic visual motion, including pursuit initiation, steady-state tracking, direction-tuning, and speed-tuning thresholds. This set of oculomotor metrics provides valid and reliable measures of dynamic visual performance (Stone and Krauzlis, 2003; Krukowski and Stone, 2005; Stone et al., 2009; Liston and Stone, 2014), and may prove to be a useful assessment tool for functional impairments of dynamic visual processing.

  14. Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.

    PubMed

    Mustari, Michael J

    2017-12-01

    Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  15. Online webcam-based eye tracking in cognitive science: A first look.

    PubMed

    Semmelmann, Kilian; Weigelt, Sarah

    2018-04-01

    Online experimentation is emerging in many areas of cognitive psychology as a viable alternative or supplement to classical in-lab experimentation. While performance- and reaction-time-based paradigms are covered in recent studies, one instrument of cognitive psychology has not received much attention up to now: eye tracking. In this study, we used JavaScript-based eye tracking algorithms recently made available by Papoutsaki et al. (International Joint Conference on Artificial Intelligence, 2016) together with consumer-grade webcams to investigate the potential of online eye tracking to benefit from the common advantages of online data collection. We compared three in-lab conducted tasks (fixation, pursuit, and free viewing) with online-acquired data to analyze the spatial precision in the first two, and the replicability of well-known gazing patterns in the third task. Our results indicate that in-lab data exhibit an offset of about 172 px (15% of screen size, 3.94° visual angle) in the fixation task, while online data are slightly less accurate (207 px, 18% of screen size) and show higher variance. The same results were found for the pursuit task, with a constant offset during the stimulus movement (211 px in-lab, 216 px online). In the free-viewing task, we were able to replicate the high attention attribution to the eyes (28.25%) compared to other key regions like the nose (9.71%) and mouth (4.00%). Overall, we found web technology-based eye tracking to be suitable for all three tasks and are confident that the required hard- and software will be improved continuously for even more sophisticated experimental paradigms in all of cognitive psychology.
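
    The accuracy figures above are given both in pixels and in degrees of visual angle; the conversion between the two depends on screen size, resolution, and viewing distance, which are not repeated here. A minimal sketch of the standard conversion (the screen geometry in the example is an assumed value, not taken from the study):

    ```python
    import math

    def px_to_deg(offset_px, screen_width_cm, screen_width_px, distance_cm):
        """Convert an on-screen offset in pixels to degrees of visual angle."""
        offset_cm = offset_px * screen_width_cm / screen_width_px
        # Angle subtended at the eye by the offset, centered on the line of sight
        return math.degrees(2 * math.atan(offset_cm / (2 * distance_cm)))

    # Example with assumed geometry: a 34.5 cm wide screen at 1280 px, viewed from 60 cm
    angle = px_to_deg(172, 34.5, 1280, 60.0)
    ```

    With this assumed geometry a 172 px offset corresponds to roughly 4.4°; the study's 3.94° figure implies a somewhat different (unreported) setup.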

  16. An automatic calibration procedure for remote eye-gaze tracking systems.

    PubMed

    Model, Dmitri; Guestrin, Elias D; Eizenman, Moshe

    2009-01-01

    Remote gaze estimation systems use calibration procedures to estimate subject-specific parameters that are needed for the calculation of the point-of-gaze. In these procedures, subjects are required to fixate on a specific point or points at specific time instances. Advanced remote gaze estimation systems can estimate the optical axis of the eye without any personal calibration procedure, but use a single calibration point to estimate the angle between the optical axis and the visual axis (line-of-sight). This paper presents a novel automatic calibration procedure that does not require active user participation. To estimate the angles between the optical and visual axes of each eye, this procedure minimizes the distance between the intersections of the visual axes of the left and right eyes with the surface of a display while subjects look naturally at the display (e.g., watching a video clip). Simulation results demonstrate that the performance of the algorithm improves as the range of viewing angles increases. For a subject sitting 75 cm in front of an 80 cm x 60 cm display (40" TV) the standard deviation of the error in the estimation of the angles between the optical and visual axes is 0.5 degrees.
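
    The calibration idea above, choosing the angular offsets between the optical and visual axes so that the two eyes' corrected points-of-gaze coincide while the subject views a display naturally, can be illustrated with a toy one-dimensional simulation. This is a hedged sketch under a small-angle approximation, not the paper's algorithm; all geometry and numbers are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground-truth per-eye offsets between optical and visual axes, expressed as
    # tangent slopes under a small-angle approximation (assumed values, ~5° and ~-3.4°).
    k_left, k_right = 0.09, -0.06

    n = 200
    gaze = rng.uniform(-20, 20, n)           # true point-of-gaze on screen (cm, one axis)
    d_left = rng.uniform(55, 75, n)          # left-eye-to-intersection distance (cm)
    d_right = d_left + rng.uniform(-3, 3, n)

    # Observed intersections of the *optical* axes with the display
    obs_left = gaze - d_left * k_left
    obs_right = gaze - d_right * k_right

    # Calibration: choose offsets that bring the two eyes' corrected points-of-gaze
    # together, i.e. solve  d_left*kL - d_right*kR = obs_right - obs_left  in the
    # least-squares sense -- no fixation targets required.
    A = np.column_stack([d_left, -d_right])
    b = obs_right - obs_left
    (kL_est, kR_est), *_ = np.linalg.lstsq(A, b, rcond=None)
    ```

    The offsets are identifiable only because the viewing geometry (here, the eye-to-screen distance) varies across fixations, which mirrors the paper's finding that performance improves with the range of viewing angles.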

  17. A Method to Quantify Visual Information Processing in Children Using Eye Tracking

    PubMed Central

    Kooiker, Marlou J.G.; Pel, Johan J.M.; van der Steen-Kant, Sanny P.; van der Steen, Johannes

    2016-01-01

    Visual problems that occur early in life can have a major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four-choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii) construct an individual visual profile for each child. PMID:27500922

  18. A Method to Quantify Visual Information Processing in Children Using Eye Tracking.

    PubMed

    Kooiker, Marlou J G; Pel, Johan J M; van der Steen-Kant, Sanny P; van der Steen, Johannes

    2016-07-09

    Visual problems that occur early in life can have a major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four-choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii) construct an individual visual profile for each child.
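
    From raw gaze samples and the quadrant in which the stimulus appeared, output parameters like reaction time and fixation duration can be computed roughly as follows. This is an illustrative sketch, not the authors' implementation; the screen resolution and sampling rate are assumed values.

    ```python
    import numpy as np

    def quadrant_metrics(t, x, y, stim_quadrant, width=1920, height=1080):
        """Reaction time and total on-target looking time for one trial of a
        four-quadrant preferential-looking paradigm (simplified sketch)."""
        # Quadrant of each gaze sample: 0=top-left, 1=top-right, 2=bottom-left, 3=bottom-right
        q = (x >= width / 2).astype(int) + 2 * (y >= height / 2).astype(int)
        on_target = q == stim_quadrant
        if not on_target.any():
            return None, 0.0
        rt = t[on_target][0] - t[0]          # time to first on-target sample
        dt = np.diff(t, append=t[-1])        # per-sample durations
        return rt, float(dt[on_target].sum())

    # 120 Hz samples: gaze starts top-left, jumps to the bottom-right stimulus at 0.3 s
    t = np.arange(120) / 120
    x = np.where(t < 0.3, 400, 1500).astype(float)
    y = np.where(t < 0.3, 300, 800).astype(float)
    rt, dur = quadrant_metrics(t, x, y, stim_quadrant=3)
    ```

    A clinical pipeline would additionally filter blinks and tracking loss before computing these metrics.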

  19. Statistical regularities in art: Relations with visual coding and perception.

    PubMed

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  20. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs for helping the blind navigate through unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs, such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device, and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use, and brings them closer to visual navigation.

  1. The eye-tracking of social stimuli in patients with Rett syndrome and autism spectrum disorders: a pilot study.

    PubMed

    Schwartzman, José Salomão; Velloso, Renata de Lima; D'Antino, Maria Eloísa Famá; Santos, Silvana

    2015-05-01

    To compare visual fixation at social stimuli in Rett syndrome (RS) and autism spectrum disorders (ASD) patients. Visual fixation at social stimuli was analyzed in 14 RS female patients (age range 4-30 years), 11 ASD male patients (age range 4-20 years), and 17 children with typical development (TD). Patients were exposed to three different pictures (two of human faces and one with social and non-social stimuli) presented for 8 seconds each on the screen of a computer attached to eye-tracking equipment. The percentage of visual fixation at social stimuli was significantly higher in the RS group compared to the ASD and even the TD groups. Visual fixation at social stimuli appears to be one more endophenotype distinguishing RS from ASD.

  2. Visual selective attention in body dysmorphic disorder, bulimia nervosa and healthy controls.

    PubMed

    Kollei, Ines; Horndasch, Stefanie; Erim, Yesim; Martin, Alexandra

    2017-01-01

    Cognitive behavioral models postulate that selective attention plays an important role in the maintenance of body dysmorphic disorder (BDD). It is suggested that individuals with BDD overfocus on perceived defects in their appearance, which may contribute to the excessive preoccupation with their appearance. The present study used eye tracking to examine visual selective attention in individuals with BDD (n=19), as compared to individuals with bulimia nervosa (BN) (n=21) and healthy controls (HCs) (n=21). Participants completed interviews, questionnaires, rating scales and an eye tracking task: Eye movements were recorded while participants viewed photographs of their own face and attractive as well as unattractive other faces. Eye tracking data showed that BDD and BN participants focused less on their self-rated most attractive facial part than HCs. Scanning patterns in own and other faces showed that BDD and BN participants paid as much attention to attractive as to unattractive features in their own face, whereas they focused more on attractive features in attractive other faces. HCs paid more attention to attractive features in their own face and did the same in attractive other faces. Results indicate an attentional bias in BDD and BN participants manifesting itself in a neglect of positive features compared to HCs. Perceptual retraining may be an important aspect to focus on in therapy in order to overcome the neglect of positive facial aspects. Future research should aim to disentangle attentional processes in BDD by examining the time course of attentional processing. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Spatial updating in area LIP is independent of saccade direction.

    PubMed

    Heiser, Laura M; Colby, Carol L

    2006-05-01

    We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.

  4. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.
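
    The paper solves for 24 joint parameters and 6 pose parameters in closed form; as a much-reduced illustration of the sphere-based idea, the sketch below recovers only the hand-eye translation, assuming the rotational part is already known (taken as identity here) and that the calibration sphere's center is measured in the sensor frame at several robot poses. All symbols and values are illustrative, not from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def random_rotation():
        # Random rotation matrix via QR decomposition, determinant fixed to +1
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        return q * np.sign(np.linalg.det(q))

    # Ground truth (illustrative): hand-to-sensor translation and the fixed
    # position of the criterion sphere's centre in the robot base frame.
    t_x_true = np.array([0.10, -0.05, 0.20])
    w_true = np.array([1.0, 0.5, 0.3])

    # Simulated measurements: at each robot pose (R_i, p_i) the sensor reports
    # the sphere centre c_i in its own frame, satisfying  w = R_i @ (c_i + t_x) + p_i
    poses = []
    for _ in range(10):
        R = random_rotation()
        p = rng.uniform(-1, 1, 3)
        c = R.T @ (w_true - p) - t_x_true
        poses.append((R, p, c))

    # Rearranged as a linear system:  [R_i  -I] [t_x; w] = -(p_i + R_i c_i)
    A = np.vstack([np.hstack([R, -np.eye(3)]) for R, p, c in poses])
    b = np.concatenate([-(p + R @ c) for R, p, c in poses])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    t_x_est, w_est = sol[:3], sol[3:]
    ```

    With varied rotations the system has full rank, so both the translation and the sphere centre are recovered jointly; the full method additionally estimates the rotation and compensates joint parameter errors.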

  5. Joint Attention without Gaze Following: Human Infants and Their Parents Coordinate Visual Attention to Objects through Eye-Hand Coordination

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151

  6. Eye gaze tracking reveals heightened attention to food in adults with binge eating when viewing images of real-world scenes.

    PubMed

    Popien, Avery; Frayn, Mallory; von Ranson, Kristin M; Sears, Christopher R

    2015-08-01

    Individuals with eating disorders often exhibit food-related biases in attention tasks. To assess the engagement and maintenance of attention to food in adults with binge eating, in the present study, eye gaze tracking was used to compare fixations to food among non-clinical adults with versus without binge eating while they viewed images of real-world scenes. Fifty-seven participants' eye fixations were tracked and recorded throughout 8-second presentations of scenes containing high-calorie and/or low-calorie food items in various settings (restaurants, social gatherings, etc.). Participants with binge eating fixated on both high-calorie and low-calorie food items significantly more than controls, and this was the case when the high- and low-calorie food items were presented in the same image and in different images. Participants with binge eating also fixated on food items significantly earlier in the presentations. A time course analysis that divided each 8-second presentation into 2-second intervals revealed that participants with binge eating attended to food items more than control participants throughout the 8-second presentation. These results have implications for theory regarding the initiation and maintenance of binge eating. Copyright © 2015 Elsevier Ltd. All rights reserved.
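    The time course analysis described above, which divides each 8-second presentation into 2-second intervals, can be sketched as follows. This is a hypothetical illustration of the binning step, not the authors' code; the `dwell_per_bin` helper and its fixation format are assumptions.

    ```python
    import numpy as np

    def dwell_per_bin(fixations, trial_ms=8000, bin_ms=2000):
        """Accumulate dwell time (ms) on food regions into fixed time bins.
        fixations: iterable of (start_ms, end_ms, on_food) tuples for one trial."""
        edges = np.arange(0, trial_ms + bin_ms, bin_ms)
        dwell = np.zeros(len(edges) - 1)
        for start, end, on_food in fixations:
            if not on_food:
                continue
            for i in range(len(dwell)):
                # add the overlap between this fixation and bin i
                dwell[i] += max(0, min(end, edges[i + 1]) - max(start, edges[i]))
        return dwell
    ```

    For example, food fixations spanning 500-1500 ms and 1800-2600 ms contribute 1200 ms of food dwell to the first 2-second bin and 600 ms to the second, because a fixation straddling a bin edge is split between the bins it overlaps.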

  7. Improvement of design of a surgical interface using an eye tracking device

    PubMed Central

    2014-01-01

    Background Surgical interfaces help surgeons interpret and quantify patient information and present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device was used to obtain the best configuration of the developed surgical interface. Methods The surgical interface for kidney tumor cryoablation was developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments on a simulated tumor cryoablation task were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the surgical interface. Results Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. 
Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and that their overall situational awareness scores were considerably high. Conclusions This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain the best surgical interface (SI) configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability. PMID:25080176

  8. Improvement of design of a surgical interface using an eye tracking device.

    PubMed

    Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz

    2014-05-07

    Surgical interfaces help surgeons interpret and quantify patient information and present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device was used to obtain the best configuration of the developed surgical interface. The surgical interface was developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments on a simulated tumor cryoablation task were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the surgical interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to those observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. 
The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and that their overall situational awareness scores were considerably high. This preliminary study highlights the improvement of a surgical interface using eye tracking technology to obtain the best surgical interface (SI) configuration. The results presented here reveal that a visual surgical interface designed according to eye movement characteristics may lead to improved usability.

  9. Eye-Tracking Measures Reveal How Changes in the Design of Aided AAC Displays Influence the Efficiency of Locating Symbols by School-Age Children without Disabilities

    ERIC Educational Resources Information Center

    Wilkinson, Krista M.; O'Neill, Tara; McIlvane, William J.

    2014-01-01

    Purpose: Many individuals with communication impairments use aided augmentative and alternative communication (AAC) systems involving letters, words, or line drawings that rely on the visual modality. It seems reasonable to suggest that display design should incorporate information about how users attend to and process visual information. The…

  10. Eye movements and serial memory for visual-spatial information: does time spent fixating contribute to recall?

    PubMed

    Saint-Aubin, Jean; Tremblay, Sébastien; Jalbert, Annie

    2007-01-01

    This research investigated the nature of encoding and its contribution to serial recall for visual-spatial information. In order to do so, we examined the relationship between fixation duration and recall performance. Using the dot task (a series of seven dots spatially distributed on a monitor screen, presented sequentially for immediate recall), performance and eye-tracking data were recorded during the presentation of the to-be-remembered items. When participants were free to move their eyes at their will, both fixation durations and probability of correct recall decreased as a function of serial position. Furthermore, imposing constant durations of fixation across all serial positions had a beneficial impact (though relatively small) on item but not order recall. Great care was taken to isolate the effect of fixation duration from that of presentation duration. Although eye movement at encoding contributes to immediate memory, it is not decisive in shaping serial recall performance. Our results also provide further evidence that the distinction between item and order information, well-established in the verbal domain, extends to visual-spatial information.

  11. Gaze-contingent displays: a review.

    PubMed

    Duchowski, Andrew T; Cournia, Nathan; Murphy, Hunter

    2004-12-01

    Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.

  12. Adolescents' attention to responsibility messages in magazine alcohol advertisements: an eye-tracking approach.

    PubMed

    Thomsen, Steven R; Fulton, Kristi

    2007-07-01

    To investigate whether adolescent readers attend to responsibility or moderation messages (e.g., "drink responsibly") included in magazine advertisements for alcoholic beverages and to assess the association between attention and the ability to accurately recall the content of these messages. An integrated head-eye tracking system (ASL Eye-TRAC 6000) was used to measure the eye movements, including fixations and fixation duration, of a group of 63 adolescents (ages 12-14 years) as they viewed six print advertisements for alcoholic beverages. Immediately after the eye-tracking sessions, participants completed a masked-recall exercise. Overall, the responsibility or moderation messages were the least frequently viewed textual or visual areas of the advertisements. Participants spent an average of only 0.35 seconds, or 7% of the total viewing time, fixating on each responsibility message. Beverage bottles, product logos, and cartoon illustrations were the most frequently viewed elements of the advertisements. Among those participants who fixated at least once on an advertisement's warning message, only a relatively small percentage were able to recall its general concept or restate it verbatim in the masked recall test. Voluntary responsibility or moderation messages failed to capture the attention of teenagers who participated in this study and need to be typographically modified to be more effective.

  13. Anticipation in Real-world Scenes: The Role of Visual Context and Visual Memory

    ERIC Educational Resources Information Center

    Coco, Moreno I.; Keller, Frank; Malcolm, George L.

    2016-01-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically…

  14. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    PubMed

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved.

  15. Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays

    PubMed Central

    Padmanaban, Nitish; Konrad, Robert; Stramer, Tal; Wetzstein, Gordon

    2017-01-01

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one. PMID:28193871

  16. A treat for the eyes. An eye-tracking study on children's attention to unhealthy and healthy food cues in media content.

    PubMed

    Spielvogel, Ines; Matthes, Jörg; Naderer, Brigitte; Karsay, Kathrin

    2018-06-01

    Based on cue reactivity theory, food cues embedded in media content can lead to physiological and psychological responses in children. Research suggests that unhealthy food cues are represented more extensively and interactively in children's media environments than healthy ones. However, it is not yet clear whether children react differently to unhealthy compared to healthy food cues. In an experimental study with 56 children (55.4% girls; M age = 8.00, SD = 1.58), we used eye-tracking to determine children's attention to unhealthy and healthy food cues embedded in a narrative cartoon movie. Besides varying the food type (i.e., healthy vs. unhealthy), we also manipulated the integration levels of food cues with characters (i.e., level of food integration; no interaction vs. handling vs. consumption), and we assessed children's individual susceptibility factors by measuring the impact of their hunger level. Our results indicated that unhealthy food cues attract children's visual attention to a larger extent than healthy cues. However, their initial visual interest did not differ between unhealthy and healthy food cues. Furthermore, an increase in the level of food integration led to an increase in visual attention. Our findings showed no moderating impact of hunger. We conclude that especially unhealthy food cues with an interactive connection trigger cue reactivity in children. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. The Initiation of Smooth Pursuit is Delayed in Anisometropic Amblyopia

    PubMed Central

    Raashid, Rana Arham; Liu, Ivy Ziqian; Blakeman, Alan; Goltz, Herbert C.; Wong, Agnes M. F.

    2016-01-01

    Purpose Several behavioral studies have shown that the reaction times of visually guided movements are slower in people with amblyopia, particularly during amblyopic eye viewing. Here, we tested the hypothesis that the initiation of smooth pursuit eye movements, which are responsible for accurately keeping moving objects on the fovea, is delayed in people with anisometropic amblyopia. Methods Eleven participants with anisometropic amblyopia and 14 visually normal observers were asked to track a step-ramp target moving at ±15°/s horizontally as quickly and as accurately as possible. The experiment was conducted under three viewing conditions: amblyopic/nondominant eye, binocular, and fellow/dominant eye viewing. Outcome measures were smooth pursuit latency, open-loop gain, steady state gain, and catch-up saccade frequency. Results Participants with anisometropic amblyopia initiated smooth pursuit significantly slower during amblyopic eye viewing (206 ± 20 ms) than visually normal observers viewing with their nondominant eye (183 ± 17 ms, P = 0.002). However, mean pursuit latency in the anisometropic amblyopia group during binocular and monocular fellow eye viewing was comparable to the visually normal group. Mean open-loop gain, steady state gain, and catch-up saccade frequency were similar between the two groups, but participants with anisometropic amblyopia exhibited more variable steady state gain (P = 0.045). Conclusions This study provides evidence of temporally delayed smooth pursuit initiation in anisometropic amblyopia. After initiation, the smooth pursuit velocity profile in anisometropic amblyopia participants is similar to visually normal controls. This finding differs from what has been observed previously in participants with strabismic amblyopia who exhibit reduced smooth pursuit velocity gains with more catch-up saccades. PMID:27070109

  18. Similar effects of feature-based attention on motion perception and pursuit eye movements at different levels of awareness

    PubMed Central

    Spering, Miriam; Carrasco, Marisa

    2012-01-01

    Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids (stimuli composed of two orthogonally drifting gratings, presented separately to each eye) in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: while perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238

  19. Visual attention mechanisms in happiness versus trustworthiness processing of facial expressions.

    PubMed

    Calvo, Manuel G; Krumhuber, Eva G; Fernández-Martín, Andrés

    2018-03-01

    A happy facial expression makes a person look (more) trustworthy. Do perceptions of happiness and trustworthiness rely on the same face regions and visual attention processes? In an eye-tracking study, eye movements and fixations were recorded while participants judged the un/happiness or the un/trustworthiness of dynamic facial expressions in which the eyes and/or the mouth unfolded from neutral to happy or vice versa. A smiling mouth and happy eyes enhanced perceived happiness and trustworthiness similarly, with a greater contribution of the smile relative to the eyes. This comparable judgement output for happiness and trustworthiness was reached through shared as well as distinct attentional mechanisms: (a) entry times and (b) initial fixation thresholds for each face region were equivalent for both judgements, thereby revealing the same attentional orienting in happiness and trustworthiness processing. However, (c) greater and (d) longer fixation density for the mouth region in the happiness task, and for the eye region in the trustworthiness task, demonstrated different selective attentional engagement. Relatedly, (e) mean fixation duration across face regions was longer in the trustworthiness task, thus showing increased attentional intensity or processing effort.

  20. Younger and Older Adults' Use of Verb Aspect and World Knowledge in the Online Interpretation of Discourse

    ERIC Educational Resources Information Center

    Mozuraitis, Mindaugas; Chambers, Craig G.; Daneman, Meredyth

    2013-01-01

    Eye tracking was used to explore the role of grammatical aspect and world knowledge in establishing temporal relationships across sentences in discourse. Younger and older adult participants read short passages that included sentences such as "Mrs. Adams was knitting/knitted a new sweater"..."She wore her new garment...".…

  1. Others' emotions teach, but not in autism: an eye-tracking pupillometry study.

    PubMed

    Nuske, Heather J; Vivanti, Giacomo; Dissanayake, Cheryl

    2016-01-01

    Much research has investigated deficits in emotional reactivity to others in people with autism, but scant attention has been paid to how these deficits affect their own reactions to features of their environment (objects, events, practices, etc.). The present study presents a preliminary analysis of whether calibrating one's own emotional reactions to others' emotional reactions about features of the world, a process we term social-emotional calibration, is disrupted in autism. To examine this process, we used a novel eye-tracking pupillometry paradigm in which we showed 20 preschoolers with autism and 20 matched typically developing preschoolers videos of an actor opening a box and reacting to the occluded object inside with fear or happiness. We expected preschoolers to come to perceive the box as containing a positive or threatening stimulus through emotionally calibrating to the actor's emotional expressions. Children's mean pupil diameter (indicating emotional reactivity) was measured whilst viewing an up-close, visually identical image of the box before and then after the scene, and this difference was taken as an index of social-emotional calibration and compared between groups. Whilst the typically developing preschoolers responded more emotionally to the box after, compared to before, the scene (as indexed by an increase in pupil size), those with autism did not, suggesting their reaction to the object was not affected by the actor's emotional expressions. The groups did not differ in looking duration to the emotional expressions; thus, the pupil dilation findings cannot be explained by differences in visual attention. More social-emotional calibration in the happy condition was associated with less severe autism symptoms. Through the measurement of physiological reactivity, these findings suggest social-emotional calibration is diminished in children with autism, with calibration to others' positive emotions being particularly important. 
This study highlights a possible mechanism by which individuals with autism develop idiosyncratic reactions to features of their environment, which is likely to impact their active and harmonious participation in social and cultural practices from infancy and throughout the lifespan. More research is needed to examine the mediators and developmental sequence of this tendency to emotionally calibrate to others' feelings about the world.

  2. Role of Oculoproprioception in Coding the Locus of Attention.

    PubMed

    Odoj, Bartholomaeus; Balslev, Daniela

    2016-03-01

    The most common neural representations for spatial attention encode locations retinotopically, relative to center of gaze. To keep track of visual objects across saccades or to orient toward sounds, retinotopic representations must be combined with information about the rotation of one's own eyes in the orbits. Although gaze input is critical for a correct allocation of attention, the source of this input has so far remained unidentified. Two main signals are available: corollary discharge (copy of oculomotor command) and oculoproprioception (feedback from extraocular muscles). Here we asked whether the oculoproprioceptive signal relayed from the somatosensory cortex contributes to coding the locus of attention. We used continuous theta burst stimulation (cTBS) over a human oculoproprioceptive area in the postcentral gyrus (S1EYE). S1EYE-cTBS reduces proprioceptive processing, causing ∼1° underestimation of gaze angle. Participants discriminated visual targets whose location was cued in a nonvisual modality. Throughout the visual space, S1EYE-cTBS shifted the locus of attention away from the cue by ∼1°, in the same direction and by the same magnitude as the oculoproprioceptive bias. This systematic shift cannot be attributed to visual mislocalization. Accuracy of open-loop pointing to the same visual targets, a function thought to rely mainly on the corollary discharge, was unchanged. We argue that oculoproprioception is selective for attention maps. By identifying a potential substrate for the coupling between eye and attention, this study contributes to the theoretical models for spatial attention.

  3. Eye movement assessment of selective attentional capture by emotional pictures.

    PubMed

    Nummenmaa, Lauri; Hyönä, Jukka; Calvo, Manuel G

    2006-05-01

    The eye-tracking method was used to assess attentional orienting to and engagement on emotional visual scenes. In Experiment 1, unpleasant, neutral, or pleasant target pictures were presented simultaneously with neutral control pictures in peripheral vision under instruction to compare pleasantness of the pictures. The probability of first fixating an emotional picture, and the frequency of subsequent fixations, were greater than those for neutral pictures. In Experiment 2, participants were instructed to avoid looking at the emotional pictures, but these were still more likely to be fixated first and gazed at longer during first-pass viewing than neutral pictures. Low-level visual features cannot explain the results. It is concluded that overt visual attention is captured by both unpleasant and pleasant emotional content. (c) 2006 APA, all rights reserved.

  4. The effect of different brightness conditions on visually and memory guided saccades.

    PubMed

    Felßberg, Anna-Maria; Dombrowe, Isabel

    2018-01-01

    It is commonly assumed that saccades in the dark are slower than saccades in a lit room. Early studies that investigated this issue using electrooculography (EOG) often compared memory guided saccades in darkness to visually guided saccades in an illuminated room. However, later studies showed that memory guided saccades are generally slower than visually guided saccades. Research on this topic is further complicated by the fact that the different existing eye-tracking methods do not necessarily lead to consistent measurements. In the present study, we independently manipulated task (memory guided/visually guided) and screen brightness (dark, medium and light) in an otherwise completely dark room, and measured the peak velocity and the duration of participants' saccades using a popular pupil-cornea reflection (p-cr) eyetracker (Eyelink 1000). Based on a critical reading of the literature, including a recent study using cornea-reflection (cr) eye tracking, we did not expect any velocity or duration differences between the three brightness conditions. We found that memory guided saccades were generally slower than visually guided saccades. In both tasks, eye movements on a medium and light background were equally fast and had similar durations. However, saccades on the dark background were slower and had shorter durations, even after we corrected for the effect of pupil size changes. This slowing is therefore most likely an artifact of current pupil-based eye tracking. We conclude that the common assumption that saccades in the dark are slower than in the light is probably not true; however, pupil-based eyetrackers tend to underestimate the peak velocity of saccades on very dark backgrounds, creating the impression that this might be the case. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. A study on the natural history of scanning behaviour in patients with visual field defects after stroke.

    PubMed

    Loetscher, Tobias; Chen, Celia; Wignall, Sophie; Bulling, Andreas; Hoppe, Sabrina; Churches, Owen; Thomas, Nicole A; Nicholls, Michael E R; Lee, Andrew

    2015-04-24

    A visual field defect (VFD) is a common consequence of stroke with a detrimental effect upon the survivors' functional ability and quality of life. The identification of effective treatments for VFD is a key priority relating to life post-stroke. Understanding the natural evolution of scanning compensation over time may have important ramifications for the development of efficacious therapies. The study aims to unravel the natural history of visual scanning behaviour in patients with VFD. The assessment of scanning patterns in the acute to chronic stages of stroke will reveal who does and does not learn to compensate for vision loss. Eye-tracking glasses are used to delineate eye movements in a cohort of 100 stroke patients immediately after stroke, and additionally at 6 and 12 months post-stroke. The longitudinal study will assess eye movements in static (sitting) and dynamic (walking) conditions. The primary outcome constitutes the change of lateral eye movements from the acute to chronic stages of stroke. Secondary outcomes include changes of lateral eye movements over time as a function of subgroup characteristics, such as side of VFD, stroke location, stroke severity and cognitive functioning. The longitudinal comparison of patients who do and do not learn compensatory scanning techniques may reveal important prognostic markers of natural recovery. Importantly, it may also help to determine the most effective treatment window for visual rehabilitation.

  6. Death anxiety and visual oculomotor processing of arousing stimuli in a free view setting.

    PubMed

    Wendelberg, Linda; Volden, Frode; Yildirim-Yayilgan, Sule

    2017-04-01

    The main goal of this study was to determine how death anxiety (DA) affects visual processing when confronted with arousing stimuli. A total of 26 males and females were primed with either DA or a neutral prime and were given a free-view/free-choice task in which eye movement was measured using an eye tracker. The goal was to identify measurable/observable indicators of whether the subjects were under the influence of DA during the free view. We conducted an eye-tracking study because this is an area where we believe it is possible to find observable indicators. Ultimately, we observed some changes in visual behavior, such as prolonged average latency, altered sensitivity to the repetition of stimuli, longer fixations, less time in saccadic activity, and fewer classifications related to focal and ambient processing, which appear to occur under the influence of DA when subjects are confronted with arousing stimuli. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  7. GazeParser: an open-source and multiplatform library for low-cost eye tracking and analysis.

    PubMed

    Sogo, Hiroyuki

    2013-09-01

    Eye movement analysis is an effective method for research on visual perception and cognition. However, recording eye movements presents practical difficulties related to the cost of the recording devices and the programming of device controls for use in experiments. GazeParser is an open-source library for low-cost eye tracking and data analysis; it consists of a video-based eye tracker and libraries for data recording and analysis. The libraries are written in Python and can be used in conjunction with the PsychoPy and VisionEgg experimental control libraries. Three eye movement experiments are reported as performance tests of GazeParser. These showed that the means and standard deviations of errors in sampling intervals were less than 1 ms. Spatial accuracy ranged from 0.7° to 1.2°, depending on the participant. In gap/overlap tasks and antisaccade tasks, the latency and amplitude of the saccades detected by GazeParser agreed with those detected by a commercial eye tracker. These results show that GazeParser demonstrates adequate performance for use in psychological experiments.

  8. [To promote universal eye health to push forward sustaining development of the prevention of blindness in China].

    PubMed

    Zhao, Jialiang

    2014-03-01

    The action plan for the prevention of avoidable blindness and visual impairment for 2014-2019, endorsed by the 66th World Health Assembly, is an important document for promoting the global prevention of blindness. This action plan summarized the experiences and lessons of the global prevention of avoidable blindness and visual impairment from 2009 to 2013, raised the global goal for the prevention of blindness (a reduction in the prevalence of avoidable visual impairment by 25% by 2019 from the 2010 baseline), and set up monitoring indicators for realizing this goal. The document can serve as a roadmap to consolidate joint efforts towards universal eye health in the world, and it should have a deep and important impact on the prevention of blindness in China. We should implement the action plan for the prevention of avoidable blindness and visual impairment for 2014-2019 to push forward the sustained development of the prevention of blindness in China.

  9. Classroom Displays--Attraction or Distraction? Evidence of Impact on Attention and Learning from Children with and without Autism

    ERIC Educational Resources Information Center

    Hanley, Mary; Khairat, Mariam; Taylor, Korey; Wilson, Rachel; Cole-Fletcher, Rachel; Riby, Deborah M.

    2017-01-01

    Paying attention is a critical first step toward learning. For children in primary school classrooms there can be many things to attend to other than the focus of a lesson, such as visual displays on classroom walls. The aim of this study was to use eye-tracking techniques to explore the impact of visual displays on attention and learning for…

  10. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  11. Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study.

    PubMed

    Sperling, Ingmar; Baldofski, Sabrina; Lüthold, Patrick; Hilbert, Anja

    2017-08-19

    Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED.

  12. Cognitive Food Processing in Binge-Eating Disorder: An Eye-Tracking Study

    PubMed Central

    Sperling, Ingmar; Lüthold, Patrick; Hilbert, Anja

    2017-01-01

    Studies indicate an attentional bias towards food in binge-eating disorder (BED); however, more evidence on attentional engagement and disengagement and processing of multiple attention-competing stimuli is needed. This study aimed to examine visual attention to food and non-food stimuli in BED. In n = 23 participants with full-syndrome and subsyndromal BED and n = 23 individually matched healthy controls, eye-tracking was used to assess attention to food and non-food stimuli during a free exploration paradigm and a visual search task. In the free exploration paradigm, groups did not differ in their initial fixation position. While both groups fixated non-food stimuli significantly longer than food stimuli, the BED group allocated significantly more attention towards food than controls. In the visual search task, groups did not differ in detection times. However, a significant detection bias for food was found in full-syndrome BED, but not in controls. An increased initial attention towards food was related to greater BED symptomatology and lower body mass index (BMI) only in full-syndrome BED, while a greater maintained attention to food was associated with lower BMI in controls. The results suggest food-biased visual attentional processing in adults with BED. Further studies should clarify the implications of attentional processes for the etiology and maintenance of BED. PMID:28825607

  13. Visual stimuli that elicit appetitive behaviors in three morphologically distinct species of praying mantis.

    PubMed

    Prete, Frederick R; Komito, Justin L; Dominguez, Salina; Svenson, Gavin; López, LeoLin Y; Guillen, Alex; Bogdanivich, Nicole

    2011-09-01

    We assessed the differences in appetitive responses to visual stimuli by three species of praying mantis (Insecta: Mantodea): Tenodera aridifolia sinensis, Mantis religiosa, and Cilnia humeralis. Tethered, adult females watched computer-generated stimuli (erratically moving disks or linearly moving rectangles) that varied along predetermined parameters. Three responses were scored: tracking, approaching, and striking. Threshold stimulus size (diameter) for tracking and striking at disks ranged from 3.5 deg (C. humeralis) to 7.8 deg (M. religiosa), and from 3.3 deg (C. humeralis) to 11.7 deg (M. religiosa), respectively. Unlike the other species, which struck at disks as large as 44 deg, T. a. sinensis displayed a preference for 14 deg disks. Disks moving at 143 deg/s were preferred by all species. M. religiosa exhibited the most approaching behavior and, together with T. a. sinensis, distinguished between rectangular stimuli moving parallel versus perpendicular to their long axes. C. humeralis did not make this distinction. Stimulus sizes that elicited the target behaviors were not related to mantis size. However, differences in compound eye morphology may be related to the species differences: C. humeralis' eyes are farthest apart, and it has an apparently narrower binocular visual field, which may affect retinal inputs to movement-sensitive visual interneurons.

  14. Procedural learning and associative memory mechanisms contribute to contextual cueing: Evidence from fMRI and eye-tracking.

    PubMed

    Manelis, Anna; Reder, Lynne M

    2012-10-16

    Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate the target. A psychophysiological interactions analysis for repeated configurations revealed that a strong functional connectivity between this area in the right hippocampus and the left superior parietal lobule early in learning was significantly reduced toward the end of the task. Practice related changes (which we call "procedural learning") in activation in temporo-occipital and parietal brain regions depended on whether or not spatial context was repeated. We conclude that context repetition facilitates visual search through chunk formation that reduces the number of effective distractors that have to be processed during the search. Context repetition influences procedural learning in a way that allows for continuous and effective chunk updating.

  15. Procedural learning and associative memory mechanisms contribute to contextual cueing: Evidence from fMRI and eye-tracking

    PubMed Central

    Manelis, Anna; Reder, Lynne M.

    2012-01-01

    Using a combination of eye tracking and fMRI in a contextual cueing task, we explored the mechanisms underlying the facilitation of visual search for repeated spatial configurations. When configurations of distractors were repeated, greater activation in the right hippocampus corresponded to greater reductions in the number of saccades to locate the target. A psychophysiological interactions analysis for repeated configurations revealed that a strong functional connectivity between this area in the right hippocampus and the left superior parietal lobule early in learning was significantly reduced toward the end of the task. Practice related changes (which we call “procedural learning”) in activation in temporo-occipital and parietal brain regions depended on whether or not spatial context was repeated. We conclude that context repetition facilitates visual search through chunk formation that reduces the number of effective distractors that have to be processed during the search. Context repetition influences procedural learning in a way that allows for continuous and effective chunk updating. PMID:23073642

  16. Subtyping of Toddlers with ASD Based on Patterns of Social Attention Deficits

    DTIC Science & Technology

    2014-10-01

    Award number: W81XWH-13-1-0179. Title: Subtyping of Toddlers with ASD Based on Patterns of Social Attention Deficits. Subject terms: ASD, subgrouping, toddlers, heterogeneity, eye-tracking, visual attention, dyadic orienting, hierarchical

  17. Comparing Eye Tracking with Electrooculography for Measuring Individual Sentence Comprehension Duration

    PubMed Central

    Müller, Jana Annina; Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas

    2016-01-01

    The aim of this study was to validate a procedure for performing the audio-visual paradigm introduced by Wendt et al. (2015) with reduced practical challenges. The original paradigm records eye fixations using an eye tracker and calculates the duration of sentence comprehension based on a bootstrap procedure. In order to reduce practical challenges, we first reduced the measurement time by evaluating a smaller measurement set with fewer trials. The results of 16 listeners showed effects comparable to those obtained when testing the original full measurement set on a different collective of listeners. Secondly, we introduced electrooculography as an alternative technique for recording eye movements. The correlation between the results of the two recording techniques (eye tracker and electrooculography) was r = 0.97, indicating that both methods are suitable for estimating the processing duration of individual participants. Similar changes in processing duration arising from sentence complexity were found using the eye tracker and the electrooculography procedure. Thirdly, the time course of eye fixations was estimated with an alternative procedure, growth curve analysis, which is more commonly used in recent studies analyzing eye tracking data. The results of the growth curve analysis were compared with the results of the bootstrap procedure. Both analysis methods show similar processing durations. PMID:27764125

  18. Attention to body-parts varies with visual preference and verb-effector associations.

    PubMed

    Boyer, Ty W; Maouene, Josita; Sethuraman, Nitya

    2017-05-01

    Theories of embodied conceptual meaning suggest fundamental relations between others' actions, language, and our own actions and visual attention processes. Prior studies have found that when people view an image of a neutral body in a scene they first look toward, in order, the head, torso, hands, and legs. Other studies show associations between action verbs and the body-effectors used in performing the action (e.g., "jump" with feet/legs; "talk" with face/head). In the present experiment, the visual attention of participants was recorded with a remote eye-tracking system while they viewed an image of an actor pantomiming an action and heard a concrete action verb. Participants manually responded whether or not the action image was a good example of the verb they heard. The eye-tracking results confirmed that participants looked at the head most, followed by the hands, and the feet least of all; however, visual attention to each of the body-parts also varied as a function of the effector associated with the spoken verb on image/verb congruent trials, particularly for verbs associated with the legs. Overall, these results suggest that language influences some perceptual processes; however, hearing auditory verbs did not alter the previously reported fundamental hierarchical sequence of directed attention, and fixations on specific body-effectors may not be essential for verb comprehension as peripheral visual cues may be sufficient to perform the task.

  19. Neurons in the monkey amygdala detect eye-contact during naturalistic social interactions

    PubMed Central

    Mosher, Clayton P.; Zimmerman, Prisca E.; Gothard, Katalin M.

    2014-01-01

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea while fixations stabilize the image [1]. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others [2]. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status [3-6]. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations at the eyes of others and to eye contact. These “eye cells” share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade, or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye-movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. PMID:25283782

  20. Neurons in the monkey amygdala detect eye contact during naturalistic social interactions.

    PubMed

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2014-10-20

    Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea, while fixations stabilize the image. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations on the eyes of others and to eye contact. These "eye cells" share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Eye-Tracking Reveals that the Strength of the Vertical-Horizontal Illusion Increases as the Retinal Image Becomes More Stable with Fixation

    PubMed Central

    Chouinard, Philippe A.; Peel, Hayden J.; Landry, Oriane

    2017-01-01

    The closer a line extends toward a surrounding frame, the longer it appears. This is known as a framing effect. Over 70 years ago, Teodor Künnapas demonstrated that the shape of the visual field itself can act as a frame to influence the perceived length of lines in the vertical-horizontal illusion. This illusion is typically created by having a vertical line rise from the center of a horizontal line of the same length creating an inverted T figure. We aimed to determine if the degree to which one fixates on a spatial location where the two lines bisect could influence the strength of the illusion, assuming that the framing effect would be stronger when the retinal image is more stable. We performed two experiments: the visual-field and vertical-horizontal illusion experiments. The visual-field experiment demonstrated that the participants could discriminate a target more easily when it was presented along the horizontal vs. vertical meridian, confirming a framing influence on visual perception. The vertical-horizontal illusion experiment determined the effects of orientation, size and eye gaze on the strength of the illusion. As predicted, the illusion was strongest when the stimulus was presented in either its standard inverted T orientation or when it was rotated 180° compared to other orientations, and in conditions in which the retinal image was more stable, as indexed by eye tracking. Taken together, we conclude that the results provide support for Teodor Künnapas’ explanation of the vertical-horizontal illusion. PMID:28392764

  2. The socialization effect on decision making in the Prisoner's Dilemma game: An eye-tracking study

    PubMed Central

    Myagkov, Mikhail G.; Harriff, Kyle

    2017-01-01

    We used a mobile eye-tracking system (in the form of glasses) to study the characteristics of visual perception in decision making in the Prisoner's Dilemma game. In each experiment, one of the 12 participants was equipped with eye-tracking glasses. The experiment was conducted in three stages: an anonymous Individual Game stage against a randomly chosen partner (one of the 12 other participants of the experiment); a Socialization stage, in which the participants were divided into two groups; and a Group Game stage, in which the participants played with partners in the groups. After each round, the respondent received information about his or her personal score in the last round and the overall winner of the game at the moment. The study proves that eye-tracking systems can be used for studying the process of decision making and forecasting. The total viewing time and the time of fixation on areas corresponding to noncooperative decisions is related to the participants’ overall level of cooperation. The increase in the total viewing time and the time of fixation on the areas of noncooperative choice is due to a preference for noncooperative decisions and a decrease in the overall level of cooperation. The number of fixations on the group attributes is associated with group identity, but does not necessarily lead to cooperative behavior. PMID:28394939

  3. Perceived state of self during motion can differentially modulate numerical magnitude allocation.

    PubMed

    Arshad, Q; Nigmatullina, Y; Roberts, R E; Goga, U; Pikovsky, M; Khan, S; Lobo, R; Flury, A-S; Pettorossi, V E; Cohen-Kadosh, R; Malhotra, P A; Bronstein, A M

    2016-09-01

    Although a direct relationship between numerical allocation and spatial attention has been proposed, recent research suggests that these processes are not directly coupled. In keeping with this, spatial attention shifts induced either via visual or vestibular motion can modulate numerical allocation in some circumstances but not in others. In addition to shifting spatial attention, visual or vestibular motion paradigms also (i) elicit compensatory eye movements which themselves can influence numerical processing and (ii) alter the perceptual state of 'self', inducing changes in bodily self-consciousness impacting upon cognitive mechanisms. Thus, the precise mechanism by which motion modulates numerical allocation remains unknown. We sought to investigate the influence that different perceptual experiences of motion have upon numerical magnitude allocation while controlling for both eye movements and task-related effects. We first used optokinetic visual motion stimulation (OKS) to elicit the perceptual experience of either 'visual world' or 'self'-motion during which eye movements were identical. In a second experiment, we used a vestibular protocol examining the effects of perceived and subliminal angular rotations in darkness, which also provoked identical eye movements. We observed that during the perceptual experience of 'visual world' motion, rightward OKS-biased judgments towards smaller numbers, whereas leftward OKS-biased judgments towards larger numbers. During the perceptual experience of 'self-motion', judgments were biased towards larger numbers irrespective of the OKS direction. Contrastingly, vestibular motion perception was found not to modulate numerical magnitude allocation, nor was there any differential modulation when comparing 'perceived' vs. 'subliminal' rotations. We provide a novel demonstration that numerical magnitude allocation can be differentially modulated by the perceptual state of self during visual but not vestibular mediated motion. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Saccadic Corollary Discharge Underlies Stable Visual Perception

    PubMed Central

    Berman, Rebecca A.; Joiner, Wilsaan M.; Wurtz, Robert H.

    2016-01-01

    Saccadic eye movements direct the high-resolution foveae of our retinas toward objects of interest. With each saccade, the image jumps on the retina, causing a discontinuity in visual input. Our visual perception, however, remains stable. Philosophers and scientists over centuries have proposed that visual stability depends upon an internal neuronal signal that is a copy of the neuronal signal driving the eye movement, now referred to as a corollary discharge (CD) or efference copy. In the Old World monkey, such a CD circuit for saccades has been identified extending from superior colliculus through MD thalamus to frontal cortex, but there is little evidence that this circuit actually contributes to visual perception. We tested the influence of this CD circuit on visual perception by first training macaque monkeys to report their perceived eye direction, and then reversibly inactivating the CD as it passes through the thalamus. We found that the monkey's perception changed; during CD inactivation, there was a difference between where the monkey perceived its eyes to be directed and where they were actually directed. Perception and saccade were decoupled. We established that the perceived eye direction at the end of the saccade was not derived from proprioceptive input from eye muscles, and was not altered by contextual visual information. We conclude that the CD provides internal information contributing to the brain's creation of perceived visual stability. More specifically, the CD might provide the internal saccade vector used to unite separate retinal images into a stable visual scene. SIGNIFICANCE STATEMENT Visual stability is one of the most remarkable aspects of human vision. The eyes move rapidly several times per second, displacing the retinal image each time. The brain compensates for this disruption, keeping our visual perception stable. A major hypothesis explaining this stability invokes a signal within the brain, a corollary discharge, that informs visual regions of the brain when and where the eyes are about to move. Such a corollary discharge circuit for eye movements has been identified in the macaque monkey. We now show that selectively inactivating this brain circuit alters the monkey's visual perception. We conclude that this corollary discharge provides a critical signal that can be used to unite jumping retinal images into a consistent visual scene. PMID:26740647

  5. Directional asymmetries in human smooth pursuit eye movements.

    PubMed

    Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam

    2013-06-27

    Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.

  6. Extracting information of fixational eye movements through pupil tracking

    NASA Astrophysics Data System (ADS)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

    Human eyes are never completely static, even when fixating a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in visual processing have not been explained clearly, because these signals can hardly be extracted completely with current methods. In this paper, we developed a new eye-movement detection device with a high-speed camera. The device includes a beam-splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, we conducted pupil-tracking experiments. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are shown clearly. The experimental results show that the device is feasible and effective and can be applied in further characteristic analysis.
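
    The spectrum-analysis step described above can be sketched in a few lines. All numbers here are hypothetical (a linear drift plus an 85 Hz tremor-like oscillation sampled at the device's 200 Hz frame rate), not data from the study:

```python
import numpy as np

fs = 200.0                       # camera frame rate (Hz), as in the device described
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of pupil-center positions
rng = np.random.default_rng(0)

# Hypothetical fixational trace: slow drift + 85 Hz micro-tremor + sensor noise (deg)
x = 0.05 * t + 0.002 * np.sin(2 * np.pi * 85.0 * t) + 0.0005 * rng.standard_normal(t.size)

# Remove the linear drift, then inspect the amplitude spectrum of what remains
residual = x - np.polyval(np.polyfit(t, x, 1), t)
spectrum = np.abs(np.fft.rfft(residual))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
print(freqs[spectrum.argmax()])  # dominant residual frequency: the 85 Hz tremor
```

    With fs = 200 Hz the Nyquist limit is 100 Hz, so a tremor near 85-90 Hz is just within reach of the camera, which is one reason the high frame rate matters for this device.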

  7. Placebo effects in spider phobia: an eye-tracking experiment.

    PubMed

    Gremsl, Andreas; Schwab, Daniela; Höfler, Carina; Schienle, Anne

    2018-01-05

    Several eye-tracking studies have revealed that spider phobic patients show a typical hypervigilance-avoidance pattern when confronted with images of spiders. The present experiment investigated if this pattern can be changed via placebo treatment. We conducted an eye-tracking experiment with 37 women with spider phobia. They looked at picture pairs (a spider paired with a neutral picture) for 7 s each in a retest design: once with and once without a placebo pill presented along with the verbal suggestion that it can reduce phobic symptoms. The placebo was labelled as Propranolol, a beta-blocker that has been successfully used to treat spider phobia. In the placebo condition, both the fixation count and the dwell time on the spider pictures increased, especially in the second half of the presentation time. This was associated with a slight decrease in self-reported symptom severity. In summary, we were able to show that a placebo was able to positively influence visual avoidance in spider phobia. This effect might help to overcome apprehension about engaging in exposure therapy, which is present in many phobic patients.

  8. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    PubMed

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitate search as a function of their location in the visual field. The current study investigated how features across the visual field, particularly color, facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color; with color in the periphery and gray in central vision; with gray in the periphery and color in central vision; or in grayscale. Color conditions were crossed with a search cue manipulation: the target was cued either with a word label or with an exact picture. Search times increased as color information in the scene decreased. A decomposition of search time based on gaze data revealed color-mediated effects on specific subprocesses of search: color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors, depending on the location within the visual field.
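
The gaze-contingent color window can be caricatured as a per-frame blend of a color and a grayscale rendering of the scene around the current gaze sample. This is a sketch, not the study's actual display software; the circular window, its radius, and the luminance weights are placeholder choices.

```python
import numpy as np

def gaze_contingent_color(rgb, gaze_xy, radius=80):
    """Keep color inside a circular window around gaze; grayscale outside.

    rgb     : H x W x 3 float image in [0, 1]
    gaze_xy : (x, y) current gaze position in pixels
    radius  : window radius in pixels (stand-in for the central/peripheral
              boundary used in the study)
    """
    h, w = rgb.shape[:2]
    gray = rgb @ np.array([0.299, 0.587, 0.114])    # luminance
    gray3 = np.repeat(gray[..., None], 3, axis=2)
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - gaze_xy[0], yy - gaze_xy[1])
    mask = (dist <= radius)[..., None]              # True = central vision
    return np.where(mask, rgb, gray3)               # color center, gray periphery

# Color in central vision, gray in the periphery; swapping the rgb and
# gray arguments inside np.where gives the complementary condition.
img = np.random.rand(120, 160, 3)
central_color = gaze_contingent_color(img, (80, 60))
```

In a real gaze-contingent setup this update would run every display refresh, driven by the eye tracker's latest gaze sample.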

  9. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  10. Predicting 2D target velocity cannot help 2D motion integration for smooth pursuit initiation.

    PubMed

    Montagnini, Anna; Spering, Miriam; Masson, Guillaume S

    2006-12-01

    Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single tilted line, initial pursuit direction is biased toward unidimensional (1D) edge-motion signals, which are orthogonal to the line's orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking-direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object-motion direction, not the 1D edge-motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.

  11. Using eye tracking to test for individual differences in attention to attractive faces

    PubMed Central

    Valuch, Christian; Pflüger, Lena S.; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli. PMID:25698993

  12. A Link Between Attentional Function, Effective Eye Movements, and Driving Ability

    PubMed Central

    2016-01-01

    The misallocation of driver visual attention has been suggested as a major contributing factor to vehicle accidents. One possible reason is that the relatively high cognitive demands of driving limit the ability to efficiently allocate gaze. We present an experiment that explores the relationship between attentional function and visual performance when driving. Drivers performed 2 variations of a multiple-object tracking task targeting aspects of cognition including sustained attention, dual-tasking, covert attention, and visuomotor skill. They also drove a number of courses in a driving simulator. Eye movements were recorded throughout. We found that individuals who performed better in the cognitive tasks exhibited more effective eye movement strategies when driving, such as scanning more of the road, and they also exhibited better driving performance. We discuss the potential link between an individual’s attentional function, effective eye movements, and driving ability. We also discuss the use of a visuomotor task in assessing driving behavior. PMID:27893270

  13. Using eye tracking to test for individual differences in attention to attractive faces.

    PubMed

    Valuch, Christian; Pflüger, Lena S; Wallner, Bernard; Laeng, Bruno; Ansorge, Ulrich

    2015-01-01

    We assessed individual differences in visual attention toward faces in relation to their attractiveness via saccadic reaction times. Motivated by the aim to understand individual differences in attention to faces, we tested three hypotheses: (a) Attractive faces hold or capture attention more effectively than less attractive faces; (b) men show a stronger bias toward attractive opposite-sex faces than women; and (c) blue-eyed men show a stronger bias toward blue-eyed than brown-eyed feminine faces. The latter test was included because prior research suggested a high effect size. Our data supported hypotheses (a) and (b) but not (c). By conducting separate tests for disengagement of attention and attention capture, we found that individual differences exist at distinct stages of attentional processing but these differences are of varying robustness and importance. In our conclusion, we also advocate the use of linear mixed effects models as the most appropriate statistical approach for studying inter-individual differences in visual attention with naturalistic stimuli.

  14. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  15. Visual optics: an engineering approach

    NASA Astrophysics Data System (ADS)

    Toadere, Florin

    2010-11-01

    The human visual system interprets information from visible light in order to build a representation of the world surrounding the body. It derives color by comparing the responses to light of the three types of photoreceptor cones in the eyes. These long-, medium-, and short-wavelength cones are sensitive to the red, green, and blue portions of the visible spectrum, respectively. We simulate color vision for normal eyes and show the effects of dyes, filters, glasses, and windows on color perception when the test image is illuminated with the D65 light source. Beyond color perception, the human eye can suffer from diseases and disorders. The eye can be seen as an optical instrument that has its own eye print. We present some current methods and technologies that can capture and correct the wavefront aberrations of the human eye, focusing on the Seidel aberration formulas, Zernike polynomials, the Shack-Hartmann sensor, LASIK, interferogram fringe aberrations, and the Talbot effect.

  16. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  17. Real-time computer-based visual feedback improves visual acuity in downbeat nystagmus - a pilot study.

    PubMed

    Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R

    2016-01-04

    Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. Patients sat in front of a computer screen with the head fixed on a chin rest. Eye movements were recorded by an eye-tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic), both in gaze straight ahead and in 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution and one-way ANOVA for comparisons. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and the median duration of symptoms was 6.3 years (SD +/- 3.1 y). The mean slow-phase velocity was moderate during gaze straight ahead (1.44°/s, SD +/- 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and the feedback-driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive, real-time, computer-based visual feedback compensates for the slow-phase velocity in downbeat nystagmus. Real-time visual feedback may therefore be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual-reality displays might soon render this approach feasible in fully mobile settings.
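
The dynamic condition amounts to a simple update loop: each display frame, the optotype is displaced by the measured slow-phase eye velocity times the frame interval, so the stimulus drifts with the eye and stays roughly stable on the retina. In this sketch the 60 Hz refresh rate and the constant velocity are assumptions; 1.44°/s is the study's mean straight-ahead slow-phase velocity.

```python
def feedback_step(stim_x, slow_phase_velocity, dt):
    """One update of the feedback loop: move the optotype with the
    slow phase of the nystagmus (positions/velocities in degrees)."""
    return stim_x + slow_phase_velocity * dt

# One second of tracking at a hypothetical 60 Hz display refresh
x = 0.0
for _ in range(60):
    x = feedback_step(x, 1.44, 1.0 / 60.0)
print(round(x, 2))   # 1.44 -> the target has drifted 1.44 deg with the eye
```

In the real system the velocity term would be re-estimated each frame from the eye tracker rather than held constant.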

  18. Eye movements during listening reveal spontaneous grammatical processing.

    PubMed

    Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael

    2014-01-01

    Recent eye-tracking research typically relies on constrained visual contexts in goal-oriented tasks: participants view a small array of objects on a computer screen and perform some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are therefore sometimes criticized as ecologically invalid, because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events, when listening to the past-progressive stories. Thus, the eye movement data suggest that visual input and an explicit task are unnecessary to elicit analog representations of features such as movement, which could be a key perceptual component of grammatical comprehension.

  19. Grapheme-color synesthesia influences overt visual attention.

    PubMed

    Carriere, Jonathan S A; Eaton, Daniel; Reynolds, Michael G; Dixon, Mike J; Smilek, Daniel

    2009-02-01

    For individuals with grapheme-color synesthesia, achromatic letters and digits elicit vivid perceptual experiences of color. We report two experiments that evaluate whether synesthesia influences overt visual attention. In these experiments, two grapheme-color synesthetes viewed colored letters while their eye movements were monitored. Letters were presented in colors that were either congruent or incongruent with the synesthetes' colors. Eye-tracking analysis showed that, in a naturalistic free-viewing task, synesthetes exhibited a color-congruity bias: a propensity to fixate congruently colored letters more often and for longer durations than incongruently colored letters. In a more structured visual search task, this congruity bias caused synesthetes to rapidly fixate and identify congruently colored target letters, but led to problems in identifying incongruently colored target letters. The results are discussed in terms of their implications for perception in synesthesia.

  20. Looking you in the mouth: abnormal gaze in autism resulting from impaired top-down modulation of visual attention.

    PubMed

    Neumann, Dirk; Spezio, Michael L; Piven, Joseph; Adolphs, Ralph

    2006-12-01

    People with autism are impaired in their social behavior, including their eye contact with others, but the processes that underlie this impairment remain elusive. We combined high-resolution eye tracking with computational modeling in a group of 10 high-functioning individuals with autism to address this issue. The group fixated the location of the mouth in facial expressions more than did matched controls, even when the mouth was not shown, even in faces that were inverted, and most noticeably at latencies of 200-400 ms. Comparisons with a computational model of visual saliency argue that the abnormal bias for fixating the mouth in autism is not driven by an exaggerated sensitivity to the bottom-up saliency of the features, but rather by an abnormal top-down strategy for allocating visual attention.

  1. The Vestibular System and Human Dynamic Space Orientation

    NASA Technical Reports Server (NTRS)

    Meiry, J. L.

    1966-01-01

    The motion sensors of the vestibular system are studied to determine their role in human dynamic space orientation and manual vehicle control. The investigation yielded control models for the sensors, descriptions of the subsystems for eye stabilization, and demonstrations of the effects of motion cues on closed-loop manual control. Experiments on the abilities of subjects to perceive a variety of linear motions provided data on the dynamic characteristics of the otoliths, the linear motion sensors. Angular acceleration threshold measurements supplemented knowledge of the semicircular canals, the angular motion sensors. Mathematical models are presented to describe the known control characteristics of the vestibular sensors, relating subjective perception of motion to objective motion of a vehicle. The vestibular system, the neck-rotation proprioceptors, and the visual system form part of the control system which maintains the eye stationary relative to a target or a reference. The contribution of each of these systems was identified through experiments involving head and body rotations about a vertical axis. Compensatory eye movements in response to neck rotation were demonstrated and their dynamic characteristics described by a lag-lead model. The eye motions attributable to neck rotations and vestibular stimulation obey superposition when both systems are active. Human operator compensatory tracking is investigated in a simple vehicle-orientation control system with stable and unstable controlled elements. Control of vehicle orientation to a reference is simulated in three modes: visual, motion, and combined. Motion cues sensed by the vestibular system and through tactile sensation enable the operator to generate more lead compensation than in fixed-base simulation with only visual input. The tracking performance of the human operator in an unstable control system near the limits of controllability is shown to depend heavily upon the rate information provided by the vestibular sensors.

  2. Gesture helps learners learn, but not merely by guiding their visual attention.

    PubMed

    Wakefield, Elizabeth; Novack, Miriam A; Congdon, Eliza L; Franconeri, Steven; Goldin-Meadow, Susan

    2018-04-16

    Teaching a new concept through gestures (hand movements that accompany speech) facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, ). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism: gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye-tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture: they look more at the problem being explained, less at the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effect of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of looking patterns on learning: following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. © 2018 John Wiley & Sons Ltd.

  3. Coordinated Control of Three-Dimensional Components of Smooth Pursuit to Rotating and Translating Textures.

    PubMed

    Edinger, Janick; Pai, Dinesh K; Spering, Miriam

    2017-01-01

    The neural control of pursuit eye movements to visual textures that simultaneously translate and rotate has largely been neglected. Here we propose that pursuit of such targets (texture pursuit) is a fully three-dimensional task that utilizes all three degrees of freedom of the eye, including torsion. Head-fixed healthy human adults (n = 8) tracked a translating and rotating random-dot pattern shown on a computer monitor. Horizontal, vertical, and torsional eye positions were recorded with a head-mounted eye tracker. The torsional component of pursuit is a function of the rotation of the texture, aligned with its visual properties. We observed distinct behaviors between trials in which stimulus rotation was in the same direction as that of a rolling ball ("natural") and trials with the opposite rotation ("unnatural"): natural rotation enhanced, and unnatural rotation reversed, torsional velocity during pursuit, compared with torsion triggered by a nonrotating random-dot pattern. Natural rotation also triggered pursuit with higher horizontal velocity gain and fewer, smaller corrective saccades. Furthermore, we show that horizontal corrective saccades are synchronized with torsional corrective saccades, indicating temporal coupling of horizontal and torsional saccade control. Pursuit eye movements thus have a torsional component that depends on the visual stimulus, even though horizontal and torsional eye movements are separated in the motor periphery. Our findings suggest that translational and rotational motion signals may be coordinated in descending pursuit pathways.

  4. Camera Perspective Bias in Videotaped Confessions: Evidence that Visual Attention Is a Mediator

    ERIC Educational Resources Information Center

    Ware, Lezlee J.; Lassiter, G. Daniel; Patterson, Stephen M.; Ransom, Michael R.

    2008-01-01

    Several experiments have demonstrated a "camera perspective bias" in evaluations of videotaped confessions: videotapes with the camera focused on the suspect lead to judgments of greater voluntariness than alternative presentation formats. The present research investigated potential mediators of this bias. Using eye tracking to measure visual…

  5. U.S. Army Research Institute Program in Basic Research FY 2005 and FY 2006

    DTIC Science & Technology

    2007-11-01

    designed to tap different levels of processing, from visual attention (measured via eye-tracking) and interpretation through memory and decision-making (e.g...Test (EFT; Witkin, 1950; Witkin, Dyk, Faterson, Goodenough, & Karp, 1962) modified for group administration. It measures competence in perceptual field

  6. Learned filters for object detection in multi-object visual tracking

    NASA Astrophysics Data System (ADS)

    Stamatescu, Victor; Wong, Sebastien; McDonnell, Mark D.; Kearney, David

    2016-05-01

    We investigate the application of learned convolutional filters in multi-object visual tracking. The filters were learned in both a supervised and an unsupervised manner from image data using artificial neural networks. This work follows recent results in the field of machine learning that demonstrate the use of learned filters for enhanced object detection and classification. Here we employ a track-before-detect approach to multi-object tracking, where tracking guides the detection process. The object detector provides a probabilistic input image, calculated by selecting from features obtained using banks of generative or discriminative learned filters. We present a systematic evaluation of these convolutional filters using a real-world data set that examines their performance as generic object detectors.
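
A toy version of the detection stage can be sketched as follows: the probabilistic input image is taken as the normalized maximum response over a filter bank. Here a single hand-made averaging filter stands in for the learned filters, and the normalization is a placeholder for however the paper maps responses to probabilities.

```python
import numpy as np

def detection_map(image, filters):
    """Normalized max response over a bank of filters (valid correlation),
    serving as a probabilistic input image for track-before-detect."""
    h, w = image.shape
    responses = []
    for f in filters:
        fh, fw = f.shape
        out = np.zeros((h - fh + 1, w - fw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + fh, j:j + fw] * f)
        responses.append(out)
    r = np.stack(responses).max(axis=0)   # best-matching filter per location
    r = r - r.min()
    return r / (r.max() + 1e-12)          # rescale to a pseudo-probability

# Toy scene: one bright object on a dark background, one 3x3 averaging filter
scene = np.zeros((10, 10))
scene[5, 5] = 1.0
prob = detection_map(scene, [np.ones((3, 3)) / 9.0])
```

A real system would use many learned filters and a convolution routine from an optimized library rather than explicit Python loops.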

  7. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    PubMed

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g. noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition. Copyright © 2018. Published by Elsevier B.V.

  8. Preliminary results of tracked laser-assisted in-situ keratomileusis (T-LASIK) for myopia and hyperopia using the autonomous technologies excimer laser system

    NASA Astrophysics Data System (ADS)

    Maguen, Ezra I.; Nesburn, Anthony B.; Salz, James J.

    2000-06-01

    A study was undertaken to assess the safety and efficacy of LASIK with the LADARVision laser by Autonomous Technologies (Orlando, FL). The study included four subsets: spherical myopia up to -11.00 D and spherical hyperopia up to +6.00 D; both myopic and hyperopic astigmatism could be corrected, up to 6.00 D of astigmatism. A total of 105 patients participated: 66 were myopic and 39 were hyperopic. The mean (+/- SD) age was 42.8 +/- 9.3 years for myopia and 53.2 +/- 9.9 years for hyperopia. At 3 months postoperatively, 61 myopic eyes were available for evaluation. Uncorrected visual acuity was 20/20 in 70% of eyes and 20/40 or better in 92.9% of all eyes. The refractive outcome was within +/- 0.50 D in 73.8% of eyes and within +/- 1.00 D in 96.7% of eyes. Thirty-eight hyperopic eyes were available. Uncorrected visual acuity was 20/20 in 42.1% of eyes and 20/40 or better in 88% of all eyes. The refractive outcome was within +/- 0.50 D in 57.9% of eyes and within +/- 1.00 D in 86.8% of eyes. Complications were not sight-threatening and are discussed in detail. LASIK with the LADARVision laser appears to be safe and effective.

  9. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    PubMed

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  10. To search or to like: Mapping fixations to differentiate two forms of incidental scene memory.

    PubMed

    Choe, Kyoung Whan; Kardan, Omid; Kotabe, Hiroki P; Henderson, John M; Berman, Marc G

    2017-10-01

    We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impairs incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks.

  11. Psychopathic traits affect the visual exploration of facial expressions.

    PubMed

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Statistical patterns of visual search for hidden objects

    PubMed Central

    Credidio, Heitor F.; Teixeira, Elisângela N.; Reis, Saulo D. S.; Moreira, André A.; Andrade Jr, José S.

    2012-01-01

    The movement of the eyes has been the subject of intensive research as a way to elucidate inner mechanisms of cognitive processes. A cognitive task that is rather frequent in our daily life is the visual search for hidden objects. Here we investigate through eye-tracking experiments the statistical properties associated with the search of target images embedded in a landscape of distractors. Specifically, our results show that the twofold process of eye movement, composed of sequences of fixations (small steps) intercalated by saccades (longer jumps), displays characteristic statistical signatures. While the saccadic jumps follow a log-normal distribution of distances, which is typical of multiplicative processes, the lengths of the smaller steps in the fixation trajectories are consistent with a power-law distribution. Moreover, the present analysis reveals a clear transition between a directional serial search to an isotropic random movement as the difficulty level of the searching task is increased. PMID:23226829
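
The log-normal signature reported above for saccadic jump distances can be checked with a simple maximum-likelihood fit: for log-normal data, the log of the distances is Gaussian, so its sample mean and standard deviation estimate the distribution's parameters. The synthetic amplitudes and parameter values below are made up for illustration; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic saccade distances (degrees), drawn log-normally as reported
# for the saccadic jumps; mu and sigma here are illustrative only
mu, sigma = 1.0, 0.5
saccades = rng.lognormal(mu, sigma, size=10_000)

# Maximum-likelihood log-normal fit: mean and std of the log distances
log_d = np.log(saccades)
mu_hat, sigma_hat = log_d.mean(), log_d.std()
print(round(mu_hat, 2), round(sigma_hat, 2))
```

The power-law claim for fixational step lengths would be tested analogously, e.g. by checking for a straight line on a log-log plot of the step-length distribution.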

  13. Dynamic polarization vision in mantis shrimps

    PubMed Central

    Daly, Ilse M.; How, Martin J.; Partridge, Julian C.; Temple, Shelby E.; Marshall, N. Justin; Cronin, Thomas W.; Roberts, Nicholas W.

    2016-01-01

    Gaze stabilization is an almost ubiquitous animal behaviour, one that is required to see the world clearly and without blur. Stomatopods, however, only fix their eyes on scenes or objects of interest occasionally. Almost uniquely among animals, they explore their visual environment with a series of pitch, yaw and torsional (roll) rotations of their eyes, where each eye may also move largely independently of the other. In this work, we demonstrate that the torsional rotations are used to actively enhance their ability to see the polarization of light. Both Gonodactylus smithii and Odontodactylus scyllarus rotate their eyes to align particular photoreceptors relative to the angle of polarization of a linearly polarized visual stimulus, thereby maximizing the polarization contrast between an object of interest and its background. This is the first documented example of any animal displaying dynamic polarization vision, in which the polarization information is actively maximized through rotational eye movements. PMID:27401817

  14. Trained Eyes: Experience Promotes Adaptive Gaze Control in Dynamic and Uncertain Visual Environments

    PubMed Central

    Taya, Shuichiro; Windridge, David; Osman, Magda

    2013-01-01

    Current eye-tracking research suggests that our eyes make anticipatory movements to a location that is relevant for a forthcoming task. Moreover, there is evidence to suggest that with more practice anticipatory gaze control can improve. However, these findings are largely limited to situations where participants are actively engaged in a task. We ask: does experience modulate anticipative gaze control while passively observing a visual scene? To tackle this we tested people with varying degrees of experience of tennis, in order to uncover potential associations between experience and eye movement behaviour while they watched tennis videos. The number, size, and accuracy of saccades (rapid eye movements) made around ‘events’ critical to the scene context (i.e. hits and bounces) were analysed. Overall, we found that experience improved anticipatory eye-movements while watching tennis clips. In general, those with extensive experience showed greater accuracy of saccades to upcoming event locations; this was particularly prevalent for events in the scene that carried high uncertainty (i.e. ball bounces). The results indicate that, even when passively observing, our gaze control system utilizes prior relevant knowledge in order to anticipate upcoming uncertain event locations. PMID:23951147

  15. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology

    PubMed Central

    Fernandez-Mendez, Felipe; Barcala-Furelos, Roberto; Padron-Cabo, Alexis; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency situation. The decision-making during emergencies is an important responsibility but difficult to study. Eye-tracking technology allows us to identify visual patterns involved in the decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was made to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watching different training videos with content supervised by sanitary personnel and one control group who received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario represented by an anaphylaxis victim was designed. A device capturing eye movement as well as expert assessment was used to evaluate the performance. The subjects that underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed a smaller number of visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysing the different video formats, mixed results were obtained. Therefore, they should be tested to evaluate their usability before implementation. PMID:28758128

  16. Where Do Neurologists Look When Viewing Brain CT Images? An Eye-Tracking Study Involving Stroke Cases

    PubMed Central

    Matsumoto, Hideyuki; Terao, Yasuo; Yugeta, Akihiro; Fukuda, Hideki; Emoto, Masaki; Furubayashi, Toshiaki; Okano, Tomoko; Hanajima, Ritsuko; Ugawa, Yoshikazu

    2011-01-01

    The aim of this study was to investigate where neurologists look when they view brain computed tomography (CT) images and to evaluate how they deploy their visual attention by comparing their gaze distribution with saliency maps. Brain CT images showing cerebrovascular accidents were presented to 12 neurologists and 12 control subjects. The subjects' ocular fixation positions were recorded using an eye-tracking device (Eyelink 1000). Heat maps were created based on the eye-fixation patterns of each group and compared between the two groups. The heat maps revealed that the areas on which control subjects frequently fixated often coincided with areas identified as outstanding in saliency maps, while the areas on which neurologists frequently fixated often did not. Dwell time in regions of interest (ROI) was likewise compared between the two groups, revealing that, although dwell time on large lesions was not different between the two groups, dwell time in clinically important areas with low salience was longer in neurologists than in controls. Therefore it appears that neurologists intentionally scan clinically important areas when reading brain CT images showing cerebrovascular accidents. Both neurologists and control subjects used the “bottom-up salience” form of visual attention, although the neurologists more effectively used the “top-down instruction” form. PMID:22174928

  17. Learning and Treatment of Anaphylaxis by Laypeople: A Simulation Study Using Pupilar Technology.

    PubMed

    Fernandez-Mendez, Felipe; Saez-Gallego, Nieves Maria; Barcala-Furelos, Roberto; Abelairas-Gomez, Cristian; Padron-Cabo, Alexis; Perez-Ferreiros, Alexandra; Garcia-Magan, Carlos; Moure-Gonzalez, Jose; Contreras-Jordan, Onofre; Rodriguez-Nuñez, Antonio

    2017-01-01

    An anaphylactic shock is a time-critical emergency situation. The decision-making during emergencies is an important responsibility but difficult to study. Eye-tracking technology allows us to identify visual patterns involved in the decision-making. The aim of this pilot study was to evaluate two training models for the recognition and treatment of anaphylaxis by laypeople, based on expert assessment and eye-tracking technology. A cross-sectional quasi-experimental simulation study was made to evaluate the identification and treatment of anaphylaxis. Fifty subjects were randomly assigned to four groups: three groups watching different training videos with content supervised by sanitary personnel and one control group who received face-to-face training during paediatric practice. To evaluate the learning, a simulation scenario represented by an anaphylaxis victim was designed. A device capturing eye movement as well as expert assessment was used to evaluate the performance. The subjects that underwent paediatric face-to-face training achieved better and faster recognition of the anaphylaxis. They also used the adrenaline injector with better precision and fewer mistakes, and they needed a smaller number of visual fixations to recognise the anaphylaxis and to make the decision to inject epinephrine. Analysing the different video formats, mixed results were obtained. Therefore, they should be tested to evaluate their usability before implementation.

  18. Rapid Linguistic Ambiguity Resolution in Young Children with Autism Spectrum Disorder: Eye Tracking Evidence for the Limits of Weak Central Coherence.

    PubMed

    Hahn, Noemi; Snedeker, Jesse; Rabagliati, Hugh

    2015-12-01

    Individuals with autism spectrum disorders (ASD) have often been reported to have difficulty integrating information into its broader context, which has motivated the Weak Central Coherence theory of ASD. In the linguistic domain, evidence for this difficulty comes from reports of impaired use of linguistic context to resolve ambiguous words. However, recent work has suggested that impaired use of linguistic context may not be characteristic of ASD, and is instead better explained by co-occurring language impairments. Here, we provide a strong test of these claims, using the visual world eye tracking paradigm to examine the online mechanisms by which children with autism resolve linguistic ambiguity. To address concerns about both language impairments and compensatory strategies, we used a sample whose verbal skills were strong and whose average age (7; 6) was lower than previous work on lexical ambiguity resolution in ASD. Participants (40 with autism and 40 controls) heard sentences with ambiguous words in contexts that either strongly supported one reading or were consistent with both (John fed/saw the bat). We measured activation of the unintended meaning through implicit semantic priming of an associate (looks to a depicted baseball glove). Contrary to the predictions of weak central coherence, children with ASD, like controls, quickly used context to resolve ambiguity, selecting appropriate meanings within a second. We discuss how these results constrain the generality of weak central coherence.

  19. International Vision Care: Issues and Approaches.

    PubMed

    Khanna, Rohit C; Marmamula, Srinivas; Rao, Gullapalli N

    2017-09-15

    Globally, 32.4 million individuals are blind and 191 million have moderate or severe visual impairment (MSVI); 80% of cases of blindness and MSVI are avoidable. However, great efforts are needed to tackle blindness and MSVI, as eye care in most places is delivered in isolation from and without significant integration with general health sectors. Success stories, including control of vitamin A deficiency, onchocerciasis, and trachoma, showed that global partnerships, multisectoral collaboration, public-private partnerships, corporate philanthropy, support from nongovernmental organizations-both local and international-and governments are responsible for the success of these programs. Hence, the World Health Organization's universal eye health global action plan for 2014-2019 has a goal of reducing the public health problem of blindness and ensuring access to comprehensive eye care; the plan aims to integrate eye health into health systems, thus providing universal eye health coverage (UEHC). This article discusses the challenges faced by low- and middle-income countries in strengthening the six building blocks of the health system. It discusses how the health systems in these countries need to be geared toward tackling the issues of emerging noncommunicable eye diseases, existing infectious diseases, and the common causes of blindness and visual impairment, such as cataract and refractive error. It also discusses how some of the comprehensive eye care models in the developing world have addressed these challenges. Moving ahead, if we are to achieve UEHC, we need to develop robust, sustainable, good-quality, comprehensive eye care programs throughout the world, focusing on the areas of greatest need. We also need to develop public health approaches for more complex problems such as diabetic retinopathy, glaucoma, childhood blindness, corneal blindness, and low vision. There is also a great need to train high-level human resources of all cadres in adequate numbers and quality. In addition to this, we need to exploit the benefits of modern technological innovations in information, communications, biomedical technology, and other domains to enhance quality of, access to, and equity in eye care.

  20. Longitudinal strain bull's eye plot patterns in patients with cardiomyopathy and concentric left ventricular hypertrophy.

    PubMed

    Liu, Dan; Hu, Kai; Nordbeck, Peter; Ertl, Georg; Störk, Stefan; Weidemann, Frank

    2016-05-10

    Despite substantial advances in imaging techniques and pathophysiological understanding over the last decades, identifying the underlying causes of left ventricular hypertrophy by echocardiographic examination remains a challenge in current clinical practice. The longitudinal strain bull's eye plot derived from 2D speckle tracking imaging offers an intuitive visual overview of global and regional left ventricular myocardial function in a single diagram. The bull's eye mapping is clinically feasible, and the plot patterns can provide clues to the etiology of cardiomyopathies. The present review summarizes the longitudinal strain bull's eye plot features in patients with various cardiomyopathies and concentric left ventricular hypertrophy; these features might serve as one step of the cardiac workup when evaluating patients with left ventricular hypertrophy.

  1. The risk of newly developed visual impairment in treated normal-tension glaucoma: 10-year follow-up.

    PubMed

    Choi, Yun Jeong; Kim, Martha; Park, Ki Ho; Kim, Dong Myung; Kim, Seok Hwan

    2014-12-01

    To investigate the risk and risk factors for newly developed visual impairment in treated patients with normal-tension glaucoma (NTG) followed up on for 10 years. Patients with NTG, who did not have visual impairment at the initial diagnosis and had undergone intraocular pressure (IOP)-lowering treatment for more than 7 years, were included on the basis of a retrospective chart review. Visual impairment was defined as either low vision (0.05 [20/400] ≤ visual acuity (VA) < 0.3 [20/60] and/or 10 degrees ≤ central visual field (VF) < 20 degrees) or blindness (VA < 0.05 [20/400] and/or central VF < 10 degrees) by World Health Organization (WHO) criteria. To investigate the risk and risk factors for newly developed visual impairment, Kaplan-Meier survival analysis and generalized linear mixed effects models were utilized. During the mean follow-up period of 10.8 years, 20 eyes of 16 patients among 623 eyes of 411 patients were diagnosed with visual impairment (12 eyes with low vision, 8 with blindness). The cumulative risk of visual impairment in at least one eye was 2.8% at 10 years and 8.7% at 15 years. The risk factors for visual impairment from treated NTG were worse VF mean deviation (MD) at diagnosis and a longer follow-up period. The risk of newly developed visual impairment in treated patients with NTG was relatively low. Worse VF MD at diagnosis and a longer follow-up period were associated with development of visual impairment.
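The WHO cut-offs quoted above map directly onto a small classifier. A hypothetical sketch (the function name and single-eye simplification are illustrative, not the study's; formal grading uses best-corrected acuity in the better eye):

```python
def who_vision_category(va: float, vf_degrees: float) -> str:
    """Classify vision using the WHO criteria quoted in the study:
    blindness  : VA < 0.05 (20/400) and/or central VF < 10 degrees
    low vision : 0.05 <= VA < 0.3 (20/60) and/or 10 <= central VF < 20 degrees
    """
    if va < 0.05 or vf_degrees < 10:
        return "blindness"
    if va < 0.3 or vf_degrees < 20:
        return "low vision"
    return "not visually impaired"

# Example: VA below 20/400 with a full field is graded blindness via the VA arm.
print(who_vision_category(0.04, 60))  # blindness
```

Note the "and/or" in the criteria: either the acuity arm or the field arm alone is sufficient, which is why the checks are ordered from most to least severe.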

  2. Eye tracking to evaluate evidence recognition in crime scene investigations.

    PubMed

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total reconstruction accuracy. This result, coupled with the fact that the study failed to detect a significant difference between the groups when evaluating the total time needed to complete the investigation, indicates that experts are more efficient and effective. Finally, the results presented here provide a basis for continued research in the use of eye trackers to assess expertise in complex and distributed environments, including suggestions for future work, and cautions regarding the degree to which visual attention can infer cognitive understanding.
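The Needleman-Wunsch comparison named above treats each scanpath as a sequence of AOI visits and computes a global alignment score. A minimal sketch, where the AOI labels and the scoring weights are illustrative assumptions rather than the study's parameters:

```python
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two AOI visit sequences (Needleman-Wunsch).
    Higher scores indicate more similar search sequences."""
    n, m = len(seq_a), len(seq_b)
    # score[i][j] = best alignment score of seq_a[:i] against seq_b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Two scanpaths over hypothetical crime-scene AOIs
expert = ["body", "weapon", "door", "window"]
novice = ["door", "body", "weapon", "window"]
print(needleman_wunsch(expert, novice))
```

Here the common subsequence body → weapon → window gives three matches at the cost of two gaps, so the pair scores 1, compared with the maximum of 4 for identical scanpaths.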

  3. Vision Outcomes Following Anti-Vascular Endothelial Growth Factor Treatment of Diabetic Macular Edema in Clinical Practice.

    PubMed

    Holekamp, Nancy M; Campbell, Joanna; Almony, Arghavan; Ingraham, Herbert; Marks, Steven; Chandwani, Hitesh; Cole, Ashley L; Kiss, Szilárd

    2018-04-20

    To determine monitoring and treatment patterns, and vision outcomes in real-world patients initiating anti-vascular endothelial growth factor (anti-VEGF) therapy for diabetic macular edema (DME). Retrospective interventional cohort study. SETTING: Electronic medical record analysis of Geisinger Health System data. 110 patients (121 study eyes) initiating intravitreal ranibizumab or bevacizumab for DME during January 2007‒May 2012, with baseline corrected visual acuity of 20/40‒20/320, and ≥1 ophthalmologist visit during follow-up. Intravitreal injections per study eye during the first 12 months; corrected visual acuity, change in corrected visual acuity from baseline, proportions of eyes with ≥10- or ≥15-letter (approximate Early Treatment Diabetic Retinopathy Study) gain/loss at 12 months; number of ophthalmologist visits. Over 12 months, mean number of ophthalmologist visits was 9.2; mean number of intravitreal injections was 3.1 (range, 1-12), with most eyes (68.6%) receiving ≤3 injections. At 12 months, mean corrected visual acuity change was +4.7 letters (mean 56.9 letters at baseline); proportions of eyes gaining ≥10 or ≥15 letters were 31.4% and 24.0%, respectively; proportions of eyes losing ≥10 or ≥15 letters were 10.8% and 8.3%, respectively. Eyes receiving adjunctive laser during the first 6 months (n = 33) showed similar change in corrected visual acuity to non-laser-treated eyes (n = 88) (+3.1 vs +5.3 letters at 12 months). DME patients receiving anti-VEGF therapy in clinical practice undergo less frequent monitoring and intravitreal injections, and achieve inferior vision outcomes to patients in landmark clinical trials.

  4. Goal-directed visual attention drives health goal priming: An eye-tracking experiment.

    PubMed

    van der Laan, Laura N; Papies, Esther K; Hooge, Ignace T C; Smeets, Paul A M

    2017-01-01

    Several lab and field experiments have shown that goal priming interventions can be highly effective in promoting healthy food choices. Less is known, however, about the mechanisms by which goal priming affects food choice. This experiment tested the hypothesis that goal priming affects food choices through changes in visual attention. Specifically, it was hypothesized that priming with the dieting goal steers attention toward goal-relevant, low energy food products, which, in turn, increases the likelihood of choosing these products. In this eye-tracking experiment, 125 participants chose between high and low energy food products in a realistic online supermarket task while their eye movements were recorded with an eye-tracker. One group was primed with a health and dieting goal, a second group was exposed to a control prime, and a third group was exposed to no prime at all. The health goal prime increased low energy food choices and decreased high energy food choices. Furthermore, the health goal prime resulted in proportionally longer total dwell times on low energy food products, and this effect mediated the goal priming effect on choices. The findings suggest that the effect of priming on consumer choice may originate from an increase in attention for prime-congruent items. This study supports the effectiveness of health goal priming interventions in promoting healthy eating and opens up directions for research on other behavioral interventions that steer attention toward healthy foods.

  5. Using an auditory sensory substitution device to augment vision: evidence from eye movements.

    PubMed

    Wright, Thomas D; Margolis, Aaron; Ward, Jamie

    2015-03-01

    Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.

  6. Tracking the impact of depression in a perspective-taking task.

    PubMed

    Ferguson, Heather J; Cane, James

    2017-11-01

    Research has identified impairments in Theory of Mind (ToM) abilities in depressed patients, particularly in relation to tasks involving empathetic responses and belief reasoning. We aimed to build on this research by exploring the relationship between depressed mood and cognitive ToM, specifically visual perspective-taking ability. High and low depressed participants were eye-tracked as they completed a perspective-taking task, in which they followed the instructions of a 'director' to move target objects (e.g. a "teapot with spots on") around a grid, in the presence of a temporarily-ambiguous competitor object (e.g. a "teapot with stars on"). Importantly, some of the objects in the grid were occluded from the director's (but not the participant's) view. Results revealed no group-based difference in participants' ability to use perspective cues to identify the target object. All participants were faster to select the target object when the competitor was only available to the participant, compared to when the competitor was mutually available to the participant and director. Eye-tracking measures supported this pattern, revealing that perspective directed participants' visual search immediately upon hearing the ambiguous object's name (e.g. "teapot"). We discuss how these results fit with previous studies that have shown a negative relationship between depression and ToM.

  7. Reading Stories Activates Neural Representations of Visual and Motor Experiences

    PubMed Central

    Speer, Nicole K.; Reynolds, Jeremy R.; Swallow, Khena M.; Zacks, Jeffrey M.

    2010-01-01

    To understand and remember stories, readers integrate their knowledge of the world with information in the text. Here we present functional neuroimaging evidence that neural systems track changes in the situation described by a story. Different brain regions track different aspects of a story, such as a character’s physical location or current goals. Some of these regions mirror those involved when people perform, imagine, or observe similar real-world activities. These results support the view that readers understand a story by simulating the events in the story world and updating their simulation when features of that world change. PMID:19572969

  8. Eye Movement Analysis and Cognitive Assessment. The Use of Comparative Visual Search Tasks in a Non-immersive VR Application.

    PubMed

    Rosa, Pedro J; Gamito, Pedro; Oliveira, Jorge; Morais, Diogo; Pavlovic, Matthew; Smyth, Olivia; Maia, Inês; Gomes, Tiago

    2017-03-23

    An adequate behavioral response depends on attentional and mnesic processes. When these basic cognitive functions are impaired, the use of non-immersive Virtual Reality Applications (VRAs) can be a reliable technique for assessing the level of impairment. However, most non-immersive VRAs use indirect measures to make inferences about visual attention and mnesic processes (e.g., time to task completion, error rate). The aim was to examine whether eye movement analysis through eye tracking (ET) can be a reliable method to probe more effectively where and how attention is deployed and how it is linked with visual working memory during comparative visual search tasks (CVSTs) in non-immersive VRAs. The eye movements of 50 healthy participants were continuously recorded while CVSTs, selected from a set of cognitive tasks in the Systemic Lisbon Battery (SLB), a VRA designed to assess cognitive impairments, were randomly presented. The total fixation duration, the number of visits in the areas of interest and in the interstimulus space, and the total execution time differed significantly as a function of Mini Mental State Examination (MMSE) scores. The present study demonstrates that CVSTs in SLB, when combined with ET, can be a reliable and unobtrusive method for assessing cognitive abilities in healthy individuals, opening up their potential use in clinical samples.
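Measures like total dwell time and number of visits per area of interest, as used above, can be derived from a labelled gaze stream. A hypothetical sketch (the one-label-per-sample format and the 250 Hz sampling rate are assumptions, not details of the SLB study):

```python
from itertools import groupby

def aoi_metrics(samples, dt_ms=4):
    """Total dwell time (ms) and visit count per AOI from a gaze stream.
    `samples` holds one AOI label per eye-tracker sample; a run of identical
    consecutive labels counts as one visit. dt_ms is the sampling period
    (4 ms corresponds to a 250 Hz tracker)."""
    dwell, visits = {}, {}
    for label, run in groupby(samples):
        n = sum(1 for _ in run)
        dwell[label] = dwell.get(label, 0) + n * dt_ms
        visits[label] = visits.get(label, 0) + 1
    return dwell, visits

# Toy stream: two visits to the left stimulus, one pass through the
# interstimulus gap, one visit to the right stimulus.
stream = ["left", "left", "gap", "right", "right", "right", "left"]
dwell, visits = aoi_metrics(stream)
print(dwell, visits)
```

Because `groupby` only merges consecutive equal labels, a return to a previously visited AOI is counted as a new visit, which is the distinction these metrics rely on.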

  9. Top-down knowledge modulates onset capture in a feedforward manner.

    PubMed

    Becker, Stefanie I; Lewis, Amanda J; Axtens, Jenna E

    2017-04-01

    How do we select behaviourally important information from cluttered visual environments? Previous research has shown that both top-down, goal-driven factors and bottom-up, stimulus-driven factors determine which stimuli are selected. However, it is still debated when top-down processes modulate visual selection. According to a feedforward account, top-down processes modulate visual processing even before the appearance of any stimuli, whereas others claim that top-down processes modulate visual selection only at a late stage, via feedback processing. In line with such a dual stage account, some studies found that eye movements to an irrelevant onset distractor are not modulated by its similarity to the target stimulus, especially when eye movements are launched early (within 150 ms post stimulus onset). However, in these studies the target transiently changed colour due to a colour after-effect that occurred during premasking, and the time course analyses were incomplete. The present study tested the feedforward account against the dual stage account in two eye tracking experiments, with and without colour after-effects (Exp. 1), as well as when the target colour varied randomly and observers were informed of the target colour with a word cue (Exp. 2). The results showed that top-down processes modulated the earliest eye movements to the onset distractors (<150-ms latencies), without incurring any costs for selection of target matching distractors. These results unambiguously support a feedforward account of top-down modulation.

  10. New Perspectives in Amblyopia Therapy on Adults: A Critical Role for the Excitatory/Inhibitory Balance

    PubMed Central

    Baroncelli, Laura; Maffei, Lamberto; Sale, Alessandro

    2011-01-01

    Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. This pathology is caused by early abnormal visual experience with a functional imbalance between the two eyes owing to anisometropia, strabismus, or congenital cataract, resulting in a dramatic loss of visual acuity in an apparently healthy eye and various other perceptual abnormalities, including deficits in contrast sensitivity and in stereopsis. It is currently accepted that, due to a lack of sufficient plasticity within the brain, amblyopia is untreatable in adulthood. However, recent results obtained both in clinical trials and in animal models have challenged this traditional view, unmasking a previously unsuspected potential for promoting recovery after the end of the critical period for visual cortex plasticity. These studies point toward the intracortical inhibitory transmission as a crucial brake for therapeutic rehabilitation and recovery from amblyopia in the adult brain. PMID:22144947

  11. Shall we stay, or shall we switch? Continued anti-VEGF therapy versus early switch to dexamethasone implant in refractory diabetic macular edema.

    PubMed

    Busch, Catharina; Zur, Dinah; Fraser-Bell, Samantha; Laíns, Inês; Santos, Ana Rita; Lupidi, Marco; Cagini, Carlo; Gabrielle, Pierre-Henry; Couturier, Aude; Mané-Tauty, Valérie; Giancipoli, Ermete; Ricci, Giuseppe D'Amico; Cebeci, Zafer; Rodríguez-Valdés, Patricio J; Chaikitmongkol, Voraporn; Amphornphruet, Atchara; Hindi, Isaac; Agrawal, Kushal; Chhablani, Jay; Loewenstein, Anat; Iglicki, Matias; Rehak, Matus

    2018-05-05

    To compare functional and anatomical outcomes of continued anti-vascular endothelial growth factor (VEGF) therapy versus dexamethasone (DEX) implant in eyes with refractory diabetic macular edema (DME) after three initial anti-VEGF injections in a real-world setting. To be included in this retrospective multicenter, case-control study, eyes were required: (1) to present with early refractory DME, as defined by visual acuity (VA) gain ≤ 5 letters or reduction in central subfield thickness (CST) ≤ 20%, after a loading phase of anti-VEGF therapy (three monthly injections) and (2) to be treated further with (a) anti-VEGF therapy or (b) DEX implant. Main outcome measures were change in VA and CST at 12 months. Due to imbalanced baseline characteristics, a matched anti-VEGF group was formed by only keeping eyes with similar baseline characteristics as those in the DEX group. A total of 110 eyes from 105 patients were included (anti-VEGF group: 72 eyes, DEX group: 38 eyes). Mean change in VA at 12 months was - 0.4 ± 10.8 letters (anti-VEGF group), and + 6.1 ± 10.6 letters (DEX group) (P = 0.004). Over the same period, mean change in CST was + 18.3 ± 145.9 µm (anti-VEGF group) and - 92.8 ± 173.6 µm (DEX group) (P < 0.001). Eyes in the DEX group were more likely to gain ≥ 10 letters (OR 3.71, 95% CI 1.19-11.61, P = 0.024) at month 12. In a real-world setting, eyes with DME considered refractory to anti-VEGF therapy after three monthly injections that were switched to a DEX implant had better visual and anatomical outcomes at 12 months than those that continued treatment with anti-VEGF therapy.

  12. Rotational symmetric HMD with eye-tracking capability

    NASA Astrophysics Data System (ADS)

    Liu, Fangfang; Cheng, Dewen; Wang, Qiwei; Wang, Yongtian

    2016-10-01

    As an important auxiliary function of head-mounted displays (HMDs), eye tracking plays an important role in the field of intelligent human-machine interaction. In this paper, an eye-tracking HMD system (ET-HMD) is designed based on a rotationally symmetric system. The tracking principle is based on pupil-corneal reflection. The ET-HMD system comprises three optical paths for virtual display, infrared illumination, and eye tracking. The display optics are shared by all three optical paths and consist of four spherical lenses. For the eye-tracking path, an extra imaging lens is added to match the image sensor and achieve eye tracking. The display optics provide users a 40° diagonal FOV with a 0.61″ OLED, a 19 mm eye clearance, and a 10 mm exit pupil diameter. The eye-tracking path can capture a 15 mm × 15 mm area of the user's eyes. The average MTF is above 0.1 at 26 lp/mm for the display path, and exceeds 0.2 at 46 lp/mm for the eye-tracking path. Eye illumination is simulated in LightTools with an eye model and an 850 nm near-infrared LED (NIR-LED). The simulation results show that the NIR-LED illumination can cover the area of the eye model through the display optics, which is sufficient for eye tracking. An HMD with an integrated eye-tracking optical system can help improve the user experience.

  13. A non-invasive method for studying an index of pupil diameter and visual performance in the rhesus monkey.

    PubMed

    Fairhall, Sarah J; Dickson, Carol A; Scott, Leah; Pearce, Peter C

    2006-04-01

    A non-invasive model has been developed to estimate gaze direction and relative pupil diameter in minimally restrained rhesus monkeys, to investigate the effects of low doses of ocularly administered cholinergic compounds on visual performance. Animals were trained to co-operate with a novel device, which enabled eye movements to be recorded using modified human eye-tracking equipment, and to perform a task which determined visual threshold contrast. Responses were made by gaze transfer under twilight conditions. Pilocarpine nitrate (4% w/v) was studied to demonstrate the suitability of the model. Pilocarpine induced marked miosis for >3 h, which was accompanied by a decrement in task performance. The method obviates the need for invasive surgery and, as the position of point of gaze can be approximately defined, the approach may have utility in other areas of research involving non-human primates.

  14. Filling in the gaps: Anticipatory control of eye movements in chronic mild traumatic brain injury.

    PubMed

    Diwakar, Mithun; Harrington, Deborah L; Maruta, Jun; Ghajar, Jamshid; El-Gabalawy, Fady; Muzzatti, Laura; Corbetta, Maurizio; Huang, Ming-Xiong; Lee, Roland R

    2015-01-01

    A barrier in the diagnosis of mild traumatic brain injury (mTBI) stems from the lack of measures that are adequately sensitive in detecting mild head injuries. MRI and CT are typically negative in mTBI patients with persistent symptoms of post-concussive syndrome (PCS), and characteristic difficulties in sustaining attention often go undetected on neuropsychological testing, which can be insensitive to momentary lapses in concentration. Conversely, visual tracking strongly depends on sustained attention over time and is impaired in chronic mTBI patients, especially when tracking an occluded target. This finding suggests deficient internal anticipatory control in mTBI, the neural underpinnings of which are poorly understood. The present study investigated the neuronal bases for deficient anticipatory control during visual tracking in 25 chronic mTBI patients with persistent PCS symptoms and 25 healthy control subjects. The task was performed while undergoing magnetoencephalography (MEG), which allowed us to examine whether neural dysfunction associated with anticipatory control deficits was due to altered alpha, beta, and/or gamma activity. Neuropsychological examinations characterized cognition in both groups. During MEG recordings, subjects tracked a predictably moving target that was either continuously visible or randomly occluded (gap condition). MEG source-imaging analyses tested for group differences in alpha, beta, and gamma frequency bands. The results showed executive functioning, information processing speed, and verbal memory deficits in the mTBI group. Visual tracking was impaired in the mTBI group only in the gap condition. Patients showed greater error than controls before and during target occlusion, and were slower to resynchronize with the target when it reappeared. 
Impaired tracking co-occurred with abnormal beta activity, which was suppressed in the parietal cortex, especially the right hemisphere, and enhanced in the left caudate and frontal-temporal areas. Regional beta amplitude demonstrated high classification accuracy (92%) compared with eye-tracking (65%) and neuropsychological variables (80%). These findings show that deficient internal anticipatory control in mTBI is associated with altered beta activity, which is remarkably sensitive given the heterogeneity of injuries.

  15. The Role of Early Visual Attention in Social Development

    ERIC Educational Resources Information Center

    Wagner, Jennifer B.; Luyster, Rhiannon J.; Yim, Jung Yeon; Tager-Flusberg, Helen; Nelson, Charles A.

    2013-01-01

    Faces convey important information about the social environment, and even very young infants are preferentially attentive to face-like over non-face stimuli. Eye-tracking studies have allowed researchers to examine which features of faces infants find most salient across development, and the present study examined scanning of familiar (i.e.,…

  16. Processing Trade-Offs in the Reading of Dutch Derived Words

    ERIC Educational Resources Information Center

    Kuperman, Victor; Bertram, Raymond; Baayen, R. Harald

    2010-01-01

    This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., "plaats+ing" "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter…

  17. Brief Report: Circumscribed Attention in Young Children with Autism

    ERIC Educational Resources Information Center

    Sasson, Noah J.; Elison, Jed T.; Turner-Brown, Lauren M.; Dichter, Gabriel S.; Bodfish, James W.

    2011-01-01

    School-aged children and adolescents with autism demonstrate circumscribed attentional patterns to nonsocial aspects of complex visual arrays (Sasson et al. "2008"). The current study downward extended these findings to a sample of 2-5 year-olds with autism and 2-5 year-old typically developing children. Eye-tracking was used to quantify discrete…

  18. Usability Testing and Workflow Analysis of the TRADOC Data Visualization Tool

    DTIC Science & Technology

    2012-09-01

    …software such as blink data, saccades, and cognitive load based on pupil contraction. Eye-tracking was only a component of the data evaluated. Usability comments noted that the line charts were hard to read, and that projecting the charts directly onto the regions increased clutter on the screen.

  19. Eye-tracking and EMG supported 3D Virtual Reality - an integrated tool for perceptual and motor development of children with severe physical disabilities: a research concept.

    PubMed

    Pulay, Márk Ágoston

    2015-01-01

    Enabling children with severe physical disabilities (such as tetraparesis spastica) to gain relevant motional experiences of appropriate quality and quantity is now the greatest challenge in the field of neurorehabilitation. These motional experiences may establish many cognitive processes; their lack may also cause additional secondary cognitive dysfunctions such as disorders in body image, figure invariance, visual perception, auditory differentiation, concentration, analytic and synthetic ways of thinking, and visual memory. Virtual reality is a technology that provides a sense of presence in a realistic environment with the help of 3D pictures and animations formed in a computer environment, and enables the person to interact with the objects in that environment. One of our biggest challenges is to find a well-suited input device (hardware) that lets children with severe physical disabilities interact with the computer. Based on our own experiences and a thorough literature review, we have come to the conclusion that an effective combination of eye-tracking and EMG devices should work well.

  20. Aging and goal-directed emotional attention: distraction reverses emotional biases.

    PubMed

    Knight, Marisa; Seymour, Travis L; Gaunt, Joshua T; Baker, Christopher; Nesmith, Kathryn; Mather, Mara

    2007-11-01

    Previous findings reveal that older adults favor positive over negative stimuli in both memory and attention (for a review, see Mather & Carstensen, 2005). This study used eye tracking to investigate the role of cognitive control in older adults' selective visual attention. Younger and older adults viewed emotional-neutral and emotional-emotional pairs of faces and pictures while their gaze patterns were recorded under full or divided attention conditions. Replicating previous eye-tracking findings, older adults allocated less of their visual attention to negative stimuli in negative-neutral stimulus pairings in the full attention condition than younger adults did. However, as predicted by a cognitive-control-based account of the positivity effect in older adults' information processing tendencies (Mather & Knight, 2005), older adults' tendency to avoid negative stimuli was reversed in the divided attention condition. Compared with younger adults, older adults' limited attentional resources were more likely to be drawn to negative stimuli when they were distracted. These findings indicate that emotional goals can have unintended consequences when cognitive control mechanisms are not fully available.

  1. Anticipation in Real-World Scenes: The Role of Visual Context and Visual Memory.

    PubMed

    Coco, Moreno I; Keller, Frank; Malcolm, George L

    2016-11-01

    The human sentence processor is able to make rapid predictions about upcoming linguistic input. For example, upon hearing the verb eat, anticipatory eye-movements are launched toward edible objects in a visual scene (Altmann & Kamide, 1999). However, the cognitive mechanisms that underlie anticipation remain to be elucidated in ecologically valid contexts. Previous research has, in fact, mainly used clip-art scenes and object arrays, raising the possibility that anticipatory eye-movements are limited to displays containing a small number of objects in a visually impoverished context. In Experiment 1, we confirm that anticipation effects occur in real-world scenes and investigate the mechanisms that underlie such anticipation. In particular, we demonstrate that real-world scenes provide contextual information that anticipation can draw on: When the target object is not present in the scene, participants infer and fixate regions that are contextually appropriate (e.g., a table upon hearing eat). Experiment 2 investigates whether such contextual inference requires the co-presence of the scene, or whether memory representations can be utilized instead. The same real-world scenes as in Experiment 1 are presented to participants, but the scene disappears before the sentence is heard. We find that anticipation occurs even when the screen is blank, including when contextual inference is required. We conclude that anticipatory language processing is able to draw upon global scene representations (such as scene type) to make contextual inferences. These findings are compatible with theories assuming contextual guidance, but posit a challenge for theories assuming object-based visual indices. Copyright © 2015 Cognitive Science Society, Inc.

  2. Optical coherence tomography measurement of the retinal nerve fiber layer in normal and juvenile glaucomatous eyes.

    PubMed

    Mrugacz, Malgorzata; Bakunowicz-Lazarczyk, Alina

    2005-01-01

    The aim of this study was to quantitatively assess and compare the thickness of the retinal nerve fiber layer (RNFL) in normal and glaucomatous eyes of children using optical coherence tomography. The mean RNFL thickness of normal eyes (n=26) was compared with that of glaucomatous eyes (n=26). The eyes were classified into diagnostic groups based on conventional ophthalmological physical examination, Humphrey 30-2 visual fields, stereoscopic optic nerve head photography, and optical coherence tomography. The mean RNFL was significantly thinner in glaucomatous eyes than in normal eyes: 95 ± 26.3 and 132 ± 24.5 µm, respectively. More specifically, the RNFL was significantly thinner in glaucomatous eyes than in normal eyes in the inferior quadrant: 87 ± 23.5 and 122 ± 24.2 µm, respectively. The mean and inferior quadrant RNFL thicknesses as measured by optical coherence tomography showed a statistically significant correlation with glaucoma. Optical coherence tomography may contribute to tracking of juvenile glaucoma progression. Copyright © 2005 S. Karger AG, Basel.
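    A group difference like the one above can be checked from the reported summary statistics alone. A minimal sketch of Welch's t-test computed from (mean, SD, n) per group; the choice of Welch's test is mine, the abstract does not state which test was used:

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and degrees of freedom from group summaries
    (mean, SD, n) -- the kind of comparison reported for RNFL thickness."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Summary values from the abstract: glaucomatous vs. normal mean RNFL (um).
t, df = welch_t_from_summary(95, 26.3, 26, 132, 24.5, 26)
print(round(t, 2), round(df, 1))
```

    A |t| above 5 at roughly 50 degrees of freedom is consistent with the reported significance.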

  3. Understanding Health Literacy Measurement Through Eye Tracking

    PubMed Central

    Mackert, Michael; Champlin, Sara E.; Pasch, Keryn E.; Weiss, Barry D.

    2013-01-01

    This study used eye-tracking technology to explore how individuals with different levels of health literacy view health-related information. The authors recruited 25 university administrative staff (more likely to have adequate health literacy skills) and 25 adults enrolled in an adult literacy program (more likely to have limited health literacy skills). The authors administered the Newest Vital Sign (NVS) health literacy assessment to each participant. The assessment involves having individuals answer questions about a nutrition label while viewing the label. The authors used computerized eye-tracking technology to measure the amount of time each participant spent fixating on nutrition label information that was relevant to the questions being asked and the amount of time they spent viewing nonrelevant information. Results showed that lower NVS scores were significantly associated with more time spent on information not relevant for answering the NVS items. This finding suggests that efforts to improve health literacy measurement should include the ability to differentiate not just between individuals who have difficulty interpreting and using health information, but also between those who have difficulty finding relevant information. In addition, this finding suggests that health education material should minimize the inclusion of nonrelevant information. PMID:24093355
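    The relevant-versus-nonrelevant dwell-time measure reduces to a simple aggregation over fixation records tagged with areas of interest (AOIs). A minimal sketch; the AOI names and durations below are hypothetical:

```python
def dwell_times(fixations, relevant_aois):
    """Sum fixation durations (ms) on relevant vs. non-relevant AOIs.
    `fixations` is a list of (aoi_name, duration_ms) tuples -- a simplified
    stand-in for exported eye-tracker fixation data."""
    relevant = sum(d for aoi, d in fixations if aoi in relevant_aois)
    other = sum(d for aoi, d in fixations if aoi not in relevant_aois)
    return relevant, other

# Hypothetical fixations on a nutrition label.
fixations = [("serving_size", 310), ("calories", 250),
             ("brand_logo", 420), ("ingredients", 180),
             ("photo", 500)]
print(dwell_times(fixations, {"serving_size", "calories", "ingredients"}))
```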

  4. Evaluation of methods for the assessment of attention while driving.

    PubMed

    Kircher, Katja; Ahlstrom, Christer

    2018-05-01

    The ability to assess the current attentional state of the driver is important for many aspects of driving, not least in the field of partial automation for transfer of control between vehicle and driver. Knowledge about the driver's attentional state is also necessary for assessing the effects of additional tasks on attention. The objective of this paper is to evaluate different methods that can be used to assess attention, first theoretically, and then empirically in a controlled field study and in the laboratory. Six driving instructors participated in all experimental conditions of the study, delivering within-subjects data for all tested methods. Additional participants were recruited for some of the conditions. The test route consisted of 14 km of motorway with low to moderate traffic, which was driven three times per participant per condition. The on-road conditions were: baseline, driving with eye tracking and self-paced visual occlusion, and driving while thinking aloud. The laboratory conditions were: describing how attention should be distributed on a motorway, and thinking aloud while watching a video from the baseline drive. The results show that visual occlusion, especially in combination with eye tracking, was appropriate for assessing spare capacity. The think-aloud protocol was appropriate for gaining insight into the driver's actual mental representation of the situation at hand. Expert judgement in the laboratory was not reliable for assessing drivers' attentional distribution in traffic. Across all assessment techniques, it is evident that meaningful assessment of attention in a dynamic traffic situation can only be achieved when the infrastructure layout, surrounding road users, and intended manoeuvres are taken into account. This requires advanced instrumentation of the vehicle, and the subsequent data reduction, analysis, and interpretation are demanding. In conclusion, driver attention assessment in real traffic is a complex task, but combining visual occlusion, eye tracking, and thinking aloud is a promising way forward. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. The Effectiveness of Gaze-Contingent Control in Computer Games.

    PubMed

    Orlov, Paul A; Apraksin, Nikolay

    2015-01-01

    Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on the time of recognition of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the effectiveness of the gaze-contingent system was over 1.5 times higher. In addition, we found that eye behavior stays quite stable with or without mouse interaction. © The Author(s) 2015.
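    The recognition-time effectiveness measure can be sketched as an inverse ratio of mean recognition times. This is my reading of the measure, not the paper's exact formula, and the timing values below are illustrative:

```python
from statistics import mean

def effectiveness_ratio(gaze_times, mouse_times):
    """Relative effectiveness of gaze- vs. mouse-contingent control,
    taken here as the inverse ratio of mean unit-recognition times:
    faster recognition = higher effectiveness."""
    return mean(mouse_times) / mean(gaze_times)

# Hypothetical recognition times (s) for the same visual-search task.
gaze = [1.1, 0.9, 1.3, 1.0]
mouse = [1.8, 1.6, 1.7, 1.5]
print(round(effectiveness_ratio(gaze, mouse), 2))
```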

  6. A neurocomputational model of figure-ground discrimination and target tracking.

    PubMed

    Sun, H; Liu, L; Guo, A

    1999-01-01

    A neurocomputational model is presented for figure-ground discrimination and target tracking. The model involves elementary motion detectors of the correlation type, computational modules for saccadic and smooth-pursuit eye movements, an oscillatory neural-network motion perception module, and a selective attention module. It is shown that through oscillatory amplitude and frequency encoding, and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both temporal and spatial domains.
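    Elementary motion detectors of the correlation type are classically modeled as Reichardt correlators: each receptor signal is delayed and multiplied with its undelayed neighbour, and the two mirror-symmetric products are subtracted. A minimal discrete-time sketch (not the paper's specific implementation):

```python
def reichardt_emd(left, right, delay=1):
    """Minimal correlation-type elementary motion detector (Reichardt
    correlator) over two photoreceptor time series. Positive output
    signals motion from the `left` receptor toward the `right` one."""
    out = []
    for t in range(delay, len(left)):
        out.append(left[t - delay] * right[t] - right[t - delay] * left[t])
    return out

# A bright spot passes the left receptor, then the right one a step later:
# the detector responds with a positive deflection.
print(reichardt_emd([0, 1, 0, 0], [0, 0, 1, 0]))  # → [0, 1, 0]
```

    Reversing the stimulus order flips the sign of the response, which is what lets downstream modules discriminate motion direction.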

  7. [A cross-sectional study of moderate or severe visual impairment and blindness in residents with type 2 diabetes living in Xinjing Town, Shanghai].

    PubMed

    Bai, X L; Xu, X; Lu, M; He, J N; Xu, X; Du, X; Zhang, B; He, X G; Lu, L N; Zhu, J F; Zou, H D; Zhao, J L

    2016-11-11

    Objective: To investigate the prevalence, underlying causes, and risk factors of moderate or severe visual impairment and blindness in a population with type 2 diabetes in Xinjing Town, Shanghai, China. Methods: A cross-sectional survey among local Han adult residents previously diagnosed with type 2 diabetes was conducted between October 2014 and January 2015. The survey was preceded by a pilot study; operational methods were refined and quality assurance evaluation was carried out. The best corrected visual acuity was recorded and classified following the modified World Health Organization grading system. Assigned ophthalmologists ascertained the leading cause for every blind or visually impaired eye. Binary logistic regression analysis was used to determine the factors related to blindness and moderate or severe visual impairment. Results: A total of 2216 type 2 diabetic residents were enrolled, and 166 eyes (3.7%, 166/4432) were blind. Cataract was the leading cause of blindness (39.8%); macular degeneration (18.0%) and eyeball atrophy (11.4%) were the second and third leading causes, respectively. Moderate or severe visual impairment was found in 376 eyes (8.5%, 376/4432), and the most frequent cause was cataract (65.7%), followed by diabetic retinopathy (9.8%) and macular degeneration (9.4%). Older age, female gender, earlier-onset diabetes, and a lower spherical equivalent in the better eye were associated with best corrected visual acuity <20/63 in the better eye. Conclusion: The prevalence of moderate or severe visual impairment and blindness in this population with type 2 diabetes was high. (Chin J Ophthalmol, 2016, 52: 825-830)

  8. STS-47 Payload Specialist Mohri conducts visual stability experiment in SLJ

    NASA Image and Video Library

    1992-09-20

    STS047-204-006 (12 - 20 Sept 1992) --- Dr. Mamoru Mohri, payload specialist representing Japan's National Space Development Agency (NASDA), participates in an experiment designed to learn more about Space Adaptation Syndrome (SAS). The experiment is titled, "Comparative Measurement of Visual Stability in Earth and Cosmic Space." During the experiment, Dr. Mohri tracked a flickering light target while eye movements and neck muscle tension were measured. This 45-degree angle position was one of four studied during the eight-day Spacelab-J mission.

  9. An Attention-Sensitive Memory Trace in Macaque MT Following Saccadic Eye Movements

    PubMed Central

    Yao, Tao; Treue, Stefan; Krishna, B. Suresh

    2016-01-01

    We experience a visually stable world despite frequent retinal image displacements induced by eye, head, and body movements. The neural mechanisms underlying this remain unclear. One mechanism that may contribute is transsaccadic remapping, in which the responses of some neurons in various attentional, oculomotor, and visual brain areas appear to anticipate the consequences of saccades. The functional role of transsaccadic remapping is actively debated, and many of its key properties remain unknown. Here, recording from two monkeys trained to make a saccade while directing attention to one of two spatial locations, we show that neurons in the middle temporal area (MT), a key locus in the motion-processing pathway of humans and macaques, show a form of transsaccadic remapping called a memory trace. The memory trace in MT neurons is enhanced by the allocation of top-down spatial attention. Our data provide the first demonstration, to our knowledge, of the influence of top-down attention on the memory trace anywhere in the brain. We find evidence only for a small and transient effect of motion direction on the memory trace (and in only one of two monkeys), arguing against a role for MT in the theoretically critical yet empirically contentious phenomenon of spatiotopic feature-comparison and adaptation transfer across saccades. Our data support the hypothesis that transsaccadic remapping represents the shift of attentional pointers in a retinotopic map, so that relevant locations can be tracked and rapidly processed across saccades. Our results resolve important issues concerning the perisaccadic representation of visual stimuli in the dorsal stream and demonstrate a significant role for top-down attention in modulating this representation. PMID:26901857

  10. Evaluation of the User Strategy on 2d and 3d City Maps Based on Novel Scanpath Comparison Method and Graph Visualization

    NASA Astrophysics Data System (ADS)

    Dolezalova, J.; Popelka, S.

    2016-06-01

    This paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used: a standard map and a 3D visualization. The respondents' task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used to reveal the strategies of respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is hierarchical clustering and a tree graph representing the relationships between the analysed sequences. During an analysis of the algorithm generating the tree graph, it was found that the outputs do not correspond to reality. We therefore created a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. The results of the study proved the functionality of the tool and its suitability for analysing the different strategies of map readers. Based on the results of the tool, similar scanpaths were selected, and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between belonging to a group with a similar strategy and data gathered from the questionnaire (age, sex, cartographic knowledge, etc.) or the type of stimuli (2D or 3D map).
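    The idea of grouping respondents with similar scanpaths can be sketched by scoring pairwise similarity of AOI-letter sequences and keeping pairs above a threshold. This uses difflib's similarity ratio as a stand-in for ScanGraph's actual measures, and the respondent data are hypothetical:

```python
from difflib import SequenceMatcher

def scanpath_similarity(s1, s2):
    """Similarity (0..1) of two scanpaths encoded as AOI-letter strings,
    using difflib's ratio in place of the string-edit measures common
    in scanpath comparison."""
    return SequenceMatcher(None, s1, s2).ratio()

# Hypothetical respondents' AOI sequences; keep pairs above a threshold.
paths = {"r1": "ABCD", "r2": "ABCE", "r3": "XYZW"}
pairs = [(a, b) for a in paths for b in paths
         if a < b and scanpath_similarity(paths[a], paths[b]) >= 0.7]
print(pairs)  # → [('r1', 'r2')]
```

    ScanGraph itself goes further and extracts cliques from the resulting similarity graph, so that every member of a group is similar to every other member.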

  11. Spatial frequency characteristics at image decision-point locations for observers with different radiological backgrounds in lung nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim

    2009-02-01

    Aim: The goal of the study is to determine the spatial frequency characteristics at image locations of observers' overt and covert decisions, and to find out whether there are any similarities within observer groups with the same radiological experience or the same accuracy level. Background: The radiological task is a visual search and decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by different spatial frequency ranges and orientations. Recent studies have shown that particular physical properties of local and global image-based elements are correlated with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception inspired wavelet applications in vision research because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet approach to the analysis of a set of postero-anterior chest radiographs has been used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology has been applied to track eye movements over the image, and particular ROIs related to the observers' fixation clusters have been analysed in the spaces framed by Daubechies functions. Results: Significant differences have been found between the spatial frequency characteristics at the locations of different decisions.
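    The wavelet characterization of ROI content can be sketched with the simplest Daubechies wavelet (Haar, db1): one decomposition level splits a signal into approximation and detail coefficients, and the energy in each band serves as a spatial-frequency descriptor. The study used higher-order Daubechies functions on 2D image data; this 1D version with a hypothetical intensity profile only illustrates the principle:

```python
import math

def haar_step(signal):
    """One level of the Haar wavelet transform (the simplest Daubechies
    wavelet): pairwise averages (approximation) and differences (detail),
    scaled to preserve signal energy. Assumes an even-length signal."""
    s = math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def band_energy(coeffs):
    """Energy in a coefficient band -- a spatial-frequency descriptor
    that can be compared across ROIs."""
    return sum(c * c for c in coeffs)

roi_profile = [4, 6, 10, 12, 8, 6, 5, 5]  # hypothetical 1D intensity profile
a, d = haar_step(roi_profile)
print(round(band_energy(a), 2), round(band_energy(d), 2))
```

    Because the transform is orthonormal, the band energies sum to the energy of the original profile, so the split cleanly partitions low- and high-frequency content.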

  12. An Eye Does Not Make an I: Expanding the Sensorium

    ERIC Educational Resources Information Center

    Duncum, Paul

    2012-01-01

    While visual art appeals to the sense of sight, both recent art and popular visual culture appeal to the whole sensorium, the sum total of the ways we experience the world. Common assumptions about the senses regarding their number, their relative importance, and their relation to one another are problematized in light of recent psychological and…

  13. Words in Context: The Effects of Length, Frequency, and Predictability on Brain Responses During Natural Reading

    PubMed Central

    Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio

    2016-01-01

    Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging primarily stems from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region—hosting the putative visual word form area—was originally considered limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297

  14. Fixation-related FMRI analysis in the domain of reading research: using self-paced eye movements as markers for hemodynamic brain responses during visual letter string processing.

    PubMed

    Richlan, Fabio; Gagl, Benjamin; Hawelka, Stefan; Braun, Mario; Schurz, Matthias; Kronbichler, Martin; Hutzler, Florian

    2014-10-01

    The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. © The Author 2013. Published by Oxford University Press.

  15. Differences between Dyslexic and Non-Dyslexic Children in the Performance of Phonological Visual-Auditory Recognition Tasks: An Eye-Tracking Study

    PubMed Central

    Tiadi, Aimé; Seassau, Magali; Gerard, Christophe-Loïc; Bucci, Maria Pia

    2016-01-01

    The objective of this study was to further explore phonological visual-auditory recognition tasks in a group of fifty-six healthy children (mean age: 9.9 ± 0.3 years) and to compare these data to those recorded in twenty-six age-matched dyslexic children (mean age: 9.8 ± 0.2 years). Eye movements from both eyes were recorded using an infrared video-oculography system (MobileEBT® e(y)e BRAIN). The recognition task was performed under four conditions in which the target object was displayed either with phonologically unrelated objects (baseline condition), with cohort or rhyme objects (cohort and rhyme conditions, respectively), or with both together (rhyme + cohort condition). The percentage of the total time spent on the targets and the latency of the first saccade on the target were measured. Results in healthy children showed that the percentage of the total time spent in the baseline condition was significantly longer than in the other conditions, and that the latency of the first saccade in the cohort condition was significantly longer than in the other conditions; interestingly, the latency decreased significantly with the increasing age of the children. A developmental trend in phonological awareness was observed in healthy children only. In contrast, we observed that for dyslexic children the total time spent on the target was similar in all four conditions tested, and also that they had similar latency values in both cohort and rhyme conditions. These findings suggest a different sensitivity to the phonological competitors between dyslexic and non-dyslexic children. Moreover, the eye-tracking technique provides online information about phonological awareness capabilities in children. PMID:27438352
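    Both dependent measures in this record, the percentage of total viewing time spent on the target and the latency of the first saccade to it, can be derived from simple event lists. Below is a minimal Python sketch; the event formats, field names, and numbers are illustrative assumptions, not the authors' analysis pipeline:

    ```python
    # Hypothetical events from one trial: fixations as
    # (start_ms, end_ms, on_target), saccades as (onset_ms, lands_on_target).
    fixations = [(0, 200, False), (200, 650, True),
                 (650, 900, False), (900, 1500, True)]
    saccades = [(180, True), (630, False), (880, True)]

    def pct_time_on_target(fixations):
        """Percentage of total fixation time spent on the target object."""
        total = sum(end - start for start, end, _ in fixations)
        on_target = sum(end - start for start, end, hit in fixations if hit)
        return 100.0 * on_target / total

    def first_saccade_latency(saccades):
        """Onset time (ms) of the first saccade that lands on the target."""
        return next(onset for onset, hit in saccades if hit)
    ```

    With the toy events above, `pct_time_on_target` yields 70% and the first target-directed saccade is launched at 180 ms; in the study such values would be aggregated per condition and compared across groups.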

  16. Cue-dependent memory-based smooth-pursuit in normal human subjects: importance of extra-retinal mechanisms for initial pursuit.

    PubMed

    Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo

    2013-08-01

    Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit, which did not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit-direction or go/no-go selection. Tracking eye movements of humans and monkeys were similar, but eye movements differed between the two tasks: latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and the time to reach peak velocity lengthened during memory-based pursuit. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences between the contribution of extra-retinal and retinal components. This interpretation is supported by two further findings: (1) when the correct spot was popped out to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit, and (2) during initial blanking of spot motion during memory-based pursuit, pursuit components appeared in the correct direction. Our results show the importance of extra-retinal mechanisms for initial pursuit during memory-based pursuit, which include priming effects and extra-retinal drive components. Comparison with monkey studies on neuronal responses and model analysis suggested possible pathways for the extra-retinal mechanisms.

  17. Event-related potential and eye tracking evidence of the developmental dynamics of face processing.

    PubMed

    Meaux, Emilie; Hernandez, Nadia; Carteau-Martin, Isabelle; Martineau, Joëlle; Barthélémy, Catherine; Bonnet-Brilhault, Frédérique; Batty, Magali

    2014-04-01

    Although the wide neural network and specific processes related to faces have been revealed, the process by which face-processing ability develops remains unclear. An interest in faces appears early in infancy, and developmental findings to date have suggested a long maturation process of the mechanisms involved in face processing. These developmental changes may be supported by the acquisition of more efficient strategies to process faces (theory of expertise) and by the maturation of the face neural network identified in adults. This study aimed to clarify the link between event-related potential (ERP) development in response to faces and the behavioral changes in the way faces are scanned throughout childhood. Twenty-six young children (4-10 years of age) were included in two experimental paradigms, the first exploring ERPs during face processing, the second investigating the visual exploration of faces using an eye-tracking system. The results confirmed significant age-related changes in visual ERPs (P1, N170 and P2). Moreover, an increased interest in the eye region and an attentional shift from the mouth to the eyes were also revealed. The proportion of early fixations on the eye region was correlated with N170 and P2 characteristics, highlighting a link between the development of ERPs and gaze behavior. We suggest that these overall developmental dynamics may be sustained by a gradual, experience-dependent specialization in face processing (i.e. acquisition of face expertise), which produces a more automatic and efficient network associated with effortless identification of faces, and allows the emergence of human-specific social and communication skills. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  18. Do we track what we see? Common versus independent processing for motion perception and smooth pursuit eye movements: a review.

    PubMed

    Spering, Miriam; Montagnini, Anna

    2011-04-22

    Many neurophysiological studies in monkeys have indicated that visual motion information for the guidance of perception and smooth pursuit eye movements is - at an early stage - processed in the same visual pathway in the brain, crucially involving the middle temporal area (MT). However, these studies left some questions unanswered: Are perception and pursuit driven by the same or independent neuronal signals within this pathway? Are the perceptual interpretation of visual motion information and the motor response to visual signals limited by the same source of neuronal noise? Here, we review psychophysical studies that were motivated by these questions and compared perception and pursuit behaviorally in healthy human observers. We further review studies that focused on the interaction between perception and pursuit. The majority of results point to similarities between perception and pursuit, but dissociations were also reported. We discuss recent developments in this research area and conclude with suggestions for common and separate principles for the guidance of perceptual and motor responses to visual motion information. Copyright © 2010 Elsevier Ltd. All rights reserved.

  19. Word meaning and the control of eye fixation: semantic competitor effects and the visual world paradigm.

    PubMed

    Huettig, Falk; Altmann, Gerry T M

    2005-05-01

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and the trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
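    The dependent measure behind statements like "the probability of fixating the piano rose as the word unfolded" is typically the proportion of trials in which gaze is on each interest area, computed in small time bins relative to word onset. A minimal sketch follows; the fixation record format, bin size, and data are illustrative assumptions rather than the authors' procedure:

    ```python
    # Hypothetical fixation records: (trial, start_ms, end_ms, area),
    # with times measured relative to spoken-word onset.
    fixations = [
        (1, 0, 180, "distractor"), (1, 180, 600, "target"),
        (2, 0, 250, "competitor"), (2, 250, 600, "target"),
    ]

    def fixation_proportions(fixations, areas, t_max=600, bin_ms=50):
        """Proportion of trials fixating each interest area per time bin."""
        trials = {t for t, *_ in fixations}
        props = {a: [] for a in areas}
        for b in range(0, t_max, bin_ms):
            mid = b + bin_ms / 2  # sample gaze location at the bin centre
            for a in areas:
                n = sum(1 for t, s, e, ar in fixations
                        if ar == a and s <= mid < e)
                props[a].append(n / len(trials))
        return props

    props = fixation_proportions(
        fixations, ["target", "competitor", "distractor"])
    ```

    Plotting these per-bin proportions for the target, competitor, and distractor areas yields the familiar visual world time-course curves.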

  20. Using eye movements to explore mental representations of space.

    PubMed

    Fourtassi, Maryam; Rode, Gilles; Pisella, Laure

    2017-06-01

    Visual mental imagery is a cognitive experience characterised by the activation of the mental representation of an object or scene in the absence of the corresponding stimulus. According to the analogical theory, mental representations have a pictorial nature that preserves the spatial characteristics of the environment that is mentally represented. This cognitive experience shares many similarities with the experience of visual perception, including eye movements. The mental visualisation of a scene is accompanied by eye movements that reflect the spatial content of the mental image, and which can mirror the deformations of this mental image with respect to the real image, such as asymmetries or size reduction. The present article offers a concise overview of the main theories explaining the interactions between eye movements and mental representations, with some examples of the studies supporting them. It also aims to explain how ocular-tracking could be a useful tool in exploring the dynamics of spatial mental representations, especially in pathological situations where these representations can be altered, for instance in unilateral spatial neglect. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  1. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently-proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, the method is still prone to contain systematic calibration errors. Such errors stem from eye-/HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen. Each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degenerates the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  2. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension

    PubMed Central

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in “The writer picked up the pen from the floor and moved it to the desk,” the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a “look-and-listen” task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension. PMID:29520249

  3. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension.

    PubMed

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in "The writer picked up the pen from the floor and moved it to the desk," the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a "look-and-listen" task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension.

  4. Correlation between Inter-Blink Interval and Episodic Encoding during Movie Watching.

    PubMed

    Shin, Young Seok; Chang, Won-du; Park, Jinsick; Im, Chang-Hwan; Lee, Sang In; Kim, In Young; Jang, Dong Pyo

    2015-01-01

    Human eye blinking is cognitively suppressed to minimize loss of visual information for important real-world events. Despite the relationship between eye blinking and cognitive state, the effect of eye blinks on cognition in real-world environments has received limited research attention. In this study, we focused on the temporal pattern of the inter-eye blink interval (IEBI) during movie watching and investigated its relationship with episodic memory. Twenty-four healthy subjects watched a movie and, as a control condition, a nature documentary that lacked a specific story line, while electroencephalography was performed. Immediately after viewing the movie, the subjects were asked to report its most memorable scene. Four weeks later, subjects were asked to score 32 randomly selected scenes from the movie, based on how much they were able to remember and describe. The results showed that the average IEBI was significantly longer during the movie than in the control condition. In addition, the significant increase in IEBI when watching a movie coincided with the most memorable scenes of the movie. The results suggested that the interesting episodic narrative of the movie attracted the subjects' visual attention relative to the documentary clip that did not have a story line. In the episodic memory test executed four weeks later, memory performance was significantly positively correlated with IEBI (p<0.001). In summary, IEBI may be a reliable bio-marker of the degree of concentration on naturalistic content that requires visual attention, such as a movie.

  5. Correlation between Inter-Blink Interval and Episodic Encoding during Movie Watching

    PubMed Central

    Shin, Young Seok; Chang, Won-du; Park, Jinsick; Im, Chang-Hwan; Lee, Sang In; Kim, In Young; Jang, Dong Pyo

    2015-01-01

    Human eye blinking is cognitively suppressed to minimize loss of visual information for important real-world events. Despite the relationship between eye blinking and cognitive state, the effect of eye blinks on cognition in real-world environments has received limited research attention. In this study, we focused on the temporal pattern of the inter-eye blink interval (IEBI) during movie watching and investigated its relationship with episodic memory. Twenty-four healthy subjects watched a movie and, as a control condition, a nature documentary that lacked a specific story line, while electroencephalography was performed. Immediately after viewing the movie, the subjects were asked to report its most memorable scene. Four weeks later, subjects were asked to score 32 randomly selected scenes from the movie, based on how much they were able to remember and describe. The results showed that the average IEBI was significantly longer during the movie than in the control condition. In addition, the significant increase in IEBI when watching a movie coincided with the most memorable scenes of the movie. The results suggested that the interesting episodic narrative of the movie attracted the subjects’ visual attention relative to the documentary clip that did not have a story line. In the episodic memory test executed four weeks later, memory performance was significantly positively correlated with IEBI (p<0.001). In summary, IEBI may be a reliable bio-marker of the degree of concentration on naturalistic content that requires visual attention, such as a movie. PMID:26529091
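    The study's two summary quantities, the mean inter-eye blink interval per condition and its correlation with later memory scores, reduce to elementary computations over blink timestamps. A minimal sketch follows; the blink times and recall scores are invented illustrative data, not the study's results:

    ```python
    import statistics

    def mean_iebi(blink_times_s):
        """Mean inter-eye blink interval from blink onsets (seconds)."""
        ts = sorted(blink_times_s)
        gaps = [b - a for a, b in zip(ts, ts[1:])]
        return statistics.mean(gaps)

    def pearson_r(x, y):
        """Pearson correlation between two equal-length samples."""
        mx, my = statistics.mean(x), statistics.mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Illustrative data: per-subject mean IEBI (s) and recall score.
    iebi = [2.1, 3.5, 4.0, 5.2, 6.8]
    recall = [10, 14, 15, 19, 24]
    r = pearson_r(iebi, recall)
    ```

    For the toy data the correlation is strongly positive, mirroring the direction (though not the magnitude) of the reported IEBI-memory relationship.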

  6. Saccade-synchronized rapid attention shifts in macaque visual cortical area MT.

    PubMed

    Yao, Tao; Treue, Stefan; Krishna, B Suresh

    2018-03-06

    While making saccadic eye-movements to scan a visual scene, humans and monkeys are able to keep track of relevant visual stimuli by maintaining spatial attention on them. This ability requires a shift of attentional modulation from the neuronal population representing the relevant stimulus pre-saccadically to the one representing it post-saccadically. For optimal performance, this trans-saccadic attention shift should be rapid and saccade-synchronized. Whether this is so is not known. We trained two rhesus monkeys to make saccades while maintaining covert attention at a fixed spatial location. We show that the trans-saccadic attention shift in the cortical visual middle temporal (MT) area is well synchronized to saccades. Attentional modulation crosses over from the pre-saccadic to the post-saccadic neuronal representation by about 50 ms after a saccade. Taking response latency into account, the trans-saccadic attention shift is well timed to maintain spatial attention on relevant stimuli, so that they can be optimally tracked and processed across saccades.

  7. The role of peripheral vision in saccade planning: learning from people with tunnel vision.

    PubMed

    Luo, Gang; Vargas-Martin, Fernando; Peli, Eli

    2008-12-22

    Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7 degrees-16 degrees) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n = 9). In the walking experiment, the patients (n = 5) and normal controls (n = 3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the large extent of the top-down mechanism influence on eye movement control.

  8. Role of peripheral vision in saccade planning: Learning from people with tunnel vision

    PubMed Central

    Luo, Gang; Vargas-Martin, Fernando; Peli, Eli

    2008-01-01

    Both visually salient and top-down information are important in eye movement control, but their relative roles in the planning of daily saccades are unclear. We investigated the effect of peripheral vision loss on saccadic behaviors in patients with tunnel vision (visual field diameters 7°–16°) in visual search and real-world walking experiments. The patients made up to two saccades per second to their pre-saccadic blind areas, about half of which had no overlap between the post- and pre-saccadic views. In the visual search experiment, visual field size and the background (blank or picture) did not affect the saccade sizes and direction of patients (n=9). In the walking experiment, the patients (n=5) and normal controls (n=3) had similar distributions of saccade sizes and directions. These findings might provide a clue about the extent of the top-down mechanism influence on eye movement control. PMID:19146326

  9. The Biosocial Subject: Sensor Technologies and Worldly Sensibility

    ERIC Educational Resources Information Center

    de Freitas, Elizabeth

    2018-01-01

    Sensor technologies are increasingly part of everyday life, embedded in buildings (movement, sound, temperature) and worn on persons (heart rate, electro-dermal activity, eye tracking). This paper presents a theoretical framework for research on computational sensor data. My approach moves away from theories of agent-centered perceptual synthesis…

  10. Learning the trajectory of a moving visual target and evolution of its tracking in the monkey

    PubMed Central

    Bourrelly, Clara; Quinet, Julie; Cavanagh, Patrick

    2016-01-01

    An object moving in the visual field triggers a saccade that brings its image onto the fovea. It is followed by a combination of slow eye movements and catch-up saccades that try to keep the target image on the fovea as long as possible. The accuracy of this ability to track the “here-and-now” location of a visual target contrasts with the spatiotemporally distributed nature of its encoding in the brain. We show in six experimentally naive monkeys how this performance is acquired and gradually evolves during successive daily sessions. During the early exposure, the tracking is mostly saltatory, made of relatively large saccades separated by low eye velocity episodes, demonstrating that accurate (here and now) pursuit is not spontaneous and that gaze direction lags behind its location most of the time. Over the sessions, while the pursuit velocity is enhanced, the gaze is more frequently directed toward the current target location as a consequence of a 25% reduction in the number of catch-up saccades and a 37% reduction in size (for the first saccade). This smoothing is observed at several scales: during the course of single trials, across the set of trials within a session, and over successive sessions. We explain the neurophysiological processes responsible for this combined evolution of saccades and pursuit in the absence of stringent training constraints. More generally, our study shows that the oculomotor system can be used to discover the neural mechanisms underlying the ability to synchronize a motor effector with a dynamic external event. PMID:27683886

  11. Attentional biases in body dysmorphic disorder (BDD): Eye-tracking using the emotional Stroop task.

    PubMed

    Toh, Wei Lin; Castle, David J; Rossell, Susan L

    2017-04-01

    Body dysmorphic disorder (BDD) is characterised by repetitive behaviours and/or mental acts occurring in response to preoccupations with perceived defects or flaws in physical appearance. This study aimed to examine attentional biases in BDD via the emotional Stroop task with two modifications: i) incorporating an eye-tracking paradigm, and ii) employing an obsessive-compulsive disorder (OCD) control group. Twenty-one BDD, 19 OCD and 21 healthy control (HC) participants, who were age-, sex-, and IQ-matched, were included. A card version of the emotional Stroop task was employed based on seven 10-word lists: (i) BDD-positive, (ii) BDD-negative, (iii) OCD-checking, (iv) OCD-washing, (v) general positive, (vi) general threat, and (vii) neutral (as baseline). Participants were asked to read aloud words and word colours consecutively, thereby yielding accuracy and latency scores. Eye-tracking parameters were also measured. Participants with BDD exhibited significant Stroop interference for BDD-negative words relative to HC participants, as shown by extended colour-naming latencies. In contrast, the OCD group did not exhibit Stroop interference for OCD-related nor general threat words. Only mild eye-tracking anomalies were uncovered in clinical groups. Inspection of individual scanning styles and fixation heat maps however revealed that viewing strategies adopted by clinical groups were generally disorganised, with avoidance of certain disorder-relevant words and considerable visual attention devoted to non-salient card regions. The operation of attentional biases to negative disorder-specific words was corroborated in BDD. Future replication studies using other paradigms are vital, given potential ambiguities inherent in emotional Stroop task interpretation. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. The Cognitive Processing of an Educational App with Electroencephalogram and "Eye Tracking"

    ERIC Educational Resources Information Center

    Cuesta-Cambra, Ubaldo; Niño-González, José Ignacio; Rodríguez-Terceño, José

    2017-01-01

    The use of apps in education is becoming more frequent. However, the mechanisms of attention and processing of their contents and their consequences in learning have not been sufficiently studied. The objective of this work is to analyze how information is processed and learned and how visual attention takes place. It also investigates the…

  13. Transfer of Expertise: An Eye Tracking and Think Aloud Study Using Dynamic Medical Visualizations

    ERIC Educational Resources Information Center

    Gegenfurtner, Andreas; Seppanen, Marko

    2013-01-01

    Expertise research has produced mixed results regarding the problem of transfer of expertise. Is expert performance context-bound or can the underlying processes be applied to more general situations? The present study tests whether expert performance and its underlying processes transfer to novel tasks within a domain. A mixed method study using…

  14. Do Gaze Cues in Complex Scenes Capture and Direct the Attention of High Functioning Adolescents with ASD? Evidence from Eye-Tracking

    ERIC Educational Resources Information Center

    Freeth, M.; Chapman, P.; Ropar, D.; Mitchell, P.

    2010-01-01

    Visual fixation patterns whilst viewing complex photographic scenes containing one person were studied in 24 high-functioning adolescents with Autism Spectrum Disorders (ASD) and 24 matched typically developing adolescents. Over two different scene presentation durations both groups spent a large, strikingly similar proportion of their viewing…

  15. Lexical Competition during Second-Language Listening: Sentence Context, but Not Proficiency, Constrains Interference from the Native Lexicon

    ERIC Educational Resources Information Center

    Chambers, Craig G.; Cooke, Hilary

    2009-01-01

    A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…

  16. Spatiotopic coding during dynamic head tilt

    PubMed Central

    Turi, Marco; Burr, David C.

    2016-01-01

    Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding. NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation. PMID:27903636

  17. Studying visual attention using the multiple object tracking paradigm: A tutorial review.

    PubMed

    Meyerhoff, Hauke S; Papenmeier, Frank; Huff, Markus

    2017-07-01

    Human observers are capable of tracking multiple objects among identical distractors based only on their spatiotemporal information. Since the first report of this ability in the seminal work of Pylyshyn and Storm (1988, Spatial Vision, 3, 179-197), multiple object tracking has attracted many researchers. A reason for this is that it is commonly argued that the attentional processes studied with the multiple object tracking paradigm apparently match the attentional processing during real-world tasks such as driving or team sports. We argue that multiple object tracking provides a good means of studying the broader topic of continuous and dynamic visual attention. Indeed, several (partially contradicting) theories of attentive tracking have been proposed within the almost 30 years since its first report, and a large body of research has been conducted to test these theories. With regard to the richness and diversity of this literature, the aim of this tutorial review is to provide researchers who are new to the field of multiple object tracking with an overview of the multiple object tracking paradigm, its basic manipulations, and its links to other paradigms investigating visual attention and working memory. Further, we aim to review current theories of tracking as well as their empirical evidence. Finally, we review the state of the art in the most prominent research fields of multiple object tracking and how this research has helped to understand visual attention in dynamic settings.

  18. Home use of binocular dichoptic video content device for treatment of amblyopia: a pilot study.

    PubMed

    Mezad-Koursh, Daphna; Rosenblatt, Amir; Newman, Hadas; Stolovitch, Chaim

    2018-04-01

    To evaluate the efficacy of the BinoVision home system as measured by improvement of visual acuity in the patient's amblyopic eye. An open-label prospective pilot trial of the system was conducted with amblyopic children aged 4-8 years at the pediatric ophthalmology unit, Tel-Aviv Medical Center, from January 2014 to October 2015. Participants were assigned to the study or sham group for treatment with BinoVision for 8 or 12 weeks. Patients were instructed to watch animated television shows and videos at home using the BinoVision device for 60 minutes, 6 days a week. The BinoVision program incorporates elements at different contrast and brightness levels for both eyes, weak-eye tracking training by superimposed screen images, and weak-eye flicker stimuli with alerting sound manipulations. Patients were examined at 4, 8, 12, 24, and 36 weeks. A total of 27 children were recruited (14 boys), with 19 in the treatment group. Median age was 5 years (range, 4-8 years). Mean visual acuity improved by 0.26 logMAR in the treatment group from baseline to 12 weeks. Visual acuity was improved compared to baseline during all study and follow-up appointments (P < 0.01), with stabilization of visual acuity after cessation of treatment. The sham group completed 4 weeks of sham protocol with no change in visual acuity (P = 0.285). The average compliance rate was 88% ± 16% (range, 50% to 100%) in the treatment group. This pilot trial of 12 weeks of amblyopia treatment with the BinoVision home system demonstrated significant improvement in patients' visual acuity. Copyright © 2018 American Association for Pediatric Ophthalmology and Strabismus. Published by Elsevier Inc. All rights reserved.
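    To put the reported 0.26 logMAR gain in everyday terms, logMAR values can be converted to approximate Snellen fractions via the standard relation (Snellen denominator = 20 × 10^logMAR, with each 0.1 logMAR corresponding to one chart line). The before/after values in the sketch below are hypothetical; only the 0.26 logMAR improvement comes from the study.

    ```python
    def logmar_to_snellen_denominator(logmar, base=20):
        """Convert a logMAR acuity to the denominator x of a base/x Snellen
        fraction. logMAR 0.0 corresponds to 20/20; larger logMAR is worse."""
        return base * 10 ** logmar

    # A hypothetical 0.26 logMAR improvement, e.g. from 0.56 to 0.30 logMAR:
    before = logmar_to_snellen_denominator(0.56)  # roughly 20/73
    after = logmar_to_snellen_denominator(0.30)   # roughly 20/40
    ```

    Seen this way, a 0.26 logMAR change is between two and three lines on a standard acuity chart.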

  19. Visual exploration during locomotion limited by fear of heights.

    PubMed

    Kugler, Günter; Huppert, Doreen; Eckl, Maria; Schneider, Erich; Brandt, Thomas

    2014-01-01

    Visual exploration of the surroundings during locomotion at heights has not yet been investigated in subjects suffering from fear of heights. Eye and head movements were recorded separately in 16 subjects susceptible to fear of heights and in 16 non-susceptible controls while walking on an emergency escape balcony 20 meters above ground level. Participants wore mobile infrared eye-tracking goggles with a head-fixed scene camera and integrated 6-degrees-of-freedom inertial sensors for recording head movements. Video recordings of the subjects were simultaneously made to correlate gaze and gait behavior. Susceptible subjects exhibited a limited visual exploration of the surroundings, particularly of the depth below. Head movements were significantly reduced in all three planes (yaw, pitch, and roll), with fewer vertical head oscillations, whereas total eye movements (saccade amplitudes, frequencies, fixation durations) did not differ from those of controls. However, there was an anisotropy, with a preference for the vertical as opposed to the horizontal direction of saccades. Comparison of eye and head movement histograms and the resulting gaze-in-space revealed a smaller total area of visual exploration, which was mainly directed straight ahead and covered vertically an area from the horizon to the ground in front of the feet. This gaze behavior was associated with a slow, cautious gait. The visual exploration of the surroundings by subjects susceptible to fear of heights differs during locomotion at heights from the earlier investigated behavior of standing still and looking from a balcony: during locomotion, the anisotropy of gaze-in-space favors the vertical direction, as opposed to the horizontal preference during stance. Avoiding looking into the abyss may reduce anxiety in both conditions; exploration of the "vertical strip" in the heading direction is beneficial for visual control of balance and avoidance of obstacles during locomotion.
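    The derived quantity in this record, gaze-in-space, is obtained by combining the separately recorded eye-in-head and head-in-space signals. A minimal sketch of that combination, using the small-angle additive approximation common in mobile eye-tracking analyses (the angle values are illustrative, not data from the study):

    ```python
    def gaze_in_space(eye_in_head, head_in_space):
        """Combine eye-in-head and head-in-space orientation angles (yaw, pitch),
        in degrees, into gaze-in-space using the small-angle additive
        approximation; large combined rotations would need full 3-D rotations."""
        return tuple(e + h for e, h in zip(eye_in_head, head_in_space))

    # Eyes looking 10 deg down in the head while the head pitches 15 deg down
    # gives a gaze direction 25 deg below the space horizontal.
    gaze = gaze_in_space((0.0, -10.0), (0.0, -15.0))  # (yaw, pitch)
    ```

    Histogramming such combined angles over a walk yields exactly the kind of gaze-in-space distribution whose area and anisotropy the study compares between groups.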

  20. Vitamin and mineral deficiencies in the developed world and their effect on the eye and vision.

    PubMed

    Whatham, Andrew; Bartlett, Hannah; Eperjesi, Frank; Blumenthal, Caron; Allen, Jane; Suttle, Catherine; Gaskin, Kevin

    2008-01-01

    Vitamin and mineral deficiencies are common in developing countries, but also occur in developed countries. We review micronutrient deficiencies for the major vitamins A, cobalamin (B12), biotin (vitamin H), and vitamins C and E, as well as the minerals iron and zinc, in the developed world, in terms of their relationship to systemic health and any resulting ocular disease and/or visual dysfunction. A knowledge of these effects is important, as individuals with consequent poor ocular health and reduced visual function may present for ophthalmic care.
