Computational Model-Based Prediction of Human Episodic Memory Performance Based on Eye Movements
NASA Astrophysics Data System (ADS)
Sato, Naoyuki; Yamaguchi, Yoko
Subjects' episodic memory performance is not simply reflected in their eye movements. We use a ‘theta phase coding’ model of the hippocampus to predict subjects' memory performance from their eye movements. The results demonstrate that the model can predict subjects' memory performance. These studies provide a novel approach to computational modeling in the human-machine interface.
Implicit prosody mining based on the human eye image capture technology
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2013-08-01
Eye tracking has become a principal method for analyzing recognition issues in human-computer interaction, and capturing images of the human eye is the key problem in eye tracking. Building on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of implicit prosody mining based on human eye image capture: parameters are extracted from images of the eyes during reading to control and drive prosody generation in speech synthesis, establishing a prosodic model with high simulation accuracy. The duration model is a key issue in prosody generation. For this model, the paper puts forward a new idea: obtaining the gaze duration of the eyes during reading from captured eye images, and synchronizing this duration with the pronunciation duration in speech synthesis. Eye movement during reading is a comprehensive, multi-factor interactive process involving gaze, twitching, and backsight. Therefore, the appropriate information to extract from eye images must be considered, and the gaze regularity of the eyes must be obtained as a reference for modeling. Based on an analysis of three current eye-movement control models and the characteristics of implicit prosody in reading, the relative independence of the text speech-processing system and the eye-movement control system is discussed. It is shown that, under the same level of text familiarity, gaze duration during reading and internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of Chinese is presented to replace previous machine learning and probability forecasting methods, to capture readers' real internal reading rhythm, and to synthesize speech with personalized rhythm. This research enriches human-computer interaction and has practical significance and application prospects for assisted speech interaction for disabled users. Experiments show that implicit prosody mining based on human eye image capture gives the synthesized speech more flexible expression.
Mala, S.; Latha, K.
2014-01-01
Activity recognition is needed in many applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, Differential Evolution (DE), a very efficient evolutionary optimizer, is used to find informative features in eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system analyzes EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification for accurate activity recognition. PMID:25574185
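The entry above names Differential Evolution as the feature selector but gives no implementation detail. The following is a minimal sketch of how DE-based feature selection over EOG features might look, with a k-NN cross-validation score as an assumed fitness function; the authors' actual classifier and DE parameters are not specified.

    # Hypothetical DE feature-selection sketch; the k-NN fitness and all
    # hyperparameters are assumptions, not the authors' exact setup.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    def de_feature_selection(X, y, pop_size=20, F=0.8, CR=0.9, n_gen=50, seed=0):
        rng = np.random.default_rng(seed)
        n_feat = X.shape[1]
        pop = rng.random((pop_size, n_feat))          # continuous genomes in [0, 1]

        def fitness(vec):
            mask = vec > 0.5                          # threshold genome to a feature mask
            if not mask.any():
                return 0.0
            clf = KNeighborsClassifier(n_neighbors=3)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        scores = np.array([fitness(ind) for ind in pop])
        for _ in range(n_gen):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                mutant = np.clip(a + F * (b - c), 0, 1)   # DE/rand/1 mutation
                cross = rng.random(n_feat) < CR           # binomial crossover
                trial = np.where(cross, mutant, pop[i])
                s = fitness(trial)
                if s >= scores[i]:                        # greedy selection
                    pop[i], scores[i] = trial, s
        best = pop[scores.argmax()] > 0.5
        return best, scores.max()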
Iáñez, Eduardo; Azorin, Jose M.; Perez-Vidal, Carlos
2013-01-01
This paper describes a human-computer interface based on electro-oculography (EOG) that allows interaction with a computer using eye movement. The EOG registers the movement of the eye by measuring, through electrodes, the potential difference between the cornea and the retina. A new pair of EOG glasses has been designed to improve the user's comfort and to remove the manual procedure of placing the EOG electrodes around the user's eyes. The interface, which includes the EOG electrodes, uses a new processing algorithm that detects the gaze direction and eye blinks from the EOG signals. The system reliably enabled subjects to control the movement of a dot on a video screen. PMID:23843986
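As a rough illustration of how gaze direction and blinks can be read from two-channel EOG (the entry above does not publish its algorithm), the sketch below applies simple amplitude thresholds; the threshold values and the derivative-free blink rule are assumptions.

    # Minimal threshold-based sketch of gaze/blink detection from two-channel
    # EOG; all thresholds are illustrative assumptions.
    import numpy as np

    def classify_eog(h, v, fs=250, gaze_thr=100e-6, blink_thr=250e-6):
        """h, v: horizontal/vertical EOG traces (volts) for one analysis window."""
        h, v = np.asarray(h), np.asarray(v)
        events = []
        # Blink: a large, brief positive deflection on the vertical channel.
        if v.max() > blink_thr and (v > blink_thr / 2).sum() < 0.4 * fs:
            events.append("blink")
        # Gaze: sustained deflection beyond a calibration threshold.
        if np.median(h) > gaze_thr:
            events.append("right")
        elif np.median(h) < -gaze_thr:
            events.append("left")
        if np.median(v) > gaze_thr and "blink" not in events:
            events.append("up")
        elif np.median(v) < -gaze_thr:
            events.append("down")
        return events or ["center"]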
NASA Astrophysics Data System (ADS)
Kajiwara, Yusuke; Murata, Hiroaki; Kimura, Haruhiko; Abe, Koji
As a communication support tool for cases of amyotrophic lateral sclerosis (ALS), research on eye-gaze human-computer interfaces has been active. However, since voluntary and involuntary eye movements cannot be distinguished in these interfaces, their performance is still not sufficient for practical use. This paper presents a high-performance human-computer interface system that unites high-quality recognition of horizontal directional eye movements and voluntary blinks. The experimental results show that the number of incorrect inputs is decreased by 35.1% relative to an existing system that recognizes horizontal and vertical directional eye movements in addition to voluntary blinks, and that character input is sped up by 17.4% over the existing system.
Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng
2013-08-01
Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
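A compact way to realize the eight-direction classification described above is to quantize the deflection angle of the horizontal/vertical EOG pair into 45° sectors. The sketch below is illustrative only; the baseline-to-peak amplitude estimate and the minimum amplitude are assumptions, not the authors' feature set.

    # Illustrative eight-direction classification from two-channel EOG.
    import numpy as np

    DIRS = ["right", "up-right", "up", "up-left",
            "left", "down-left", "down", "down-right"]

    def direction_from_eog(h, v, min_amp=50e-6):
        h, v = np.asarray(h), np.asarray(v)
        dh = h - h[:25].mean()                   # remove per-window baseline
        dv = v - v[:25].mean()
        i = np.argmax(np.hypot(dh, dv))          # sample of peak deflection
        amp = np.hypot(dh[i], dv[i])
        if amp < min_amp:
            return "none"                        # too small: no saccade detected
        angle = np.degrees(np.arctan2(dv[i], dh[i])) % 360
        return DIRS[int(((angle + 22.5) % 360) // 45)]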
Banerjee, Jayeeta; Majumdar, Dhurjati; Majumdar, Deepti; Pal, Madhu Sudan
2010-06-01
We are experiencing a shift of media from printed paper to the computer screen. This transition is modifying how we read and understand text. It is very difficult to draw conclusions about the suitability of font characters from subjective evaluation alone. The present study evaluates the effect of font type on human cognitive workload during the perception of individual alphabets on a computer screen. Twenty-six young subjects volunteered for this study. Subjects were shown individual characters of different font types while their eye movements were recorded with a binocular eye movement recorder. The results showed that eye movement parameters such as pupil diameter, number of fixations, and fixation duration were lower for the font type Verdana. The present study therefore recommends the font type Verdana for the presentation of individual alphabets on electronic displays in order to reduce cognitive workload.
Evaluation of an eye-pointer interaction device for human-computer interaction.
Cáceres, Enrique; Carrasco, Miguel; Ríos, Sebastián
2018-03-01
Advances in eye-tracking technology have led to better human-computer interaction, and involve controlling a computer without any kind of physical contact. This research describes the transformation of a commercial eye-tracker for use as an alternative peripheral device in human-computer interactions, implementing a pointer that only needs the eye movements of a user facing a computer screen, thus replacing the need to control the software by hand movements. The experiment was performed with 30 test individuals who used the prototype with a set of educational videogames. The results show that, although most of the test subjects would prefer a mouse to control the pointer, the prototype tested has an empirical precision similar to that of the mouse, either when trying to control its movements or when attempting to click on a point of the screen.
Learning rational temporal eye movement strategies.
Hoppe, David; Rothkopf, Constantin A
2016-07-19
During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
System for assisted mobility using eye movements based on electrooculography.
Barea, Rafael; Boquete, Luciano; Mazo, Manuel; López, Elena
2002-12-01
This paper describes an eye-control method based on electrooculography (EOG) to develop a system for assisted mobility. One of its most important features is its modularity, making it adaptable to the particular needs of each user according to the type and degree of handicap involved. An eye model based on the electrooculographic signal is proposed and its validity is studied. Several human-machine interfaces (HMI) based on EOG are discussed, focusing our study on guiding and controlling a wheelchair for disabled people, where the control is actually effected by eye movements within the socket. Different techniques and guidance strategies are then shown, with comments on the advantages and disadvantages of each one. The system consists of a standard electric wheelchair with an on-board computer, sensors, and a graphic user interface run by the computer. This eye-control method can also be applied to handle graphical interfaces, where the eye is used as a computer mouse. Results obtained show that this control technique could be useful in multiple applications, such as mobility and communication aids for handicapped persons.
Real time eye tracking using Kalman extended spatio-temporal context learning
NASA Astrophysics Data System (ADS)
Munir, Farzeen; Minhas, Fayyaz ul Amir Asfar; Jalil, Abdul; Jeon, Moongu
2017-06-01
Real time eye tracking has numerous applications in human-computer interaction, such as controlling a mouse cursor in a computer system, and it is useful for persons with muscular or motion impairments. However, tracking the movement of the eye is complicated by occlusion due to blinking, head movement, screen glare, rapid eye movements, etc. In this work, we present the algorithmic and construction details of a real time eye tracking system. Our proposed system extends spatio-temporal context learning with Kalman filtering. Spatio-temporal context learning offers state-of-the-art accuracy in general object tracking, but its performance suffers under object occlusion. Adding the Kalman filter allows the proposed method to model the dynamics of eye motion and provide robust eye tracking in cases of occlusion. We demonstrate the effectiveness of this tracking technique by controlling the computer cursor in real time with eye movements.
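The following is a minimal sketch of the Kalman filtering component described above: a constant-velocity model that keeps predicting eye position through occluded (blink) frames. The noise covariances and frame rate are illustrative assumptions, not the authors' tuned values.

    # Constant-velocity Kalman filter for bridging occlusions in eye tracking.
    import numpy as np

    dt = 1 / 30                                       # assumed camera frame interval
    F = np.array([[1, 0, dt, 0],                      # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)               # only position is observed
    Q = 1e-2 * np.eye(4)                              # process noise (assumed)
    R = 4.0 * np.eye(2)                               # measurement noise, pixels^2

    def kalman_step(x, P, z=None):
        """One predict/update cycle; pass z=None on occluded (blink) frames."""
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        if z is not None:                             # update only when eye is visible
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P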
A Novel Wearable Forehead EOG Measurement System for Human Computer Interfaces
Heo, Jeong; Yoon, Heenam; Park, Kwang Suk
2017-01-01
Amyotrophic lateral sclerosis (ALS) patients whose voluntary muscles are paralyzed commonly communicate with the outside world using eye movement. There have been many efforts to support this method of communication by tracking or detecting eye movement. An electrooculogram (EOG), an electro-physiological signal, is generated by eye movements and can be measured with electrodes placed around the eye. In this study, we proposed a new practical electrode position on the forehead to measure EOG signals, and we developed a wearable forehead EOG measurement system for use in Human Computer/Machine interfaces (HCIs/HMIs). Four electrodes, including the ground electrode, were placed on the forehead. The two channels were arranged vertically and horizontally, sharing a positive electrode. Additionally, a real-time eye movement classification algorithm was developed based on the characteristics of the forehead EOG. Three applications were employed to evaluate the proposed system: a virtual keyboard using a modified Bremen BCI speller and an automatic sequential row-column scanner, and a drivable power wheelchair. The mean typing speeds of the modified Bremen brain–computer interface (BCI) speller and automatic row-column scanner were 10.81 and 7.74 letters per minute, and the mean classification accuracies were 91.25% and 95.12%, respectively. In the power wheelchair demonstration, the user drove the wheelchair through an 8-shape course without collision with obstacles. PMID:28644398
Chang, Won-Du; Cha, Ho-Seung; Im, Chang-Hwan
2016-01-01
This paper introduces a method to remove the unwanted interdependency between vertical and horizontal eye-movement components in electrooculograms (EOGs). EOGs have been widely used to estimate eye movements without a camera in a variety of human-computer interaction (HCI) applications using pairs of electrodes generally attached either above and below the eye (vertical EOG) or to the left and right of the eyes (horizontal EOG). It has been well documented that the vertical EOG component has less stability than the horizontal EOG one, making accurate estimation of the vertical location of the eyes difficult. To address this issue, an experiment was designed in which ten subjects participated. Visual inspection of the recorded EOG signals showed that the vertical EOG component is highly influenced by horizontal eye movements, whereas the horizontal EOG is rarely affected by vertical eye movements. Moreover, the results showed that this interdependency could be effectively removed by introducing an individual constant value. It is therefore expected that the proposed method can enhance the overall performance of practical EOG-based eye-tracking systems. PMID:26907271
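The correction proposed above reduces to subtracting a scaled copy of the horizontal channel from the vertical one. A minimal sketch, assuming the individual constant is fit by least squares on a calibration block of purely horizontal eye movements:

    import numpy as np

    def fit_crosstalk(h_cal, v_cal):
        """During pure horizontal movements, any vertical deflection is crosstalk."""
        return np.dot(h_cal, v_cal) / np.dot(h_cal, h_cal)   # least-squares slope

    def correct_vertical(h, v, c):
        return v - c * h

    # usage: c = fit_crosstalk(h_cal, v_cal); v_clean = correct_vertical(h, v, c)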
Eye/Brain/Task Testbed And Software
NASA Technical Reports Server (NTRS)
Janiszewski, Thomas; Mainland, Nora; Roden, Joseph C.; Rothenheber, Edward H.; Ryan, Arthur M.; Stokes, James M.
1994-01-01
Eye/brain/task (EBT) testbed records electroencephalograms, movements of eyes, and structures of tasks to provide comprehensive data on neurophysiological experiments. Intended to serve continuing effort to develop means for interactions between human brain waves and computers. Software library associated with testbed provides capabilities to recall collected data, to process data on movements of eyes, to correlate eye-movement data with electroencephalographic data, and to present data graphically. Cognitive processes investigated in ways not previously possible.
EYE MOVEMENT RECORDING AND NONLINEAR DYNAMICS ANALYSIS – THE CASE OF SACCADES
Aştefănoaei, Corina; Pretegiani, Elena; Optican, L.M.; Creangă, Dorina; Rufa, Alessandra
2015-01-01
Evidence of a chaotic trend in eye movement dynamics was examined in a saccadic time series collected from a healthy human subject. Saccades are high-velocity eye movements of very short duration, and their recording is relatively accessible, so the resulting data series can be studied computationally to understand neural processing in a motor system. The aim of this study was to assess the degree of complexity in eye movement dynamics. To do this we analyzed saccadic time series recorded with an infrared-camera eye tracker from a healthy human subject in a special experimental arrangement providing continuous records of eye position, both saccades (eye-shifting movements) and fixations (focusing on regions of interest, with rapid, small fluctuations). The semi-quantitative, non-linear dynamics approach used in this paper consisted of several computational tests derived from chaos theory (power spectrum, state-space portrait and its fractal dimension, Hurst exponent, and largest Lyapunov exponent). A highly complex dynamical trend was found. The largest Lyapunov exponent test suggested bi-stability of the cellular membrane resting potential during the saccadic experiment. PMID:25698889
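Of the chaos-theory tests listed above, the Hurst exponent is the simplest to sketch. Below is an illustrative rescaled-range (R/S) estimate; the window sizes are arbitrary choices, not those of the paper.

    import numpy as np

    def hurst_rs(x, windows=(16, 32, 64, 128, 256)):
        x = np.asarray(x, float)
        used, rs = [], []
        for w in windows:
            if len(x) < 2 * w:
                break
            vals = []
            for i in range(0, len(x) - w + 1, w):
                s = x[i:i + w]
                z = np.cumsum(s - s.mean())          # cumulative deviation profile
                r = z.max() - z.min()                # range
                sd = s.std()
                if sd > 0:
                    vals.append(r / sd)              # rescaled range
            used.append(w)
            rs.append(np.mean(vals))
        # Hurst exponent = slope of log(R/S) against log(window size)
        H, _ = np.polyfit(np.log(used), np.log(rs), 1)
        return H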
Eye Tracking Based Control System for Natural Human-Computer Interaction
Zhang, Xuebai; Liu, Xiaolong; Yuan, Shyan-Ming; Lin, Shu-Fan
2017-01-01
Eye movement can be regarded as a pivotal real-time input medium for human-computer communication, which is especially important for people with physical disability. In order to improve the reliability, mobility, and usability of eye tracking techniques in user-computer dialogue, a novel eye control system integrating both mouse and keyboard functions is proposed in this paper. The proposed system focuses on providing a simple and convenient interactive mode using only the user's eyes. The usage flow of the proposed system is designed to follow natural human habits. Additionally, a magnifier module is proposed to allow accurate operation. In the experiment, two interactive tasks of different difficulty (article searching and multimedia web browsing) were performed to compare the proposed eye control tool with an existing system. Technology Acceptance Model (TAM) measures were used to evaluate the perceived effectiveness of our system. It is demonstrated that the proposed system is very effective with regard to usability and interface design. PMID:29403528
NASA Astrophysics Data System (ADS)
Komogortsev, Oleg V.; Karpov, Alexey; Holland, Corey D.
2012-06-01
The widespread use of computers throughout modern society introduces the necessity for usable and counterfeit-resistant authentication methods to ensure secure access to personal resources such as bank accounts, e-mail, and social media. Current authentication methods require tedious memorization of lengthy pass phrases, are often prone to shoulder-surfing, and may be easily replicated (either by counterfeiting parts of the human body or by guessing an authentication token based on readily available information). This paper describes preliminary work toward a counterfeit-resistant usable eye movement-based (CUE) authentication method. CUE does not require any passwords (improving the memorability aspect of the authentication system), and aims to provide high resistance to spoofing and shoulder-surfing by employing the combined biometric capabilities of two behavioral biometric traits: 1) oculomotor plant characteristics (OPC), which represent the internal, non-visible, anatomical structure of the eye; 2) complex eye movement patterns (CEM), which represent the strategies employed by the brain to guide visual attention. Both OPC and CEM are extracted from the eye movement signal provided by an eye tracking system. Preliminary results indicate that the fusion of OPC and CEM traits is capable of providing a 30% reduction in authentication error when compared to the authentication accuracy of individual traits.
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. Monitors and infrared video cameras measure eye movements without having to encumber the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible. Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that eye movement can predict human perceptual performance, that smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and that common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo
NASA Astrophysics Data System (ADS)
Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu
2005-04-01
We developed an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching a database for the record matching the nystagmus response extracted from the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and are stored in the database. To enhance diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched against that of the observed sequence, and the causes and conditions of BPPV are estimated from the matched simulation conditions. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences from patients with BPPV to show its validity.
Gertz, Hanna; Hilger, Maximilian; Hegele, Mathias; Fiehler, Katja
2016-09-01
Previous studies have shown that beliefs about the human origin of a stimulus are capable of modulating the coupling of perception and action. Such beliefs can be based on top-down recognition of the identity of an actor or bottom-up observation of the behavior of the stimulus. Instructed human agency has been shown to lead to superior tracking performance of a moving dot as compared to instructed computer agency, especially when the dot followed a biological velocity profile and thus matched the predicted movement, whereas a violation of instructed human agency by a nonbiological dot motion impaired oculomotor tracking (Zwickel et al., 2012). This suggests that the instructed agency biases the selection of predictive models on the movement trajectory of the dot motion. The aim of the present fMRI study was to examine the neural correlates of top-down and bottom-up modulations of perception-action couplings by manipulating the instructed agency (human action vs. computer-generated action) and the observable behavior of the stimulus (biological vs. nonbiological velocity profile). To this end, participants performed an oculomotor tracking task in an MRI environment. Oculomotor tracking activated areas of the eye movement network. A right-hemisphere occipito-temporal cluster comprising the motion-sensitive area V5 showed a preference for the biological as compared to the nonbiological velocity profile. Importantly, a mismatch between instructed human agency and a nonbiological velocity profile primarily activated medial-frontal areas comprising the frontal pole, the paracingulate gyrus, and the anterior cingulate gyrus, as well as the cerebellum and the supplementary eye field as part of the eye movement network. This mismatch effect was specific to the instructed human agency and did not occur in conditions with a mismatch between instructed computer agency and a biological velocity profile. Our results support the hypothesis that humans activate a specific predictive model for biological movements based on their own motor expertise. A violation of this predictive model causes costs as the movement needs to be corrected in accordance with incoming (nonbiological) sensory information. Copyright © 2016 Elsevier Inc. All rights reserved.
Ocular attention-sensing interface system
NASA Technical Reports Server (NTRS)
Zaklad, Allen; Glenn, Floyd A., III; Iavecchia, Helene P.; Stokes, James M.
1986-01-01
The purpose of the research was to develop an innovative human-computer interface based on eye movement and voice control. By eliminating a manual interface (keyboard, joystick, etc.), OASIS provides a control mechanism that is natural, efficient, accurate, and low in workload.
Eye-movements and Voice as Interface Modalities to Computer Systems
NASA Astrophysics Data System (ADS)
Farid, Mohsen M.; Murtagh, Fionn D.
2003-03-01
We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interfaces as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as a means of interaction with computers, have been around for a number of years, integrating the vocal interface with the visual interface, in terms of detecting the user's eye movements through an eye-tracking device, is novel and promises to open new horizons for applications where a hand-mouse interface provides little or no apparent support for the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.
Action perception as hypothesis testing.
Donnarumma, Francesco; Costantini, Marcello; Ambrosini, Ettore; Friston, Karl; Pezzulo, Giovanni
2017-04-01
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gao, Pei-pei; Liu, Feng
2016-10-01
With the development of information technology and artificial intelligence, speech synthesis plays a significant role in human-computer interaction. However, the main problem of current speech synthesis techniques is a lack of naturalness and expressiveness, so synthesized speech is not yet close to natural language. Another problem is that human-computer interaction based on speech synthesis is too monotonous to realize a mechanism driven by the user's own reading. This paper reviews the historical development of speech synthesis and summarizes the general process of the technique, pointing out that the prosody generation module is an important part of speech synthesis. Building on further research, using the rules of eye activity during reading to control and drive prosody generation is introduced as a new human-computer interaction method that enriches the synthetic form. The present state of speech synthesis technology is reviewed in detail. On the premise of eye gaze data extraction, a speech synthesis method driven in real time by eye movement signals is proposed that can express the speaker's real speech rhythm: while the reader silently reads corpora, reading information such as the gaze duration per prosodic unit is captured, and a hierarchical prosodic duration model is established to determine the duration parameters of the synthesized speech. Finally, analysis verifies the feasibility of this method.
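A minimal sketch of the central duration idea above: redistribute the synthesized utterance's total duration according to the reader's per-prosodic-unit gaze durations. The proportional-share rule is an assumption, not the paper's hierarchical model.

    def unit_durations(gaze_ms, base_ms):
        """gaze_ms: per-prosodic-unit gaze durations from the eye tracker;
        base_ms: the synthesizer's default per-unit durations."""
        total = sum(base_ms)                              # keep overall speaking time
        shares = [g / sum(gaze_ms) for g in gaze_ms]      # reader's relative rhythm
        return [total * s for s in shares]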
Eye Tracking and Head Movement Detection: A State-of-Art Survey
2013-01-01
Eye-gaze detection and tracking have been an active research field in the past years as it adds convenience to a variety of applications. It is considered a significant untraditional method of human computer interaction. Head movement detection has also received researchers' attention and interest as it has been found to be a simple and effective interaction method. Both technologies are considered the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-art survey for eye tracking and head movement detection methods proposed in the literature. Examples of different fields of applications for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies are also investigated. PMID:27170851
Eye Movements During Everyday Behavior Predict Personality Traits
Hoppe, Sabrina; Loetscher, Tobias; Morey, Stephanie A.; Bulling, Andreas
2018-01-01
Besides allowing us to perceive our surroundings, eye movements are also a window into our mind and a rich source of information on who we are, how we feel, and what we do. Here we show that eye movements during an everyday task predict aspects of our personality. We tracked eye movements of 42 participants while they ran an errand on a university campus and subsequently assessed their personality traits using well-established questionnaires. Using a state-of-the-art machine learning method and a rich set of features encoding different eye movement characteristics, we were able to reliably predict four of the Big Five personality traits (neuroticism, extraversion, agreeableness, conscientiousness) as well as perceptual curiosity only from eye movements. Further analysis revealed new relations between previously neglected eye movement characteristics and personality. Our findings demonstrate a considerable influence of personality on everyday eye movement control, thereby complementing earlier studies in laboratory settings. Improving automatic recognition and interpretation of human social signals is an important endeavor, enabling innovative design of human–computer systems capable of sensing spontaneous natural user behavior to facilitate efficient interaction and personalization. PMID:29713270
Lévy-like diffusion in eye movements during spoken-language comprehension.
Stephen, Damian G; Mirman, Daniel; Magnuson, James S; Dixon, James A
2009-05-01
This study explores the diffusive properties of human eye movements during a language comprehension task. In this task, adults are given auditory instructions to locate named objects on a computer screen. Although it has been conventional to model visual search as standard Brownian diffusion, we find evidence that eye movements are hyperdiffusive. Specifically, we use comparisons of maximum-likelihood fit as well as standard deviation analysis and diffusion entropy analysis to show that visual search during language comprehension exhibits Lévy-like rather than Gaussian diffusion.
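A standard deviation analysis of the kind cited above can be sketched in a few lines: regress the log standard deviation of gaze displacement against log time lag; a slope near 0.5 indicates Brownian diffusion, while larger slopes indicate Lévy-like hyperdiffusion. The lag range below is an arbitrary choice.

    import numpy as np

    def diffusion_exponent(pos, max_lag=200):
        """pos: 1-D gaze coordinate sampled at regular intervals."""
        pos = np.asarray(pos, float)
        lags = np.arange(1, max_lag)
        sd = [np.std(pos[l:] - pos[:-l]) for l in lags]
        slope, _ = np.polyfit(np.log(lags), np.log(sd), 1)
        return slope          # ~0.5 Brownian; >0.5 hyperdiffusive / Levy-like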
Toward statistical modeling of saccadic eye-movement and visual saliency.
Sun, Xiaoshuai; Yao, Hongxun; Ji, Rongrong; Liu, Xian-Ming
2014-11-01
In this paper, we present a unified statistical framework for modeling both saccadic eye movements and visual saliency. By analyzing the statistical properties of human eye fixations on natural images, we found that human attention is sparsely distributed and usually deployed to locations with abundant structural information. These observations inspired us to model saccadic behavior and visual saliency based on super-Gaussian component (SGC) analysis. Our model sequentially obtains SGCs using projection pursuit and generates eye movements by selecting the location with the maximum SGC response. Besides simulating human saccadic behavior, we also demonstrate superior effectiveness and robustness over state-of-the-art methods through extensive experiments on synthetic patterns and human eye fixation benchmarks. Multiple key issues in saliency modeling research, such as individual differences and the effects of scale and blur, are explored in this paper. Based on extensive qualitative and quantitative experimental results, we show the promising potential of statistical approaches for human behavior research.
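As a hedged illustration of the SGC idea above, the sketch below finds a high-kurtosis (super-Gaussian) projection of whitened image patches by crude random search; the authors use gradient-based projection pursuit, so this is a stand-in, not their algorithm.

    import numpy as np

    def sgc_response(patches, n_iter=500, seed=0):
        """patches: (n_samples, dim) whitened image patches."""
        rng = np.random.default_rng(seed)
        best_w, best_k = None, -np.inf
        for _ in range(n_iter):                        # crude random projection pursuit
            w = rng.standard_normal(patches.shape[1])
            w /= np.linalg.norm(w)
            s = patches @ w
            k = np.mean(s**4) - 3 * np.mean(s**2)**2   # excess kurtosis
            if k > best_k:                             # keep most super-Gaussian direction
                best_w, best_k = w, k
        return best_w, best_k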
Singh, Tarkeshwar; Perry, Christopher M; Herter, Troy M
2016-01-26
Robotic and virtual-reality systems offer tremendous potential for improving assessment and rehabilitation of neurological disorders affecting the upper extremity. A key feature of these systems is that visual stimuli are often presented within the same workspace as the hands (i.e., peripersonal space). Integrating video-based remote eye tracking with robotic and virtual-reality systems can provide an additional tool for investigating how cognitive processes influence visuomotor learning and rehabilitation of the upper extremity. However, remote eye tracking systems typically compute ocular kinematics by assuming eye movements are made in a plane with constant depth (e.g. frontal plane). When visual stimuli are presented at variable depths (e.g. transverse plane), eye movements have a vergence component that may influence reliable detection of gaze events (fixations, smooth pursuits and saccades). To our knowledge, there are no available methods to classify gaze events in the transverse plane for monocular remote eye tracking systems. Here we present a geometrical method to compute ocular kinematics from a monocular remote eye tracking system when visual stimuli are presented in the transverse plane. We then use the obtained kinematics to compute velocity-based thresholds that allow us to accurately identify onsets and offsets of fixations, saccades and smooth pursuits. Finally, we validate our algorithm by comparing the gaze events computed by the algorithm with those obtained from the eye-tracking software and manual digitization. Within the transverse plane, our algorithm reliably differentiates saccades from fixations (static visual stimuli) and smooth pursuits from saccades and fixations when visual stimuli are dynamic. The proposed methods provide advancements for examining eye movements in robotic and virtual-reality systems. Our methods can also be used with other video-based or tablet-based systems in which eye movements are performed in a peripersonal plane with variable depth.
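The velocity-threshold classification described above can be condensed as follows; the thresholds are common defaults from the eye-tracking literature, not the authors' validated values, and the vergence geometry is omitted.

    import numpy as np

    def label_gaze(angles_deg, fs, sac_thr=100.0, fix_thr=30.0):
        """angles_deg: gaze angle per sample; returns one label per sample."""
        vel = np.abs(np.gradient(angles_deg)) * fs      # angular velocity, deg/s
        labels = np.where(vel >= sac_thr, "saccade",
                 np.where(vel <= fix_thr, "fixation", "pursuit"))
        return labels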
Eye-head coordination during free exploration in human and cat.
Einhäuser, Wolfgang; Moeller, Gudrun U; Schumann, Frank; Conradt, Jörg; Vockeroth, Johannes; Bartl, Klaus; Schneider, Erich; König, Peter
2009-05-01
Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted to be a consequence of substantially different timescales for head movements, with cats' head movements showing about a 5-fold faster dynamics than humans. For both species, models and laboratory experiments therefore need to account for this rich input dynamic to obtain validity for ecologically realistic settings.
Contrast and assimilation in motion perception and smooth pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-09-01
The analysis of visual motion serves many different functions ranging from object motion perception to the control of self-motion. The perception of visual motion and the oculomotor tracking of a moving object are known to be closely related and are assumed to be controlled by shared brain areas. We compared perceived velocity and the velocity of smooth pursuit eye movements in human observers in a paradigm that required the segmentation of target object motion from context motion. In each trial, a pursuit target and a visual context were independently perturbed simultaneously to briefly increase or decrease in speed. Observers had to accurately track the target and estimate target speed during the perturbation interval. Here we show that the same motion signals are processed in fundamentally different ways for perception and steady-state smooth pursuit eye movements. For the computation of perceived velocity, motion of the context was subtracted from target motion (motion contrast), whereas pursuit velocity was determined by the motion average (motion assimilation). We conclude that the human motion system uses these computations to optimally accomplish different functions: image segmentation for object motion perception and velocity estimation for the control of smooth pursuit eye movements.
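The reported dissociation can be written as two one-line computations; the linear weight below is illustrative, not a fitted value from the paper.

    def perceived_velocity(target, context, w=0.5):
        return target - w * context               # motion contrast (perception)

    def pursuit_velocity(target, context, w=0.5):
        return (1 - w) * target + w * context     # motion assimilation (pursuit)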
Development of a Computer Writing System Based on EOG
López, Alberto; Ferrero, Francisco; Yangüela, David; Álvarez, Constantina; Postolache, Octavian
2017-01-01
The development of a novel computer writing system based on eye movements is introduced herein. A system of these characteristics requires the consideration of three subsystems: (1) A hardware device for the acquisition and transmission of the signals generated by eye movement to the computer; (2) A software application that allows, among other functions, data processing in order to minimize noise and classify signals; and (3) A graphical interface that allows the user to write text easily on the computer screen using eye movements only. This work analyzes these three subsystems and proposes innovative and low cost solutions for each one of them. This computer writing system was tested with 20 users and its efficiency was compared to a traditional virtual keyboard. The results have shown an important reduction in the time spent on writing, which can be very useful, especially for people with severe motor disorders. PMID:28672863
Shinozaki, Takahiro
2018-01-01
Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 characters/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data. PMID:29425248
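A hedged sketch of the HMM recognition stage: one Gaussian HMM per character trained on EOG feature sequences, with recognition by maximum log-likelihood. hmmlearn is assumed as the toolkit, and this plain Gaussian HMM stands in for the paper's DNN-adapted models.

    import numpy as np
    from hmmlearn import hmm

    def train_char_models(train_data, n_states=5):
        """train_data: {char: list of (T_i, n_features) EOG feature arrays}."""
        models = {}
        for ch, seqs in train_data.items():
            X = np.vstack(seqs)
            lengths = [len(s) for s in seqs]
            m = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=20)
            m.fit(X, lengths)                  # Baum-Welch training per character
            models[ch] = m
        return models

    def recognize(models, seq):
        # pick the character model with the highest log-likelihood for seq
        return max(models, key=lambda ch: models[ch].score(seq))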
Learning optimal eye movements to unusual faces
Peterson, Matthew F.; Eckstein, Miguel P.
2014-01-01
Eye movements, which guide the fovea’s high resolution and computational power to relevant areas of the visual scene, are integral to efficient, successful completion of many visual tasks. How humans modify their eye movements through experience with their perceptual environments, and its functional role in learning new tasks, has not been fully investigated. Here, we used a face identification task where only the mouth discriminated exemplars to assess if, how, and when eye movement modulation may mediate learning. By interleaving trials of unconstrained eye movements with trials of forced fixation, we attempted to separate the contributions of eye movements and covert mechanisms to performance improvements. Without instruction, a majority of observers substantially increased accuracy and learned to direct their initial eye movements towards the optimal fixation point. The proximity of an observer’s default face identification eye movement behavior to the new optimal fixation point and the observer’s peripheral processing ability were predictive of performance gains and eye movement learning. After practice in a subsequent condition in which observers were directed to fixate different locations along the face, including the relevant mouth region, all observers learned to make eye movements to the optimal fixation point. In this fully learned state, augmented fixation strategy accounted for 43% of total efficiency improvements while covert mechanisms accounted for the remaining 57%. The findings suggest a critical role for eye movement planning to perceptual learning, and elucidate factors that can predict when and how well an observer can learn a new task with unusual exemplars. PMID:24291712
Online Learners’ Reading Ability Detection Based on Eye-Tracking Sensors
Zhan, Zehui; Zhang, Lei; Mei, Hu; Fong, Patrick S. W.
2016-01-01
The detection of university online learners' reading ability is generally problematic and time-consuming, so eye-tracking sensors have been employed in this study to record temporal and spatial human eye movements. Learners' pupils, blinks, fixations, saccades, and regressions are recognized as primary indicators for detecting reading ability. A computational model is established from the empirical eye-tracking data by applying a multi-feature regularization machine learning mechanism based on a low-rank constraint. The model presents good generalization ability, with an error of only 4.9% over 100 random runs. It has obvious advantages in saving time and improving precision, with only 20 min of testing required to predict an individual learner's reading ability. PMID:27626418
Enhanced Video-Oculography System
NASA Technical Reports Server (NTRS)
Moore, Steven T.; MacDougall, Hamish G.
2009-01-01
A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
Saccadic eye movements analysis as a measure of drug effect on central nervous system function.
Tedeschi, G; Quattrone, A; Bonavita, V
1986-04-01
Peak velocity (PSV) and duration (SD) of horizontal saccadic eye movements are demonstrably under the control of specific brain stem structures. Experimental and clinical evidence suggest the existence of an immediate premotor system for saccade generation located in the paramedian pontine reticular formation (PPRF). Effects on saccadic eye movements have been studied in normal volunteers with barbiturates, benzodiazepines, amphetamine and ethanol. On two occasions computer analysis of PSV, SD, saccade reaction time (SRT) and saccade accuracy (SA) was carried out in comparison with more traditional methods of assessment of human psychomotor performance like choice reaction time (CRT) and critical flicker fusion threshold (CFFT). The computer system proved to be a highly sensitive and objective method for measuring drug effect on central nervous system (CNS) function. It allows almost continuous sampling of data and appears to be particularly suitable for studying rapidly changing drug effects on the CNS.
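Extracting the two reported measures, peak saccadic velocity (PSV) and saccade duration (SD), from an eye-position trace can be sketched as below; the 30 deg/s onset/offset threshold is a conventional choice, not taken from the paper.

    import numpy as np

    def saccade_metrics(pos_deg, fs, vel_thr=30.0):
        """pos_deg: horizontal eye position (deg); fs: sampling rate (Hz)."""
        vel = np.abs(np.gradient(pos_deg)) * fs       # angular velocity, deg/s
        above = np.flatnonzero(vel > vel_thr)         # samples inside the saccade
        if above.size == 0:
            return None                               # no saccade in this trace
        duration_ms = (above[-1] - above[0] + 1) / fs * 1000
        return vel.max(), duration_ms                 # (PSV, SD)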
Adaptive control for eye-gaze input system
NASA Astrophysics Data System (ADS)
Zhao, Qijie; Tu, Dawei; Yin, Hairong
2004-01-01
The characteristics of vision-based human-computer interaction systems are analyzed, along with current practical applications and their limiting factors, and information-processing methods are put forward. In order to make communication flexible and spontaneous, an algorithm for adaptive control of the user's head movement has been designed, and event-based methods and an object-oriented computer language are used to develop the system software. Experimental testing showed that, under the given conditions, these methods and algorithms can meet the needs of the HCI.
Demšar, Urška; Çöltekin, Arzu
2017-01-01
Eye movements provide insights into what people pay attention to, and therefore are commonly included in a variety of human-computer interaction studies. Eye movement recording devices (eye trackers) produce gaze trajectories, that is, sequences of gaze locations on the screen. Despite recent technological developments that have enabled more affordable hardware, gaze data are still costly and time-consuming to collect, so some propose using mouse movements instead, which are easy to collect automatically and on a large scale. If and how these two movement types are linked, however, is less clear and highly debated. We address this problem in two ways. First, we introduce a new movement analytics methodology to quantify the level of dynamic interaction between the gaze and the mouse pointer on the screen. Our method uses a volumetric representation of movement, the space-time densities, which allows us to calculate interaction levels between two physically different types of movement. We describe the method and compare the results with existing dynamic interaction methods from movement ecology. The sensitivity to method parameters is evaluated on simulated trajectories where we can control interaction levels. Second, we perform an experiment with eye and mouse tracking to generate real data with real levels of interaction, to apply and test our new methodology on a real case. Further, as our experiment task mimics route-tracing when using a map, it is more than a data-collection exercise: it simultaneously allows us to investigate the actual connection between the eye and the mouse. We find that there seems to be a natural coupling when the eyes are not under conscious control, but that this coupling breaks down when participants are instructed to move their eyes intentionally. Based on these observations, we tentatively suggest that for natural tracing tasks, mouse tracking could potentially provide similar information as eye tracking and therefore be used as a proxy for attention. However, more research is needed to confirm this. PMID:28777822
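The authors' interaction measure is built on volumetric space-time densities; the sketch below instead uses a much simpler proximity-based dynamic-interaction index from movement ecology (the share of time-matched samples within a distance threshold) on fake trajectories, just to make the notion of gaze-mouse coupling concrete. The threshold and data are invented.

```python
# Simplified stand-in for a dynamic-interaction measure between gaze and mouse:
# fraction of simultaneous samples where the two pointers are close on screen.
import numpy as np

rng = np.random.default_rng(1)
n = 2000                                                 # time-aligned samples
gaze = np.cumsum(rng.normal(size=(n, 2)), axis=0)        # random-walk gaze path
mouse = gaze + rng.normal(scale=25.0, size=(n, 2))       # mouse loosely follows

def proximity_index(a, b, delta=50.0):
    """Share of time-matched fixes closer than delta (screen px, assumed)."""
    d = np.linalg.norm(a - b, axis=1)
    return (d < delta).mean()

print(f"coupled:   {proximity_index(gaze, mouse):.2f}")
print(f"decoupled: {proximity_index(gaze, rng.permutation(mouse)):.2f}")
```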
Han, Ying; Ciuffreda, Kenneth J; Selenow, Arkady; Ali, Steven R
2003-04-01
To assess dynamic interactions of eye and head movements during return-sweep saccades (RSS) when reading with single-vision (SVL) versus progressive-addition (PAL) lenses in a simulated computer-based business environment. Horizontal eye and head movements were recorded objectively and simultaneously at a rate of 60 Hz during reading of single-page (SP; 14 degrees horizontal [H]) and double-page (DP; 37 degrees H) formats at 60 cm with binocular viewing. Subjects included 11 individuals with normal presbyopic vision aged 45 to 71 years selected by convenience sampling from a clinic population. Reading was performed with three types of spectacle lenses with a different clear near field of view (FOV): a SVL (60 degrees H clear FOV), a PAL-I with a relatively wide intermediate zone (7.85 mm; 18 degrees H clear FOV), and a PAL-II with a relatively narrow intermediate zone (5.60 mm; 13 degrees H clear FOV). Eye movements were initiated before head movements in the SP condition, and the reverse was found in the DP condition, with all three lens types. Duration of eye movements increased as the zone of clear vision decreased in the SP condition, and they were longer with the PALs than with the SVL in the DP condition. Gaze stabilization occurred later with the PALs than with the SVL in both the SP and DP conditions. The duration of head movements was longer with the PAL-II than with the SVL in both the SP and DP conditions. Eye movement peak velocity was greater with the SVL than the PALs in the DP condition. Eye movement and head movement strategies and timing were contingent on viewing conditions. The longer eye movement duration and gaze-stabilization times suggested that additional eye movements were needed to locate the clear-vision zone and commence reading after the RSS. Head movements with PALs for the SP condition were similarly optically induced. These eye movement and head movement results may contribute to the reduced reading rate and related symptoms reported by some PAL wearers. The dynamic interactions of eye movements and head movements during reading with the PALs appear to be a sensitive indicator of the effect of lens optical design parameters on overall reading performance, because the movements can discriminate between SVL and PAL designs and at times even between PALs.
A Theory of Eye Movements during Target Acquisition
ERIC Educational Resources Information Center
Zelinsky, Gregory J.
2008-01-01
The gaze movements accompanying target localization were examined via human observers and a computational model (target acquisition model [TAM]). Search contexts ranged from fully realistic scenes to toys in a crib to Os and Qs, and manipulations included set size, target eccentricity, and target-distractor similarity. Observers and the model…
Video-Based Eye Tracking to Detect the Attention Shift: A Computer Classroom Context-Aware System
ERIC Educational Resources Information Center
Kuo, Yung-Lung; Lee, Jiann-Shu; Hsieh, Min-Chai
2014-01-01
Eye and head movements are evoked in response to obvious visual attention shifts. However, there has been little progress on the causes of absent-mindedness so far. The paper proposes an attention-awareness system that captures the conditions of eye-gaze and head-pose interaction under various attentional switches in a computer classroom.…
Eye movement-invariant representations in the human visual system.
Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L
2017-01-01
During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
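A toy illustration of the similarity-ratio logic on synthetic voxel time courses; the similarity metric (Pearson correlation) and signal model are assumptions, not the study's analysis code.

```python
# Within-condition repeat similarity (fixation vs fixation) divided by
# between-condition similarity (fixation vs free viewing), on fake voxels:
# an eye-movement-sensitive voxel yields a higher ratio than an invariant one.
import numpy as np

rng = np.random.default_rng(2)
T = 300                                     # time points of the movie
stim = rng.normal(size=T)                   # stimulus-driven signal

def voxel(resp, eye_sensitivity, eye_trace):
    return resp + eye_sensitivity * eye_trace + 0.3 * rng.normal(size=T)

eyes_free = rng.normal(size=T)              # eye-position trace, free viewing
for name, sens in [("early visual (eye-sensitive)", 1.5),
                   ("ventral temporal (invariant)", 0.1)]:
    fix1, fix2 = voxel(stim, 0.0, 0), voxel(stim, 0.0, 0)   # steady fixation
    free = voxel(stim, sens, eyes_free)                     # free viewing
    within = np.corrcoef(fix1, fix2)[0, 1]
    between = np.corrcoef(fix1, free)[0, 1]
    print(f"{name}: within/between ratio = {within / between:.2f}")
```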
Binocular Eye Movement Control and Motion Perception: What Is Being Tracked?
van der Steen, Johannes; Dits, Joyce
2012-01-01
Purpose. We investigated under what conditions humans can make independent slow phase eye movements. The ability to make independent movements of the two eyes generally is attributed to few specialized lateral eyed animal species, for example chameleons. In our study, we showed that humans also can move the eyes in different directions. To maintain binocular retinal correspondence independent slow phase movements of each eye are produced. Methods. We used the scleral search coil method to measure binocular eye movements in response to dichoptically viewed visual stimuli oscillating in orthogonal direction. Results. Correlated stimuli led to orthogonal slow eye movements, while the binocularly perceived motion was the vector sum of the motion presented to each eye. The importance of binocular fusion on independency of the movements of the two eyes was investigated with anti-correlated stimuli. The perceived global motion pattern of anti-correlated dichoptic stimuli was perceived as an oblique oscillatory motion, as well as resulted in a conjugate oblique motion of the eyes. Conclusions. We propose that the ability to make independent slow phase eye movements in humans is used to maintain binocular retinal correspondence. Eye-of-origin and binocular information are used during the processing of binocular visual information, and it is decided at an early stage whether binocular or monocular motion information and independent slow phase eye movements of each eye are produced during binocular tracking. PMID:22997286
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. The combined system is a highly effective human-machine interface that detects head movement from changes in the positions and number of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system catches images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, a computer program locates the center point of each pupil in the images and records the moving traces and pupil diameters. In the head-gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera placed in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.
Visual Data Mining: An Exploratory Approach to Analyzing Temporal Patterns of Eye Movements
ERIC Educational Resources Information Center
Yu, Chen; Yurovsky, Daniel; Xu, Tian
2012-01-01
Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on…
Decoding Saccadic Directions Using Epidural ECoG in Non-Human Primates.
Lee, Jeyeon; Choi, Hoseok; Lee, Seho; Cho, Baek Hwan; Ahn, Kyoung Ha; Kim, In Young; Lee, Kyoung Min; Jang, Dong Pyo
2017-08-01
A brain-computer interface (BCI) can be used to restore some communication as an alternative interface for patients suffering from locked-in syndrome. However, most BCI systems are based on SSVEP, P300, or motor imagery, and a diversity of BCI protocols would be needed for various types of patients. In this paper, we trained the choice saccade (CS) task in 2 non-human primate monkeys and recorded the brain signal using an epidural electrocorticogram (eECoG) to predict eye movement direction. We successfully predicted the direction of the upcoming eye movement using a support vector machine (SVM) with the brain signals after the directional cue onset and before the saccade execution. The mean accuracies were 80% for 2 directions and 43% for 4 directions. We also quantified the spatial-spectro-temporal contribution ratio using SVM recursive feature elimination (RFE). The channels over the frontal eye field (FEF), supplementary eye field (SEF), and superior parietal lobule (SPL) area were dominantly used for classification. The α-band in the spectral domain and the time bins just after the directional cue onset and just before the saccadic execution were mainly useful for prediction. A saccade based BCI paradigm can be projected in the 2D space, and will hopefully provide an intuitive and convenient communication platform for users. © 2017 The Korean Academy of Medical Sciences.
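A hedged sketch of this decoding pipeline with scikit-learn: a linear SVM classifies saccade direction from band-power features, and SVM-RFE ranks feature contributions. The synthetic features stand in for eECoG channel/band/time-bin powers.

```python
# Linear-SVM decoding of saccade direction plus SVM-RFE feature ranking,
# on synthetic stand-ins for epidural ECoG band-power features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_feat = 200, 32        # trials x (channel x band x time-bin) features
y = rng.integers(0, 2, n_trials)  # 2 saccade directions
X = rng.normal(size=(n_trials, n_feat))
X[:, :4] += 1.2 * y[:, None]      # a few informative "FEF/SEF alpha" features

clf = SVC(kernel="linear")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

rfe = RFE(estimator=SVC(kernel="linear"), n_features_to_select=4).fit(X, y)
print("top-ranked features:", np.where(rfe.support_)[0])
```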
Eye vs. Text Movement: Which Technique Leads to Faster Reading Comprehension?
ERIC Educational Resources Information Center
Abdellah, Antar Solhy
2009-01-01
Eye fixation is a frequent problem that faces foreign language learners and hinders the flow of their reading comprehension. Although students are usually advised to read fast/skim to overcome this problem, eye fixation persists. The present study investigates the effect of using a paper-based program as compared to a computer-based software in…
On the Visual Input Driving Human Smooth-Pursuit Eye Movements
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean
1996-01-01
Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.
Comparison of ANN and SVM for classification of eye movements in EOG signals
NASA Astrophysics Data System (ADS)
Qi, Lim Jia; Alias, Norma
2018-03-01
Nowadays, the electrooculogram is regarded as one of the most important biomedical signals for measuring and analyzing eye-movement patterns, and it is therefore helpful in designing EOG-based human-computer interfaces (HCI). In this research, electrooculography (EOG) data were obtained from five volunteers. The EOG data were then preprocessed before feature extraction methods were employed to further reduce the dimensionality of the data. Three feature extraction approaches were put forward, namely statistical parameters, autoregressive (AR) coefficients using the Burg method, and power spectral density (PSD) using the Yule-Walker method. These features then served as inputs to both an artificial neural network (ANN) and a support vector machine (SVM). The performance of the combinations of different feature extraction methods and classifiers is presented and analyzed. It was found that statistical parameters + SVM achieved the highest classification accuracy of 69.75%.
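A sketch of the three feature families on a synthetic EOG epoch, feeding an SVM and a small neural network. The Burg and Yule-Walker AR fits are taken from statsmodels; the epoch model, AR order, and classifier settings are assumptions rather than the paper's pipeline.

```python
# Statistical, Burg-AR, and Yule-Walker features from synthetic EOG epochs,
# classified with an SVM and a small MLP as stand-ins for the paper's SVM/ANN.
import numpy as np
from statsmodels.regression.linear_model import burg, yule_walker
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def epoch_features(x):
    stats = [x.mean(), x.std(), x.min(), x.max()]          # statistical params
    ar_burg, _ = burg(x, order=4)                          # Burg AR coefficients
    ar_yw, sigma = yule_walker(x, order=4)                 # Yule-Walker AR fit
    return np.concatenate([stats, ar_burg, ar_yw, [sigma]])

def make_epoch(f):
    # Two fake eye-movement classes differing in dominant frequency.
    t = np.linspace(0, 1, 256)
    return np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=t.size)

X = np.array([epoch_features(make_epoch(f)) for f in [3] * 60 + [8] * 60])
y = np.array([0] * 60 + [1] * 60)
for clf in (SVC(), MLPClassifier(max_iter=2000)):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())
```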
Rhesus Monkeys Behave As If They Perceive the Duncker Illusion
Zivotofsky, A. Z.; Goldberg, M. E.; Powell, K. D.
2008-01-01
The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make. PMID:16102233
Research on wheelchair robot control system based on EOG
NASA Astrophysics Data System (ADS)
Xu, Wang; Chen, Naijian; Han, Xiangdong; Sun, Jianbo
2018-04-01
The paper describes an intelligent wheelchair control system based on EOG that can help disabled people improve their ability to live independently. The system acquires the EOG signal from the user, detects the number of blinks and the direction of glances, and then sends commands to the wheelchair robot via RS-232 to control it. The wheelchair robot control system based on EOG combines EOG signal processing with human-computer interaction technology, achieving control of the wheelchair robot through conscious eye movements.
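A conceptual sketch of such a control loop with pyserial. The blink-count mapping, command bytes, port name, and baud rate are invented placeholders; the paper does not specify its protocol.

```python
# Mapping detected EOG events to wheelchair commands over an RS-232 link.
# Everything protocol-specific here is a hypothetical placeholder.
import serial  # pyserial

COMMANDS = {"forward": b"F", "left": b"L", "right": b"R", "stop": b"S"}

def eog_to_command(n_blinks, glance_direction):
    """Map detected EOG events to a wheelchair command (assumed mapping)."""
    if n_blinks >= 2:
        return COMMANDS["stop"]           # double blink = emergency stop
    return COMMANDS.get(glance_direction, COMMANDS["forward"])

port = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # RS-232 link
port.write(eog_to_command(n_blinks=0, glance_direction="left"))
port.close()
```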
Sensing Passive Eye Response to Impact Induced Head Acceleration Using MEMS IMUs.
Meng, Yuan; Bottenfield, Brent; Bolding, Mark; Liu, Lei; Adams, Mark L
2018-02-01
The eye may act as a surrogate for the brain in response to head acceleration during an impact. In this paper, passive eye movements in a dynamic system are sensed by microelectromechanical systems (MEMS) inertial measurement units (IMUs). The technique is validated using a three-dimensional printed, scaled human skull model and on human volunteers by performing drop-and-impact experiments with ribbon-style flexible printed circuit board IMUs inserted in the eyes and reference IMUs on the heads. Data are captured by a microcontroller unit and processed using data fusion. Displacements are thus estimated and match the measured parameters. Relative accelerations and displacements of the eye with respect to the head are computed, indicating the influence of concussion-causing impacts.
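A minimal sketch of the relative-displacement estimate: subtract the reference (head) IMU acceleration from the eye IMU acceleration and integrate twice. Real pipelines add orientation fusion and drift correction, both omitted here; the impact pulse is synthetic.

```python
# Eye-relative-to-head displacement from two acceleration traces by double
# integration (synthetic Gaussian impact pulses, one axis only).
import numpy as np
from scipy.integrate import cumulative_trapezoid

fs = 1000.0
t = np.arange(0, 0.2, 1 / fs)
head_acc = 50.0 * np.exp(-((t - 0.05) / 0.01) ** 2)      # impact pulse (m/s^2)
eye_acc = 50.0 * np.exp(-((t - 0.053) / 0.012) ** 2)     # lagged eye response

rel_acc = eye_acc - head_acc                              # eye relative to head
rel_vel = cumulative_trapezoid(rel_acc, t, initial=0.0)
rel_disp = cumulative_trapezoid(rel_vel, t, initial=0.0)
print(f"peak relative displacement: {1e3 * np.abs(rel_disp).max():.2f} mm")
```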
Video-based eye tracking for neuropsychiatric assessment.
Adhikari, Sam; Stark, David E
2017-01-01
This paper presents a video-based eye-tracking method, ideally deployed via a mobile device or laptop-based webcam, as a tool for measuring brain function. Eye movements and pupillary motility are tightly regulated by brain circuits, are subtly perturbed by many disease states, and are measurable using video-based methods. Quantitative measurement of eye movement by readily available webcams may enable early detection and diagnosis, as well as remote/serial monitoring, of neurological and neuropsychiatric disorders. We successfully extracted computational and semantic features for 14 testing sessions, comprising 42 individual video blocks and approximately 17,000 image frames generated across several days of testing. Here, we demonstrate the feasibility of collecting video-based eye-tracking data from a standard webcam in order to assess psychomotor function. Furthermore, we were able to demonstrate through systematic analysis of this data set that eye-tracking features (in particular, radial and tangential variance on a circular visual-tracking paradigm) predict performance on well-validated psychomotor tests. © 2017 New York Academy of Sciences.
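The two predictive features named above can be read as error along the radius versus error around the circle. Below is a sketch of that decomposition on synthetic gaze samples; this is one plausible interpretation of the terms, not the authors' definition.

```python
# Radial and tangential variance of gaze on a circular tracking paradigm:
# radial = variance of gaze distance from the circle, tangential = variance
# of arc-length lag/lead relative to the moving target.
import numpy as np

rng = np.random.default_rng(5)
n, R = 500, 200.0                      # samples, target-circle radius (px)
theta = np.linspace(0, 4 * np.pi, n)   # target sweeps the circle twice
gaze = (R + rng.normal(scale=8, size=n))[:, None] * \
       np.c_[np.cos(theta + rng.normal(scale=0.03, size=n)),
             np.sin(theta + rng.normal(scale=0.03, size=n))]

r = np.linalg.norm(gaze, axis=1)                    # gaze distance from center
phi = np.unwrap(np.arctan2(gaze[:, 1], gaze[:, 0])) # gaze angle around circle
radial_var = np.var(r - R)                          # error along the radius
tangential_var = np.var(R * (phi - theta))          # arc-length tracking error
print(f"radial var: {radial_var:.1f} px^2, tangential var: {tangential_var:.1f} px^2")
```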
Signal-dependent noise determines motor planning
NASA Astrophysics Data System (ADS)
Harris, Christopher M.; Wolpert, Daniel M.
1998-08-01
When we make saccadic eye movements or goal-directed arm movements, there is an infinite number of possible trajectories that the eye or arm could take to reach the target. However, humans show highly stereotyped trajectories in which velocity profiles of both the eye and hand are smooth and symmetric for brief movements. Here we present a unifying theory of eye and arm movements based on the single physiological assumption that the neural control signals are corrupted by noise whose variance increases with the size of the control signal. We propose that in the presence of such signal-dependent noise, the shape of a trajectory is selected to minimize the variance of the final eye or arm position. This minimum-variance theory accurately predicts the trajectories of both saccades and arm movements and the speed-accuracy trade-off described by Fitts' law. These profiles are robust to changes in the dynamics of the eye or arm, as found empirically. Moreover, the relation between path curvature and hand velocity during drawing movements reproduces the empirical `two-thirds power law'. This theory provides a simple and powerful unifying perspective for both eye and arm movement control.
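A numeric sketch of the minimum-variance principle, using a toy second-order plant rather than a physiological eye model. With command noise whose standard deviation scales with the command, endpoint-position variance is proportional to sum_t w_t u_t^2, where w_t measures how noise injected at time t propagates to position during a post-movement hold window; minimizing this subject to reaching the target with zero final velocity is a small equality-constrained quadratic program. All constants are assumptions.

```python
# Minimum-variance command for a toy second-order plant under
# signal-dependent noise, solved in closed form via Lagrange multipliers.
import numpy as np

dt, t1, t2 = 0.001, 0.224, 0.013          # toy plant time constants (assumed)
A = np.array([[1.0, dt],
              [-dt / (t1 * t2), 1.0 - dt * (t1 + t2) / (t1 * t2)]])
Bv = np.array([0.0, dt])
n_move, n_hold, target = 50, 50, 10.0     # movement/hold samples, target (deg)

n = n_move + n_hold
# eff[t, f] = (position, velocity) at time f caused by a unit command at time t
eff = np.zeros((n_move, n + 1, 2))
for t in range(n_move):
    s = Bv.copy()
    for f in range(t + 1, n + 1):
        eff[t, f] = s
        s = A @ s
pos, vel = eff[:, :, 0], eff[:, :, 1]

w = (pos[:, n_move:] ** 2).sum(axis=1)             # noise weight of each command
Bc = np.vstack([pos[:, n_move], vel[:, n_move]])   # end-of-movement constraint rows
b = np.array([target, 0.0])                        # reach target, zero velocity
u = (Bc.T @ np.linalg.solve((Bc / w) @ Bc.T, b)) / w   # closed-form QP solution
print("endpoint:", pos[:, n_move] @ u)
print("command peaks at step:", int(np.argmax(np.abs(u))), "of", n_move)
```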
Eye movement analysis of reading from computer displays, eReaders and printed books.
Zambarbieri, Daniela; Carniglia, Elena
2012-09-01
To compare eye movements during silent reading of three eBooks and a printed book. The three different eReading tools were a desktop PC, iPad tablet and Kindle eReader. Video-oculographic technology was used for recording eye movements. In the case of reading from the computer display the recordings were made by a video camera placed below the computer screen, whereas for reading from the iPad tablet, eReader and printed book the recording system was worn by the subject and had two cameras: one for recording the movement of the eyes and the other for recording the scene in front of the subject. Data analysis provided quantitative information in terms of number of fixations, their duration, and the direction of the movement, the latter to distinguish between fixations and regressions. Mean fixation duration was different only in reading from the computer display, and was similar for the Tablet, eReader and printed book. The percentage of regressions with respect to the total amount of fixations was comparable for eReading tools and the printed book. The analysis of eye movements during reading an eBook from different eReading tools suggests that subjects' reading behaviour is similar to reading from a printed book. © 2012 The College of Optometrists.
Plöchl, Michael; Ossandón, José P.; König, Peter
2012-01-01
Eye movements introduce large artifacts to electroencephalographic recordings (EEG) and thus render data analysis difficult or even impossible. Trials contaminated by eye movement and blink artifacts have to be discarded, hence in standard EEG-paradigms subjects are required to fixate on the screen. To overcome this restriction, several correction methods including regression and blind source separation have been proposed. Yet, there is no automated standard procedure established. By simultaneously recording eye movements and 64-channel-EEG during a guided eye movement paradigm, we investigate and review the properties of eye movement artifacts, including corneo-retinal dipole changes, saccadic spike potentials and eyelid artifacts, and study their interrelations during different types of eye- and eyelid movements. In concordance with earlier studies our results confirm that these artifacts arise from different independent sources and that depending on electrode site, gaze direction, and choice of reference these sources contribute differently to the measured signal. We assess the respective implications for artifact correction methods and therefore compare the performance of two prominent approaches, namely linear regression and independent component analysis (ICA). We show and discuss that due to the independence of eye artifact sources, regression-based correction methods inevitably over- or under-correct individual artifact components, while ICA is in principle suited to address such mixtures of different types of artifacts. Finally, we propose an algorithm, which uses eye tracker information to objectively identify eye-artifact related ICA-components (ICs) in an automated manner. In the data presented here, the algorithm performed very similarly to human experts when those were given both the topographies of the ICs and their respective activations in a large number of trials. Moreover, it performed more reliably and almost twice as effectively as human experts when those had to base their decision on IC topographies only. Furthermore, a receiver operating characteristic (ROC) analysis demonstrated an optimal balance of false positives and false negatives at an area under curve (AUC) of more than 0.99. Removing the automatically detected ICs from the data resulted in removal or substantial suppression of ocular artifacts including microsaccadic spike potentials, while the relevant neural signal remained unaffected. In conclusion, the present work aims at a better understanding of individual eye movement artifacts, their interrelations and the respective implications for eye artifact correction. Additionally, the proposed ICA-procedure provides a tool for optimized detection and correction of eye movement-related artifact components. PMID:23087632
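A minimal sketch of the core idea, eye-tracker-informed selection of artifact ICs, using scikit-learn's FastICA on synthetic two-source data and a simple correlation threshold in place of the authors' 64-channel pipeline and detection algorithm.

```python
# ICA-based removal of components that correlate with a simultaneously
# recorded eye-movement trace (synthetic data, assumed |r| > 0.7 threshold).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
T = 5000
neural = np.sin(2 * np.pi * 10 * np.arange(T) / 500.0)      # 10 Hz "EEG" source
eye = np.cumsum(rng.normal(size=T))                         # eye-tracker trace
eye -= eye.mean()

mixing = rng.normal(size=(8, 2))                            # 8 "channels"
X = np.c_[neural, eye] @ mixing.T + 0.05 * rng.normal(size=(T, 8))

ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(X)                                    # component activations

bad = [i for i in range(S.shape[1])
       if abs(np.corrcoef(S[:, i], eye)[0, 1]) > 0.7]       # eye-related ICs
S_clean = S.copy()
S_clean[:, bad] = 0.0
X_clean = ica.inverse_transform(S_clean)                    # cleaned channels
print("removed components:", bad)
```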
Corsi-Cabrera, María; Velasco, Francisco; Del Río-Portilla, Yolanda; Armony, Jorge L; Trejo-Martínez, David; Guevara, Miguel A; Velasco, Ana L
2016-10-01
The amygdaloid complex plays a crucial role in processing emotional signals and in the formation of emotional memories. Neuroimaging studies have shown human amygdala activation during rapid eye movement sleep (REM). Stereotactically implanted electrodes for presurgical evaluation in epileptic patients provide a unique opportunity to directly record amygdala activity. The present study analysed amygdala activity associated with REM sleep eye movements on the millisecond scale. We propose that phasic activation associated with rapid eye movements may provide the amygdala with endogenous excitation during REM sleep. Standard polysomnography and stereo-electroencephalographic (SEEG) activity in the left amygdala were recorded simultaneously during spontaneous sleep in four patients. Time-frequency analysis and absolute power of gamma activity were obtained for 250 ms time windows preceding and following eye movement onset in REM sleep, and in spontaneous waking eye movements in the dark. Absolute power of the 44-48 Hz band increased significantly during the 250 ms time window after REM sleep rapid eye movement onset, but not during waking eye movements. Transient activation of the amygdala provides physiological support for the proposed participation of the amygdala in emotional expression, in the emotional content of dreams and for the reactivation and consolidation of emotional memories during REM sleep, as well as for next-day emotional regulation, and its possible role in the bidirectional interaction between REM sleep and such sleep disorders as nightmares, anxiety and post-traumatic sleep disorder. These results provide unique, direct evidence of increased activation of the human amygdala time-locked to REM sleep rapid eye movements. © 2016 European Sleep Research Society.
A Single-Channel EOG-Based Speller.
He, Shenghong; Li, Yuanqing
2017-11-01
Electrooculography (EOG) signals, which can be used to infer the intentions of a user based on eye movements, are widely used in human-computer interface (HCI) systems. Most existing EOG-based HCI systems incorporate a limited number of commands because they generally associate different commands with a few different types of eye movements, such as looking up, down, left, or right. This paper presents a novel single-channel EOG-based HCI that allows users to spell asynchronously by only blinking. Forty buttons corresponding to 40 characters displayed to the user via a graphical user interface are intensified in a random order. To select a button, the user must blink his/her eyes in synchrony as the target button is flashed. Two data processing procedures, specifically support vector machine (SVM) classification and waveform detection, are combined to detect eye blinks. During detection, we simultaneously feed the feature vectors extracted from the ongoing EOG signal into the SVM classification and waveform detection modules. Decisions are made based on the results of the SVM classification and waveform detection. Three online experiments were conducted with eight healthy subjects. We achieved an average accuracy of 94.4% and a response time of 4.14 s for selecting a character in synchronous mode, as well as an average accuracy of 93.43% and a false positive rate of 0.03/min in the idle state in asynchronous mode. The experimental results, therefore, demonstrated the effectiveness of this single-channel EOG-based speller.
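A conceptual sketch of the two-stage decision described above: an SVM vote and a waveform (template-correlation) check must both pass before a blink is accepted. The template, features, and thresholds here are invented; the paper's actual feature vectors and decision rule are not reproduced.

```python
# Combined SVM + waveform-detection rule for accepting a blink in a single
# EOG window (synthetic training data and an idealized blink template).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(8)
t = np.linspace(0, 0.4, 100)
template = np.exp(-((t - 0.2) / 0.05) ** 2)       # idealized blink waveform

def make_window(is_blink):
    x = 0.3 * rng.normal(size=t.size)
    return x + template if is_blink else x

X_train = np.array([make_window(i % 2 == 1) for i in range(200)])
y_train = np.arange(200) % 2
svm = SVC().fit(X_train, y_train)

def detect_blink(window, r_min=0.5):
    """Accept only if SVM and waveform correlation agree (assumed rule)."""
    svm_says = svm.predict(window[None, :])[0] == 1
    r = np.corrcoef(window, template)[0, 1]
    return bool(svm_says and r > r_min)

print(detect_blink(make_window(True)), detect_blink(make_window(False)))
```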
Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking.
Maimon-Dror, Roni O; Fernandez-Quesada, Jorge; Zito, Giuseppe A; Konnaris, Charalambos; Dziemian, Sabine; Faisal, A Aldo
2017-07-01
Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level; they are proactive of body movements and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy, among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head-tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a three-dimensional space-filling Peano curve while the user tracks it with their eyes. This fully automated calibration procedure yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
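A hedged sketch of the calibration idea: the robot traces a dense 3D path while simulated gaze readings are collected, and a linear gaze-to-workspace map is fit by least squares over all samples. The layered zig-zag path and the affine gaze model are stand-ins for the actual Peano curve and GT3D tracker output.

```python
# Least-squares 3D calibration over a dense space-filling-style path
# (zig-zag layers as a stand-in for a Peano curve; simulated gaze readings).
import numpy as np

rng = np.random.default_rng(9)
xs = np.linspace(0, 1, 10)
path = np.array([[x, y if i % 2 == 0 else 1 - y, z]
                 for z in xs for i, x in enumerate(xs) for y in xs])

G = rng.normal(size=(3, 4))        # unknown affine world-to-gaze map (simulated)
raw = np.c_[path, np.ones(len(path))] @ G.T + 0.01 * rng.normal(size=path.shape)

# Calibration: world ~= [raw_gaze, 1] @ C, fit over ~1000 samples at once.
A = np.c_[raw, np.ones(len(raw))]
C, *_ = np.linalg.lstsq(A, path, rcond=None)
rmse = np.sqrt(((A @ C - path) ** 2).mean(axis=0))
print("per-axis calibration RMSE:", rmse)
```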
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
On the Use of Electrooculogram for Efficient Human Computer Interfaces
Usakli, A. B.; Gurkan, S.; Aloise, F.; Vecchiato, G.; Babiloni, F.
2010-01-01
The aim of this study is to present electrooculogram signals that can be used for human-computer interfaces efficiently. Establishing an efficient alternative channel for communication without overt speech and hand movements is important to increase the quality of life for patients suffering from Amyotrophic Lateral Sclerosis or other illnesses that prevent correct limb and facial muscular responses. We have made several experiments to compare the P300-based BCI speller and the new EOG-based system. A five-letter word can be written on average in 25 seconds with the new system and in 105 seconds with the EEG-based device. Giving a message such as "clean-up" could be performed in 3 seconds with the new system. The new system is more efficient than the P300-based BCI system in terms of accuracy, speed, applicability, and cost efficiency. Using EOG signals, it is possible to improve the communication abilities of those patients who can move their eyes. PMID:19841687
Monitoring and decision making by people in man machine systems
NASA Technical Reports Server (NTRS)
Johannsen, G.
1979-01-01
The analysis of human monitoring and decision-making behavior, as well as its modeling, is described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example of applications, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and the decision-making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer-aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man-machine systems are also critically assessed.
Using eye tracking to identify faking attempts during penile plethysmography assessment.
Trottier, Dominique; Rouleau, Joanne-Lucine; Renaud, Patrice; Goyette, Mathieu
2014-01-01
Penile plethysmography (PPG) is considered the most rigorous method for sexual interest assessment. Nevertheless, it is subject to faking attempts by participants, which compromises the internal validity of the instrument. To date, various attempts have been made to limit voluntary control of sexual response during PPG assessments, without satisfactory results. This exploratory research examined eye-tracking technologies' ability to identify the presence of cognitive strategies responsible for erectile inhibition during PPG assessment. Eye movements and penile responses for 20 subjects were recorded while exploring animated human-like computer-generated stimuli in a virtual environment under three distinct viewing conditions: (a) the free visual exploration of a preferred sexual stimulus without erectile inhibition; (b) the viewing of a preferred sexual stimulus with erectile inhibition; and (c) the free visual exploration of a non-preferred sexual stimulus. Results suggest that attempts to control erectile responses generate specific eye-movement variations, characterized by a general deceleration of the exploration process and limited exploration of the erogenous zone. Findings indicate that recording eye movements can provide significant information on the presence of competing covert processes responsible for erectile inhibition. The use of eye-tracking technologies during PPG could therefore lead to improved internal validity of the plethysmographic procedure.
Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination
Jana, Sumitash; Gopal, Atul
2016-01-01
Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
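A simulation sketch of the common-accumulator architecture described above: one diffusion-to-bound decision time is shared by eye and hand, each followed by its own noisy efferent delay. Parameters are illustrative; the point is that RT standard deviation scales with the mean and that eye and hand RTs correlate.

```python
# Common stochastic accumulator followed by effector-specific noisy delays,
# producing correlated eye and hand reaction-time distributions.
import numpy as np

rng = np.random.default_rng(10)
trials, dt, drift, noise, bound = 2000, 0.001, 2.0, 1.0, 1.0

decision = np.empty(trials)
for i in range(trials):
    x, t = 0.0, 0.0
    while x < bound:                       # diffusion to a single threshold
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    decision[i] = t

eye_rt = decision + rng.normal(0.05, 0.01, trials)    # ocular efferent delay
hand_rt = decision + rng.normal(0.12, 0.02, trials)   # manual efferent delay
print(f"eye  RT: mean={eye_rt.mean():.3f}s sd={eye_rt.std():.3f}s")
print(f"hand RT: mean={hand_rt.mean():.3f}s sd={hand_rt.std():.3f}s")
print(f"eye-hand RT correlation: {np.corrcoef(eye_rt, hand_rt)[0, 1]:.2f}")
```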
The Human Engineering Eye Movement Measurement Research Facility.
1985-04-01
…tracked reliably. When tracking is disrupted (e.g., by gross and sudden head movements, gross change in the head position, sneezing, prolonged eye…) …these are density and "busyness" of the slides (stimulus material), as well as consistency between successive… …change the material being projected based on the subject's previous performance. The minicomputer relays the calibrated data to one of the magnetic…
Ma, Jiaxin; Zhang, Yu; Cichocki, Andrzej; Matsuno, Fumitoshi
2015-03-01
This study presents a novel human-machine interface (HMI) based on both electrooculography (EOG) and electroencephalography (EEG). This hybrid interface works in two modes: an EOG mode recognizes eye movements such as blinks, and an EEG mode detects event-related potentials (ERPs) like P300. While both eye movements and ERPs have been used separately for implementing assistive interfaces, which help patients with motor disabilities in performing daily tasks, the proposed hybrid interface integrates them so that they complement each other, providing better efficiency and a wider scope of application. In this study, we design a threshold algorithm that can recognize four kinds of eye movements, including blink, wink, gaze, and frown. In addition, an oddball paradigm with stimuli of inverted faces is used to evoke multiple ERP components, including P300, N170, and VPP. To verify the effectiveness of the proposed system, two different online experiments are carried out. One is to control a multifunctional humanoid robot, and the other is to control four mobile robots. In both experiments, the subjects can complete the tasks effectively by using the proposed interface, with best completion times that are relatively short and very close to those achieved by hand operation.
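A rule-based sketch of what a threshold classifier for the four eye movements could look like, operating on a window of left/right vertical EOG and horizontal EOG. The channel patterns and the 100-microvolt threshold are plausible assumptions, not the authors' calibrated values.

```python
# Threshold rules for blink / wink / gaze / frown from a window of EOG samples
# (two vertical channels and one horizontal channel; assumed patterns).
import numpy as np

def classify_eog(veog_left, veog_right, heog, thr=100.0):
    """Classify one window of EOG samples (microvolts, assumed units)."""
    l, r, h = np.ptp(veog_left), np.ptp(veog_right), np.ptp(heog)
    if l > thr and r > thr and h < thr:
        # symmetric vertical deflection in both eyes: upward = blink,
        # downward = frown (sign convention is an assumption)
        return "blink" if max(veog_left.max(), veog_right.max()) > 0 else "frown"
    if (l > thr) != (r > thr):
        return "wink"                       # one-sided vertical deflection
    if h > thr:
        return "gaze"                       # horizontal deflection = gaze shift
    return "none"

t = np.linspace(0, 0.5, 250)
blink_wave = 150.0 * np.exp(-((t - 0.25) / 0.05) ** 2)
print(classify_eog(blink_wave, blink_wave, np.zeros_like(t)))   # -> "blink"
```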
Gaze-independent brain-computer interfaces based on covert attention and feature attention
NASA Astrophysics Data System (ADS)
Treder, M. S.; Schmidt, N. M.; Blankertz, B.
2011-10-01
There is evidence that conventional visual brain-computer interfaces (BCIs) based on event-related potentials cannot be operated efficiently when eye movements are not allowed. To overcome this limitation, the aim of this study was to develop a visual speller that does not require eye movements. Three different variants of a two-stage visual speller based on covert spatial attention and non-spatial feature attention (i.e. attention to colour and form) were tested in an online experiment with 13 healthy participants. All participants achieved highly accurate BCI control. They could select one out of thirty symbols (chance level 3.3%) with mean accuracies of 88%-97% for the different spellers. The best results were obtained for a speller that was operated using non-spatial feature attention only. These results show that, using feature attention, it is possible to realize high-accuracy, fast-paced visual spellers that have a large vocabulary and are independent of eye gaze.
An embodiment effect in computer-based learning with animated pedagogical agents.
Mayer, Richard E; DaPra, C Scott
2012-09-01
How do social cues such as gesturing, facial expression, eye gaze, and human-like movement affect multimedia learning with onscreen agents? To help address this question, students were asked to twice view a 4-min narrated presentation on how solar cells work in which the screen showed an animated pedagogical agent standing to the left of 11 successive slides. Across three experiments, learners performed better on a transfer test when a human-voiced agent displayed human-like gestures, facial expression, eye gaze, and body movement than when the agent did not, yielding an embodiment effect. In Experiment 2 the embodiment effect was found when the agent spoke in a human voice but not in a machine voice. In Experiment 3, the embodiment effect was found both when students were told the onscreen agent was consistent with their choice of agent characteristics and when inconsistent. Students who viewed a highly embodied agent also rated the social attributes of the agent more positively than did students who viewed a nongesturing agent. The results are explained by social agency theory, in which social cues in a multimedia message prime a feeling of social partnership in the learner, which leads to deeper cognitive processing during learning, and results in a more meaningful learning outcome as reflected in transfer test performance.
Method of Menu Selection by Gaze Movement Using AC EOG Signals
NASA Astrophysics Data System (ADS)
Kanoh, Shin'ichiro; Futami, Ryoko; Yoshinobu, Tatsuo; Hoshimiya, Nozomu
A method to detect the direction and distance of voluntary eye-gaze movements from EOG (electrooculogram) signals was proposed and tested. In this method, AC-amplified vertical and horizontal transient EOG signals were classified into eight directions and two distances of voluntary eye-gaze movements. The horizontal and vertical EOGs at each sampling time during an eye-gaze movement were treated as a two-dimensional vector, and the center of gravity of the sample vectors whose norms were more than 80% of the maximum norm was used as the feature vector to be classified. Using the k-nearest neighbor algorithm, the averaged correct detection rates for the three subjects were 98.9%, 98.7%, and 94.4%, respectively. This method avoids strict EOG-based eye tracking, which requires DC amplification of very small signals. It would be useful for developing robust menu-selection-based human interfacing systems for severely paralyzed patients.
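A sketch of the described feature extraction and classification on synthetic trials: the feature is the centre of gravity of the 2-D EOG sample vectors whose norm is at least 80% of the trial maximum, classified with k-nearest neighbors. The trial generator and noise level are assumptions.

```python
# Centre-of-gravity feature (samples with norm >= 80% of trial maximum)
# plus k-NN classification of 8 directions x 2 distances.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(11)
DIRS = np.stack([[np.cos(a), np.sin(a)]
                 for a in np.arange(8) * np.pi / 4])      # 8 gaze directions

def make_trial(d, dist):
    ramp = np.linspace(0, 1, 50)[:, None] * DIRS[d] * dist
    return ramp + 0.05 * rng.normal(size=(50, 2))         # h/v EOG samples

def cog_feature(trial):
    norms = np.linalg.norm(trial, axis=1)
    sel = trial[norms >= 0.8 * norms.max()]
    return sel.mean(axis=0)                               # centre of gravity

trials = [(make_trial(d, dist), d * 2 + dist - 1)
          for d in range(8) for dist in (1, 2) for _ in range(20)]
X = np.array([cog_feature(tr) for tr, _ in trials])
y = np.array([lab for _, lab in trials])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[::2], y[::2])   # half to train
print("held-out accuracy:", knn.score(X[1::2], y[1::2]))
```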
Eye Movements in Darkness Modulate Self-Motion Perception.
Clemens, Ivar Adrianus H; Selen, Luc P J; Pomante, Antonella; MacNeilage, Paul R; Medendorp, W Pieter
2017-01-01
During self-motion, humans typically move the eyes to maintain fixation on the stationary environment around them. These eye movements could in principle be used to estimate self-motion, but their impact on perception is unknown. We had participants judge self-motion during different eye-movement conditions in the absence of full-field optic flow. In a two-alternative forced choice task, participants indicated whether the second of two successive passive lateral whole-body translations was longer or shorter than the first. This task was used in two experiments. In the first ( n = 8), eye movements were constrained differently in the two translation intervals by presenting either a world-fixed or body-fixed fixation point or no fixation point at all (allowing free gaze). Results show that perceived translations were shorter with a body-fixed than a world-fixed fixation point. A linear model indicated that eye-movement signals received a weight of ∼25% for the self-motion percept. This model was independently validated in the trials without a fixation point (free gaze). In the second experiment ( n = 10), gaze was free during both translation intervals. Results show that the translation with the larger eye-movement excursion was judged more often to be larger than chance, based on an oculomotor choice probability analysis. We conclude that eye-movement signals influence self-motion perception, even in the absence of visual stimulation.
What interests them in the pictures?--differences in eye-tracking between rhesus monkeys and humans.
Hu, Ying-Zhou; Jiang, Hui-Hui; Liu, Ci-Rong; Wang, Jian-Hong; Yu, Cheng-Yang; Carlson, Synnöve; Yang, Shang-Chuan; Saarinen, Veli-Matti; Rizak, Joshua D; Tian, Xiao-Guang; Tan, Hen; Chen, Zhu-Yue; Ma, Yuan-Ye; Hu, Xin-Tian
2013-10-01
Studies estimating eye movements have demonstrated that non-human primates have fixation patterns similar to humans at the first sight of a picture. In the current study, three sets of pictures containing monkeys, humans or both were presented to rhesus monkeys and humans. The eye movements on these pictures by the two species were recorded using a Tobii eye-tracking system. We found that monkeys paid more attention to the head and body in pictures containing monkeys, whereas both monkeys and humans paid more attention to the head in pictures containing humans. The humans always concentrated on the eyes and head in all the pictures, indicating the social role of facial cues in society. Although humans paid more attention to the hands than monkeys, both monkeys and humans were interested in the hands and what was being done with them in the pictures. This may suggest the importance and necessity of hands for survival. Finally, monkeys scored lower in eye-tracking when fixating on the pictures, as if they were less interested in looking at the screen than humans. The locations of fixation in monkeys may provide insight into the role of eye movements in an evolutionary context.
Ito, Norie; Barnes, Graham R; Fukushima, Junko; Fukushima, Kikuro; Warabi, Tateo
2013-08-01
Using a cue-dependent memory-based smooth-pursuit task previously applied to monkeys, we examined the effects of visual motion-memory on smooth-pursuit eye movements in normal human subjects and compared the results with those of the trained monkeys. These results were also compared with those during simple ramp-pursuit, which did not require visual motion-memory. During memory-based pursuit, all subjects exhibited virtually no errors in either pursuit-direction or go/no-go selection. Tracking eye movements were similar in humans and monkeys, but in both species they differed between the two tasks: latencies of the pursuit and corrective saccades were prolonged, initial pursuit eye velocity and acceleration were lower, peak velocities were lower, and time to reach peak velocity was lengthened during memory-based pursuit. These characteristics were similar to anticipatory pursuit initiated by extra-retinal components during the initial extinction task of Barnes and Collins (J Neurophysiol 100:1135-1146, 2008b). We suggest that the differences between the two tasks reflect differences between the contributions of extra-retinal and retinal components. This interpretation is supported by two further studies: (1) when the correct spot was popped out to enhance retinal image-motion inputs during memory-based pursuit, pursuit eye velocities approached those during simple ramp-pursuit, and (2) when spot motion was initially blanked during memory-based pursuit, pursuit components appeared in the correct direction. Our results show the importance of extra-retinal mechanisms for initial pursuit during memory-based pursuit, which include priming effects and extra-retinal drive components. Comparison with monkey studies on neuronal responses and model analyses suggested possible pathways for the extra-retinal mechanisms.
Tracking Students' Cognitive Processes during Program Debugging--An Eye-Movement Approach
ERIC Educational Resources Information Center
Lin, Yu-Tzu; Wu, Cheng-Chih; Hou, Ting-Yun; Lin, Yu-Chih; Yang, Fang-Ying; Chang, Chia-Hu
2016-01-01
This study explores students' cognitive processes while debugging programs by using an eye tracker. Students' eye movements during debugging were recorded by an eye tracker to investigate whether and how high- and low-performance students act differently during debugging. Thirty-eight computer science undergraduates were asked to debug two C…
Alkan, Yelda; Biswal, Bharat B.; Alvarez, Tara L.
2011-01-01
Purpose Eye movement research has traditionally studied solely saccade and/or vergence eye movements by isolating these systems within a laboratory setting. While the neural correlates of saccadic eye movements are established, few studies have quantified the functional activity of vergence eye movements using fMRI. This study mapped the neural substrates of vergence eye movements and compared them to saccades to elucidate the spatial commonality and differentiation between these systems. Methodology The stimulus was presented in a block design where the ‘off’ stimulus was a sustained fixation and the ‘on’ stimulus was random vergence or saccadic eye movements. Data were collected with a 3T scanner. A general linear model (GLM) was used in conjunction with cluster size to determine significantly active regions. A paired t-test of the GLM beta weight coefficients was computed between the saccade and vergence functional activities to test the hypothesis that vergence and saccadic stimulation would have spatial differentiation in addition to shared neural substrates. Results Segregated functional activation was observed within the frontal eye fields where a portion of the functional activity from the vergence task was located anterior to the saccadic functional activity (z>2.3; p<0.03). An area within the midbrain was significantly correlated with the experimental design for the vergence but not the saccade data set. Similar functional activation was observed within the following regions of interest: the supplementary eye field, dorsolateral prefrontal cortex, ventral lateral prefrontal cortex, lateral intraparietal area, cuneus, precuneus, anterior and posterior cingulates, and cerebellar vermis. The functional activity from these regions was not different between the vergence and saccade data sets assessed by analyzing the beta weights of the paired t-test (p>0.2). Conclusion Functional MRI can elucidate the differences between the vergence and saccade neural substrates within the frontal eye fields and midbrain. PMID:22073141
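A minimal sketch of the beta-weight comparison described above: one vergence/saccade beta pair per subject or region, compared with a paired t-test. The beta values are fabricated for illustration.

```python
# Paired t-test of GLM beta weights between vergence and saccade conditions.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(12)
beta_vergence = 1.0 + 0.3 * rng.normal(size=20)   # e.g., frontal-eye-field betas
beta_saccade = beta_vergence + 0.25 + 0.1 * rng.normal(size=20)

t_stat, p = ttest_rel(beta_vergence, beta_saccade)
print(f"paired t = {t_stat:.2f}, p = {p:.4f}")    # small p -> differentiation
```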
Acting without seeing: eye movements reveal visual processing without awareness.
Spering, Miriam; Carrasco, Marisa
2015-04-01
Visual perception and eye movements are considered to be tightly linked. Diverse fields, ranging from developmental psychology to computer science, utilize eye tracking to measure visual perception. However, this prevailing view has been challenged by recent behavioral studies. Here, we review converging evidence revealing dissociations between the contents of perceptual awareness and different types of eye movement. Such dissociations reveal situations in which eye movements are sensitive to particular visual features that fail to modulate perceptual reports. We also discuss neurophysiological, neuroimaging, and clinical studies supporting the role of subcortical pathways for visual processing without awareness. Our review links awareness to perceptual-eye movement dissociations and furthers our understanding of the brain pathways underlying vision and movement with and without awareness. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ictal SPECT in patients with rapid eye movement sleep behaviour disorder.
Mayer, Geert; Bitterlich, Marion; Kuwert, Torsten; Ritt, Philipp; Stefan, Hermann
2015-05-01
Rapid eye movement sleep behaviour disorder is a rapid eye movement parasomnia clinically characterized by acting out dreams due to disinhibition of muscle tone in rapid eye movement sleep. Up to 80-90% of patients with rapid eye movement sleep behaviour disorder develop neurodegenerative disorders within 10-15 years after symptom onset. The disorder is reported in 45-60% of all narcoleptic patients. Whether rapid eye movement sleep behaviour disorder is also a predictor of neurodegeneration in narcolepsy is not known. Although the pathophysiology causing the disinhibition of muscle tone in rapid eye movement sleep behaviour disorder has been studied extensively in animals, little is known about the mechanisms in humans. Most of the human data are from imaging or post-mortem studies. Recent studies show altered functional connectivity between substantia nigra and striatum in patients with rapid eye movement sleep behaviour disorder. We were interested in which regions are activated during actual episodes of rapid eye movement sleep behaviour disorder, and therefore performed ictal single photon emission tomography. We studied one patient with idiopathic rapid eye movement sleep behaviour disorder, one with Parkinson's disease and rapid eye movement sleep behaviour disorder, and two patients with narcolepsy and rapid eye movement sleep behaviour disorder. All patients underwent extended video polysomnography. The tracer was injected after at least 10 s of consecutive rapid eye movement sleep and 10 s of disinhibited muscle tone accompanied by movements registered by an experienced sleep technician. Ictal single photon emission tomography displayed the same activation in the bilateral premotor areas, the interhemispheric cleft, the periaqueductal area, the dorsal and ventral pons and the anterior lobe of the cerebellum in all patients. Our study shows that in patients with Parkinson's disease and rapid eye movement sleep behaviour disorder, in contrast to wakefulness, the neural activity generating movement during episodes of the disorder bypasses the basal ganglia, a mechanism shared by patients with idiopathic rapid eye movement sleep behaviour disorder and narcolepsy patients with rapid eye movement sleep behaviour disorder. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Mannan, Malik M Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M Ahmad
2016-02-19
Contamination of eye movement and blink artifacts in electroencephalogram (EEG) recordings makes the analysis of EEG data more difficult and can result in misleading findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy for brain-computer interface (BCI) development. In this paper, we propose an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data using a hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from the artifactual zone but also preserves the neuronal-activity-related EEG signals in the non-artifactual zone. Comparison with two state-of-the-art techniques, namely ADJUST-based ICA and REGICA, reveals the significantly improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm achieves lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data.
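The paper's framework couples ICA with system identification driven by an eye tracker; as a point of comparison, the standard ICA-plus-EOG-correlation step it builds on can be sketched with MNE-Python. The file name is a placeholder, and the sketch assumes the recording contains an EOG channel.

```python
# Hedged sketch: generic ICA-based ocular artifact removal with MNE-Python.
# This is NOT the paper's hybrid method, only the baseline ICA step.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)       # ICA behaves best on high-passed data

ica = ICA(n_components=20, random_state=42)
ica.fit(raw)

# Components correlating with the EOG channel are flagged as ocular artifacts.
eog_indices, eog_scores = ica.find_bads_eog(raw)
ica.exclude = eog_indices

raw_clean = ica.apply(raw.copy())         # reconstruct EEG without ocular ICs
```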
Sunkara, Adhira
2015-01-01
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417
ERIC Educational Resources Information Center
Metzner, Paul; von der Malsburg, Titus; Vasishth, Shravan; Rösler, Frank
2017-01-01
How important is the ability to freely control eye movements for reading comprehension? And how does the parser make use of this freedom? We investigated these questions using coregistration of eye movements and event-related brain potentials (ERPs) while participants read either freely or in a computer-controlled word-by-word format (also known…
Shishkin, Sergei L.; Nuzhdin, Yuri O.; Svirin, Evgeny P.; Trofimov, Alexander G.; Fedorova, Anastasia A.; Kozyrskiy, Bogdan L.; Velichkovsky, Boris M.
2016-01-01
We usually look at an object when we are going to manipulate it. Thus, eye tracking can be used to communicate intended actions. An effective human-machine interface, however, should be able to differentiate intentional and spontaneous eye movements. We report an electroencephalogram (EEG) marker that differentiates gaze fixations used for control from spontaneous fixations involved in visual exploration. Eight healthy participants played a game with their eye movements only. Their gaze-synchronized EEG data (fixation-related potentials, FRPs) were collected during game's control-on and control-off conditions. A slow negative wave with a maximum in the parietooccipital region was present in each participant's averaged FRPs in the control-on conditions and was absent or had much lower amplitude in the control-off condition. This wave was similar but not identical to stimulus-preceding negativity, a slow negative wave that can be observed during feedback expectation. Classification of intentional vs. spontaneous fixations was based on amplitude features from 13 EEG channels using 300 ms length segments free from electrooculogram contamination (200–500 ms relative to the fixation onset). For the first fixations in the fixation triplets required to make moves in the game, classified against control-off data, a committee of greedy classifiers provided 0.90 ± 0.07 specificity and 0.38 ± 0.14 sensitivity. Similar (slightly lower) results were obtained for the shrinkage Linear Discriminate Analysis (LDA) classifier. The second and third fixations in the triplets were classified at lower rate. We expect that, with improved feature sets and classifiers, a hybrid dwell-based Eye-Brain-Computer Interface (EBCI) can be built using the FRP difference between the intended and spontaneous fixations. If this direction of BCI development will be successful, such a multimodal interface may improve the fluency of interaction and can possibly become the basis for a new input device for paralyzed and healthy users, the EBCI “Wish Mouse.” PMID:27917105
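The shrinkage LDA classifier named in the abstract is a standard tool; a minimal sketch of the classification step follows. The feature matrix is a random stand-in for the real features, which would be mean amplitudes from 13 EEG channels in the 200–500 ms post-fixation window.

```python
# Hedged sketch: intentional vs. spontaneous fixation classification from
# FRP amplitude features with shrinkage LDA. Data below are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 13 * 6))   # 200 fixations x (13 channels x 6 time bins)
y = rng.integers(0, 2, size=200)     # 1 = control-on (intentional), 0 = control-off

lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(lda, X, y, cv=5)
print("CV accuracy:", scores.mean())
```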
GOM-Face: GKP, EOG, and EMG-based multimodal interface with application to humanoid robot control.
Nam, Yunjun; Koo, Bonkon; Cichocki, Andrzej; Choi, Seungjin
2014-02-01
We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) glossokinetic potential (GKP), which involves tongue movement; 2) electrooculogram (EOG), which involves eye movement; and 3) electromyogram (EMG), which involves teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities or even complete quadriplegia an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
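The abstract does not specify how features are extracted from the two class-specific covariance matrices; one common analogue is a CSP-style generalized eigendecomposition, sketched below under that assumption, with random data standing in for GKP/EOG recordings.

```python
# Hedged sketch: CSP-style discriminative spatial filters from two covariance
# matrices. This is an illustrative analogue, not the paper's exact method.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
tongue_only = rng.normal(size=(8, 5000))   # 8 facial electrodes x samples
eye_only = rng.normal(size=(8, 5000))

cov_tongue = np.cov(tongue_only)
cov_eye = np.cov(eye_only)

# Generalized eigenvectors maximize variance for one class relative to the other.
eigvals, eigvecs = eigh(cov_tongue, cov_tongue + cov_eye)
# Filters at both ends of the spectrum separate tongue- from eye-dominated activity.
spatial_filters = eigvecs[:, [0, 1, -2, -1]]
features = spatial_filters.T @ tongue_only   # project new data through the filters
```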
Eye movement sequence generation in humans: Motor or goal updating?
Quaia, Christian; Joiner, Wilsaan M.; FitzGibbon, Edmond J.; Optican, Lance M.; Smith, Maurice A.
2011-01-01
Saccadic eye movements are often grouped in pre-programmed sequences. The mechanism underlying the generation of each saccade in a sequence is currently poorly understood. Broadly speaking, two alternative schemes are possible: first, after each saccade the retinotopic location of the next target could be estimated, and an appropriate saccade could be generated. We call this the goal updating hypothesis. Alternatively, multiple motor plans could be pre-computed and then updated after each movement. We call this the motor updating hypothesis. We used McLaughlin’s intra-saccadic step paradigm to artificially create a condition under which these two hypotheses make discriminable predictions. We found that when human subjects plan sequences of two saccades, the motor updating hypothesis predicts the landing position of the second saccade much better than the goal updating hypothesis. This finding suggests that the human saccadic system is capable of executing sequences of saccades to multiple targets by planning multiple motor commands, which are then updated by serial subtraction of ongoing motor output. PMID:21191134
NASA Astrophysics Data System (ADS)
Taylor, Natalie M.; van Saarloos, Paul P.; Eikelboom, Robert H.
2000-06-01
This study aimed to gauge the effect of the patient's eye movement during Photo Refractive Keratectomy (PRK) on post-operative vision. A computer simulation of both the PRK procedure and the visual outcome was performed. The PRK simulation incorporated the pattern of movement of the laser beam to perform a given correction, the beam characteristics, an initial corneal profile, and an eye movement scenario, and generated the corrected corneal profile. The regrowth of the epithelium was simulated by selecting the smoothing filter which, when applied to a corrected cornea with no patient eye movement, produced ray tracing results similar to the original corneal model. Ray tracing of several objects, such as letters of various contrasts and sizes, was performed to assess the quality of the post-operative vision. Eye movement scenarios included no eye movement, constant decentration, and normally distributed random eye movement of varying magnitudes. Random eye movement of even small amounts, such as 50 microns, reduces the contrast sensitivity of the image. Constant decentration decenters the projected image on the retina and in extreme cases can lead to astigmatism. Eye movements of the magnitude expected during laser refractive surgery have minimal effect on the final visual outcome.
Learning to Interact with a Computer by Gaze
ERIC Educational Resources Information Center
Aoki, Hirotaka; Hansen, John Paulin; Itoh, Kenji
2008-01-01
The aim of this paper is to examine the learning processes that subjects undertake when they start using gaze as computer input. A 7-day experiment with eight Japanese students was carried out to record novice users' eye movement data during typing of 110 sentences. The experiment revealed that inefficient eye movements were dramatically reduced…
NASA Astrophysics Data System (ADS)
Namazi, Hamidreza; Kulish, Vladimir V.; Akrami, Amin
2016-05-01
One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to ‘complex’ visual stimuli. We demonstrate that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering that the brain is the main part of the nervous system engaged in eye movements, we also analyzed the Electroencephalogram (EEG) signal recorded during fixation. We found that there is a coupling between the fractality of the image, the EEG and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
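The abstract does not name the fractality estimator used; the Higuchi fractal dimension is one standard choice for short physiological time series, sketched here on a synthetic Brownian-like signal standing in for an eye-position trace.

```python
# Hedged sketch: Higuchi fractal dimension, one common fractality measure
# (the authors' specific estimator is not stated in the abstract).
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()      # curve length at lag k
            norm = (n - 1) / ((len(idx) - 1) * k)     # Higuchi normalization
            lk.append(dist * norm / k)
        lengths.append(np.mean(lk))
    k_vals = np.arange(1, kmax + 1)
    # Slope of log(L) against log(1/k) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lengths), 1)
    return slope

rng = np.random.default_rng(3)
print(higuchi_fd(np.cumsum(rng.normal(size=2048))))   # ~1.5 for Brownian drift
```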
NASA Astrophysics Data System (ADS)
Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.
2017-01-01
Existing psychological theories associate the movements of the human eye with reactions to external change: what we see, hear and feel. By analyzing gaze, we can compare the external human response (which shows the behavior of a person) with the natural reaction (what the person actually feels). This article describes a complex for the detection of visual activity and its application to evaluating the psycho-physiological state of a person. Glasses with a camera capture all movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing by software developed by the authors. The result is given in an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogic personnel recruitment and for testing students during the educational process.
Sesin, Anaelis; Adjouadi, Malek; Cabrerizo, Mercedes; Ayala, Melvin; Barreto, Armando
2008-01-01
This study developed an adaptive real-time human-computer interface (HCI) that serves as an assistive technology tool for people with severe motor disability. The proposed HCI design uses eye gaze as the primary computer input device. Controlling the mouse cursor with raw eye coordinates results in sporadic motion of the pointer because of the saccadic nature of the eye. Even though these eye movements are subtle and completely imperceptible under normal circumstances, they considerably affect the accuracy of an eye-gaze-based HCI. The proposed HCI system is novel because it adapts to each specific user's different and potentially changing jitter characteristics through the configuration and training of an artificial neural network (ANN) that is structured to minimize the mouse jitter. This task is based on feeding the ANN a user's initially recorded eye-gaze behavior through a short training session. The ANN finds the relationship between the gaze coordinates and the mouse cursor position based on the multilayer perceptron model. An embedded graphical interface is used during the training session to generate user profiles that make up these unique ANN configurations. The results with 12 subjects in test 1, which involved following a moving target, showed an average jitter reduction of 35%; the results with 9 subjects in test 2, which involved following the contour of a square object, showed an average jitter reduction of 53%. In both tests, the outcomes led to trajectories that were significantly smoother and better able to reach fixed or moving targets with relative ease, within a 5% error margin or deviation from the desired trajectories. The positive effects of such jitter reduction are presented graphically for visual appreciation.
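A minimal sketch of the underlying idea, a multilayer perceptron mapping windows of jittery gaze samples to a stabilized cursor position, follows. The training signal, window length, and network size are assumptions; the actual system trains per-user profiles through its own GUI.

```python
# Hedged sketch: MLP-based gaze de-jittering. Synthetic data stand in for a
# user's recorded gaze; shapes and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 2000)
target = np.column_stack([np.cos(t), np.sin(t)])    # smooth target trajectory
gaze = target + rng.normal(0, 0.05, target.shape)   # jittery raw gaze samples

# A short sliding window of gaze samples predicts the de-jittered position.
win = 5
X = np.hstack([gaze[i : len(gaze) - win + 1 + i] for i in range(win)])
y = target[win - 1 :]

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X, y)
smoothed = mlp.predict(X)   # jitter-reduced cursor trajectory
```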
Object motion computation for the initiation of smooth pursuit eye movements in humans.
Wallace, Julian M; Stone, Leland S; Masson, Guillaume S
2005-04-01
Pursuing an object with smooth eye movements requires an accurate estimate of its two-dimensional (2D) trajectory. This 2D motion computation requires that different local motion measurements be extracted and combined to recover the global object-motion direction and speed. Several combination rules have been proposed, such as vector averaging (VA), intersection of constraints (IOC), or 2D feature tracking (2DFT). To examine this computation, we investigated the time course of smooth pursuit eye movements driven by simple objects of different shapes. For a type II diamond (where the direction of true object motion is dramatically different from the vector average of the one-dimensional edge motions, i.e., VA ≠ IOC = 2DFT), ocular tracking is initiated in the vector average direction. Over a period of less than 300 ms, the eye-tracking direction converges on the true object motion. The reduction of the tracking error starts before the closing of the oculomotor loop. For type I diamonds (where the direction of true object motion is identical to the vector average direction, i.e., VA = IOC = 2DFT), there is no such bias. We quantified this effect by calculating the direction error between responses to types I and II and measuring its maximum value and time constant. At low contrast and high speeds, the initial bias in tracking direction is larger and takes longer to converge onto the actual object-motion direction. This effect is attenuated by the introduction of more 2D information, to the extent that it was totally obliterated with a texture-filled type II diamond. These results suggest a flexible 2D computation for motion integration, which combines all available one-dimensional (edge) and 2D (feature) motion information to refine the estimate of object-motion direction over time.
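The VA/IOC contrast is a small linear-algebra computation and can be made concrete. In the sketch below, the edge normals are illustrative geometries: a "type I" pair straddles the true motion direction, while a "type II" pair sits on one side of it, so VA deviates from the true motion while IOC recovers it.

```python
# Hedged sketch: vector average (VA) vs. intersection of constraints (IOC)
# for combining 1-D edge motions. Diamond geometries are illustrative.
import numpy as np

v_true = np.array([1.0, 0.0])                 # true object velocity (rightward)
deg = np.deg2rad
normals_i = np.array([[np.cos(deg(20)), np.sin(deg(20))],
                      [np.cos(deg(-20)), np.sin(deg(-20))]])  # straddling: type I
normals_ii = np.array([[np.cos(deg(20)), np.sin(deg(20))],
                       [np.cos(deg(60)), np.sin(deg(60))]])   # same side: type II

def va_and_ioc(normals, v):
    speeds = normals @ v                      # measured 1-D normal speeds
    va = np.mean(speeds[:, None] * normals, axis=0)   # vector average
    ioc = np.linalg.solve(normals, speeds)    # velocity satisfying all constraints
    return va, ioc

for label, n in [("type I", normals_i), ("type II", normals_ii)]:
    va, ioc = va_and_ioc(n, v_true)
    print(label, "VA:", va.round(3), "IOC:", ioc.round(3))
# IOC always recovers v_true; for type II, VA points between the tilted edge
# normals, matching the initial pursuit bias described in the abstract.
```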
An ocular biomechanic model for dynamic simulation of different eye movements.
Iskander, J; Hossny, M; Nahavandi, S; Del Porto, L
2018-04-11
Simulating and analysing eye movement is useful for assessing the visual system's contribution to discomfort with respect to body movements, especially in virtual environments where simulation sickness might occur. It can also be used in the design of an eye prosthesis or a humanoid robot eye. In this paper, we present two biomechanic ocular models that are easily integrated into available musculoskeletal models. The model was previously used to simulate eye-head coordination. The models are used to simulate and analyse eye movements. The proposed models are based on physiological and kinematic properties of the human eye. They incorporate an eye-globe, orbital suspension tissues and six muscles with their connective tissues (pulleys). Pulleys were incorporated in the rectus and inferior oblique muscles. The two proposed models are the passive-pulleys model and the active-pulleys model. Dynamic simulations of different eye movements, including fixation, saccades and smooth pursuit, were performed to validate both models. The resultant force-length curves of the models were similar to the experimental data. The simulation results show that the proposed models are suitable for generating eye movement simulations with results comparable to other musculoskeletal models. The maximum kinematic root mean square error (RMSE) is 5.68° and 4.35° for the passive and active pulley models, respectively. Analysis of the muscle forces showed realistic muscle activation, with increased muscle synergy in the active pulley model. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effects of phencyclidine, secobarbital and diazepam on eye tracking in rhesus monkeys.
Ando, K; Johanson, C E; Levy, D L; Yasillo, N J; Holzman, P S; Schuster, C R
1983-01-01
Rhesus monkeys were trained to track a moving disk using a procedure in which responses on a lever were reinforced with water delivery only when the disk, oscillating in a horizontal plane on a screen at a frequency of 0.4 Hz in a visual angle of 20 degrees, dimmed for a brief period. Pursuit eye movements were recorded by electrooculography (EOG). IM phencyclidine, secobarbital, and diazepam injections decreased the number of reinforced lever presses in a dose-related manner. Both secobarbital and diazepam produced episodic jerky-pursuit eye movements, while phencyclidine had no consistent effects on eye movements. Lever pressing was disrupted at doses which had little effect on the quality of smooth-pursuit eye movements in some monkeys. This separation was particularly pronounced with diazepam. The similarities of the drug effects on smooth-pursuit eye movements between the present study and human studies indicate that the present method using rhesus monkeys may be useful for predicting drug effects on eye tracking and oculomotor function in humans.
Integrated framework for developing search and discrimination metrics
NASA Astrophysics Data System (ADS)
Copeland, Anthony C.; Trivedi, Mohan M.
1997-06-01
This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.
Cognitive context detection in UAS operators using eye-gaze patterns on computer screens
NASA Astrophysics Data System (ADS)
Mannaru, Pujitha; Balasingam, Balakumar; Pattipati, Krishna; Sibley, Ciara; Coyne, Joseph
2016-05-01
In this paper, we demonstrate the use of eye-gaze metrics of unmanned aerial systems (UAS) operators as effective indices of their cognitive workload. Our analyses are based on an experiment in which twenty participants performed pre-scripted UAS missions of three different difficulty levels by interacting with two custom designed graphical user interfaces (GUIs) displayed side by side. First, we compute several eye-gaze metrics, traditional eye movement metrics as well as newly proposed ones, and analyze their effectiveness as cognitive classifiers. Most of the eye-gaze metrics are computed by dividing the computer screen into "cells". Then, we perform several analyses in order to select metrics for effective cognitive context classification related to our specific application; the objectives of these analyses are to (i) identify appropriate ways to divide the screen into cells; (ii) select appropriate metrics for training and classification of cognitive features; and (iii) identify a suitable classification method.
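The cell-based metric computation reduces to binning gaze samples over a screen grid. A minimal sketch, assuming a hypothetical grid size and using dwell-distribution entropy as one workload-related summary (the paper's exact metrics are not listed in the abstract):

```python
# Hedged sketch: cell-based gaze metrics from screen-binned gaze samples.
import numpy as np

rng = np.random.default_rng(5)
gaze_x = rng.uniform(0, 1920, 5000)   # gaze samples in screen pixels
gaze_y = rng.uniform(0, 1080, 5000)

n_cols, n_rows = 8, 6                 # hypothetical cell grid
counts, _, _ = np.histogram2d(gaze_x, gaze_y,
                              bins=[n_cols, n_rows],
                              range=[[0, 1920], [0, 1080]])
dwell = counts / counts.sum()         # fraction of samples in each cell

# Entropy of the dwell distribution is one candidate workload index.
p = dwell[dwell > 0]
gaze_entropy = -(p * np.log2(p)).sum()
print(f"stationary gaze entropy: {gaze_entropy:.2f} bits")
```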
NASA Astrophysics Data System (ADS)
Wu, Di; Torres, Elizabeth B.; Jose, Jorge V.
2015-03-01
ASD is a spectrum of neurodevelopmental disorders. The high heterogeneity of the symptoms associated with the disorder impedes efficient diagnosis based on human observations. Recent advances in high-resolution MEMS wearable sensors enable accurate movement measurements that may escape the naked eye, and call for objective metrics to extract physiologically relevant information from the rapidly accumulating data. In this talk we discuss the statistical analysis of movement data continuously collected with high-resolution sensors at 240 Hz. We calculated statistical properties of speed fluctuations within the millisecond time range that closely correlate with the subjects' cognitive abilities. We computed the periodicity and synchronicity of the speed fluctuations from their power spectrum and ensemble-averaged two-point cross-correlation function. We built a two-parameter phase space from the temporal statistical analyses of the nearest-neighbor fluctuations that provided a quantitative biomarker separating ASD from normal adult subjects and further classified ASD severity. We also found age-related developmental statistical signatures and potential ASD parental links in our movement dynamics studies. Our results may have direct clinical applications.
Temporal eye movement strategies during naturalistic viewing
Wang, Helena X.; Freeman, Jeremy; Merriam, Elisha P.; Hasson, Uri; Heeger, David J.
2011-01-01
The deployment of eye movements to complex spatiotemporal stimuli likely involves a variety of cognitive factors. However, eye movements to movies are surprisingly reliable both within and across observers. We exploited and manipulated that reliability to characterize observers’ temporal viewing strategies. Introducing cuts and scrambling the temporal order of the resulting clips systematically changed eye movement reliability. We developed a computational model that exhibited this behavior and provided an excellent fit to the measured eye movement reliability. The model assumed that observers searched for, found, and tracked a point-of-interest, and that this process reset when there was a cut. The model did not require that eye movements depend on temporal context in any other way, and it managed to describe eye movements consistently across different observers and two movie sequences. Thus, we found no evidence for the integration of information over long time scales (greater than a second). The results are consistent with the idea that observers employ a simple tracking strategy even while viewing complex, engaging naturalistic stimuli. PMID:22262911
Infant and Adult Perceptions of Possible and Impossible Body Movements: An Eye-Tracking Study
ERIC Educational Resources Information Center
Morita, Tomoyo; Slaughter, Virginia; Katayama, Nobuko; Kitazaki, Michiteru; Kakigi, Ryusuke; Itakura, Shoji
2012-01-01
This study investigated how infants perceive and interpret human body movement. We recorded the eye movements and pupil sizes of 9- and 12-month-old infants and of adults (N = 14 per group) as they observed animation clips of biomechanically possible and impossible arm movements performed by a human and by a humanoid robot. Both 12-month-old…
Multipulse control of saccadic eye movements
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
We present three conclusions regarding the neural control of saccadic eye movements, resulting from comparisons between recorded movements and computer simulations. First, the controller signal to the muscles is probably a multipulse-step. Second, this kind of signal drives the fastest model trajectories. Finally, multipulse signals explain differences between model and electrophysiological results.
A multimodal dataset for authoring and editing multimedia content: The MAMEM project.
Nikolopoulos, Spiros; Petrantonakis, Panagiotis C; Georgiadis, Kostas; Kalaganis, Fotis; Liaros, Georgios; Lazarou, Ioulietta; Adam, Katerina; Papazoglou-Chalikias, Anastasios; Chatzilari, Elisavet; Oikonomou, Vangelis P; Kumar, Chandan; Menges, Raphael; Staab, Steffen; Müller, Daniel; Sengupta, Korok; Bostantjopoulou, Sevasti; Katsarou, Zoe; Zeilig, Gabi; Plotnik, Meir; Gotlieb, Amihai; Kizoni, Racheli; Fountoukidou, Sofia; Ham, Jaap; Athanasiou, Dimitrios; Mariakaki, Agnes; Comanducci, Dario; Sabatini, Edoardo; Nistico, Walter; Plank, Markus; Kompatsiaris, Ioannis
2017-12-01
We present a dataset that combines multimodal biosignals and eye tracking information gathered under a human-computer interaction framework. The dataset was developed in the vein of the MAMEM project, which aims to endow people with motor disabilities with the ability to edit and author multimedia content through mental commands and gaze activity. The dataset includes EEG, eye-tracking, and physiological (GSR and heart rate) signals collected from 34 individuals (18 able-bodied and 16 motor-impaired). Data were collected during interaction with a specifically designed interface for web browsing and multimedia content manipulation and during imaginary movement tasks. The presented dataset will contribute towards the development and evaluation of modern human-computer interaction systems that would foster the integration of people with severe motor impairments back into society.
Murata, Atsuo; Fukunaga, Daichi
2018-04-01
This study investigated the effects of target shape and movement direction on pointing time using an eye-gaze input system, and extended Fitts' model to incorporate these factors and enhance its predictive power. The target shape, the target size, the movement distance, and the direction of target presentation were set as within-subject experimental variables. The target shapes included a circle and rectangles with aspect ratios of 1:1, 1:2, 1:3, and 1:4. The movement directions included eight directions: upper, lower, left, right, upper left, upper right, lower left, and lower right. On the basis of the data identifying the effects of target shape and movement direction on pointing time, an attempt was made to develop a generalized and extended Fitts' model that takes the movement direction and the target shape into account. The generalized and extended model was found to fit the experimental data better and to be more effective for predicting pointing time for a variety of human-computer interaction (HCI) tasks using an eye-gaze input system. Copyright © 2017. Published by Elsevier Ltd.
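The abstract does not give the paper's extended formulation; a minimal sketch of the general approach, fitting a baseline Fitts' model (MT = a + b·ID) and an extended variant with dummy-coded direction terms on synthetic data, follows.

```python
# Hedged sketch: baseline vs. direction-extended Fitts' law fit by least
# squares. Synthetic data; the paper's exact extended model may differ.
import numpy as np

rng = np.random.default_rng(6)
n = 200
distance = rng.choice([100, 300, 600], n)       # movement distance (px)
width = rng.choice([20, 40, 80], n)             # target size (px)
direction = rng.integers(0, 8, n)               # 8 movement directions
ID = np.log2(distance / width + 1)              # Shannon index of difficulty

# Synthetic pointing times with a built-in direction effect, for illustration.
mt = 300 + 120 * ID + 15 * (direction % 4) + rng.normal(0, 20, n)

A = np.column_stack([np.ones(n), ID])           # baseline: MT = a + b * ID
coef, *_ = np.linalg.lstsq(A, mt, rcond=None)

D = np.eye(8)[direction][:, 1:]                 # 7 direction dummies
A_ext = np.column_stack([np.ones(n), ID, D])    # extended model
coef_ext, *_ = np.linalg.lstsq(A_ext, mt, rcond=None)

for name, X, c in [("Fitts", A, coef), ("extended", A_ext, coef_ext)]:
    r2 = 1 - ((mt - X @ c) ** 2).sum() / ((mt - mt.mean()) ** 2).sum()
    print(f"{name}: R^2 = {r2:.3f}")
```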
Eibenberger, Karin; Eibenberger, Bernhard; Rucci, Michele
2016-08-01
The precise measurement of eye movements is important for investigating vision, oculomotor control and vestibular function. The magnetic scleral search coil technique is one of the most precise techniques for recording eye movements, with very high spatial (≈ 1 arcmin) and temporal (>kHz) resolution. The technique is based on measuring the voltage induced in a search coil by a large magnetic field. This search coil is embedded in a contact lens worn by a human subject. The measured voltage is directly related to the orientation of the eye in space. This requires a magnetic field with high homogeneity in the center, since otherwise field inhomogeneity would give the false impression of an eye rotation due to a translational movement of the head. To circumvent this problem, a bite bar typically restricts head movement to a minimum. However, the need often emerges to precisely record eye movements under natural viewing conditions. To this end, one needs a magnetic field that is uniform over a large area. In this paper, we present numerical and finite element simulations of the magnetic flux density of different coil geometries that could be used for search coil recordings. Based on the results, we built a 2.2 × 2.2 × 2.2 meter coil frame with a set of 3 × 4 coils to generate a 3D magnetic field and compared the measured flux density with our simulation results. In agreement with the simulations, the system yields a highly uniform field, enabling high-resolution recordings of eye movements.
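The kind of homogeneity evaluation the simulations perform can be illustrated with the textbook on-axis field of a circular coil pair in the Helmholtz arrangement; this is a simplification (the actual frame uses large square coils, and radius and current below are assumptions).

```python
# Hedged sketch: on-axis field uniformity of a circular Helmholtz pair, as a
# stand-in for the paper's coil-geometry simulations.
import numpy as np

MU0 = 4e-7 * np.pi      # vacuum permeability (T*m/A)
R = 1.1                 # loop radius (m), hypothetical
I = 1.0                 # current (A), hypothetical

def loop_axis_field(z, z0):
    """On-axis field of a circular current loop centered at z0."""
    return MU0 * I * R**2 / (2 * (R**2 + (z - z0) ** 2) ** 1.5)

z = np.linspace(-0.3, 0.3, 601)                  # region around the subject's head
b = loop_axis_field(z, -R / 2) + loop_axis_field(z, R / 2)  # spacing = R

ripple = (b.max() - b.min()) / b.mean()
print(f"relative field variation over +/-30 cm: {ripple:.2e}")
```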
2011-01-01
Background: In humans, rapid eye movement (REM) density during REM sleep plays a prominent role in psychiatric diseases. In depression especially, an increased REM density is a vulnerability marker. In clinical practice and research, measurement of REM density is highly standardized. In basic animal research, almost no tools are available to obtain and systematically evaluate eye movement data, although this would create increased comparability between human and animal sleep studies. Methods: We obtained standardized electroencephalographic (EEG), electromyographic (EMG) and electrooculographic (EOG) signals from freely behaving mice. EOG electrodes were bilaterally and chronically implanted, with placement of the electrodes directly between the musculus rectus superior and musculus rectus lateralis. After recovery, EEG, EMG and EOG signals were obtained for four days. Subsequent to the implantation process, we developed and validated an Eye Movement scoring in Mice Algorithm (EMMA) to detect REM as singularities of the EOG signal, based on wavelet methodology. Results: The distribution of wakefulness, non-REM (NREM) sleep and rapid eye movement (REM) sleep was typical of nocturnal rodents, with small amounts of wakefulness and large amounts of NREM sleep during the light period and reversed proportions during the dark period. REM sleep was distributed correspondingly. REM density was significantly higher during REM sleep than NREM sleep. REM bursts were detected more often at the end of the dark period than at the beginning of the light period. During REM sleep, REM density showed an ultradian course, and during NREM sleep, REM density peaked at the beginning of the dark period. Concerning individual eye movements, REM duration was longer and amplitude was lower during REM sleep than NREM sleep. The majority of single REMs and REM bursts were associated with micro-arousals during NREM sleep, but not during REM sleep. Conclusions: Sleep-stage-specific distributions of REM in mice correspond to human REM density during sleep. REM density, now also assessable in animal models through our approach, is increased in humans after acute stress, during PTSD and in depression. This relationship can now be exploited to match animal models more closely to clinical situations, especially in animal models of depression. PMID:22047102
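Detecting REMs as singularities of the EOG via wavelets can be sketched with a continuous wavelet transform and peak-picking; the wavelet family, thresholds, and synthetic EOG below are assumptions, not the published EMMA parameters.

```python
# Hedged sketch: wavelet-based detection of rapid deflections in a synthetic
# EOG trace, in the spirit of the EMMA approach.
import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 250                                    # sampling rate (Hz), hypothetical
rng = np.random.default_rng(7)
t = np.arange(0, 30, 1 / fs)
eog = rng.normal(0, 5, t.size)              # background noise (uV)
for onset in (5.0, 12.3, 21.7):             # three synthetic rapid eye movements
    i = int(onset * fs)
    eog[i : i + 25] += 80 * np.hanning(25)

scales = np.arange(2, 32)
coefs, _ = pywt.cwt(eog, scales, "mexh")    # Mexican-hat CWT
energy = np.abs(coefs).max(axis=0)          # strongest response across scales

peaks, _ = find_peaks(energy, height=5 * np.median(energy),
                      distance=int(0.2 * fs))
print("detected REM onsets (s):", peaks / fs)
```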
The Study Of Optometry Apparatus Of Laser Speckles
NASA Astrophysics Data System (ADS)
Bao-cheng, Wang; Kun, Yao; Xiu-qing, Wu; Chang-ying, Long; Jia-qi, Shi; Shi-zhong, Shi
1988-01-01
Based on the regularity of laser speckle movement, a method for examining uncorrected eyes is determined. An apparatus with a micro-computer and optical transformation has been made. Its practical performance is excellent.
Mannan, Malik M. Naeem; Kim, Shinjung; Jeong, Myung Yung; Kamran, M. Ahmad
2016-01-01
Contamination of eye movement and blink artifacts in Electroencephalogram (EEG) recording makes the analysis of EEG data more difficult and could result in mislead findings. Efficient removal of these artifacts from EEG data is an essential step in improving classification accuracy to develop the brain-computer interface (BCI). In this paper, we proposed an automatic framework based on independent component analysis (ICA) and system identification to identify and remove ocular artifacts from EEG data by using hybrid EEG and eye tracker system. The performance of the proposed algorithm is illustrated using experimental and standard EEG datasets. The proposed algorithm not only removes the ocular artifacts from artifactual zone but also preserves the neuronal activity related EEG signals in non-artifactual zone. The comparison with the two state-of-the-art techniques namely ADJUST based ICA and REGICA reveals the significant improved performance of the proposed algorithm for removing eye movement and blink artifacts from EEG data. Additionally, results demonstrate that the proposed algorithm can achieve lower relative error and higher mutual information values between corrected EEG and artifact-free EEG data. PMID:26907276
Airborne sensors for detecting large marine debris at sea.
Veenstra, Timothy S; Churnside, James H
2012-01-01
The human eye is an excellent, general-purpose airborne sensor for detecting marine debris larger than 10 cm on or near the surface of the water. Coupled with the human brain, it can adjust for light conditions and sea-surface roughness, track persistence, differentiate color and texture, detect change in movement, and combine all of the available information to detect and identify marine debris. Matching this performance with computers and sensors is difficult at best. However, there are distinct advantages over the human eye and brain that sensors and computers can offer such as the ability to use finer spectral resolution, to work outside the spectral range of human vision, to control the illumination, to process the information in ways unavailable to the human vision system, to provide a more objective and reproducible result, to operate from unmanned aircraft, and to provide a permanent record that can be used for later analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Morokuma, S; Horimoto, N; Nakano, H
2001-08-01
It is well known that 1/f characteristics in power spectral patterns exist in various biological signals, including heart rate variability. In the present study, we tried to elucidate the diurnal variation in the spectral properties of eye movement and heart rate variability in the human fetus at term, via continuous 24-h observation of both parameters. Five uncomplicated fetuses at term were studied. We observed eye movement and fetal heart rate (FHR) with real-time ultrasound and Doppler cardiotocography, respectively, and analyzed the diurnal change in spectral properties using the maximum entropy method. In four of five cases, the slope values of the power spectra for both eye movement frequency and FHR, ranging approximately between 0.5 and 1.8, showed diurnal variation, where the slopes tended to have high values during the day and low values at night. These findings suggest that, in the human fetus at term, eye movement and FHR are under the control of a common central mechanism, and this center changes its complexity in a diurnal rhythm.
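Estimating a 1/f spectral slope is a standard log-log regression on the power spectrum; the study used the maximum entropy method, for which Welch's periodogram is substituted below as a widely available alternative, on a synthetic Brownian-like series.

```python
# Hedged sketch: 1/f spectral slope estimation (Welch's method substituted
# for the paper's maximum entropy method).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(8)
x = np.cumsum(rng.normal(size=4096))        # Brownian-like series, exponent ~ 2
f, pxx = welch(x, fs=1.0, nperseg=1024)

mask = (f > 0.01) & (f < 0.4)               # fit within a mid-frequency band
slope, _ = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)
print(f"1/f exponent: {-slope:.2f}")        # sign-flipped regression slope
```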
Predicting the Valence of a Scene from Observers’ Eye Movements
R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne
2015-01-01
Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
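The SVM-with-feature-fusion setup can be sketched as early fusion (concatenation) of per-trial feature vectors; the feature dimensions and random data below are assumptions standing in for the extracted eye-movement histograms.

```python
# Hedged sketch: valence classification from fused eye-movement features
# with an SVM. Feature dimensions and data are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n_trials = 300
fix_duration_hist = rng.random((n_trials, 16))
saccade_slope_hist = rng.random((n_trials, 16))
saliency_map_feat = rng.random((n_trials, 64))   # downsampled saliency map

# Early fusion: concatenate the per-trial feature vectors.
X = np.hstack([fix_duration_hist, saccade_slope_hist, saliency_map_feat])
y = rng.integers(0, 3, n_trials)                 # pleasant / neutral / unpleasant

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```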
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer based complex task performing processes and strategies in order to determine the reasons of failure by cognitive task analysis method and cued retrospective think aloud with eye movement data. Study group was five senior students from Computer Education and Instructional Technologies…
A Computational Model of Active Vision for Visual Search in Human-Computer Interaction
2010-08-01
processors that interact with the production rules to produce behavior, and (c) parameters that constrain the behavior of the model (e.g., the...velocity of a saccadic eye movement). While the parameters can be task-specific, the majority of the parameters are usually fixed across a wide variety...previously estimated durations. Hooge and Erkelens (1996) review these four explanations of fixation duration control. A variety of research
Via, Riccardo; Fassi, Aurora; Fattori, Giovanni; Fontana, Giulia; Pella, Andrea; Tagaste, Barbara; Riboldi, Marco; Ciocca, Mario; Orecchia, Roberto; Baroni, Guido
2015-05-01
External beam radiotherapy currently represents an important therapeutic strategy for the treatment of intraocular tumors. Accurate target localization and efficient compensation of involuntary eye movements are crucial to avoid deviations in dose distribution with respect to the treatment plan. This paper describes an eye tracking system (ETS) based on noninvasive infrared video imaging. The system was designed for capturing the tridimensional (3D) ocular motion and provides an on-line estimation of intraocular lesions position based on a priori knowledge coming from volumetric imaging. Eye tracking is performed by localizing cornea and pupil centers on stereo images captured by two calibrated video cameras, exploiting eye reflections produced by infrared illumination. Additionally, torsional eye movements are detected by template matching in the iris region of eye images. This information allows estimating the 3D position and orientation of the eye by means of an eye local reference system. By combining ETS measurements with volumetric imaging for treatment planning [computed tomography (CT) and magnetic resonance (MR)], one is able to map the position of the lesion to be treated in local eye coordinates, thus enabling real-time tumor referencing during treatment setup and irradiation. Experimental tests on an eye phantom and seven healthy subjects were performed to assess ETS tracking accuracy. Measurements on phantom showed an overall median accuracy within 0.16 mm and 0.40° for translations and rotations, respectively. Torsional movements were affected by 0.28° median uncertainty. On healthy subjects, the gaze direction error ranged between 0.19° and 0.82° at a median working distance of 29 cm. The median processing time of the eye tracking algorithm was 18.60 ms, thus allowing eye monitoring up to 50 Hz. A noninvasive ETS prototype was designed to perform real-time target localization and eye movement monitoring during ocular radiotherapy treatments. The device aims at improving state-of-the-art invasive procedures based on surgical implantation of radiopaque clips and repeated acquisition of X-ray images, with expected positive effects on treatment quality and patient outcome.
Automated nystagmus analysis. [on-line computer technique for eye data processing
NASA Technical Reports Server (NTRS)
Oman, C. M.; Allum, J. H. J.; Tole, J. R.; Young, L. R.
1973-01-01
Several methods have recently been used for on-line analysis of nystagmus. A digital computer program has been developed to accept sampled records of eye position, detect fast phase components, and output cumulative slow phase position, continuous slow phase velocity, instantaneous fast phase frequency, and other parameters. The slow phase velocity is obtained by differentiation of the calculated cumulative position rather than of the original eye movement record. In addition, a prototype analog device has been devised which calculates the velocity of the slow phase component during caloric testing. Examples of clinical and research eye movement records analyzed with these devices are shown.
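The cumulative slow-phase logic can be sketched in a few lines: flag fast phases by a velocity threshold, integrate the remaining velocity into a cumulative slow-phase position, and differentiate that. The sawtooth test signal and thresholds are assumptions.

```python
# Hedged sketch: slow-phase velocity from sampled nystagmus-like eye position.
import numpy as np

fs = 200.0                                  # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)
slow = 8.0                                  # slow-phase velocity (deg/s)
eye = (slow * t) % 10 - 5                   # sawtooth: slow drift + resetting saccades

vel = np.gradient(eye, 1 / fs)              # eye velocity (deg/s)
fast = np.abs(vel) > 50                     # fast phases exceed a velocity threshold

# Cumulative slow-phase position: integrate velocity with fast phases removed.
slow_vel = np.where(fast, 0.0, vel)         # bridge fast phases with zero velocity
cum_slow = np.cumsum(slow_vel) / fs

# Differentiating the cumulative position recovers slow-phase velocity.
est = np.gradient(cum_slow, 1 / fs)
print("estimated slow-phase velocity:", np.median(est[~fast]).round(2), "deg/s")
```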
Automatic detection of confusion in elderly users of a web-based health instruction video.
Postma-Nilsenová, Marie; Postma, Eric; Tates, Kiek
2015-06-01
Because of cognitive limitations and lower health literacy, many elderly patients have difficulty understanding verbal medical instructions. Automatic detection of facial movements provides a nonintrusive basis for building technological tools supporting confusion detection in healthcare delivery applications on the Internet. Twenty-four elderly participants (70-90 years old) were recorded while watching Web-based health instruction videos involving easy and complex medical terminology. Relevant fragments of the participants' facial expressions were rated by 40 medical students for perceived level of confusion and analyzed with automatic software for facial movement recognition. A computer classification of the automatically detected facial features performed more accurately and with a higher sensitivity than the human observers (automatic detection and classification, 64% accuracy, 0.64 sensitivity; human observers, 41% accuracy, 0.43 sensitivity). A drill-down analysis of cues to confusion indicated the importance of the eye and eyebrow region. Confusion caused by misunderstanding of medical terminology is signaled by facial cues that can be automatically detected with currently available facial expression detection technology. The findings are relevant for the development of Web-based services for healthcare consumers.
The Neural Basis of Smooth Pursuit Eye Movements in the Rhesus Monkey Brain
ERIC Educational Resources Information Center
Ilg, Uwe J.; Thier, Peter
2008-01-01
Smooth pursuit eye movements are performed in order to prevent retinal image blur of a moving object. Rhesus monkeys are able to perform smooth pursuit eye movements quite similarly to humans, even if the pursuit target is not a simple moving dot. Therefore, the study of the neuronal responses as well as the consequences of…
Eye Movements Affect Postural Control in Young and Older Females
Thomas, Neil M.; Bampouras, Theodoros M.; Donovan, Tim; Dewhurst, Susan
2016-01-01
Visual information is used for postural stabilization in humans. However, little is known about how eye movements prevalent in everyday life interact with the postural control system in older individuals. Therefore, the present study assessed the effects of stationary gaze fixations, smooth pursuits, and saccadic eye movements, with combinations of absent, fixed and oscillating large-field visual backgrounds to generate different forms of retinal flow, on postural control in healthy young and older females. Participants were presented with computer generated visual stimuli, whilst postural sway and gaze fixations were simultaneously assessed with a force platform and eye tracking equipment, respectively. The results showed that fixed backgrounds and stationary gaze fixations attenuated postural sway. In contrast, oscillating backgrounds and smooth pursuits increased postural sway. There were no differences regarding saccades. There were also no differences in postural sway or gaze errors between age groups in any visual condition. The stabilizing effect of the fixed visual stimuli show how retinal flow and extraocular factors guide postural adjustments. The destabilizing effect of oscillating visual backgrounds and smooth pursuits may be related to more challenging conditions for determining body shifts from retinal flow, and more complex extraocular signals, respectively. Because the older participants matched the young group's performance in all conditions, decreases of posture and gaze control during stance may not be a direct consequence of healthy aging. Further research examining extraocular and retinal mechanisms of balance control and the effects of eye movements, during locomotion, is needed to better inform fall prevention interventions. PMID:27695412
Edinger, Janick; Pai, Dinesh K; Spering, Miriam
2017-01-01
The neural control of pursuit eye movements to visual textures that simultaneously translate and rotate has largely been neglected. Here we propose that pursuit of such targets, texture pursuit, is a fully three-dimensional task that utilizes all three degrees of freedom of the eye, including torsion. Head-fixed healthy human adults (n = 8) tracked a translating and rotating random dot pattern, shown on a computer monitor, with their eyes. Horizontal, vertical, and torsional eye positions were recorded with a head-mounted eye tracker. The torsional component of pursuit is a function of the rotation of the texture, aligned with its visual properties. We observed distinct behaviors between trials in which stimulus rotation was in the same direction as that of a rolling ball ("natural") and those with the opposite rotation ("unnatural"): natural rotation enhanced and unnatural rotation reversed torsional velocity during pursuit, as compared to torsion triggered by a nonrotating random dot pattern. Natural rotation also triggered pursuit with a higher horizontal velocity gain and fewer and smaller corrective saccades. Furthermore, we show that horizontal corrective saccades are synchronized with torsional corrective saccades, indicating temporal coupling of horizontal and torsional saccade control. Pursuit eye movements thus have a torsional component that depends on the visual stimulus. Horizontal and torsional eye movements are separated in the motor periphery. Our findings suggest that translational and rotational motion signals might be coordinated in descending pursuit pathways.
Spatial constancy mechanisms in motor control
Medendorp, W. Pieter
2011-01-01
The success of the human species in interacting with the environment depends on the ability to maintain spatial stability despite the continuous changes in sensory and motor inputs owing to movements of eyes, head and body. In this paper, I will review recent advances in the understanding of how the brain deals with the dynamic flow of sensory and motor information in order to maintain spatial constancy of movement goals. The first part summarizes studies in the saccadic system, showing that spatial constancy is governed by a dynamic feed-forward process, by gaze-centred remapping of target representations in anticipation of and across eye movements. The subsequent sections relate to other oculomotor behaviour, such as eye–head gaze shifts, smooth pursuit and vergence eye movements, and their implications for feed-forward mechanisms for spatial constancy. Work that studied the geometric complexities in spatial constancy and saccadic guidance across head and body movements, distinguishing between self-generated and passively induced motion, indicates that both feed-forward and sensory feedback processing play a role in spatial updating of movement goals. The paper ends with a discussion of the behavioural mechanisms of spatial constancy for arm motor control and their physiological implications for the brain. Taken together, the emerging picture is that the brain computes an evolving representation of three-dimensional action space, whose internal metric is updated in a nonlinear way, by optimally integrating noisy and ambiguous afferent and efferent signals. PMID:21242137
Learning-based saliency model with depth information.
Ma, Chih-Yao; Hang, Hsueh-Ming
2015-01-01
Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
Context effects on smooth pursuit and manual interception of a disappearing target.
Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam
2017-07-01
In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments (n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points.
Language-driven anticipatory eye movements in virtual reality.
Eichert, Nicole; Peeters, David; Hagoort, Peter
2018-06-01
Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.
Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys
Liu, Bing
2017-01-01
Despite the enduring interest in motion integration, a direct measure of the space–time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus–response correlations across space and time, computing the linear space–time filter for global motion direction in humans and monkeys. Although the filter is derived from eye movements, we find that it predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short-integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms. SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space–time function that best predicts both eye movements and perception of translating dot patterns. We found that the visual system does not appear to use all available motion signals uniformly, but rather weights motion preferentially in a narrow band at approximately one-third the radius of the stimulus. Although not universal, the filter predicts responses to other types of stimuli, demonstrating a remarkable degree of generalization that may lead to a deeper understanding of visual motion processing. PMID:28003348
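The stimulus–response correlation technique at the heart of this study can be illustrated with a one-dimensional toy version: trial-by-trial fluctuations in the stimulus are correlated with the eye-velocity response at each time lag to recover a linear temporal filter. This is a minimal sketch of the general reverse-correlation idea, not the authors' 3D estimation procedure; the reduction to one dimension is an assumption made here for brevity.

```python
# Toy reverse correlation: recover a linear temporal filter from
# stimulus fluctuations and the eye-velocity response they evoke.
import numpy as np

def temporal_filter(stim, resp, max_lag):
    """Stimulus-response cross-correlation at lags 0..max_lag-1."""
    stim = stim - stim.mean()
    resp = resp - resp.mean()
    n = len(stim)
    return np.array([np.dot(stim[:n - k], resp[k:]) / n
                     for k in range(max_lag)])
```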
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim, small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, HVS involves at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which leads to a number of drawbacks. This paper proposes a novel method that combines all three HVS mechanisms. First, a group of Difference of Gaussians (DOG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target (the attention point) in order to further enhance the dim, small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced, for the first time, to predict the attention point of the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim, small targets.
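Since the abstract names its three processing stages explicitly, a compact sketch may help make them concrete. The Python fragment below is an illustration only, not the authors' implementation; all parameter values (filter sigmas, attention window width, PID gains) are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(image, sigma_center=1.0, sigma_surround=3.0):
    """Contrast mechanism: Difference-of-Gaussians band-pass filtering."""
    img = image.astype(float)
    return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)

def attention_window(shape, point, sigma=15.0):
    """Visual attention: a Gaussian window centered on the attention point."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return np.exp(-((ys - point[0])**2 + (xs - point[1])**2) / (2 * sigma**2))

class PIDPredictor:
    """Eye-movement mechanism: PID prediction of the next attention point."""
    def __init__(self, kp=0.6, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(2)
        self.prev_error = np.zeros(2)

    def predict(self, current, measured):
        current = np.asarray(current, float)
        error = np.asarray(measured, float) - current
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return current + self.kp * error + self.ki * self.integral + self.kd * derivative
```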
Application of eye movement measuring system OBER 2 to medicine and technology
NASA Astrophysics Data System (ADS)
Ober, Jozef; Hajda, Janusz; Loska, Jacek; Jamicki, Michal
1997-08-01
The OBER 2 is an infrared eye movement measuring system that works with IBM PC compatible computers. As one of the safest systems for measuring eye movement, it uses very short infrared flashes (80 microseconds per measurement point). The system has an advanced analog-digital controller with background suppression and prediction mechanisms that eliminate slow changes and fluctuations of external illumination at frequencies up to 100 Hz, with an effectiveness better than 40 dB. The active measurement axis, the sampling rate (25-4000 Hz), and the start and stop of measurement can all be set from the PC, making real-time control of the external environment possible. With proper gain control, a high temporal and positional resolution of 0.5 minute of arc can be achieved even for large eye movement amplitudes (plus or minus 20 degrees of visual angle). The whole communication system can also be driven directly by eye movement in real time. The key to practical application is the possibility of automatically selecting the most essential elements of eye movement, both those individual to each person and those that occur for every person in particular life situations independently of personal features. Hence, one ongoing research topic is personal identification based on individual eye movement features. Another is a research project on detecting the onset of sleep, which could be applied to warn drivers before they fall asleep while driving. Combined with a suitable expert system, the measuring system can also be used to detect dyslexia and other disorders of the visual system.
Depth-estimation-enabled compound eyes
NASA Astrophysics Data System (ADS)
Lee, Woong-Bi; Lee, Heung-No
2018-04-01
Most animals that have compound eyes determine object distances by using monocular cues, especially motion parallax. In artificial compound eye imaging systems inspired by natural compound eyes, object depths are typically estimated by measuring optic flow; however, this requires mechanical movement of the compound eyes or additional acquisition time. In this paper, we propose a method for estimating object depths in a monocular compound eye imaging system based on the computational compound eye (COMPU-EYE) framework. In the COMPU-EYE system, acceptance angles are considerably larger than interommatidial angles, causing overlap between the ommatidial receptive fields. In the proposed depth estimation technique, the disparities between these receptive fields are used to determine object distances. We demonstrate that the proposed depth estimation technique can estimate the distances of multiple objects.
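For readers unfamiliar with disparity-based depth recovery, the underlying relation is classic triangulation: depth is inversely proportional to disparity. The sketch below states that generic relation only; the COMPU-EYE formulation over overlapping ommatidial receptive fields differs in detail, and the numbers are illustrative.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic triangulation: depth = baseline * focal length / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Example: a 1 mm baseline, a 500 px equivalent focal length, and a
# 4 px disparity between overlapping receptive fields give 0.125 m.
print(depth_from_disparity(0.001, 500, 4))
```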
Theta synchronization networks emerge during human object-place memory encoding.
Sato, Naoyuki; Yamaguchi, Yoko
2007-03-26
Recent rodent hippocampus studies have suggested that theta rhythm-dependent neural dynamics ('theta phase precession') is essential for on-line memory formation. A computational study indicated that phase precession enables a human object-place association memory with voluntary eye movements, although it is still an open question whether the human brain uses these dynamics. Here we elucidated subsequent-memory-correlated activities in human scalp electroencephalography in an object-place association memory task designed according to the former computational study. Our results successfully demonstrated that subsequent memory recall is characterized by an increase in theta power and coherence and, further, that multiple theta synchronization networks emerge. These findings suggest that humans share theta dynamics with rodents in episodic memory formation.
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Mulligan, J. B.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We have shown that moving a plaid in an asymmetric window biases the perceived direction of motion (Beutter, Mulligan & Stone, ARVO 1994). We now explore whether these biased motion signals might also drive the smooth eye-movement response by comparing the perceived and tracked directions. The human smooth oculomotor response to moving plaids appears to be driven by the perceived rather than the veridical direction of motion. This suggests that human motion perception and smooth eye movements share underlying neural motion-processing substrates as has already been shown to be true for monkeys.
Gaze-contingent displays: a review.
Duchowski, Andrew T; Cournia, Nathan; Murphy, Hunter
2004-12-01
Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar goals. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
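A screen-based GCD of the kind reviewed here can be approximated in a few lines: display resolution is degraded with eccentricity from the tracked gaze point. The sketch below uses a simple two-level fovea/periphery blend; the radius and blur values are illustrative assumptions and are not drawn from the review.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_contingent(image, gaze_xy, fovea_radius=80, blur_sigma=4.0):
    """Blend a sharp fovea with a blurred periphery around the gaze point."""
    img = image.astype(float)
    blurred = gaussian_filter(img, blur_sigma)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    ecc = np.hypot(ys - gaze_xy[1], xs - gaze_xy[0])
    # Weight rises smoothly from 0 (fovea) to 1 (periphery).
    w = np.clip((ecc - fovea_radius) / fovea_radius, 0.0, 1.0)
    return (1 - w) * img + w * blurred
```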
Real Time Eye Tracking and Hand Tracking Using Regular Video Cameras for Human Computer Interaction
2011-01-01
…understand us. More specifically, the computer should be able to infer what we wish to see, do, and interact with through our movements, gestures, and …in depth freedom. Our system differs from the majority of other systems in that we do not use infrared, stereo-cameras, specially-constructed…
Assisting autistic children with wireless EOG technology.
Rapela, Joaquin; Lin, Tsong-Yan; Westerfield, Marissa; Jung, Tzyy-Ping; Townsend, Jeanne
2012-01-01
We propose a novel intervention to train the speed and accuracy of attention orienting and eye movements in Autism Spectrum Disorder (ASD). Training eye movements and attention could not only affect those important functions directly, but could also result in broader improvement of social communication skills. To this end, we describe a system that would allow children with ASD to improve their fixation skills while playing a computer game controlled by an eye tracker. Because this intervention will probably be time consuming, the system should be designed for use at home. To make this possible, we propose an implementation based on wireless, dry electrooculography (EOG) technology. If successful, this system would establish an approach to therapy that could improve clinical and behavioral function in children and adults with ASD. As our initial steps in this direction, we describe here the design of a computer game to be used in this system, and predictions of gaze position from EOG data recorded while a subject played this game.
iTemplate: A template-based eye movement data analysis approach.
Xiao, Naiqi G; Lee, Kang
2018-02-08
Current eye movement data analysis methods rely on defining areas of interest (AOIs). Because AOIs are created and modified manually, variance in their size, shape, and location is unavoidable. These variances affect not only the consistency of the AOI definitions, but also the validity of eye movement analyses based on the AOIs. To reduce the variance in AOI creation and modification and to achieve a procedure that processes eye movement data with high precision and efficiency, we propose a template-based eye movement data analysis method. Using a linear transformation algorithm, this method registers the eye movement data from each individual stimulus to a template. Thus, users only need to create one set of AOIs for the template in order to analyze eye movement data, rather than creating a unique set of AOIs for every individual stimulus. This change greatly reduces the error caused by variance in manually created AOIs and boosts the efficiency of the data analysis. Furthermore, this method can help researchers prepare eye movement data for advanced analysis approaches, such as iMap. We have developed software (iTemplate) with a graphic user interface to make this analysis method available to researchers.
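The registration step can be illustrated with a least-squares affine fit. The landmark-based formulation below is an assumption of this sketch, since the abstract specifies only a linear transformation algorithm; it is not iTemplate's actual code.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping stimulus landmarks (src)
    onto template landmarks (dst)."""
    src = np.hstack([np.asarray(src_pts, float),
                     np.ones((len(src_pts), 1))])      # homogeneous coords
    A, *_ = np.linalg.lstsq(src, np.asarray(dst_pts, float), rcond=None)
    return A                                           # shape (3, 2)

def register_gaze(gaze_xy, A):
    """Map raw gaze samples into template coordinates."""
    g = np.hstack([np.asarray(gaze_xy, float),
                   np.ones((len(gaze_xy), 1))])
    return g @ A
```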
Eye movement identification based on accumulated time feature
NASA Astrophysics Data System (ADS)
Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua
2017-06-01
Eye movement is a relatively new kind of feature for biometric recognition, with many advantages compared with other features such as fingerprint, face, and iris. It is not only a static characteristic but also a combination of brain activity and muscle behavior, which makes it effective in preventing spoofing attacks. In addition, eye movements can be combined with face, iris, and other features recorded from the face region in multimodal systems. In this paper, we conduct an exploratory study of eye movement identification based on the eye movement datasets provided by Komogortsev et al. in 2011, using different classification methods. Saccade and fixation durations are extracted from the eye movement data as features. Furthermore, a performance analysis was conducted on different classification methods, including BP, RBF, and Elman neural networks and SVM, in order to provide a reference for future research in this field.
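A minimal version of the duration-based feature idea might look as follows: samples are labeled saccade or fixation by a simple velocity threshold (an I-VT style rule), durations are accumulated per recording, and a classifier such as an SVM is trained on the resulting vectors. The threshold value and feature layout are assumptions of this sketch, not those of the study.

```python
import numpy as np
from sklearn.svm import SVC

def duration_features(gaze_deg, fs=250.0, vel_thresh=30.0):
    """Return [total fixation time, total saccade time] in seconds,
    assuming gaze positions in degrees sampled at fs Hz."""
    vel = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs  # deg/s
    saccade = vel > vel_thresh
    return np.array([np.sum(~saccade), np.sum(saccade)]) / fs

# X: one feature vector per recording; y: subject identity labels.
# clf = SVC(kernel="rbf").fit(X, y)
```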
Anatomy of emotion: a 3D study of facial mimicry.
Ferrario, V F; Sforza, C
2007-01-01
Alterations in facial motion severely impair the quality of life and social interaction of patients, so an objective grading of facial function is necessary. A method for the non-invasive detection of 3D facial movements was developed. Sequences of six standardized facial movements (maximum smile; free smile; surprise with closed mouth; surprise with open mouth; right side eye closure; left side eye closure) were recorded in 20 healthy young adults (10 men, 10 women) using an optoelectronic motion analyzer. For each subject, 21 cutaneous landmarks were identified by 2-mm reflective markers, and their 3D movements during each facial animation were computed. Three repetitions of each expression were recorded (within-session error), and four separate sessions were used (between-session error). To assess the within-session error, the technical error of the measurement (random error, TEM) was computed separately for each sex, movement and landmark. To assess the between-session repeatability, the standard deviation among the mean displacements of each landmark (four independent sessions) was computed for each movement. TEM for the single landmarks ranged between 0.3 and 9.42 mm (within-session error). The sex- and movement-related differences were statistically significant (two-way analysis of variance, p=0.003 for the sex comparison, p=0.009 for the six movements, p<0.001 for the sex x movement interaction). Among the four separate (independent) sessions, left eye closure had the worst repeatability and right eye closure the best; the differences among movements were statistically significant (one-way analysis of variance, p=0.041). In conclusion, the current protocol demonstrated sufficient repeatability for future clinical application. Great care should be taken to ensure consistent marker positioning across all subjects.
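The TEM statistic used above is conventionally computed from paired repeated measurements as sqrt(sum(d^2) / 2n), where d is the difference between the two repetitions (Dahlberg's formula); the sketch below assumes that convention, since the abstract does not spell out the exact formula used.

```python
import numpy as np

def tem(rep1, rep2):
    """Technical error of measurement for two repeated measurement sets."""
    d = np.asarray(rep1, float) - np.asarray(rep2, float)
    return np.sqrt(np.sum(d**2) / (2 * len(d)))

print(tem([10.2, 8.1, 12.4], [10.0, 8.5, 12.1]))  # mm, illustrative values
```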
NASA Astrophysics Data System (ADS)
Tornow, Ralf P.; Milczarek, Aleksandra; Odstrcilik, Jan; Kolar, Radim
2017-07-01
A parallel video ophthalmoscope was developed to acquire short video sequences (25 fps, 250 frames) of both eyes simultaneously, with exact synchronization. The video sequences were registered off-line to compensate for eye movements. From the registered sequences, dynamic parameters such as cardiac-cycle-induced reflection changes and eye movements can be calculated and compared between the two eyes.
Lewis, Richard L; Shvartsman, Michael; Singh, Satinder
2013-07-01
We explore the idea that eye-movement strategies in reading are precisely adapted to the joint constraints of task structure, task payoff, and processing architecture. We present a model of saccadic control that separates a parametric control policy space from a parametric machine architecture, the latter based on a small set of assumptions derived from research on eye movements in reading (Engbert, Nuthmann, Richter, & Kliegl, 2005; Reichle, Warren, & McConnell, 2009). The eye-control model is embedded in a decision architecture (a machine and policy space) that is capable of performing a simple linguistic task integrating information across saccades. Model predictions are derived by jointly optimizing the control of eye movements and task decisions under payoffs that quantitatively express different desired speed-accuracy trade-offs. The model yields distinct eye-movement predictions for the same task under different payoffs, including single-fixation durations, frequency effects, accuracy effects, and list position effects, and their modulation by task payoff. The predictions are compared to, and found to accord with, eye-movement data obtained from human participants performing the same task under the same payoffs, but they are found not to accord as well when the assumptions concerning payoff optimization and processing architecture are varied. These results extend work on rational analysis of oculomotor control and adaptation of reading strategy (Bicknell & Levy; McConkie, Rayner, & Wilson, 1973; Norris, 2009; Wotschack, 2009) by providing evidence for adaptation at low levels of saccadic control that is shaped by quantitatively varying task demands and the dynamics of processing architecture.
Effects of reward on the accuracy and dynamics of smooth pursuit eye movements.
Brielmann, Aenne A; Spering, Miriam
2015-08-01
Reward modulates behavioral choices and biases goal-oriented behavior, such as eye or hand movements, toward locations or stimuli associated with higher rewards. We investigated reward effects on the accuracy and timing of smooth pursuit eye movements in 4 experiments. Eye movements were recorded in participants tracking a moving visual target on a computer monitor. Before target motion onset, a monetary reward cue indicated whether participants could earn money by tracking accurately, or whether the trial was unrewarded (Experiments 1 and 2, n = 11 each). Reward significantly improved eye-movement accuracy across different levels of task difficulty. Improvements were seen even in the earliest phase of the eye movement, within 70 ms of tracking onset, indicating that reward impacts visual-motor processing at an early level. We obtained similar findings when reward was not precued but explicitly associated with the pursuit target (Experiment 3, n = 16); critically, these results were not driven by stimulus prevalence or other factors such as preparation or motivation. Numerical cues (Experiment 4, n = 9) were not effective.
Modifications of spontaneous oculomotor activity in microgravitational conditions
NASA Astrophysics Data System (ADS)
Kornilova, L. N.; Goncharenko, A. M.; Polyakov, V. V.; Grigorova, V.; Manev, A.
Investigations of spontaneous oculomotor activity were carried out before and after space flight (five cosmonauts) and during space flight (two cosmonauts), on the 3rd, 5th and 164th days of flight. Oculomotor activity was recorded by electrooculography on the automated data acquisition and processing system "Zora", based on personal computers. During and after the space flight, all the cosmonauts, whether with eyes closed or with eyes open and dark-goggled, showed a substantial increase in movement amplitude when moving the eyes to extreme positions, especially in the vertical direction, the occurrence of corrective saccadic movements (or nystagmus), and an increase in the duration of fixation reactions.
Intrinsic dimensionality predicts the saliency of natural dynamic scenes.
Vig, Eleonora; Dorr, Michael; Martinetz, Thomas; Barth, Erhardt
2012-06-01
Since visual attention-based computer vision applications have gained popularity, ever more complex, biologically inspired models seem to be needed to predict salient locations (or interest points) in naturalistic scenes. In this paper, we explore how far one can go in predicting eye movements by using only basic signal processing, such as image representations derived from efficient coding principles, and machine learning. To this end, we gradually increase the complexity of a model from simple single-scale saliency maps computed on grayscale videos to spatiotemporal multiscale and multispectral representations. Using a large collection of eye movements on high-resolution videos, supervised learning techniques fine-tune the free parameters whose addition is inevitable with increasing complexity. The proposed model, although very simple, demonstrates significant improvement in predicting salient locations in naturalistic videos over four selected baseline models and two distinct data labeling scenarios.
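The intrinsic-dimensionality idea in the title can be made concrete with a spatial structure tensor: locations where both of its eigenvalues are large are intrinsically two-dimensional (corners, transients) and are candidate salient points. The paper operates on spatiotemporal tensors of whole videos; the spatial-only sketch below is a simplification with illustrative parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def i2d_saliency(image, sigma=2.0):
    """Smaller eigenvalue of the 2D structure tensor, large at i2D points."""
    img = image.astype(float)
    ix, iy = sobel(img, axis=1), sobel(img, axis=0)
    jxx = gaussian_filter(ix * ix, sigma)
    jyy = gaussian_filter(iy * iy, sigma)
    jxy = gaussian_filter(ix * iy, sigma)
    tr = jxx + jyy
    det = jxx * jyy - jxy**2
    return tr / 2 - np.sqrt(np.maximum((tr / 2)**2 - det, 0.0))
```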
Comprehensive Oculomotor Behavioral Response Assessment (COBRA)
NASA Technical Reports Server (NTRS)
Stone, Leland S. (Inventor); Liston, Dorion B. (Inventor)
2017-01-01
An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially-organized, appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by one without a high degree of scientific sophistication or extensive training.
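Two of the oculometrics named above, steady-state gain and pursuit latency, can be computed from a single step-ramp trial roughly as follows. The full COBRA battery is far more extensive; the window bounds and threshold in this sketch are illustrative assumptions.

```python
import numpy as np

def steady_state_gain(eye_vel, target_speed, fs, t_start=0.4, t_end=0.8):
    """Mean eye speed divided by target speed over a steady-state window."""
    seg = eye_vel[int(t_start * fs):int(t_end * fs)]
    return np.mean(np.abs(seg)) / target_speed

def pursuit_latency(eye_vel, target_speed, fs, thresh_frac=0.1):
    """Time at which eye speed first exceeds a fraction of target speed
    (returns 0 if the threshold is never crossed; not handled here)."""
    idx = np.argmax(np.abs(eye_vel) > thresh_frac * target_speed)
    return idx / fs
```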
Human Visual Search Does Not Maximize the Post-Saccadic Probability of Identifying Targets
Morvan, Camille; Maloney, Laurence T.
2012-01-01
Researchers have conjectured that eye movements during visual search are selected to minimize the number of saccades. The optimal Bayesian eye movement strategy minimizing saccades does not simply direct the eye to whichever location is judged most likely to contain the target but makes use of the entire retina as an information gathering device during each fixation. Here we show that human observers do not minimize the expected number of saccades in planning saccades in a simple visual search task composed of three tokens. In this task, the optimal eye movement strategy varied, depending on the spacing between tokens (in the first experiment) or the size of tokens (in the second experiment), and changed abruptly once the separation or size surpassed a critical value. None of our observers changed strategy as a function of separation or size. Human performance fell far short of ideal, both qualitatively and quantitatively. PMID:22319428
Pursuit tracks chase: exploring the role of eye movements in the detection of chasing
Träuble, Birgit
2015-01-01
We explore the role of eye movements in a chase detection task. Unlike previous studies, which focused on overall performance as indicated by response speed and chase detection accuracy, we decompose the search process into gaze events, such as smooth eye movements, and use a data-driven approach to describe these gaze events separately. We measured the eye movements of four human subjects engaged in a chase detection task displayed on a computer screen. The subjects were asked to detect two chasing rings among twelve other randomly moving rings. Using principal component analysis and support vector machines, we examined the template and classification images that describe various stages of the detection process. We showed that the subjects mostly search for pairs of rings that move one after another in the same direction at a distance of 3.5–3.8 degrees. To find such pairs, the subjects first looked for regions with a high ring density and then pursued the rings in those regions. Most of these groups consisted of two rings. Three subjects preferred to pursue a pair as a single object, while the remaining subject pursued the group by alternating gaze between the two individual rings. In the discussion, we argue that subjects do not compare the movement of the pursued pair to a single preformed template that describes chasing motion. Rather, subjects bring certain hypotheses about what motion may qualify as a chase and then, through feedback, learn to look for a motion pattern that maximizes their performance. PMID:26401454
Teufel, Julian; Bardins, S; Spiegel, Rainer; Kremmyda, O; Schneider, E; Strupp, M; Kalla, R
2016-01-04
Patients with downbeat nystagmus syndrome suffer from oscillopsia, which leads to unstable visual perception and therefore impaired visual acuity. The aim of this study was to use real-time computer-based visual feedback to compensate for the destabilizing slow-phase eye movements. The patients sat in front of a computer screen with the head fixed on a chin rest. Eye movements were recorded by an eye tracking system (EyeSeeCam®). We tested visual acuity with a fixed Landolt C (static condition) and during a real-time feedback-driven condition (dynamic), with gaze straight ahead and at 20° sideward gaze. In the dynamic condition, the Landolt C moved according to the slow-phase eye velocity of the downbeat nystagmus. The Shapiro-Wilk test was used to test for normal distribution, and one-way ANOVA for comparisons. Ten patients with downbeat nystagmus were included in the study. Median age was 76 years and median duration of symptoms was 6.3 years (SD ± 3.1 years). The mean slow-phase velocity was moderate during gaze straight ahead (1.44°/s, SD ± 1.18°/s) and increased significantly in sideward gaze (mean left 3.36°/s; right 3.58°/s). In gaze straight ahead, we found no difference between the static and the feedback-driven condition. In sideward gaze, visual acuity improved in five out of ten subjects during the feedback-driven condition (p = 0.043). This study provides proof of concept that non-invasive, real-time, computer-based visual feedback can compensate for the slow-phase velocity in downbeat nystagmus. Therefore, real-time visual feedback may be a promising aid for patients suffering from oscillopsia and impaired text reading on screen. Recent technological advances in the area of virtual reality displays might soon render this approach feasible in fully mobile settings.
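Per display frame, the feedback loop described above reduces to shifting the optotype along with the estimated slow-phase eye velocity so that it stays approximately stable on the retina. In the sketch below, `read_eye_velocity` is a hypothetical stand-in for the eye tracker interface, and the 60 Hz frame rate is an assumption.

```python
def update_optotype(pos_deg, slow_phase_vel_deg_s, dt_s):
    """Shift the Landolt C along with the nystagmus slow phase."""
    return pos_deg + slow_phase_vel_deg_s * dt_s

# Schematic main loop (tracker interface is hypothetical):
# while presenting:
#     v = read_eye_velocity()                # deg/s, slow-phase estimate
#     pos = update_optotype(pos, v, 1 / 60)  # assuming a 60 Hz display
```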
ERIC Educational Resources Information Center
Lin, John Jr-Hung; Lin, Sunny S. J.
2014-01-01
The present study investigated (a) whether the perceived cognitive load was different when geometry problems with various levels of configuration comprehension were solved and (b) whether eye movements in comprehending geometry problems showed sources of cognitive loads. In the first investigation, three characteristics of geometry configurations…
On Biometrics With Eye Movements.
Zhang, Youming; Juhola, Martti
2017-09-01
Eye movements are a relatively novel data source for biometric identification. As video cameras used for eye tracking become smaller and more efficient, this data source could offer interesting opportunities for the development of eye movement biometrics. In this paper, we study primarily biometric identification, seen as a classification task with multiple classes, and secondarily biometric verification, considered as binary classification. Our research is based on saccadic eye movement signal measurements from 109 young subjects. In order to test the measured data, we use a procedure of biometric identification according to the one-versus-one (subject) principle. In a development from our previous research, which also involved biometric verification based on saccadic eye movements, we now apply another eye movement tracker device with a higher sampling frequency of 250 Hz. The results obtained are good, with correct identification rates of 80-90% at best.
Touch and Gesture-Based Language Learning: Some Possible Avenues for Research and Classroom Practice
ERIC Educational Resources Information Center
Reinders, Hayo
2014-01-01
Our interaction with digital resources is becoming increasingly based on touch, gestures, and now also eye movement. Many everyday consumer electronics products already include touch-based interfaces, from e-book readers to tablets, and from the latest personal computers to the GPS system in your car. What implications do these new forms of…
Generating and Describing Affective Eye Behaviors
NASA Astrophysics Data System (ADS)
Mao, Xia; Li, Zheng
The manner of a person's eye movement conveys much nonverbal information and emotional intent beyond speech. This paper describes work on expressing emotion through eye behaviors in virtual agents, based on parameters selected from the AU-coded facial expression database and real-time eye movement data (pupil size, blink rate and saccade). A rule-based approach that generates primary emotions (joyful, sad, angry, afraid, disgusted and surprised) and intermediate emotions (emotions that can be represented as the mixture of two primary emotions) using MPEG4 FAPs (facial animation parameters) is introduced. Meanwhile, based on our research, a scripting tool named EEMML (Emotional Eye Movement Markup Language), which enables authors to describe and generate emotional eye movement of virtual agents, is proposed.
Anticipatory Eye Movements in Interleaving Templates of Human Behavior
NASA Technical Reports Server (NTRS)
Matessa, Michael
2004-01-01
Performance modeling has been made easier by architectures that package psychological theory for reuse at useful levels of abstraction. CPM-GOMS uses templates of behavior to package, at a task level (e.g., mouse move-click, typing), predictions of lower-level cognitive, perceptual, and motor resource use. CPM-GOMS also has a theory for interleaving resource use between templates; one example of interleaving is anticipatory eye movements. This paper describes the use of ACT-Stitch, a framework for translating CPM-GOMS templates and interleaving theory into ACT-R, to model anticipatory eye movements in skilled behavior. The anticipatory eye movements explain performance in a well-practiced perceptual/motor task, and the interleaving theory is supported by results from an eye-tracking experiment.
Destabilizing effects of visual environment motions simulating eye movements or head movements
NASA Technical Reports Server (NTRS)
White, Keith D.; Shuman, D.; Krantz, J. H.; Woods, C. B.; Kuntz, L. A.
1991-01-01
In the present paper, we explore the effects on human observers of exposure to a visual virtual environment that has been slaved to simulate the user's head movements or eye movements. Specifically, we have studied the capacity of our experimental subjects to maintain stable spatial orientation while their entire visible surroundings were moved using the parameters of the subjects' natural movements. Our index of the subjects' spatial orientation was the extent of involuntary body sway while attempting to stand still, as measured by translations and rotations of the head. We also observed, informally, their symptoms of motion sickness.
NASA Technical Reports Server (NTRS)
Angelaki, Dora E.
2003-01-01
Previous studies have reported that the translational vestibuloocular reflex (TVOR) follows a three-dimensional (3D) kinematic behavior that is more similar to visually guided eye movements, like pursuit, than to the rotational VOR (RVOR). Accordingly, TVOR rotation axes tilted with eye position toward an eye-fixed reference frame rather than staying relatively fixed in the head as in the RVOR. This difference arises because, contrary to the RVOR, where peripheral image stability is functionally important, the TVOR, like pursuit and saccades, acts to stabilize images on the fovea. During most natural head and body movements, both VORs are simultaneously activated. In the present study, we have investigated in rhesus monkeys the 3D kinematics of the combined VOR during yaw rotation about eccentric axes. The experiments were motivated by, and quantitatively compared with, the predictions of two distinct hypotheses. According to the first (fixed-rule) hypothesis, an eye-position-dependent torsion is computed downstream of a site for RVOR/TVOR convergence, and the combined VOR axis would tilt through an angle that is proportional to gaze angle and independent of the relative RVOR/TVOR contributions to the total eye movement. This hypothesis would be consistent with the recently postulated mechanical constraints imposed by extraocular muscle pulleys. According to the second (image-stabilization) hypothesis, an eye-position-dependent torsion is computed separately for the RVOR and the TVOR components, implying processing that takes place upstream of a site for RVOR/TVOR convergence. The latter hypothesis is based on the functional requirement that the 3D kinematics of the combined VOR should be governed by the need to keep images stable on the fovea, with slip on the peripheral retina depending on the different functional goals of the two VORs. In contrast to the fixed-rule hypothesis, the data demonstrated a variable eye-position-dependent torsion for the combined VOR that was different for synergistic versus antagonistic RVOR/TVOR interactions. Furthermore, not only were the eye-velocity tilt slopes of the combined VOR as much as 10 times larger than what would be expected based on extraocular muscle pulley location, but eye velocity during antagonistic RVOR/TVOR combinations often tilted opposite to gaze. These results are qualitatively and quantitatively consistent with the image-stabilization hypothesis, suggesting that the eye-position-dependent torsion is computed separately for the RVOR and the TVOR and that the 3D kinematics of the combined VOR are dependent on functional rather than mechanical constraints.
Effects of Saccadic Bilateral Eye Movements on Episodic and Semantic Autobiographical Memory Fluency
Parker, Andrew; Parkin, Adam; Dagnall, Neil
2013-01-01
Performing a sequence of fast saccadic horizontal eye movements has been shown to facilitate performance on a range of cognitive tasks, including the retrieval of episodic memories. One explanation for these effects is based on the hypothesis that saccadic eye movements increase hemispheric interaction, and that such interactions are important for particular types of memory. The aim of the current research was to assess the effect of horizontal saccadic eye movements on the retrieval of both episodic autobiographical memory (event/incident based memory) and semantic autobiographical memory (fact based memory) over recent and more distant time periods. It was found that saccadic eye movements facilitated the retrieval of episodic autobiographical memories (over all time periods) but not semantic autobiographical memories. In addition, eye movements did not enhance the retrieval of non-autobiographical semantic memory. This finding illustrates a dissociation between the episodic and semantic characteristics of personal memory and is considered within the context of hemispheric contributions to episodic memory performance. PMID:24133435
Upward gaze and head deviation with frontal eye field stimulation.
Kaiboriboon, Kitti; Lüders, Hans O; Miller, Jonathan P; Leigh, R John
2012-03-01
Using electrical stimulation of the deep, most caudal part of the right frontal eye field (FEF), we demonstrate a novel pattern of vertical (upward) eye movement that was previously thought possible only by stimulating both frontal eye fields simultaneously. If stimulation was started while the subject looked laterally, the initial eye movement was back to the midline, followed by upward deviation. Our finding challenges the current view of topological organisation in the human FEF and may have general implications for concepts of topological organisation of the motor cortex, since sustained stimulation also induced upward head movements as a component of the vertical gaze shift. [Published with video sequences].
Motion perception: behavior and neural substrate.
Mather, George
2011-05-01
Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110
Yang, Jin; Lee, Joonyeol; Lisberger, Stephen G.
2012-01-01
Sensory-motor behavior results from a complex interaction of noisy sensory data with priors based on recent experience. By varying the stimulus form and contrast for the initiation of smooth pursuit eye movements in monkeys, we show that visual motion inputs compete with two independent priors: one prior biases eye speed toward zero; the other prior attracts eye direction according to the past several days’ history of target directions. The priors bias the speed and direction of the initiation of pursuit for the weak sensory data provided by the motion of a low-contrast sine wave grating. However, the priors have relatively little effect on pursuit speed and direction when the visual stimulus arises from the coherent motion of a high-contrast patch of dots. For any given stimulus form, the mean and variance of eye speed co-vary in the initiation of pursuit, as expected for signal-dependent noise. This relationship suggests that pursuit implements a trade-off between movement accuracy and variation, reducing both when the sensory signals are noisy. The tradeoff is implemented as a competition of sensory data and priors that follows the rules of Bayesian estimation. Computer simulations show that the priors can be understood as direction specific control of the strength of visual-motor transmission, and can be implemented in a neural-network model that makes testable predictions about the population response in the smooth eye movement region of the frontal eye fields. PMID:23223286
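The competition between noisy sensory data and priors described above has a simple closed form in the Gaussian case: the posterior estimate is a precision-weighted average, so noisier (low-contrast) evidence is pulled harder toward the zero-speed prior. This is a toy illustration of the Bayesian rule the abstract invokes, not the authors' model; the numbers are illustrative.

```python
def posterior_speed(measured, sigma_sensory, sigma_prior, prior_mean=0.0):
    """Precision-weighted combination of a measurement and a prior."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)
    return w * measured + (1 - w) * prior_mean

# Reliable (high-contrast) input stays near the measurement:
print(posterior_speed(10.0, sigma_sensory=1.0, sigma_prior=4.0))  # ~9.4 deg/s
# Noisy (low-contrast) input is drawn toward the zero-speed prior:
print(posterior_speed(10.0, sigma_sensory=8.0, sigma_prior=4.0))  # 2.0 deg/s
```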
Transitions between discrete and rhythmic primitives in a unimanual task
Sternad, Dagmar; Marino, Hamal; Charles, Steven K.; Duarte, Marcos; Dipietro, Laura; Hogan, Neville
2013-01-01
Given the vast complexity of human actions and interactions with objects, we proposed that control of sensorimotor behavior may utilize dynamic primitives. However, greater computational simplicity may come at the cost of reduced versatility. Evidence for primitives may be garnered by revealing such limitations. This study tested subjects performing a sequence of progressively faster discrete movements in order to "stress" the system. We hypothesized that the increasing pace would elicit a transition to rhythmic movements, assumed to be computationally and neurally more efficient. Abrupt transitions between the two types of movements would support the hypothesis that rhythmic and discrete movements are distinct primitives. Ten subjects performed planar point-to-point arm movements paced by a metronome: starting at 2 s, the metronome intervals decreased by 36 ms per cycle to 200 ms, stayed at 200 ms for several cycles, then increased by similar increments. Instructions emphasized inserting explicit stops between movements, with a dwell duration equal to the movement time. The experiment was performed with eyes open and closed, and with short and long metronome sounds, the latter explicitly specifying the dwell duration. Results showed that subjects matched instructed movement times but did not preserve the dwell times. Rather, they progressively reduced dwell time to zero, transitioning to continuous rhythmic movements before movement times reached their minimum. The acceleration profiles showed an abrupt change between discrete and rhythmic profiles. The loss of dwell time occurred earlier with the long auditory specification, when subjects also showed evidence of predictive control. While evidence for hysteresis was weak, taken together the results clearly indicated a transition between discrete and rhythmic movements, supporting the proposal that representation is based on primitives rather than on veridical internal models. PMID:23888139
Quantitative analysis on electrooculography (EOG) for neurodegenerative disease
NASA Astrophysics Data System (ADS)
Liu, Chang-Chia; Chaovalitwongse, W. Art; Pardalos, Panos M.; Seref, Onur; Xanthopoulos, Petros; Sackellares, J. C.; Skidmore, Frank M.
2007-11-01
Many studies have documented abnormal horizontal and vertical eye movements in human neurodegenerative disease, as well as during altered states of consciousness (including drowsiness and intoxication) in healthy adults. Eye movement measurement may therefore play an important role in tracking the progress of neurodegenerative disease and the state of alertness in healthy individuals. Several techniques exist for measuring eye movement: infrared detection (IR), video-oculography (VOG), the scleral search coil, and EOG. Among these recording techniques, EOG is a major source for monitoring abnormal eye movements. In this real-time quantitative analysis study, methods that capture the characteristics of eye movement were proposed to accurately categorize the state of subjects with neurodegenerative disease. EOG recordings were taken while 5 subjects watched a short (>120 s) animation clip. In response to the animated clip, the participants executed a number of eye movements, including vertical smooth pursuit (SVP), horizontal smooth pursuit (HVP) and random saccades (RS). Detection of abnormalities in ocular movement may improve our diagnosis and understanding of neurodegenerative disease and altered states of consciousness. A standard real-time quantitative analysis will improve detection and provide a better understanding of the pathology underlying these disorders.
Real-time inference of word relevance from electroencephalogram and eye gaze
NASA Astrophysics Data System (ADS)
Wenzel, M. A.; Bogojeski, M.; Blankertz, B.
2017-10-01
Objective. Brain-computer interfaces can potentially map the subjective relevance of the visual surroundings, based on neural activity and eye movements, in order to infer the interest of a person in real-time. Approach. Readers looked for words belonging to one out of five semantic categories, while a stream of words passed at different locations on the screen. It was estimated in real-time which words and thus which semantic category interested each reader based on the electroencephalogram (EEG) and the eye gaze. Main results. Words that were subjectively relevant could be decoded online from the signals. The estimation resulted in an average rank of 1.62 for the category of interest among the five categories after a hundred words had been read. Significance. It was demonstrated that the interest of a reader can be inferred online from EEG and eye tracking signals, which can potentially be used in novel types of adaptive software, which enrich the interaction by adding implicit information about the interest of the user to the explicit interaction. The study is characterised by the following novelties. Interpretation with respect to the word meaning was necessary in contrast to the usual practice in brain-computer interfacing where stimulus recognition is sufficient. The typical counting task was avoided because it would not be sensible for implicit relevance detection. Several words were displayed at the same time, in contrast to the typical sequences of single stimuli. Neural activity was related with eye tracking to the words, which were scanned without restrictions on the eye movements.
NASA Technical Reports Server (NTRS)
Beutter, B. R.; Stone, L. S.
1998-01-01
Although numerous studies have examined the relationship between smooth-pursuit eye movements and motion perception, it remains unresolved whether a common motion-processing system subserves both perception and pursuit. To address this question, we simultaneously recorded perceptual direction judgments and the concomitant smooth eye-movement response to a plaid stimulus that we have previously shown generates systematic perceptual errors. We measured the perceptual direction biases psychophysically and the smooth eye-movement direction biases using two methods (standard averaging and oculometric analysis). We found that the perceptual and oculomotor biases were nearly identical, suggesting that pursuit and perception share a critical motion processing stage, perhaps in area MT or MST of extrastriate visual cortex.
Fetal Eye Movements on Magnetic Resonance Imaging
Woitek, Ramona; Kasprian, Gregor; Lindner, Christian; Stuhr, Fritz; Weber, Michael; Schöpf, Veronika; Brugger, Peter C.; Asenbaum, Ulrika; Furtner, Julia; Bettelheim, Dieter; Seidl, Rainer; Prayer, Daniela
2013-01-01
Objectives Eye movements are the physical expression of upper fetal brainstem function. Our aim was to identify and differentiate specific types of fetal eye movement patterns using dynamic MRI sequences. Their occurrence, as well as the presence of conjugated eyeball motion and consistently parallel eyeball position, was systematically analyzed. Methods Dynamic SSFP sequences were acquired in 72 singleton fetuses (17–40 GW, three age groups [17–23 GW, 24–32 GW, 33–40 GW]). Fetal eye movements were evaluated according to a modified classification originally published by Birnholz (1981): Type 0: no eye movements; Type I: single transient deviations; Type Ia: fast deviation, slower reposition; Type Ib: fast deviation, fast reposition; Type II: single prolonged eye movements; Type III: complex sequences; and Type IV: nystagmoid. Results In 95.8% of fetuses, the evaluation of eye movements was possible using MRI, with a mean acquisition time of 70 seconds. Due to head motion, 4.2% of the fetuses and 20.1% of all dynamic SSFP sequences were excluded. Eye movements were observed in 45 fetuses (65.2%). Significant differences between the age groups were found for Type I (p = 0.03), Type Ia (p = 0.031), and Type IV eye movements (p = 0.033). Consistently parallel eyeball positions were found in 27.3–45% of fetuses. Conclusions In human fetuses, different eye movement patterns can be identified and described by MRI in utero. In addition to the originally classified eye movement patterns, a novel subtype has been observed, which apparently characterizes an important step in fetal brainstem development. We evaluated, for the first time, eyeball position in fetuses. Ultimately, the assessment of fetal eye movements by MRI has the potential to identify early signs of brainstem dysfunction, as encountered in brain malformations such as Chiari II or molar tooth malformations. PMID:24194885
ECEM (Eye Closure, Eye Movements): application to depersonalization disorder.
Hollander, Harriet E
2009-10-01
Eye Closure, Eye Movements (ECEM) is a hypnotically-based approach to treatment that incorporates eye movements adapted from the Eye Movement Desensitization and Reprocessing (EMDR) protocol in conjunction with hypnosis for the treatment of depersonalization disorder. Depersonalization Disorder has been differentiated from post-traumatic stress disorders and has recently been conceptualized as a subtype of panic disorder (Baker et al., 2003; David, Phillips, Medford, & Sierra, 2004; Segui et al., 2000). During ECEM, while remaining in a hypnotic state, clients self-generated six to seven trials of eye movements to reduce anticipatory anxiety associated with depersonalization disorder. Eye movements were also used to process triggers that elicited breath holding, often followed by episodes of depersonalization. Hypnotic suggestions were used to reverse core symptoms of depersonalization, subjectively described as "feeling unreal" (Simeon et al., 1997).
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
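A minimal control sketch can illustrate the two-pathway organization (spatial error triggering saccades, velocity error driving smooth pursuit). This is not Toelg's implementation; the saccade threshold and pursuit gain below are illustrative assumptions.

import numpy as np

SACCADE_THRESHOLD_DEG = 2.0  # assumed, not from the paper
PURSUIT_GAIN = 0.9           # assumed fraction of target velocity matched

def camera_command(target_pos, gaze_pos, target_vel):
    """Return a saccadic position step or a smooth velocity command."""
    position_error = np.asarray(target_pos) - np.asarray(gaze_pos)
    if np.linalg.norm(position_error) > SACCADE_THRESHOLD_DEG:
        return "saccade", position_error  # jump toward the object
    return "smooth", PURSUIT_GAIN * np.asarray(target_vel)  # track its motion

print(camera_command([5.0, 0.0], [0.0, 0.0], [1.0, 0.0]))  # -> ('saccade', ...)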
Getting Ahead of Oneself: Anticipation and the Vestibulo-ocular Reflex (VOR)
King, W. Michael
2014-01-01
Compensatory counter-rotations of the eyes provoked by head turns are commonly attributed to the vestibulo-ocular reflex (VOR). A recent study in guinea pigs demonstrates, however, that this assumption is not always valid. During voluntary head turns, guinea pigs make highly accurate compensatory eye movements that occur with zero or even negative latencies with respect to the onset of the provoking head movements. Furthermore, the anticipatory eye movements occur in animals with bilateral peripheral vestibular lesions, thus confirming that they have an extravestibular origin. This discovery suggests the possibility that anticipatory responses might also occur in other species including humans and non-human primates, but have been overlooked and mistakenly identified as being produced by the VOR. This review will compare primate and guinea pig vestibular physiology in light of these new findings. A unified model of vestibular and cerebellar pathways will be presented that is consistent with current data in primates and guinea pigs. The model is capable of accurately simulating compensatory eye movements to active head turns (anticipatory responses) and to passive head perturbations (VOR induced eye movements) in guinea pigs and in human subjects who use coordinated eye and head movements to shift gaze direction in space. Anticipatory responses provide new evidence and opportunities to study the role of extravestibular signals in motor control and sensory-motor transformations. Exercises that employ voluntary head turns are frequently used to improve visual stability in patients with vestibular hypofunction. Thus, a deeper understanding of the origin and physiology of anticipatory responses could suggest new translational approaches to rehabilitative training of patients with bilateral vestibular loss. PMID:23370320
The effect of age and sex on facial mimicry: a three-dimensional study in healthy adults.
Sforza, C; Mapelli, A; Galante, D; Moriconi, S; Ibba, T M; Ferraro, L; Ferrario, V F
2010-10-01
To assess sex- and age-related characteristics in standardized facial movements, 40 healthy adults (20 men, 20 women; aged 20-50 years) performed seven standardized facial movements (maximum smile; free smile; "surprise" with closed mouth; "surprise" with open mouth; eye closure; right- and left-side eye closures). The three-dimensional coordinates of 21 soft tissue facial landmarks were recorded by a motion analyser, their movements computed, and asymmetry indices calculated. Within each movement, total facial mobility was independent from sex and age (analysis of variance, p>0.05). Asymmetry indices of the eyes and mouth were similar in both sexes (p>0.05). Age significantly influenced eye and mouth asymmetries of the right-side eye closure, and eye asymmetry of the surprise movement. On average, the asymmetry indices of the symmetric movements were always lower than 8%, and most did not deviate from the expected value of 0 (Student's t). Larger asymmetries were found for the asymmetric eye closures (eyes, up to 50%, p<0.05; mouth, up to 30%, p<0.05 only in the 20-30-year-old subjects). In conclusion, sex and age had a limited influence on total facial motion and asymmetry in normal adult men and women. Copyright © 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
An automatic eye detection and tracking technique for stereo video sequences
NASA Astrophysics Data System (ADS)
Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim
2009-05-01
Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to the problem is the development of algorithms that allow the computer to understand human intentions based on their facial expressions, head motion patterns, and speech. In this work, we are investigating the feasibility of a stereo system to effectively determine the head position, including the head rotation angles, based on the detection of eye pupils.
Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task
Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.
2012-01-01
We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430
Optimizations and Applications in Head-Mounted Video-Based Eye Tracking
ERIC Educational Resources Information Center
Li, Feng
2011-01-01
Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…
Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammond, Tracy; Tourassi, Georgia; Yoon, Hong-Jun
In this study, we present a novel application of sketch gesture recognition on eye-movement for biometric identification and estimating task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye-movements. Our results show that saccadic eye-movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.
Eye Movement Control during Reading: II. Frequency of Refixating a Word. Technical Report No. 469.
ERIC Educational Resources Information Center
McConkie, G. W.; And Others
As part of a series of studies describing the oculomotor behavior of skilled readers, a study investigated whether a word refixation curve exists. Subjects, 66 college students fixating over 40,000 times, read lines of text from a computer screen and were instructed to read for meaning without regard to errors. Results of eye movement control…
[Eye movement study in multiple object search process].
Xu, Zhaofang; Liu, Zhongqi; Wang, Xingwei; Zhang, Xin
2017-04-01
The aim of this study was to investigate the regularities of search time and the characteristics of eye movement behavior in multi-target visual search. The experimental task was implemented in software and presented characters on a 24-inch computer display. The subjects were asked to search for three targets among the characters. The three target characters in a group were highly similar to one another, while the similarity between target characters and distractor characters differed across groups. We recorded the search time and eye movement data throughout the experiment. The eye movement data showed that the number of fixation points was large when the target characters and distractor characters were similar. The subjects exhibited three visual search patterns: parallel search, serial search, and parallel-serial search. The last pattern gave the best search performance of the three, that is, subjects who used the parallel-serial search pattern found the targets in the shortest time. The order in which the targets were presented significantly affected search performance, as did the degree of similarity between target characters and distractor characters.
Diurnal variation of eye movement and heart rate variability in the human fetus at term.
Morokuma, S; Horimoto, N; Satoh, S; Nakano, H
2001-07-01
To elucidate diurnal variations in eye movement and fetal heart rate (FHR) variability in the term fetus, we observed these two parameters continuously for 24 h, using real-time ultrasound and a Doppler cardiotocograph, respectively. Five uncomplicated fetuses at term were studied. The time series data of the presence and absence of eye movement and the mean FHR value for each 1 min were analyzed using the maximum entropy method (MEM) and subsequent nonlinear least squares fitting. According to the power value of eye movement, the five cases were classified into two groups: three cases in the large power group and two cases in the small power group. The acrophases of eye movement and FHR variability in the large power group were close, implying the existence of a diurnal rhythm in both parameters and also that they are synchronized. In the small power group, the acrophases were separated. The synchronization of eye movement and FHR variability in the large power group suggests that these phenomena are governed by a common central mechanism related to diurnal rhythm generation.
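The acrophase estimation step can be illustrated with a hedged sketch: after MEM identifies the dominant period (assumed here to be 24 h), a cosine of that period is fitted by least squares and the acrophase is read off as the fitted peak time. The synthetic FHR trace and variable names are illustrative, not the authors' pipeline.

import numpy as np

def acrophase_hours(t_hours, signal, period_h=24.0):
    w = 2 * np.pi / period_h
    # signal ~ m + a*cos(w t) + b*sin(w t), linear in (m, a, b)
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    _, a, b = np.linalg.lstsq(X, signal, rcond=None)[0]
    return (np.arctan2(b, a) / w) % period_h  # time of the fitted peak

t = np.arange(0, 24, 1 / 60.0)                     # one sample per minute
fhr = 140 + 3 * np.cos(2 * np.pi * (t - 15) / 24)  # synthetic FHR, peak at 15 h
print(round(acrophase_hours(t, fhr), 2))           # -> 15.0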
Payne, Hannah L
2017-01-01
Eye movements provide insights about a wide range of brain functions, from sensorimotor integration to cognition; hence, the measurement of eye movements is an important tool in neuroscience research. We describe a method, based on magnetic sensing, for measuring eye movements in head-fixed and freely moving mice. A small magnet was surgically implanted on the eye, and changes in the magnet angle as the eye rotated were detected by a magnetic field sensor. Systematic testing demonstrated high-resolution measurement of eye position (<0.1°). Magnetic eye tracking offers several advantages over the well-established eye coil and video-oculography methods. Most notably, it provides the first method for reliable, high-resolution measurement of eye movements in freely moving mice, revealing increased eye movements and altered binocular coordination compared to head-fixed mice. Overall, magnetic eye tracking provides a lightweight, inexpensive, easily implemented, and high-resolution method suitable for a wide range of applications. PMID:28872455
Intraocular lens design for treating high myopia based on individual eye model
NASA Astrophysics Data System (ADS)
Wang, Yang; Wang, Zhaoqi; Wang, Yan; Zuo, Tong
2007-02-01
In this research, we first design a phakic intraocular lens (PIOL) based on an individual eye model with the optical design software ZEMAX. The individual PIOL is designed to correct defocus and astigmatism; we then compare the PIOL power calculated from the individual eye model with that obtained from the empirical formula. The two values of PIOL power are close, but the proposed method is more accurate and offers more functionality. The impact of PIOL decentration on the human eye is evaluated, including rotational decentration, flat-axis decentration, steep-axis decentration, and axial movement of the PIOL, which is impossible with the traditional method. To control PIOL decentration errors, we give limit values of PIOL decentration for the specific eye in this study.
Gradiency and Visual Context in Syntactic Garden-Paths
ERIC Educational Resources Information Center
Farmer, Thomas A.; Anderson, Sarah E.; Spivey, Michael J.
2007-01-01
Through recording the streaming x- and y-coordinates of computer-mouse movements, we report evidence that visual context provides an immediate constraint on the resolution of syntactic ambiguity in the visual-world paradigm. This finding converges with previous eye-tracking results that support a constraint-based account of sentence processing, in…
Self-motion perception: assessment by real-time computer-generated animations
NASA Technical Reports Server (NTRS)
Parker, D. E.; Phillips, J. O.
2001-01-01
We report a new procedure for assessing complex self-motion perception. In three experiments, subjects manipulated a 6 degree-of-freedom magnetic-field tracker which controlled the motion of a virtual avatar so that its motion corresponded to the subjects' perceived self-motion. The real-time animation created by this procedure was stored using a virtual video recorder for subsequent analysis. Combined real and illusory self-motion and vestibulo-ocular reflex eye movements were evoked by cross-coupled angular accelerations produced by roll and pitch head movements during passive yaw rotation in a chair. Contrary to previous reports, illusory self-motion did not correspond to expectations based on semicircular canal stimulation. Illusory pitch head-motion directions were as predicted for only 37% of trials; whereas, slow-phase eye movements were in the predicted direction for 98% of the trials. The real-time computer-generated animations procedure permits use of naive, untrained subjects who lack a vocabulary for reporting motion perception and is applicable to basic self-motion perception studies, evaluation of motion simulators, assessment of balance disorders and so on.
Schiller, Peter H; Kwak, Michelle C; Slocum, Warren M
2012-08-01
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
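A minimal sketch can quantify the first mode of such a bimodal latency distribution. The 80-120 ms express window used below is a common convention and an assumption here, not a value taken from this paper; the bimodal data are synthetic.

import numpy as np

def express_fraction(latencies_ms, lo=80, hi=120):
    """Fraction of saccades falling in the assumed express-latency window."""
    lat = np.asarray(latencies_ms)
    return np.mean((lat >= lo) & (lat <= hi))

rng = np.random.default_rng(0)
# synthetic bimodal latencies: express mode near 100 ms, regular mode near 190 ms
latencies = np.concatenate([rng.normal(100, 8, 300), rng.normal(190, 25, 700)])
print(f"express saccades: {express_fraction(latencies):.0%}")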
Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm
ERIC Educational Resources Information Center
Parker, Andrew; Dagnall, Neil
2007-01-01
The effects of saccadic bilateral (horizontal) eye movements on gist based false recognition was investigated. Following exposure to lists of words related to a critical but non-studied word participants were asked to engage in 30s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…
Eye-hand coupling during closed-loop drawing: evidence of shared motor planning?
Reina, G Anthony; Schwartz, Andrew B
2003-04-01
Previous paradigms have used reaching movements to study coupling of eye-hand kinematics. In the present study, we investigated eye-hand kinematics as curved trajectories were drawn at normal speeds. Eye and hand movements were tracked as a monkey traced ellipses and circles with the hand in free space while viewing the hand's position on a computer monitor. The results demonstrate that the movement of the hand was smooth and obeyed the 2/3 power law. Eye position, however, was restricted to 2-3 clusters along the hand's trajectory and remained fixated approximately 80% of the time within one of these clusters. The eye remained stationary as the hand moved away from the fixation for up to 200 ms and saccaded ahead of the hand position to the next fixation along the trajectory. The movement from one fixation cluster to another consistently occurred just after the tangential hand velocity had reached a local minimum, but before the next segment of the hand's trajectory began. The next fixation point was close to an area of high curvature along the hand's trajectory even though the hand had not reached that point along the path. A visuo-motor illusion of hand movement demonstrated that the eye movement was influenced by hand movement and not simply by visual input. During the task, neural activity of pre-motor cortex (area F4) was recorded using extracellular electrodes and used to construct a population vector of the hand's trajectory. The results suggest that the saccade onset is correlated in time with maximum curvature in the population vector trajectory for the hand movement. We hypothesize that eye and arm movements may have common, or shared, information in forming their motor plans.
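The 2/3 power law can be checked directly on sampled trajectories: tangential speed should scale as curvature to the -1/3 power (equivalently, angular velocity as curvature to the 2/3 power). The sketch below is a generic implementation of that check, not the authors' analysis code; the sampling interval is an assumption.

import numpy as np

def power_law_exponent(x, y, dt=0.01):
    """Slope of log speed vs. log curvature; -1/3 indicates the 2/3 power law."""
    vx, vy = np.gradient(x, dt), np.gradient(y, dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    speed = np.hypot(vx, vy)
    curvature = np.abs(vx * ay - vy * ax) / np.maximum(speed, 1e-9) ** 3
    ok = (speed > 1e-3) & (curvature > 1e-6)  # guard against log of ~0
    return np.polyfit(np.log(curvature[ok]), np.log(speed[ok]), 1)[0]

t = np.arange(0, 2 * np.pi, 0.01)
x, y = 2 * np.cos(t), np.sin(t)  # ellipse traced at a constant parameter rate
print(round(power_law_exponent(x, y), 2))  # -> approximately -0.33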
Developing Educational Computer Animation Based on Human Personality Types
ERIC Educational Resources Information Center
Musa, Sajid; Ziatdinov, Rushan; Sozcu, Omer Faruk; Griffiths, Carol
2015-01-01
Computer animation in the past decade has become one of the most noticeable features of technology-based learning environments. By its definition, it refers to simulated motion pictures showing movement of drawn objects, and is often defined as the art in movement. Its educational application known as educational computer animation is considered…
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W.S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14. 4]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to human.
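The stated fixation rule translates directly into a few lines of numpy. In this hedged sketch, the Gaussian detectability map and the grid resolution are illustrative assumptions, not the human-calibrated maps used in the paper.

import numpy as np
from scipy.signal import fftconvolve

grid = np.linspace(-10, 10, 101)                 # visual field in degrees
xx, yy = np.meshgrid(grid, grid)
d_map = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))  # assumed foveal d' fall-off

def next_fixation(posterior):
    """ELM rule: maximize the posterior convolved with squared detectability."""
    score = fftconvolve(posterior, d_map**2, mode="same")
    i, j = np.unravel_index(np.argmax(score), score.shape)
    return grid[j], grid[i]  # (x, y) in degrees

posterior = np.ones_like(xx)
posterior[70:, 70:] *= 5  # toy evidence concentrated in one corner
posterior /= posterior.sum()
print(next_fixation(posterior))  # fixates within the high-evidence region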
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
2012-01-01
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
A Statistical Physics Perspective to Understand Social Visual Attention in Autism Spectrum Disorder.
Liberati, Alessio; Fadda, Roberta; Doneddu, Giuseppe; Congiu, Sara; Javarone, Marco A; Striano, Tricia; Chessa, Alessandro
2017-08-01
This study investigated social visual attention in children with Autism Spectrum Disorder (ASD) and with typical development (TD) in the light of Brockmann and Geisel's model of visual attention. The probability distribution of gaze movements and clustering of gaze points, registered with eye-tracking technology, was studied during a free visual exploration of a gaze stimulus. A data-driven analysis of the distribution of eye movements was chosen to overcome any possible methodological problems related to the subjective expectations of the experimenters about the informative contents of the image in addition to a computational model to simulate group differences. Analysis of the eye-tracking data indicated that the scanpaths of children with TD and ASD were characterized by eye movements geometrically equivalent to Lévy flights. Children with ASD showed a higher frequency of long saccadic amplitudes compared with controls. A clustering analysis revealed a greater dispersion of eye movements for these children. Modeling of the results indicated higher values of the model parameter modulating the dispersion of eye movements for children with ASD. Together, the experimental results and the model point to a greater dispersion of gaze points in ASD.
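A hedged sketch of estimating a Lévy-like tail exponent from saccade amplitudes follows, using the standard maximum-likelihood (Hill) estimator. The amplitude cutoff and the synthetic data are assumptions; the abstract does not specify the fitting procedure.

import numpy as np

def tail_exponent(amplitudes_deg, a_min=1.0):
    """MLE power-law exponent for amplitudes above an assumed cutoff."""
    a = np.asarray(amplitudes_deg)
    a = a[a >= a_min]
    return 1.0 + len(a) / np.sum(np.log(a / a_min))

rng = np.random.default_rng(1)
amplitudes = (1 - rng.random(5000)) ** -1.0  # synthetic Pareto tail, exponent ~2
print(round(tail_exponent(amplitudes), 2))   # -> approximately 2.0

A smaller exponent corresponds to a heavier tail, i.e., more frequent long saccades and greater dispersion of gaze points, as reported for the ASD group.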
A laser-based eye-tracking system.
Irie, Kenji; Wilson, Bruce A; Jones, Richard D; Bones, Philip J; Anderson, Tim J
2002-11-01
This paper reports on the development of a new eye-tracking system for noninvasive recording of eye movements. The eye tracker uses a flying-spot laser to selectively image landmarks on the eye and, subsequently, measure horizontal, vertical, and torsional eye movements. Considerable work was required to overcome the adverse effects of specular reflection of the flying-spot from the surface of the eye onto the sensing elements of the eye tracker. These effects have been largely overcome, and the eye-tracker has been used to document eye movement abnormalities, such as abnormal torsional pulsion of saccades, in the clinical setting.
Zimmermann, Jan; Vazquez, Yuriria; Glimcher, Paul W; Pesaran, Bijan; Louie, Kenway
2016-09-01
Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, limiting widespread use. Here we present Oculomatic, an open-source software and modular hardware solution to eye tracking for use in humans and non-human primates. Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (<0.5°), and low system latency (∼1.8 ms, 0.32 ms STD) at a relatively low cost. Oculomatic compares favorably to our existing scleral search-coil system while being fully noninvasive. We propose that Oculomatic can support a wide range of research into the properties and neural mechanisms of oculomotor behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
Lukic, Luka; Santos-Victor, José; Billard, Aude
2014-04-01
We investigate the role of obstacle avoidance in visually guided reaching and grasping movements. We report on a human study in which subjects performed prehensile motion with obstacle avoidance, where the position of the obstacle was systematically varied across trials. These experiments suggest that reaching with obstacle avoidance is organized in a sequential manner, where the obstacle acts as an intermediary target. Furthermore, we demonstrate that the notion of workspace travelled by the hand is embedded explicitly in a forward planning scheme, which is actively involved in detecting obstacles on the way when performing reaching. We find that the gaze proactively coordinates the pattern of eye-arm motion during obstacle avoidance. This study also provides a quantitative assessment of the coupling between eye, arm and hand motion. We show that the coupling follows regular phase dependencies and is unaltered during obstacle avoidance. These observations provide a basis for the design of a computational model. Our controller extends the coupled dynamical systems framework and provides fast and synchronous control of the eyes, the arm and the hand within a single and compact framework, mimicking the similar control system found in humans. We validate our model for visuomotor control of a humanoid robot.
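The coupled-dynamical-systems idea can be sketched minimally: the gaze state converges to the target under its own dynamics, and the arm is driven toward the gaze, so the eye naturally leads the hand. The gains below are illustrative assumptions; the authors' controller additionally couples the hand and fingers.

import numpy as np

def simulate(target, dt=0.01, steps=400, k_eye=8.0, k_arm=3.0):
    eye, arm, trace = np.zeros(2), np.zeros(2), []
    for _ in range(steps):
        eye = eye + dt * k_eye * (target - eye)  # gaze attractor at the target
        arm = arm + dt * k_arm * (eye - arm)     # arm coupled to the gaze
        trace.append((eye, arm))
    return trace

eye, arm = simulate(np.array([0.3, 0.2]))[50]
print(np.round(eye, 3), np.round(arm, 3))  # the eye is ahead of the arm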
The brain stem saccadic burst generator encodes gaze in three-dimensional space.
Van Horn, Marion R; Sylvestre, Pierre A; Cullen, Kathleen E
2008-05-01
When we look between objects located at different depths the horizontal movement of each eye is different from that of the other, yet temporally synchronized. Traditionally, a vergence-specific neuronal subsystem, independent from other oculomotor subsystems, has been thought to generate all eye movements in depth. However, recent studies have challenged this view by unmasking interactions between vergence and saccadic eye movements during disconjugate saccades. Here, we combined experimental and modeling approaches to address whether the premotor command to generate disconjugate saccades originates exclusively in "vergence centers." We found that the brain stem burst generator, which is commonly assumed to drive only the conjugate component of eye movements, carries substantial vergence-related information during disconjugate saccades. Notably, facilitated vergence velocities during disconjugate saccades were synchronized with the burst onset of excitatory and inhibitory brain stem saccadic burst neurons (SBNs). Furthermore, the time-varying discharge properties of the majority of SBNs (>70%) preferentially encoded the dynamics of an individual eye during disconjugate saccades. When these experimental results were implemented into a computer-based simulation, to further evaluate the contribution of the saccadic burst generator in generating disconjugate saccades, we found that it carries all the vergence drive that is necessary to shape the activity of the abducens motoneurons to which it projects. Taken together, our results provide evidence that the premotor commands from the brain stem saccadic circuitry, to the target motoneurons, are sufficient to ensure the accurate control shifts of gaze in three dimensions.
Eye movements reflect and shape strategies in fraction comparison.
Ischebeck, Anja; Weilharter, Marina; Körner, Christof
2016-01-01
The comparison of fractions is a difficult task that can often be facilitated by separately comparing components (numerators and denominators) of the fractions--that is, by applying so-called component-based strategies. The usefulness of such strategies depends on the type of fraction pair to be compared. We investigated the temporal organization and the flexibility of strategy deployment in fraction comparison by evaluating sequences of eye movements in 20 young adults. We found that component-based strategies could account for the response times and the overall number of fixations observed for the different fraction pairs. The analysis of eye movement sequences showed that the initial eye movements in a trial were characterized by stereotypical scanning patterns indicative of an exploratory phase that served to establish the kind of fraction pair presented. Eye movements that followed this phase adapted to the particular type of fraction pair and indicated the deployment of specific comparison strategies. These results demonstrate that participants employ eye movements systematically to support strategy use in fraction comparison. Participants showed a remarkable flexibility to adapt to the most efficient strategy on a trial-by-trial basis. Our results confirm the value of eye movement measurements in the exploration of strategic adaptation in complex tasks.
Non-mydriatic video ophthalmoscope to measure fast temporal changes of the human retina
NASA Astrophysics Data System (ADS)
Tornow, Ralf P.; Kolář, Radim; Odstrčilík, Jan
2015-07-01
The analysis of fast temporal changes of the human retina can be used to gain insight into normal physiological behavior and to detect pathological deviations. This can be important for the early detection of glaucoma and other eye diseases. We developed a small, lightweight, USB-powered video ophthalmoscope that allows taking video sequences of the human retina at 25 or more frames per second without dilating the pupil. Short sequences (about 10 s) of the optic nerve head (20° x 15°) are recorded from subjects and registered offline using a two-stage process (phase correlation and the Lucas-Kanade approach) to compensate for eye movements. From the registered video sequences, different parameters can be calculated. Two applications are described here: measurement of (i) cardiac-cycle-induced pulsatile reflection changes and (ii) eye movements and fixation pattern. Cardiac-cycle-induced pulsatile reflection changes are caused by changing blood volume in the retina. Waveform and pulse parameters like amplitude and rise time can be measured in any selected area within the retinal image. The fixation pattern ΔY(ΔX) can be assessed from eye movements during video acquisition. The eye movements ΔX[t], ΔY[t] are derived from the image registration results with high temporal (40 ms) and spatial (1.86 arcmin) resolution. Parameters of pulsatile reflection changes and fixation pattern can be affected in early glaucoma, and the method described here may support early detection of glaucoma and other eye diseases.
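The phase-correlation stage of the registration can be sketched as follows (the Lucas-Kanade refinement is omitted): the normalized cross-power spectrum of two frames peaks at their integer-pixel translation. The frame size and test shift are illustrative.

import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Integer-pixel translation that maps frame_b onto frame_a."""
    cross = np.fft.fft2(frame_a) * np.conj(np.fft.fft2(frame_b))
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > frame_a.shape[0] // 2: dy -= frame_a.shape[0]  # unwrap to signed shift
    if dx > frame_a.shape[1] // 2: dx -= frame_a.shape[1]
    return dy, dx

rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(frame, (3, -5), axis=(0, 1))  # simulated eye movement
print(phase_correlation_shift(moved, frame))  # -> (3, -5)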
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2015-02-01
Manual interceptions are known to depend critically on integration of visual feedback information and experience-based predictions of the interceptive event. Within this framework, coupling between gaze and limb movements might also contribute to the interceptive outcome, since eye movements afford acquisition of high-resolution visual information. We investigated this issue by analyzing subjects' head-fixed oculomotor behavior during manual interceptions. Subjects moved a mouse cursor to intercept computer-generated ballistic trajectories either congruent with Earth's gravity or perturbed with weightlessness (0 g) or hypergravity (2 g) effects. In separate sessions, trajectories were either fully visible or occluded before interception to enforce visual prediction. Subjects' oculomotor behavior was classified in terms of amounts of time they gazed at different visual targets and of overall number of saccades. Then, by way of multivariate analyses, we assessed the following: (1) whether eye movement patterns depended on targets' laws of motion and occlusions; and (2) whether interceptive performance was related to the oculomotor behavior. First, we found that eye movement patterns depended significantly on targets' laws of motion and occlusion, suggesting predictive mechanisms. Second, subjects coupled differently oculomotor and interceptive behavior depending on whether targets were visible or occluded. With visible targets, subjects made smaller interceptive errors if they gazed longer at the mouse cursor. Instead, with occluded targets, they achieved better performance by increasing the target's tracking accuracy and by avoiding gaze shifts near interception, suggesting that precise ocular tracking provided better trajectory predictions for the interceptive response.
Performing saccadic eye movements or blinking improves postural control.
Rougier, Patrice; Garin, Mélanie
2007-07-01
To determine the relationship between eye movement and postural control in an undisturbed upright stance maintenance protocol, 15 young, healthy individuals were tested under various conditions. These conditions included imposed blinking patterns and horizontal and vertical saccadic eye movements. The center of pressure (CP) trajectories were recorded via a force platform on which the participants stood upright. The CP trajectories were used to estimate, via a low-pass filter, the vertically projected movements of the center of gravity (CGv) and consequently the difference CP-CGv. A frequency analysis showed that regular bilateral blinking does not produce a significant change in postural control. In contrast, performing saccadic eye movements reduces the amplitude of both the basic CGv and CP-CGv movements, principally along the antero-posterior axis. The present result supports the theory that some ocular movements may modify postural control during the maintenance of upright standing in human participants.
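A hedged sketch of the CP decomposition described above: the CGv estimate is obtained by low-pass filtering the CP trace, and the CP-CGv difference is the remainder. The cutoff frequency and sampling rate are assumptions, not values from the paper.

import numpy as np
from scipy.signal import butter, filtfilt

def decompose_cp(cp, fs=40.0, cutoff_hz=0.5):
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    cgv = filtfilt(b, a, cp)  # zero-phase low-pass estimate of CGv
    return cgv, cp - cgv      # (CGv, CP-CGv)

t = np.arange(0, 30, 1 / 40.0)  # 30 s of antero-posterior CP at 40 Hz
cp_ap = 0.5 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * np.sin(2 * np.pi * 2.0 * t)
cgv, diff = decompose_cp(cp_ap)
print(round(np.std(cgv), 3), round(np.std(diff), 3))  # slow vs. fast components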
Sex differences in a virtual water maze: an eye tracking and pupillometry study.
Mueller, Sven C; Jackson, Carl P T; Skelton, Ron W
2008-11-21
Sex differences in human spatial navigation are well known. However, the exact strategies that males and females employ in order to navigate successfully around the environment are unclear. While some researchers propose that males prefer environment-centred (allocentric) and females prefer self-centred (egocentric) navigation, these findings have proved difficult to replicate. In the present study we examined eye movements and physiological measures of memory (pupillometry) in order to compare visual scanning of spatial orientation using a human virtual analogue of the Morris Water Maze task. Twelve women and twelve men (average age=24 years) were trained on a visible platform and had to locate an invisible platform over a series of trials. On all but the first trial, participants' eye movements were recorded for 3s and they were asked to orient themselves in the environment. While the behavioural data replicated previous findings of improved spatial performance for males relative to females, distinct sex differences in eye movements were found. Males tended to explore consistently more space early on while females demonstrated initially longer fixation durations and increases in pupil diameter usually associated with memory processing. The eye movement data provides novel insight into differences in navigational strategies between the sexes.
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Kamine, Tovy Haber; Bendrick, Gregg A.
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in the optimal viewing location to minimize pilot perception and response time. Human factors engineers define three zones ("cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades); and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight. The learning objectives include: 1) know the three physiologic cones of eye/head movement; 2) understand how instrument displays comply with these design principles in conventional aircraft and an uninhabited aerial vehicle system. Which of the following is NOT a recognized physiologic principle of instrument display design? 1) Cone of Easy Eye Movement; 2) Cone of Binocular Eye Movement; 3) Cone of Maximum Eye Movement; 4) Cone of Head Movement; 5) None of the above. Answer: 2) Cone of Binocular Eye Movement.
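The underlying computation is a visual-angle calculation followed by classification into the three cones. In the sketch below the angle formula is standard; the cone boundaries (15 and 35 degrees) are typical human-factors values and are assumptions, not numbers taken from this paper.

import math

def visual_angle_deg(offset_cm, eye_distance_cm):
    """Angle of a display element offset from the line of sight."""
    return math.degrees(math.atan2(offset_cm, eye_distance_cm))

def cone(angle_deg, easy=15.0, max_eye=35.0):
    a = abs(angle_deg)
    if a <= easy:
        return "Easy Eye Movement"
    return "Maximum Eye Movement" if a <= max_eye else "Head Movement"

angle = visual_angle_deg(25, 70)  # element 25 cm off-axis viewed from 70 cm
print(f"{angle:.1f} deg -> {cone(angle)}")  # 19.7 deg -> Maximum Eye Movement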
A computer-generated animated face stimulus set for psychophysiological research
Naples, Adam; Nguyen-Phuc, Alyssa; Coffman, Marika; Kresse, Anna; Faja, Susan; Bernier, Raphael; McPartland, James
2014-01-01
Human faces are fundamentally dynamic, but experimental investigations of face perception traditionally rely on static images of faces. While naturalistic videos of actors have been used with success in some contexts, much research in neuroscience and psychophysics demands carefully controlled stimuli. In this paper, we describe a novel set of computer generated, dynamic, face stimuli. These grayscale faces are tightly controlled for low- and high-level visual properties. All faces are standardized in terms of size, luminance, and location and size of facial features. Each face begins with a neutral pose and transitions to an expression over the course of 30 frames. Altogether there are 222 stimuli spanning 3 different categories of movement: (1) an affective movement (fearful face); (2) a neutral movement (close-lipped, puffed cheeks with open eyes); and (3) a biologically impossible movement (upward dislocation of eyes and mouth). To determine whether early brain responses sensitive to low-level visual features differed between expressions, we measured the occipital P100 event related potential (ERP), which is known to reflect differences in early stages of visual processing and the N170, which reflects structural encoding of faces. We found no differences between faces at the P100, indicating that different face categories were well matched on low-level image properties. This database provides researchers with a well-controlled set of dynamic faces controlled on low-level image characteristics that are applicable to a range of research questions in social perception. PMID:25028164
ERIC Educational Resources Information Center
Nitschke, Kai; Ruh, Nina; Kappler, Sonja; Stahl, Christoph; Kaller, Christoph P.
2012-01-01
Understanding the functional neuroanatomy of planning and problem solving may substantially benefit from better insight into the chronology of the cognitive processes involved. Based on the assumption that regularities in cognitive processing are reflected in overtly observable eye-movement patterns, here we recorded eye movements while…
Soft, Conformal Bioelectronics for a Wireless Human-Wheelchair Interface
Mishra, Saswat; Norton, James J. S.; Lee, Yongkuk; Lee, Dong Sup; Agee, Nicolas; Chen, Yanfei; Chun, Youngjae; Yeo, Woon-Hong
2017-01-01
There are more than 3 million people in the world whose mobility relies on wheelchairs. Recent advances in engineering technology enable more intuitive, easy-to-use rehabilitation systems. A human-machine interface that uses non-invasive, electrophysiological signals can allow systematic interaction between humans and devices; for example, eye movement-based wheelchair control. However, the existing machine-interface platforms are obtrusive, uncomfortable, and often cause skin irritation, as they require a metal electrode affixed to the skin with a gel and an acrylic pad. Here, we introduce a bioelectronic system that makes dry, conformal contact with the skin. The mechanically comfortable sensor records high-fidelity electrooculograms, comparable to those of a conventional gel electrode. Quantitative signal analysis and infrared thermographs show the advantages of the soft biosensor for an ergonomic human-machine interface. A classification algorithm with an optimized set of features shows an accuracy of 94% with five eye movements. A Bluetooth-enabled system incorporating the soft bioelectronics demonstrates precise, hands-free control of a robotic wheelchair via electrooculograms. PMID:28152485
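As a rough illustration of the classification step, the sketch below separates five eye-movement classes from two assumed features (signed peak deflections of the horizontal and vertical EOG channels) with a k-nearest-neighbor classifier on synthetic data. The authors' optimized feature set and classifier are not specified in the abstract.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
prototypes = {"left": (-1, 0), "right": (1, 0), "up": (0, 1),
              "down": (0, -1), "blink": (0, 2)}  # assumed feature centers
X, y = [], []
for label, (h, v) in prototypes.items():
    for _ in range(40):  # 40 noisy synthetic epochs per movement class
        X.append([h + 0.2 * rng.standard_normal(),
                  v + 0.2 * rng.standard_normal()])
        y.append(label)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.predict([[0.9, 0.1]]))  # -> ['right']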
Simulation of wave propagation inside a human eye: acoustic eye model (AEM)
NASA Astrophysics Data System (ADS)
Požar, T.; Halilovič, M.; Horvat, D.; Petkovšek, R.
2018-02-01
The design and development of the acoustic eye model (AEM) is reported. The model consists of a computer-based simulation that describes the propagation of mechanical disturbance inside a simplified model of a human eye. The capabilities of the model are illustrated with examples, using different laser-induced initial loading conditions in different geometrical configurations typically occurring in ophthalmic medical procedures. The potential of the AEM is to predict the mechanical response of the treated eye tissue in advance, thus complementing other preliminary procedures preceding medical treatments.
Beyond scene gist: Objects guide search more than scene background.
Koehler, Kathryn; Eckstein, Miguel P
2017-06-01
Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
The selective disruption of spatial working memory by eye movements
Postle, Bradley R.; Idzikowski, Christopher; Sala, Sergio Della; Logie, Robert H.; Baddeley, Alan D.
2005-01-01
In the late 1970s/early 1980s, Baddeley and colleagues conducted a series of experiments investigating the role of eye movements in visual working memory. Although only described briefly in a book (Baddeley, 1986), these studies have influenced a remarkable number of empirical and theoretical developments in fields ranging from experimental psychology to human neuropsychology to nonhuman primate electrophysiology. This paper presents, in full detail, three critical studies from this series, together with a recently performed study that includes a level of eye movement measurement and control that was not available for the older studies. Together, the results demonstrate several facts about the sensitivity of visuospatial working memory to eye movements. First, it is eye movement control, not movement per se, that produces the disruptive effects. Second, these effects are limited to working memory for locations, and do not generalize to visual working memory for shapes. Third, they can be isolated to the storage/maintenance components of working memory (e.g., to the delay period of the delayed-recognition task). These facts have important implications for models of visual working memory. PMID:16556561
Rolfs, Martin; Carrasco, Marisa
2012-01-01
Humans and other animals with foveate vision make saccadic eye movements to prioritize the visual analysis of behaviorally relevant information. Even before movement onset, visual processing is selectively enhanced at the target of a saccade, presumably gated by brain areas controlling eye movements. Here we assess concurrent changes in visual performance and perceived contrast before saccades, and show that saccade preparation enhances perception rapidly, altering early visual processing in a manner akin to increasing the physical contrast of the visual input. Observers compared orientation and contrast of a test stimulus, appearing briefly before a saccade, to a standard stimulus, presented previously during a fixation period. We found simultaneous progressive enhancement in both orientation discrimination performance and perceived contrast as time approached saccade onset. These effects were robust as early as 60 ms after the eye movement was cued, much faster than the voluntary deployment of covert attention (without eye movements), which takes ~300 ms. Our results link the dynamics of saccade preparation, visual performance, and subjective experience and show that upcoming eye movements alter visual processing by increasing the signal strength. PMID:23035086
Young children with autism spectrum disorder use predictive eye movements in action observation.
Falck-Ytter, Terje
2010-06-23
Does a dysfunction in the mirror neuron system (MNS) underlie the social symptoms defining autism spectrum disorder (ASD)? Research suggests that the MNS matches observed actions to motor plans for similar actions, and that these motor plans include directions for predictive eye movements when observing goal-directed actions. Thus, one important question is whether children with ASD use predictive eye movements in action observation. Young children with ASD as well as typically developing children and adults were shown videos in which an actor performed object-directed actions (human agent condition). Children with ASD were also shown control videos showing objects moving by themselves (self-propelled condition). Gaze was measured using a corneal reflection technique. Children with ASD and typically developing individuals used strikingly similar goal-directed eye movements when observing others' actions in the human agent condition. Gaze was reactive in the self-propelled condition, suggesting that prediction is linked to seeing a hand-object interaction. This study does not support the view that ASD is characterized by a global dysfunction in the MNS.
How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling
Veale, Richard; Hafed, Ziad M.
2017-01-01
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
Three-Dimensional Eye Tracking in a Surgical Scenario.
Bogdanova, Rositsa; Boulanger, Pierre; Zheng, Bin
2015-10-01
Eye tracking has been widely used in studying the eye behavior of surgeons in the past decade. Most eye-tracking data are reported in a 2-dimensional (2D) fashion, and data describing surgeons' behaviors in stereoperception are often missing. With the introduction of stereoscopes in laparoscopic procedures, there is an increasing need for studying the depth perception of surgeons under 3D image-guided surgery. We developed a new algorithm for the computation of convergence points in stereovision by measuring surgeons' interpupillary distance, the distance to the view target, and the difference between the gaze locations of the 2 eyes. To test the feasibility of our new algorithm, we recruited 10 individuals to watch stereograms using binocular disparity and asked them to develop stereoperception using a cross-eyed viewing technique. Participants' eye motions were recorded by the Tobii eye tracker while they performed the trials. Convergence points between normal and stereo-viewing conditions were computed using the developed algorithm. All 10 participants were able to develop stereovision after a short period of training. During stereovision, participants' eye convergence points were 14 ± 1 cm in front of their eyes, which was significantly closer than the convergence points under the normal viewing condition (77 ± 20 cm). By applying our method of calculating convergence points using eye tracking, we were able to characterize the eye movement patterns of human operators between the normal and stereovision conditions. Knowledge from this study can be applied to the design of surgical visual systems, with the goal of improving surgical performance and patient safety. © The Author(s) 2015.
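The computation lends itself to a small geometric sketch. The ray-intersection formula below is a reconstruction from the quantities named in the abstract (interpupillary distance, viewing distance, and the separation between the two eyes' gaze points); the coordinate convention and function name are assumptions, not the authors' code.

```python
def convergence_depth(ipd, screen_dist, x_left, x_right):
    """Depth of the binocular convergence point by intersecting gaze rays.

    ipd             -- interpupillary distance
    screen_dist     -- distance from the eyes to the viewing screen
    x_left, x_right -- horizontal on-screen gaze positions of the left and
                       right eye, measured from the midpoint between the eyes
    All arguments share one unit (e.g. cm). When both eyes fixate the same
    screen point the result equals screen_dist; crossed gaze lines
    (x_left > x_right) place the convergence point in front of the screen.
    """
    return screen_dist * ipd / (ipd + x_left - x_right)

# Illustrative values only: a 6.4 cm interpupillary distance and a 77 cm
# screen with strongly crossed gaze points give a convergence depth near
# the ~14 cm reported for the stereo-viewing condition.
print(convergence_depth(ipd=6.4, screen_dist=77.0, x_left=14.5, x_right=-14.5))
```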
Eye movements reflect and shape strategies in fraction comparison
Ischebeck, Anja; Weilharter, Marina; Körner, Christof
2016-01-01
The comparison of fractions is a difficult task that can often be facilitated by separately comparing components (numerators and denominators) of the fractions—that is, by applying so-called component-based strategies. The usefulness of such strategies depends on the type of fraction pair to be compared. We investigated the temporal organization and the flexibility of strategy deployment in fraction comparison by evaluating sequences of eye movements in 20 young adults. We found that component-based strategies could account for the response times and the overall number of fixations observed for the different fraction pairs. The analysis of eye movement sequences showed that the initial eye movements in a trial were characterized by stereotypical scanning patterns indicative of an exploratory phase that served to establish the kind of fraction pair presented. Eye movements that followed this phase adapted to the particular type of fraction pair and indicated the deployment of specific comparison strategies. These results demonstrate that participants employ eye movements systematically to support strategy use in fraction comparison. Participants showed a remarkable flexibility to adapt to the most efficient strategy on a trial-by-trial basis. Our results confirm the value of eye movement measurements in the exploration of strategic adaptation in complex tasks. PMID:26039819
Caffeine increases the velocity of rapid eye movements in unfatigued humans.
Connell, Charlotte J W; Thompson, Benjamin; Turuwhenua, Jason; Hess, Robert F; Gant, Nicholas
2017-08-01
Caffeine is a widely used dietary stimulant that can reverse the effects of fatigue on cognitive, motor and oculomotor function. However, few studies have examined the effect of caffeine on the oculomotor system when homeostasis has not been disrupted by physical fatigue. This study examined the influence of a moderate dose of caffeine on oculomotor control and visual perception in participants who were not fatigued. Within a placebo-controlled crossover design, 13 healthy adults ingested caffeine (5 mg·kg⁻¹ body mass) and were tested over 3 h. Eye movements, including saccades, smooth pursuit and optokinetic nystagmus, were measured using infrared oculography. Caffeine was associated with higher peak saccade velocities (472 ± 60°·s⁻¹) compared to placebo (455 ± 62°·s⁻¹). Quick phases of optokinetic nystagmus were also significantly faster with caffeine, whereas pursuit eye movements were unchanged. Non-oculomotor perceptual tasks (global motion and global orientation processing) were unaffected by caffeine. These results show that oculomotor control is modulated by a moderate dose of caffeine in unfatigued humans. These effects are detectable in the kinematics of rapid eye movements, whereas pursuit eye movements and visual perception are unaffected. Oculomotor functions may be sensitive to changes in central catecholamines mediated via caffeine's action as an adenosine antagonist, even when participants are not fatigued.
Exploring the Relationship Between Eye Movements and Electrocardiogram Interpretation Accuracy
NASA Astrophysics Data System (ADS)
Davies, Alan; Brown, Gavin; Vigo, Markel; Harper, Simon; Horseman, Laura; Splendiani, Bruno; Hill, Elspeth; Jay, Caroline
2016-12-01
Interpretation of electrocardiograms (ECGs) is a complex task involving visual inspection. This paper aims to improve understanding of how practitioners perceive ECGs, and to determine whether visual behaviour can indicate differences in interpretation accuracy. A group of healthcare practitioners (n = 31) who interpret ECGs as part of their clinical role were shown 11 commonly encountered ECGs on a computer screen. The participants' eye movement data were recorded as they viewed the ECGs and attempted interpretation. The Jensen-Shannon distance was computed between two Markov chains, constructed from the transition matrices (visual shifts from and to ECG leads) of the correct and incorrect interpretation groups for each ECG. A permutation test was then used to compare this distance against 10,000 randomly shuffled groups made up of the same participants. The results were statistically significant (α = 0.05) for 5 of the 11 stimuli, demonstrating that the gaze shift between the ECG leads differs between the groups making correct and incorrect interpretations and is therefore a factor in interpretation accuracy. The results shed further light on the relationship between visual behaviour and ECG interpretation accuracy, providing information that can be used to improve both human and automated interpretation approaches.
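As a concrete illustration of this pipeline, the sketch below builds row-normalized lead-transition matrices for two groups, computes the Jensen-Shannon distance between them, and estimates a permutation p-value. It is a minimal reconstruction under stated assumptions (the 12 standard leads as areas of interest, counts pooled per group, scipy's jensenshannon applied to the flattened matrices), not the study's code.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

N_LEADS = 12  # assumption: the 12 standard ECG leads serve as AOIs

def transition_counts(seq):
    """Count gaze shifts between ECG leads in one viewing sequence."""
    m = np.zeros((N_LEADS, N_LEADS))
    for a, b in zip(seq[:-1], seq[1:]):
        m[a, b] += 1
    return m

def group_chain(sequences):
    """Pool counts over a group and row-normalize into a Markov chain."""
    m = sum(transition_counts(s) for s in sequences)
    rows = m.sum(axis=1, keepdims=True)
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

def js_distance(group_a, group_b):
    """Jensen-Shannon distance between the two chains (flattened)."""
    return jensenshannon(group_chain(group_a).ravel(),
                         group_chain(group_b).ravel())

def permutation_p(correct, incorrect, n_perm=10_000, seed=0):
    """Fraction of label shuffles with a distance >= the observed one."""
    rng = np.random.default_rng(seed)
    observed = js_distance(correct, incorrect)
    pooled, k = list(correct) + list(incorrect), len(correct)
    hits = 0
    for _ in range(n_perm):
        order = rng.permutation(len(pooled))
        shuffled = [pooled[i] for i in order]
        if js_distance(shuffled[:k], shuffled[k:]) >= observed:
            hits += 1
    return hits / n_perm
```

Here `correct` and `incorrect` are lists of per-participant sequences of lead indices; scipy's jensenshannon normalizes its inputs internally, so the flattened matrices can be passed directly.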
Human Visuospatial Updating After Passive Translations In Three-Dimensional Space
Klier, Eliana M.; Hess, Bernhard J. M.; Angelaki, Dora E.
2013-01-01
To maintain a stable representation of the visual environment as we move, the brain must update the locations of targets in space using extra-retinal signals. Humans can accurately update after intervening active whole-body translations. But can they also update for passive translations (i.e., without efference copy signals of an outgoing motor command)? We asked six head-fixed subjects to remember the location of a briefly flashed target (five possible targets were located at depths of 23, 33, 43, 63 and 150 cm in front of the cyclopean eye) as they moved 10 cm left, right, up, down, forward or backward, while fixating a head-fixed target at 53 cm. After the movement, the subjects made a saccade to the remembered location of the flash with a combination of version and vergence eye movements. We computed an updating ratio where 0 indicates no updating and 1 indicates perfect updating. For lateral and vertical whole-body motion, where updating performance is judged by the size of the version movement, the updating ratios were similar for leftward and rightward translations, averaging 0.84 ± 0.28 (mean ± SD), as compared to 0.51 ± 0.33 for downward and 1.05 ± 0.50 for upward translations. For forward/backward movements, where updating performance is judged by the size of the vergence movement, the average updating ratio was 1.12 ± 0.45. Updating ratios tended to be larger for far targets than near targets, although both intra- and inter-subject variabilities were smallest for near targets. Thus, in addition to self-generated movements, extra-retinal signals involving otolith and proprioceptive cues can also be used for spatial constancy. PMID:18256164
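A compact way to write the measure used above, assuming (as the description implies) that the ratio compares the measured eye-movement change against the change required for perfect updating:

```latex
\[
  \text{updating ratio} \;=\;
  \frac{\Delta\theta_{\mathrm{measured}}}{\Delta\theta_{\mathrm{required}}},
\]
```

where $\Delta\theta$ is the version change for lateral and vertical translations and the vergence change for forward/backward translations, so that $0$ indicates no updating and $1$ perfect updating.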
Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.
Andrews, T J; Coppola, D M
1999-08-01
Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.
Biometric recognition via texture features of eye movement trajectories in a visual searching task.
Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang
2018-01-01
Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks, feature extraction and feature recognition methods are proposed to improve the performance of eye movement biometric system. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. In order to demonstrate the improvement of this visual searching task being used in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results as expected. In addition, the biometric performance of these four feature extraction methods was also compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer some advantages with regard to long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383
New methods for the assessment of accommodative convergence.
Asakawa, Ken; Ishikawa, Hitoshi; Shoji, Nobuyuki
2009-01-01
The authors introduced a new objective method for measuring horizontal eye movements based on the first Purkinje image with the use of infrared charge-coupled device (CCD) cameras and compared stimulus accommodative convergence to accommodation (AC/A) ratios as determined by a standard gradient method. The study included 20 patients, 5 to 9 years old, who had intermittent exotropia (10 eyes) and accommodative esotropia (10 eyes). Measurement of horizontal eye movements in millimeters (mm), based on the first Purkinje image, was obtained with a TriIRIS C9000 instrument (Hamamatsu Photonics K.K., Hamamatsu, Japan). The stimulus AC/A ratio was determined with the far gradient method. The average values of horizontal eye movement (mm) and eye deviation (prism diopters, Δ) were obtained (a) before and (b) after an accommodative stimulus of 3.00 diopters (D), and the responses were then calculated as (b − a)/3, giving the horizontal eye-movement response in mm/D and the stimulus AC/A ratio in Δ/D. The average values of the horizontal eye movements and the stimulus AC/A ratio were 0.5 mm/D and 3.8 Δ/D, respectively. Correlation analysis showed a strong positive correlation between these two parameters (r = 0.92). Moreover, horizontal eye movements are directly proportional to the AC/A ratio measured with the gradient method. The methods used in this study allow objective recordings of accommodative convergence to be obtained in many clinical situations. Copyright 2009, SLACK Incorporated.
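Written out as display equations, the computation described above is:

```latex
\[
  \text{horizontal eye-movement response}
    \;=\; \frac{b-a}{3.00\,\mathrm{D}} \quad [\mathrm{mm/D}],
  \qquad
  \text{stimulus AC/A ratio}
    \;=\; \frac{b-a}{3.00\,\mathrm{D}} \quad [\Delta/\mathrm{D}],
\]
```

with $a$ and $b$ the values measured before and after the 3.00 D accommodative stimulus, expressed in mm for the eye-movement response and in prism diopters ($\Delta$) for the AC/A ratio.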
ERIC Educational Resources Information Center
White, Brian J.; Theeuwes, Jan; Munoz, Douglas P.
2012-01-01
During natural viewing, the trajectories of saccadic eye movements often deviate dramatically from a straight-line path between objects. In human studies, saccades have been shown to deviate toward or away from salient visual distractors depending on visual- and goal-related parameters, but the neurophysiological basis for this is not well…
A human visual model-based approach of the visual attention and performance evaluation
NASA Astrophysics Data System (ADS)
Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique
2005-03-01
In this paper, a coherent computational model of visual selective attention for color pictures is described and its performance is precisely evaluated. The model, based on several important behaviours of the human visual system, is composed of four parts: visibility, perception, perceptual grouping and saliency map construction. This paper focuses mainly on performance assessment, achieved through extended subjective and objective comparisons with real fixation points captured by an eye-tracking system while observers viewed the images in a task-free mode. From this ground truth, qualitative and quantitative comparisons have been made in terms of the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL). On a set of 10 natural color images, the results show that the linear correlation coefficient and the Kullback-Leibler divergence are about 0.71 and 0.46, respectively. The CC and KL measures with this model are improved by about 4% and 7%, respectively, compared to the best model proposed by L. Itti. Moreover, by comparing the ability of our model to predict eye movements produced by an average observer, we can conclude that our model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
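The two comparison metrics are simple to state. The sketch below computes them between a model saliency map and a human fixation map; the normalization of both maps into probability distributions and the direction of the KL divergence are assumptions, since the abstract does not specify them.

```python
import numpy as np

def correlation_coefficient(saliency, fixation_map):
    """Linear correlation (CC) between a saliency map and a fixation map."""
    s = (saliency - saliency.mean()) / saliency.std()
    f = (fixation_map - fixation_map.mean()) / fixation_map.std()
    return float((s * f).mean())

def kl_divergence(saliency, fixation_map, eps=1e-12):
    """KL divergence of the fixation distribution from the saliency map."""
    p = fixation_map / fixation_map.sum()  # human fixation density
    q = saliency / saliency.sum()          # model prediction
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```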
Computations underlying the visuomotor transformation for smooth pursuit eye movements
Murdison, T. Scott; Leclercq, Guillaume; Lefèvre, Philippe
2014-01-01
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli while giving rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103–2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit. PMID:25475344
Design considerations for a real-time ocular counterroll instrument
NASA Technical Reports Server (NTRS)
Hatamian, M.; Anderson, D. J.
1983-01-01
A real-time algorithm for measuring three-dimensional movement of the human eye, especially torsional movement, is presented. As its input, the system uses images of the eyeball taken at video rate. The amount of horizontal and vertical movement is extracted using a pupil tracking technique. The torsional movement is then measured by computing the discrete cross-correlation function between the circular samples of successive images of the iris patterns and searching for the position of the peak of the function. A local least square interpolation around the peak of the cross-correlation function is used to produce nearly unbiased estimates of torsion angle with accuracy of about 3-4 arcmin. Accuracies of better than 0.03 deg are achievable in torsional measurement with SNR higher than 36 dB. Horizontal and vertical rotations of up to ±13 deg can occur simultaneously with torsion without introducing any appreciable error in the counterrolling measurement process.
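The core of the torsion measurement (circular cross-correlation of iris samples plus a local fit around the peak) can be sketched as follows. The FFT-based correlation and the parabolic peak refinement below are stand-ins for the paper's discrete cross-correlation and local least-squares interpolation.

```python
import numpy as np

def torsion_angle(iris_ref, iris_cur):
    """Estimate torsion from circular samples of the iris pattern.

    iris_ref, iris_cur: 1-D arrays of grey levels sampled at equal angular
    steps around a circle centred on the pupil (reference vs. current frame).
    Returns the rotation in degrees, refined by a parabolic fit around the
    correlation peak.
    """
    n = len(iris_ref)
    a = iris_ref - iris_ref.mean()
    b = iris_cur - iris_cur.mean()
    # Discrete circular cross-correlation via the FFT
    xc = np.fft.ifft(np.fft.fft(a).conj() * np.fft.fft(b)).real
    k = int(np.argmax(xc))
    # Parabolic interpolation around the peak for sub-sample accuracy
    y0, y1, y2 = xc[(k - 1) % n], xc[k], xc[(k + 1) % n]
    denom = y0 - 2 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    shift = k + frac
    if shift > n / 2:           # wrap to a signed rotation
        shift -= n
    return shift * (360.0 / n)  # samples -> degrees of torsion
```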
Simulation of Thin Film Equations on an Eye-Shaped Domain with Moving Boundary
NASA Astrophysics Data System (ADS)
Brosch, Joseph; Driscoll, Tobin; Braun, Richard
During a normal eye blink, the upper lid moves, and during the upstroke the lid paints a thin tear film over the exposed corneal and conjunctival surfaces. This thin tear film may be modeled by a nonlinear fourth-order PDE derived from lubrication theory. A major stumbling block in the numerical simulation of this model is to include both the geometry of the eye and the movement of the eyelid. Using a pair of orthogonal and conformal maps, we transform a computational box into a rough representation of a human eye, where we proceed to simulate the thin tear film equations. Although we give up some realism, we gain spectrally accurate numerical methods on the computational box. We have applied this method to the heat equation on the blinking domain with both Dirichlet and no-flux boundary conditions, in each case demonstrating at least 10 digits of accuracy. We are able to perform these simulations very quickly (generally in under a minute) using a desktop version of MATLAB. This project was supported by Grant 1022706 (R.J.B., T.A.D., J.K.B.) from the NSF.
Instrument Display Visual Angles for Conventional Aircraft and the MQ-9 Ground Control Station
NASA Technical Reports Server (NTRS)
Bendrick, Gregg A.; Kamine, Tovy Haber
2008-01-01
Aircraft instrument panels should be designed such that primary displays are in optimal viewing location to minimize pilot perception and response time. Human Factors engineers define three zones (i.e. "cones") of visual location: 1) "Easy Eye Movement" (foveal vision); 2) "Maximum Eye Movement" (peripheral vision with saccades), and 3) "Head Movement" (head movement required). Instrument display visual angles were measured to determine how well conventional aircraft (T-34, T-38, F-15B, F-16XL, F/A-18A, U-2D, ER-2, King Air, G-III, B-52H, DC-10, B747-SCA) and the MQ-9 ground control station (GCS) complied with these standards, and how they compared with each other. Methods: Selected instrument parameters included: attitude, pitch, bank, power, airspeed, altitude, vertical speed, heading, turn rate, slip/skid, AOA, flight path, latitude, longitude, course, bearing, range and time. Vertical and horizontal visual angles for each component were measured from the pilot's eye position in each system. Results: The vertical visual angles of displays in conventional aircraft lay within the cone of "Easy Eye Movement" for all but three of the parameters measured, and almost all of the horizontal visual angles fell within this range. All conventional vertical and horizontal visual angles lay within the cone of "Maximum Eye Movement". However, most instrument vertical visual angles of the MQ-9 GCS lay outside the cone of "Easy Eye Movement", though all were within the cone of "Maximum Eye Movement". All the horizontal visual angles for the MQ-9 GCS were within the cone of "Easy Eye Movement". Discussion: Most instrument displays in conventional aircraft lay within the cone of "Easy Eye Movement", though mission-critical instruments sometimes displaced less important instruments outside this area. Many of the MQ-9 GCS systems lay outside this area. Specific training for MQ-9 pilots may be needed to avoid increased response time and potential error during flight.
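As a sketch of the underlying measurement, the visual angle of a display element follows from its offset from the line of sight and the eye-to-panel distance. The zone thresholds below are placeholders; the abstract does not state the cone limits used in the study.

```python
import math

def visual_angles(dx, dy, distance):
    """Horizontal and vertical visual angles (deg) of a display element.

    dx, dy: offsets of the element's centre from the pilot's line of sight;
    distance: eye-to-panel distance; all in the same units (e.g. cm).
    """
    return (math.degrees(math.atan2(dx, distance)),
            math.degrees(math.atan2(dy, distance)))

def zone(angle_deg, easy=15.0, maximum=35.0):
    """Classify an angle into one of the three cones.

    The 'easy' and 'maximum' limits here are hypothetical placeholders,
    not the thresholds used in the study.
    """
    a = abs(angle_deg)
    if a <= easy:
        return "Easy Eye Movement"
    if a <= maximum:
        return "Maximum Eye Movement"
    return "Head Movement"
```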
Eyes that bind us: Gaze leading induces an implicit sense of agency.
Stephenson, Lisa J; Edwards, S Gareth; Howard, Emma E; Bayliss, Andrew P
2018-03-01
Humans feel a sense of agency over the effects their motor system causes. This is the case for manual actions such as pushing buttons, kicking footballs, and all acts that affect the physical environment. We ask whether initiating joint attention - causing another person to follow our eye movement - can elicit an implicit sense of agency over this congruent gaze response. Eye movements themselves cannot directly affect the physical environment, but joint attention is an example of how eye movements can indirectly cause social outcomes. Here we show that leading the gaze of an on-screen face induces an underestimation of the temporal gap between action and consequence (Experiments 1 and 2). This underestimation effect, named 'temporal binding,' is thought to be a measure of an implicit sense of agency. Experiment 3 asked whether merely making an eye movement in a non-agentic, non-social context might also affect temporal estimation, and no reliable effects were detected, implying that inconsequential oculomotor acts do not reliably affect temporal estimations under these conditions. Together, these findings suggest that an implicit sense of agency is generated when initiating joint attention interactions. This is important for understanding how humans can efficiently detect and understand the social consequences of their actions. Copyright © 2017 Elsevier B.V. All rights reserved.
Enabling Disabled Persons to Gain Access to Digital Media
NASA Technical Reports Server (NTRS)
Beach, Glenn; O'Grady, Ryan
2011-01-01
A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
Face recognition increases during saccade preparation.
Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian
2014-01-01
Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.
Extracting information of fixational eye movements through pupil tracking
NASA Astrophysics Data System (ADS)
Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng
2018-01-01
Human eyes are never completely static, even when fixating a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in visual processing have not been explained clearly, because until now these signals could hardly be extracted completely. In this paper, we developed a new eye movement detection device with a high-speed camera. This device includes a beam splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, experiments on pupil tracking were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are shown clearly. The experimental results show that the device is feasible and effective, and that it can be applied in further characteristic analysis.
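A minimal version of this analysis chain (pupil-center localization per frame, then a frequency spectrum of the resulting position trace) might look like the sketch below. The centroid-of-threshold localization is an assumption about one common approach, not necessarily the device's actual algorithm.

```python
import numpy as np

FRAME_RATE = 200.0  # Hz, the camera frame rate reported above

def pupil_centre(binary_pupil):
    """Centroid of a thresholded pupil image (one frame)."""
    ys, xs = np.nonzero(binary_pupil)
    return xs.mean(), ys.mean()

def movement_spectrum(position_trace):
    """Amplitude spectrum of a pupil-position trace (one axis)."""
    x = np.asarray(position_trace) - np.mean(position_trace)
    amp = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FRAME_RATE)
    return freqs, amp
```

With a 200 Hz frame rate, frequencies up to the 100 Hz Nyquist limit are resolvable, which covers the drift and micro-saccade bands but only the lower part of the tremor band.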
Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo; Barnes, Graham R.
2013-01-01
Smooth-pursuit eye movements allow primates to track moving objects. Efficient pursuit requires appropriate target selection and predictive compensation for inherent processing delays. Prediction depends on expectation of future object motion, storage of motion information and use of extra-retinal mechanisms in addition to visual feedback. We present behavioral evidence of how cognitive processes are involved in predictive pursuit in normal humans and then describe neuronal responses in monkeys and behavioral responses in patients using a new technique to test these cognitive controls. The new technique examines the neural substrate of working memory and movement preparation for predictive pursuit by using a memory-based task in macaque monkeys trained to pursue (go) or not pursue (no-go) according to a go/no-go cue, in a direction based on memory of a previously presented visual motion display. Single-unit task-related neuronal activity was examined in medial superior temporal cortex (MST), supplementary eye fields (SEF), caudal frontal eye fields (FEF), cerebellar dorsal vermis lobules VI–VII, caudal fastigial nuclei (cFN), and floccular region. Neuronal activity reflecting working memory of visual motion direction and go/no-go selection was found predominantly in SEF, cerebellar dorsal vermis and cFN, whereas movement preparation related signals were found predominantly in caudal FEF and the same cerebellar areas. Chemical inactivation produced effects consistent with differences in signals represented in each area. When applied to patients with Parkinson's disease (PD), the task revealed deficits in movement preparation but not working memory. In contrast, patients with frontal cortical or cerebellar dysfunction had high error rates, suggesting impaired working memory. We show how neuronal activity may be explained by models of retinal and extra-retinal interaction in target selection and predictive control and thus aid understanding of underlying pathophysiology. PMID:23515488
Exploring responses to art in adolescence: a behavioral and eye-tracking study.
Savazzi, Federica; Massaro, Davide; Di Dio, Cinzia; Gallese, Vittorio; Gilli, Gabriella; Marchetti, Antonella
2014-01-01
Adolescence is a peculiar age mainly characterized by physical and psychological changes that may affect the perception of one's own and others' body. This perceptual peculiarity may influence the way in which bottom-up and top-down processes interact and, consequently, the perception and evaluation of art. This study is aimed at investigating, by means of the eye-tracking technique, the visual explorative behavior of adolescents while looking at paintings. Sixteen color paintings, categorized as dynamic and static, were presented to twenty adolescents; half of the images represented natural environments and half human individuals; all stimuli were displayed under aesthetic and movement judgment tasks. Participants' ratings revealed that, generally, nature images are explicitly evaluated as more appealing than human images. Eye movement data, on the other hand, showed that the human body exerts a strong power in orienting and attracting visual attention and that, in adolescence, it plays a fundamental role during aesthetic experience. In particular, adolescents seem to approach human-content images by giving priority to elements calling forth movement and action, supporting the embodiment theory of aesthetic perception. PMID:25048813
Brain–computer interfaces: communication and restoration of movement in paralysis
Birbaumer, Niels; Cohen, Leonardo G
2007-01-01
The review describes the status of brain–computer or brain–machine interface research. We focus on non-invasive brain–computer interfaces (BCIs) and their clinical utility for direct brain communication in paralysis and motor restoration in stroke. A large gap between the promises of invasive animal and human BCI preparations and the clinical reality characterizes the literature: while intact monkeys learn to execute more or less complex upper limb movements with spike patterns from motor brain regions alone without concomitant peripheral motor activity usually after extensive training, clinical applications in human diseases such as amyotrophic lateral sclerosis and paralysis from stroke or spinal cord lesions show only limited success, with the exception of verbal communication in paralysed and locked-in patients. BCIs based on electroencephalographic potentials or oscillations are ready to undergo large clinical studies and commercial production as an adjunct or a major assisted communication device for paralysed and locked-in patients. However, attempts to train completely locked-in patients with BCI communication after entering the complete locked-in state with no remaining eye movement failed. We propose that a lack of contingencies between goal directed thoughts and intentions may be at the heart of this problem. Experiments with chronically curarized rats support our hypothesis; operant conditioning and voluntary control of autonomic physiological functions turned out to be impossible in this preparation. In addition to assisted communication, BCIs consisting of operant learning of EEG slow cortical potentials and sensorimotor rhythm were demonstrated to be successful in drug resistant focal epilepsy and attention deficit disorder. First studies of non-invasive BCIs using sensorimotor rhythm of the EEG and MEG in restoration of paralysed hand movements in chronic stroke and single cases of high spinal cord lesions show some promise, but need extensive evaluation in well-controlled experiments. Invasive BMIs based on neuronal spike patterns, local field potentials or electrocorticogram may constitute the strategy of choice in severe cases of stroke and spinal cord paralysis. Future directions of BCI research should include the regulation of brain metabolism and blood flow and electrical and magnetic stimulation of the human brain (invasive and non-invasive). A series of studies using BOLD response regulation with functional magnetic resonance imaging (fMRI) and near infrared spectroscopy demonstrated a tight correlation between voluntary changes in brain metabolism and behaviour. PMID:17234696
Electroencephalographic prodromal markers of dementia across conscious states in Parkinson’s disease
Latreille, Véronique; Gaudet-Fex, Benjamin; Rodrigues-Brazète, Jessica; Panisset, Michel; Chouinard, Sylvain; Postuma, Ronald B.
2016-01-01
In Parkinson’s disease, electroencephalographic abnormalities during wakefulness and non-rapid eye movement sleep (spindles) were found to be predictive biomarkers of dementia. Because rapid eye movement sleep is regulated by the cholinergic system, which shows early degeneration in Parkinson’s disease with cognitive impairment, anomalies during this sleep stage might mirror dementia development. In this prospective study, we examined baseline electroencephalographic absolute spectral power across three states of consciousness (non-rapid eye movement sleep, rapid eye movement sleep, and wakefulness) in 68 non-demented patients with Parkinson’s disease and 44 healthy controls. All participants underwent baseline polysomnographic recordings and a comprehensive neuropsychological assessment. Power spectral analyses were performed on standard frequency bands. Dominant occipital frequency during wakefulness and ratios of slow-to-fast frequencies during rapid eye movement sleep and wakefulness were also computed. At follow-up (an average 4.5 years after baseline), 18 patients with Parkinson’s disease had developed dementia and 50 patients remained dementia-free. In rapid eye movement sleep, patients with Parkinson’s disease who later developed dementia showed, at baseline, higher absolute power in delta and theta bands and a higher slowing ratio, especially in temporal, parietal, and occipital regions, compared to patients who remained dementia-free and controls. In non-rapid eye movement sleep, lower baseline sigma power in parietal cortical regions also predicted development of dementia. During wakefulness, patients with Parkinson’s disease who later developed dementia showed lower dominant occipital frequency as well as higher delta and slowing ratio compared to patients who remained dementia-free and controls. At baseline, higher slowing ratios in temporo-occipital regions during rapid eye movement sleep were associated with poor performance on visuospatial tests in patients with Parkinson’s disease. Using receiver operating characteristic curves, we found that best predictors of dementia in Parkinson’s disease were rapid eye movement sleep slowing ratios in posterior regions, wakefulness slowing ratios in temporal areas, and lower dominant occipital frequency. These results suggest that electroencephalographic slowing during sleep is a new promising predictive biomarker for Parkinson’s disease dementia, perhaps as a marker of cholinergic denervation. PMID:26912643
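A sketch of the slow-to-fast ratio computation described above, using Welch power estimates. The band limits and the (delta + theta) / (alpha + beta) form are assumptions based on common practice; the abstract does not define them.

```python
import numpy as np
from scipy.signal import welch

# Assumed band limits in Hz; the study's exact definitions are not given.
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg, fs, lo, hi):
    """Absolute spectral power in one frequency band (Welch estimate)."""
    f, pxx = welch(eeg, fs=fs, nperseg=int(4 * fs))
    mask = (f >= lo) & (f < hi)
    return np.trapz(pxx[mask], f[mask])

def slowing_ratio(eeg, fs):
    """Slow-to-fast power ratio: (delta + theta) / (alpha + beta)."""
    p = {name: band_power(eeg, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
    return (p["delta"] + p["theta"]) / (p["alpha"] + p["beta"])
```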
Exploring eye movements in patients with glaucoma when viewing a driving scene.
Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F
2010-03-16
Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Purpose-written computer software was used to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for fitness to drive. PMID:20300522
Inertial Movements of the Iris as the Origin of Postsaccadic Oscillations.
Bouzat, S; Freije, M L; Frapiccini, A L; Gasaneo, G
2018-04-27
Recent studies on the human eye indicate that the pupil moves inside the eyeball due to deformations of the iris. Here we show that this phenomenon can be originated by inertial forces undergone by the iris during the rotation of the eyeball. Moreover, these forces affect the iris in such a way that the pupil behaves effectively as a massive particle. To show this, we develop a model based on the Newton equation on the noninertial reference frame of the eyeball. The model allows us to reproduce and interpret several important findings of recent eye-tracking experiments on saccadic movements. In particular, we get correct results for the dependence of the amplitude and period of the postsaccadic oscillations on the saccade size and also for the peak velocity. The model developed may serve as a tool for characterizing eye properties of individuals.
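Schematically, treating the pupil as a massive particle in the eyeball's noninertial frame suggests an equation of the following form; the specific elastic and damping terms are a reading of the description above, not the authors' stated model:

```latex
\[
  m\,\ddot{\mathbf{x}}
  \;=\;
  -k\,\mathbf{x} \;-\; c\,\dot{\mathbf{x}} \;+\; \mathbf{F}_{\mathrm{inertial}}(t),
\]
```

where $\mathbf{x}$ is the pupil displacement in the iris plane, $k$ and $c$ model the elastic and dissipative response of the iris tissue, and $\mathbf{F}_{\mathrm{inertial}}$ collects the fictitious forces arising from the eyeball's rotation during a saccade. Damped oscillatory solutions of such an equation would correspond to the postsaccadic oscillations whose amplitude and period vary with saccade size.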
Relationship between saccadic eye movements and formation of the Krukenberg's spindle-a CFD study.
Boushehrian, Hamidreza Hajiani; Abouali, Omid; Jafarpur, Khosrow; Ghaffarieh, Alireza; Ahmadi, Goodarz
2017-09-01
In this research, a series of numerical simulations for evaluating the effects of saccadic eye movement on the aqueous humour (AH) flow field and movement of pigment particles in the anterior chamber (AC) was performed. To predict the flow field of AH in the AC, the unsteady forms of continuity, momentum balance and conservation of energy equations were solved using the dynamic mesh technique for simulating the saccadic motions. Different orientations of the human eye including horizontal, vertical and angles of 10° and 20° were considered. The Lagrangian particle trajectory analysis approach was used to find the trajectories of pigment particles in the eye. Particular attention was given to the relation between the saccadic eye movement and potential formation of Krukenberg's spindle in the eye. The simulation results revealed that the natural convection flow was an effective mechanism for transferring pigment particles from the iris to near the cornea. In addition, the saccadic eye movement was the dominant mechanism for deposition of pigment particles on the cornea, which could lead to the formation of Krukenberg's spindle. The effect of amplitude of saccade motion angle in addition to the orientation of the eye on the formation of Krukenberg's spindle was investigated. © The authors 2016. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
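The governing equations referred to above, written here in a Boussinesq form commonly used for aqueous humour flow; the specific formulation and the Stokes-drag particle equation are assumptions, since the abstract names the equations but not their form:

```latex
\[
  \nabla\cdot\mathbf{u} = 0, \qquad
  \rho\!\left(\frac{\partial\mathbf{u}}{\partial t}
    + \mathbf{u}\cdot\nabla\mathbf{u}\right)
  = -\nabla p + \mu\nabla^{2}\mathbf{u}
    - \rho\,\beta\,(T - T_{0})\,\mathbf{g}, \qquad
  \frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T
  = \alpha\nabla^{2}T,
\]
```

with the Lagrangian pigment particles then obeying, in the Stokes regime, $m_p\,\dot{\mathbf{v}} = 3\pi\mu d\,(\mathbf{u}-\mathbf{v}) + m_p\,\mathbf{g}$, where $d$ is the particle diameter, $\mathbf{u}$ the local AH velocity and $\mathbf{v}$ the particle velocity.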
NASA Astrophysics Data System (ADS)
Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.
2017-03-01
Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated; the visual search measures and their analyses are considered here. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) score showed higher values for the experienced group on the fixation and dwell-time measurements. Positive correlations were also found between the AUC scores of the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can thus act as a potential performance indicator for assessment in medical image interpretation and for evaluating training procedures. These analyses point to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
Combining EEG and eye movement recording in free viewing: Pitfalls and possibilities.
Nikolaev, Andrey R; Meghanathan, Radha Nila; van Leeuwen, Cees
2016-08-01
Co-registration of EEG and eye movement has promise for investigating perceptual processes in free viewing conditions, provided certain methodological challenges can be addressed. Most of these arise from the self-paced character of eye movements in free viewing conditions. Successive eye movements occur within short time intervals. Their evoked activity is likely to distort the EEG signal during fixation. Due to the non-uniform distribution of fixation durations, these distortions are systematic, survive across-trials averaging, and can become a source of confounding. We illustrate this problem with effects of sequential eye movements on the evoked potentials and time-frequency components of EEG and propose a solution based on matching of eye movement characteristics between experimental conditions. The proposal leads to a discussion of which eye movement characteristics are to be matched, depending on the EEG activity of interest. We also compare segmentation of EEG into saccade-related epochs relative to saccade and fixation onsets and discuss the problem of baseline selection and its solution. Further recommendations are given for implementing EEG-eye movement co-registration in free viewing conditions. By resolving some of the methodological problems involved, we aim to facilitate the transition from the traditional stimulus-response paradigm to the study of visual perception in more naturalistic conditions. Copyright © 2016 Elsevier Inc. All rights reserved.
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
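Inter-rater agreement corrected for chance can be sketched in a few lines. It is shown here in the familiar two-rater form, κ = (p_o − p_e)/(1 − p_e), as an illustration rather than the study's exact formula.

```python
from collections import Counter

def kappa(rater_a, rater_b):
    """Two-rater agreement corrected for chance: (po - pe) / (1 - pe).

    rater_a, rater_b: equal-length sequences of categorical judgments.
    """
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)                # by chance
    return (po - pe) / (1 - pe)
```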
Vernet, Marine; Quentin, Romain; Chanes, Lorena; Mitsumasu, Andres; Valero-Cabré, Antoni
2014-01-01
The planning, control and execution of eye movements in 3D space relies on a distributed system of cortical and subcortical brain regions. Within this network, the Eye Fields have been described in animals as cortical regions in which electrical stimulation is able to trigger eye movements and influence their latency or accuracy. This review focuses on the Frontal Eye Field (FEF), a “hub” region located in humans in the vicinity of the pre-central sulcus and the dorsal-most portion of the superior frontal sulcus. The straightforward localization of the FEF through electrical stimulation in animals is difficult to translate to the healthy human brain, particularly with non-invasive neuroimaging techniques. Hence, in the first part of this review, we describe attempts made to characterize the anatomical localization of this area in the human brain. The outcome of functional Magnetic Resonance Imaging (fMRI), Magneto-encephalography (MEG) and particularly, non-invasive mapping methods such as Transcranial Magnetic Stimulation (TMS) are described and the variability of FEF localization across individuals and mapping techniques are discussed. In the second part of this review, we will address the role of the FEF. We explore its involvement both in the physiology of fixation, saccade, pursuit, and vergence movements and in associated cognitive processes such as attentional orienting, visual awareness and perceptual modulation. Finally in the third part, we review recent evidence suggesting the high level of malleability and plasticity of these regions and associated networks to non-invasive stimulation. The exploratory, diagnostic, and therapeutic interest of such interventions for the modulation and improvement of perception in 3D space are discussed. PMID:25202241
Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus
2017-04-01
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of which algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
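As a point of reference for what such detectors do, below is a minimal sketch of a velocity-threshold (I-VT) classifier, one of the simplest detector families in this literature, together with a sample-by-sample comparison against a manual rating. The threshold, the synthetic data, and the use of Cohen's kappa as the agreement measure are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def ivt_classify(x, y, fs, saccade_thresh=100.0):
    """Label each gaze sample 'fix' or 'sac' from point-to-point velocity (deg/s)."""
    speed = np.hypot(np.gradient(x), np.gradient(y)) * fs
    return np.where(speed > saccade_thresh, "sac", "fix")

fs = 1250.0                                   # SMI HiSpeed sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)
x = np.where(t < 0.5, 2.0, 12.0) + 0.02 * np.random.randn(t.size)  # 10-deg step
y = np.zeros_like(t)

auto = ivt_classify(x, y, fs)
# Stand-in for an expert's manual rating: a brief saccade around t = 0.5 s.
manual = np.where(np.abs(t - 0.5) < 0.01, "sac", "fix")
print("sample-by-sample kappa:", round(cohen_kappa_score(auto, manual), 2))
```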
Consequences of Traumatic Brain Injury for Human Vergence Dynamics
Tyler, Christopher W.; Likova, Lora T.; Mineff, Kristyo N.; Elsaid, Anas M.; Nicholas, Spero C.
2015-01-01
Purpose: Traumatic brain injury involving loss of consciousness has focal effects in the human brainstem, suggesting that it may have particular consequences for eye movement control. This hypothesis was investigated by measurements of vergence eye movement parameters. Methods: Disparity vergence eye movements were measured for a population of 123 normally sighted individuals, 26 of whom had suffered diffuse traumatic brain injury (dTBI) in the past, while the remainder served as controls. Vergence tracking responses were measured to sinusoidal disparity modulation of a random-dot field. Disparity vergence step responses were characterized in terms of their dynamic parameters separately for the convergence and divergence directions. Results: The control group showed notable differences between convergence and divergence dynamics. The dTBI group showed significantly abnormal vergence behavior on many of the dynamic parameters. Conclusion: The results support the hypothesis that occult injury to the oculomotor control system is a common residual outcome of dTBI. PMID:25691880
Green, C R; Mihic, A M; Brien, D C; Armstrong, I T; Nikkel, S M; Stade, B C; Rasmussen, C; Munoz, D P; Reynolds, J N
2009-03-01
Prenatal exposure to alcohol can result in a spectrum of adverse developmental outcomes, collectively termed fetal alcohol spectrum disorders (FASDs). This study evaluated deficits in sensory, motor and cognitive processing in children with FASD that can be identified using eye movement testing. Our study group was composed of 89 children aged 8-15 years with a diagnosis within the FASD spectrum [i.e. fetal alcohol syndrome (FAS), partial fetal alcohol syndrome (pFAS), and alcohol-related neurodevelopmental disorder (ARND)], and 92 controls. Subjects looked either towards (prosaccade) or away from (antisaccade) a peripheral target that appeared on a computer monitor, and eye movements were recorded with a mobile, video-based eye tracker. We hypothesized that: (i) differences in the magnitude of deficits in eye movement control exist across the three diagnostic subgroups; and (ii) children with FASD display a developmental delay in oculomotor control. Children with FASD had increased saccadic reaction times (SRTs), increased intra-subject variability in SRTs, and increased direction errors in both the prosaccade and antisaccade tasks. Although development was associated with improvements across tasks, children with FASD failed to achieve age-matched control levels of performance at any of the ages tested. Moreover, children with ARND had faster SRTs and made fewer direction errors in the antisaccade task than children with pFAS or FAS, although all subgroups were different from controls. Our results demonstrate that eye tracking can be used as an objective measure of brain injury in FASD, revealing behavioral deficits in all three diagnostic subgroups independent of facial dysmorphology.
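A minimal sketch of the three oculomotor measures emphasized above, computed from per-trial records: mean saccadic reaction time (SRT), intra-subject SRT variability, and direction error rate. The record format and example values are hypothetical.

```python
import numpy as np

# One tuple per trial: (SRT in ms, saccade direction, instructed direction).
trials = [(245, "left", "left"), (310, "right", "right"),
          (198, "left", "right"), (402, "right", "right")]

srts = np.array([t[0] for t in trials], dtype=float)
errors = [made != instructed for _, made, instructed in trials]

print(f"mean SRT: {srts.mean():.0f} ms")
print(f"intra-subject variability (SD): {srts.std(ddof=1):.0f} ms")
print(f"direction error rate: {100 * np.mean(errors):.0f}%")
```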
A novel role for visual perspective cues in the neural computation of depth.
Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C
2015-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.
Eye Tracking Metrics for Workload Estimation in Flight Deck Operation
NASA Technical Reports Server (NTRS)
Ellis, Kyle; Schnell, Thomas
2010-01-01
Flight decks of the future are being enhanced through improved avionics that adapt to both aircraft and operator state. Eye tracking allows for non-invasive analysis of pilot eye movements, from which a set of metrics can be derived to effectively and reliably characterize workload. This research identifies eye tracking metrics that correlate to aircraft automation conditions, and identifies the correlation of pilot workload to the same automation conditions. Saccade length was used as an indirect index of pilot workload: pilots in the fully automated condition were observed to have, on average, larger saccadic movements in contrast to the guidance and manual flight conditions. The data set itself also provides a general model of human eye movement behavior, and thus ostensibly of visual attention distribution, in the cockpit for approach-to-land tasks with various levels of automation, by means of the same metrics used for workload algorithm development.
Evaluation of Eye Metrics as a Detector of Fatigue
2010-03-01
eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eye-lid movement, and eye-ball movement. ...because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the... of “fitness for duty” testing and “real-time monitoring” of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based
Observers' cognitive states modulate how visual inputs relate to gaze control.
Kardan, Omid; Henderson, John M; Yourganov, Grigori; Berman, Marc G
2016-09-01
Previous research has shown that eye-movements change depending on both the visual features of our environment, and the viewer's top-down knowledge. One important question that is unclear is the degree to which the visual goals of the viewer modulate how visual features of scenes guide eye-movements. Here, we propose a systematic framework to investigate this question. In our study, participants performed 3 different visual tasks on 135 scenes: search, memorization, and aesthetic judgment, while their eye-movements were tracked. Canonical correlation analyses showed that eye-movements were reliably more related to low-level visual features at fixations during the visual search task compared to the aesthetic judgment and scene memorization tasks. Different visual features also had different relevance to eye-movements between tasks. This modulation of the relationship between visual features and eye-movements by task was also demonstrated with classification analyses, where classifiers were trained to predict the viewing task based on eye movements and visual features at fixations. Feature loadings showed that the visual features at fixations could signal task differences independent of temporal and spatial properties of eye-movements. When classifying across participants, edge density and saliency at fixations were as important as eye-movements in the successful prediction of task, with entropy and hue also being significant, but with smaller effect sizes. When classifying within participants, brightness and saturation were also significant contributors. Canonical correlation and classification results, together with a test of moderation versus mediation, suggest that the cognitive state of the observer moderates the relationship between stimulus-driven visual features and eye-movements. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
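A minimal sketch of the analysis family used above, with scikit-learn's CCA relating a matrix of visual features at fixations to a matrix of eye-movement measures. The feature names, shapes, and data are illustrative assumptions, not the study's materials.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_fixations = 500
# View 1: low-level visual features at each fixation
# (e.g., edge density, saliency, entropy, hue, brightness, saturation).
visual = rng.normal(size=(n_fixations, 6))
# View 2: eye-movement measures (e.g., fixation duration, saccade amplitude),
# here weakly coupled to the first two visual features for illustration.
eye = 0.4 * visual[:, :2] + rng.normal(size=(n_fixations, 2))

cca = CCA(n_components=2).fit(visual, eye)
U, V = cca.transform(visual, eye)
# Canonical correlations: correlations between paired canonical variates.
for i in range(2):
    r = np.corrcoef(U[:, i], V[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.2f}")
```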
Distinct timing mechanisms produce discrete and continuous movements.
Huys, Raoul; Studenka, Breanna E; Rheaume, Nicole L; Zelaznik, Howard N; Jirsa, Viktor K
2008-04-25
The differentiation of discrete and continuous movement is one of the pillars of motor behavior classification. Discrete movements have a definite beginning and end, whereas continuous movements do not have such discriminable end points. In the past decade there has been vigorous debate whether this classification implies different control processes. This debate up until the present has been empirically based. Here, we present an unambiguous non-empirical classification based on theorems in dynamical system theory that sets discrete and continuous movements apart. Through computational simulations of representative modes of each class and topological analysis of the flow in state space, we show that distinct control mechanisms underwrite discrete and fast rhythmic movements. In particular, we demonstrate that discrete movements require a time keeper while fast rhythmic movements do not. We validate our computational findings experimentally using a behavioral paradigm in which human participants performed finger flexion-extension movements at various movement paces and under different instructions. Our results demonstrate that the human motor system employs different timing control mechanisms (presumably via differential recruitment of neural subsystems) to accomplish varying behavioral functions such as speed constraints.
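A minimal sketch of the dynamical distinction drawn above: a discrete movement modeled as relaxation to a fixed point, and a fast rhythmic movement as a self-sustained limit cycle. A Van der Pol oscillator serves here as a generic stand-in, not the authors' specific model, and all parameter values are illustrative.

```python
import numpy as np

def simulate(deriv, state, dt=0.001, steps=20000):
    """Forward-Euler integration of a 2D dynamical system."""
    traj = [np.asarray(state, dtype=float)]
    for _ in range(steps):
        traj.append(traj[-1] + dt * np.asarray(deriv(traj[-1])))
    return np.array(traj)

# Discrete movement: damped second-order system settling at the target x = 1
# (a fixed point of the flow; the trajectory has a definite end).
discrete = simulate(lambda s: [s[1], -10.0 * s[1] - 25.0 * (s[0] - 1.0)],
                    [0.0, 0.0])

# Rhythmic movement: Van der Pol oscillator converging onto a limit cycle
# (the trajectory keeps cycling, with no discriminable end points).
rhythmic = simulate(lambda s: [s[1], 1.0 * (1.0 - s[0] ** 2) * s[1] - s[0]],
                    [0.1, 0.0])

print("discrete endpoint:", discrete[-1].round(3))              # approx [1, 0]
print("rhythmic swing:", np.ptp(rhythmic[-7000:, 0]).round(2))  # stays large
```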
Measuring Human Performance in Simulated Nuclear Power Plant Control Rooms Using Eye Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovesdi, Casey Robert; Rice, Brandon Charles; Bower, Gordon Ross
Control room modernization will be an important part of life extension for the existing light water reactor fleet. As part of modernization efforts, personnel will need to gain a full understanding of how control room technologies affect the performance of human operators. Recent advances in technology enable the use of eye tracking to continuously measure an operator's eye movements, which correlate with a variety of human performance constructs such as situation awareness and workload. This report describes eye tracking metrics in the context of how they will be used in nuclear power plant control room simulator studies.
NASA Astrophysics Data System (ADS)
Huang, Hongxin; Toyoda, Haruyoshi; Inoue, Takashi
2017-09-01
The performance of an adaptive optics scanning laser ophthalmoscope (AO-SLO) using a liquid crystal on silicon spatial light modulator and Shack-Hartmann wavefront sensor was investigated. The system achieved high-resolution and high-contrast images of human retinas by dynamic compensation for the aberrations in the eyes. Retinal structures such as photoreceptor cells, blood vessels, and nerve fiber bundles, as well as blood flow, could be observed in vivo. We also investigated involuntary eye movements and ascertained microsaccades and drifts using both the retinal images and the aberrations recorded simultaneously. Furthermore, we measured the interframe displacement of retinal images and found that during eye drift, the displacement has a linear relationship with the residual low-order aberration. The estimated duration and cumulative displacement of the drift were within the ranges estimated by a video tracking technique. The AO-SLO could not only be used for the early detection of eye diseases, but could also offer a new approach to involuntary eye movement research.
Modern Speed-Reading Apps Do Not Foster Reading Comprehension.
Acklin, Dina; Papesh, Megan H
2017-01-01
New computer apps are gaining popularity by suggesting that reading speeds can be drastically increased when eye movements that normally occur during reading are eliminated. This is done using rapid serial visual presentation (RSVP), where words are presented one at a time, thus preventing natural eye movements such as saccades, fixations, and regressions from occurring. Although the companies producing these apps suggest that RSVP reading does not yield comprehension deficits, research investigating the role of eye movements in reading demonstrates the necessity of natural eye movements for accurate comprehension. The current study explored variables that may affect reading comprehension during RSVP reading, including text difficulty (6th grade and 12th grade), text presentation speed (static, 700 wpm, and 1,000 wpm), and working memory capacity (WMC). Consistent with recent work showing a tenuous relationship between comprehension and WMC, participants' WMC did not predict comprehension scores. Instead, comprehension was most affected by reading speed: static text was associated with superior performance, relative to either RSVP reading condition. Furthermore, slower RSVP speeds yielded better verbatim comprehension, and faster speeds benefited inferential comprehension.
Maroufi, Mohsen; Zamani, Shahla; Izadikhah, Zahra; Marofi, Maryam; O'Connor, Peter
2016-09-01
To investigate the efficacy of Eye Movement Desensitization and Reprocessing for postoperative pain management in adolescents. Eye Movement Desensitization and Reprocessing is an inexpensive, non-pharmacological intervention that has successfully been used to treat chronic pain. It holds promise in the treatment of acute, postsurgical pain based on its purported effects on the brain and nervous system. A randomized controlled trial was used. Fifty-six adolescent surgical patients aged 12-18 years were allocated to gender-balanced Eye Movement Desensitization and Reprocessing (treatment) or non-Eye Movement Desensitization and Reprocessing (control) groups. Pain was measured using the Wong-Baker FACES(®) Pain Rating Scale (WBFS) before and after the intervention (or non-intervention for the control group). A Wilcoxon signed-rank test demonstrated that the Eye Movement Desensitization and Reprocessing group experienced a significant reduction in pain intensity after the treatment intervention, whereas the control group did not. Additionally, a Mann-Whitney U-test showed that, while there was no significant difference between the two groups at time 1, there was a significant difference in pain intensity between the two groups at time 2, with the Eye Movement Desensitization and Reprocessing group experiencing lower levels of pain. These results suggest that Eye Movement Desensitization and Reprocessing may be an effective treatment modality for postoperative pain. © 2016 John Wiley & Sons Ltd.
Eye movement analysis for activity recognition using electrooculography.
Bulling, Andreas; Ward, Jamie A; Gellersen, Hans; Tröster, Gerhard
2011-04-01
In this work, we investigate eye movement analysis as a new sensing modality for activity recognition. Eye movement data were recorded using an electrooculography (EOG) system. We first describe and evaluate algorithms for detecting three eye movement characteristics from EOG signals-saccades, fixations, and blinks-and propose a method for assessing repetitive patterns of eye movements. We then devise 90 different features based on these characteristics and select a subset of them using minimum redundancy maximum relevance (mRMR) feature selection. We validate the method using an eight-participant study in an office environment using an example set of five activity classes: copying a text, reading a printed paper, taking handwritten notes, watching a video, and browsing the Web. We also include periods with no specific activity (the NULL class). Using a support vector machine (SVM) classifier and person-independent (leave-one-person-out) training, we obtain an average precision of 76.1 percent and recall of 70.5 percent over all classes and participants. The work demonstrates the promise of eye-based activity recognition (EAR) and opens up discussion on the wider applicability of EAR to other activities that are difficult, or even impossible, to detect using common sensing modalities.
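A minimal sketch of such a recognition pipeline, with scikit-learn's mutual-information ranking standing in for mRMR and leave-one-person-out evaluation implemented via grouped cross-validation. The feature matrix is a random placeholder for the 90 EOG features, and the counts are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_windows, n_features = 240, 90            # EOG feature windows
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, 6, size=n_windows)     # 5 activity classes + NULL
person = np.repeat(np.arange(8), 30)       # 8 participants, 30 windows each

clf = make_pipeline(StandardScaler(),
                    SelectKBest(mutual_info_classif, k=15),
                    SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=person, cv=LeaveOneGroupOut())
print("per-person accuracy:", np.round(scores, 2))
```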
Curved Saccade Trajectories Reveal Conflicting Predictions in Associative Learning
ERIC Educational Resources Information Center
Koenig, Stephan; Lachnit, Harald
2011-01-01
We report how the trajectories of saccadic eye movements are affected by memory interference acquired during associative learning. Human participants learned to perform saccadic choice responses based on the presentation of arbitrary central cues A, B, AC, BC, AX, BY, X, and Y that were trained to predict the appearance of a peripheral target…
Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment
2015-12-01
the eye and 3) purposeful eye movements to track targets that are resolved. Major Findings: Three major objective tests of vision were successfully... developed and optimized to detect disease. These were 1) the pupil light reflex (either comparing the two eyes or independently evaluating each eye... separately) for retina or optic nerve damage, 2) eye movement based analysis of target acquisition, fixation, and eccentric viewing as a means of
Real-time eye tracking for the assessment of driver fatigue.
Xu, Junli; Min, Jianliang; Hu, Jianfeng
2018-04-01
Eye-tracking is an important approach to collecting evidence regarding participants' driving fatigue. In this contribution, the authors present a non-intrusive system for evaluating driver fatigue by tracking eye movement behaviours. A real-time eye-tracker was used to monitor participants' eye state and collect eye-movement data. These data are useful for gaining insight into participants' fatigue state during monotonous driving. Ten healthy subjects performed continuous simulated driving for 1-2 h with eye state monitoring on a driving simulator in this study, and fixation time and pupil area were recorded with the eye movement tracking device. To achieve a good cost-performance ratio and fast computation time, the fuzzy K-nearest neighbour was employed to evaluate and analyse the influence of different participants on the variations in drivers' fixation duration and pupil area. The findings of this study indicate that there are significant differences in the distribution of pupil area between normal and fatigued driving states. Results also suggest that recognition accuracy under jackknife validation reaches about 89% on average, demonstrating significant potential for real-time application of the proposed approach to detecting driver fatigue.
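A minimal sketch of the fuzzy K-nearest-neighbour rule (in the spirit of Keller et al.'s classic formulation) applied to fatigue classification from fixation-duration and pupil-area features. The training data, class labels, and probe values are hypothetical placeholders.

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0, n_classes=2):
    """Return per-class membership of sample x (higher = more confident)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                              # k nearest neighbours
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))  # inverse-distance weights
    memb = np.zeros(n_classes)
    for idx, weight in zip(nn, w):
        memb[y_train[idx]] += weight
    return memb / memb.sum()

# Features: [mean fixation duration (s), mean pupil area (a.u.)]
X_train = np.array([[0.25, 3.1], [0.27, 3.0], [0.55, 4.2], [0.60, 4.5]])
y_train = np.array([0, 0, 1, 1])                        # 0 = alert, 1 = fatigued
print("memberships (alert, fatigued):",
      fuzzy_knn(X_train, y_train, np.array([0.52, 4.0])))
```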
A common stochastic accumulator with effector-dependent noise can explain eye-hand coordination
Gopal, Atul; Viswanathan, Pooja
2015-01-01
The computational architecture that enables the flexible coupling between otherwise independent eye and hand effector systems is not understood. By using a drift diffusion framework, in which variability of the reaction time (RT) distribution scales with mean RT, we tested the ability of a common stochastic accumulator to explain eye-hand coordination. Using a combination of behavior, computational modeling and electromyography, we show how a single stochastic accumulator to threshold, followed by noisy effector-dependent delays, explains eye-hand RT distributions and their correlation, while an alternate independent, interactive eye and hand accumulator model does not. Interestingly, the common accumulator model did not explain the RT distributions of the same subjects when they made eye and hand movements in isolation. Taken together, these data suggest that a dedicated circuit underlies coordinated eye-hand planning. PMID:25568161
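A minimal sketch of the common-accumulator idea described above: a single drift-diffusion process rises to threshold, after which independent, noisy effector-dependent delays are added for eye and hand. All parameter values are illustrative, not the authors' fits.

```python
import numpy as np

rng = np.random.default_rng(1)

def accumulation_time(drift=100.0, noise=30.0, threshold=30.0, dt=0.001):
    """Time for a single noisy accumulator to first reach threshold (s)."""
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()  # Euler-Maruyama step
        t += dt
    return t

n_trials = 500
accum = np.array([accumulation_time() for _ in range(n_trials)])
# Effector-dependent, independent post-decision delays (mean, SD in s).
eye_rt = accum + rng.normal(0.05, 0.010, n_trials)
hand_rt = accum + rng.normal(0.12, 0.030, n_trials)
print("mean eye RT:", eye_rt.mean().round(3), "s")
print("mean hand RT:", hand_rt.mean().round(3), "s")
# Shared accumulation variance makes eye and hand RTs strongly correlated.
print("eye-hand RT correlation:", np.corrcoef(eye_rt, hand_rt)[0, 1].round(2))
```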
ERIC Educational Resources Information Center
Spichtig, Alexandra N.; Hiebert, Elfrieda H.; Vorstius, Christian; Pascoe, Jeffrey P.; Pearson, P. David; Radach, Ralph
2016-01-01
The present study measured the comprehension-based silent reading efficiency of U.S. students in grades 2, 4, 6, 8, 10, and 12. Students read standardized grade-level passages while an eye movement recording system was used to measure reading rate, fixations (eye stops) per word, fixation durations, and regressions (right-to-left eye movements)…
Henderson, John M; Choi, Wonil
2015-06-01
During active scene perception, our eyes move from one location to another via saccadic eye movements, with the eyes fixating objects and scene elements for varying amounts of time. Much of the variability in fixation duration is accounted for by attentional, perceptual, and cognitive processes associated with scene analysis and comprehension. For this reason, current theories of active scene viewing attempt to account for the influence of attention and cognition on fixation duration. Yet almost nothing is known about the neurocognitive systems associated with variation in fixation duration during scene viewing. We addressed this topic using fixation-related fMRI, which involves coregistering high-resolution eye tracking and magnetic resonance scanning to conduct event-related fMRI analysis based on characteristics of eye movements. We observed that activation in visual and prefrontal executive control areas was positively correlated with fixation duration, whereas activation in ventral areas associated with scene encoding and medial superior frontal and paracentral regions associated with changing action plans was negatively correlated with fixation duration. The results suggest that fixation duration in scene viewing is controlled by cognitive processes associated with real-time scene analysis interacting with motor planning, consistent with current computational models of active vision for scene perception.
Zhu, Lin L; Beauchamp, Michael S
2017-03-08
Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices. Copyright © 2017 the authors 0270-6474/17/372697-12$15.00/0.
NASA Technical Reports Server (NTRS)
Young, L. R.
1975-01-01
Preliminary tests and evaluations of pilot performance during landing (flight paths) using computer-generated images (video tapes) are presented. Psychophysiological factors affecting pilot visual perception were measured. A turning flight maneuver (pitch and roll) was specifically studied using a training device, and the scaling laws involved were determined. Also presented are medical studies (abstracts) on human response to gravity variations without visual cues, the effects of acceleration stimuli on the semicircular canals, neurons affecting eye movements, and vestibular tests.
Dynamic Features for Iris Recognition.
da Costa, R M; Gonzaga, A
2012-08-01
The human eye is sensitive to visible light. Increasing illumination on the eye causes the pupil of the eye to contract, while decreasing illumination causes the pupil to dilate. Visible light causes specular reflections inside the iris ring. On the other hand, the human retina is less sensitive to near infra-red (NIR) radiation in the wavelength range from 800 nm to 1400 nm, but iris detail can still be imaged with NIR illumination. In order to measure the dynamic movement of the human pupil and iris while keeping the light-induced reflexes from affecting the quality of the digitized image, this paper describes a device based on the consensual reflex. This biological phenomenon contracts and dilates the two pupils synchronously when one of the eyes is illuminated by visible light. In this paper, we propose to capture images of the pupil of one eye using NIR illumination while illuminating the other eye using a visible-light pulse. This new approach extracts iris features called "dynamic features (DFs)." This innovative methodology proposes the extraction of information about the way the human eye reacts to light, and the use of such information for biometric recognition purposes. The results demonstrate that these features are discriminating features, and, even using the Euclidean distance measure, an average recognition accuracy of 99.1% was obtained. The proposed methodology has the potential to be "fraud-proof," because these DFs can only be extracted from living irises.
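A minimal sketch of the matching step described above: a probe vector of dynamic features compared against an enrolled gallery by Euclidean distance. The enrollment vectors and the probe are hypothetical placeholders for measured pupil-reaction parameters.

```python
import numpy as np

gallery = {                                    # enrolled subjects -> DF vector
    "subject_01": np.array([0.42, 1.30, 0.88, 2.10]),
    "subject_02": np.array([0.55, 0.95, 1.42, 1.75]),
}
probe = np.array([0.44, 1.27, 0.90, 2.05])     # DF vector from a live capture

best = min(gallery, key=lambda s: np.linalg.norm(gallery[s] - probe))
dist = np.linalg.norm(gallery[best] - probe)
print(f"best match: {best} (Euclidean distance {dist:.3f})")
```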
Directional asymmetries in human smooth pursuit eye movements.
Ke, Sally R; Lam, Jessica; Pai, Dinesh K; Spering, Miriam
2013-06-27
Humans make smooth pursuit eye movements to bring the image of a moving object onto the fovea. Although pursuit accuracy is critical to prevent motion blur, the eye often falls behind the target. Previous studies suggest that pursuit accuracy differs between motion directions. Here, we systematically assess asymmetries in smooth pursuit. In experiment 1, binocular eye movements were recorded while observers (n = 20) tracked a small spot of light moving along one of four cardinal or diagonal axes across a featureless background. We analyzed pursuit latency, acceleration, peak velocity, gain, and catch-up saccade latency, number, and amplitude. In experiment 2 (n = 22), we examined the effects of spatial location and constrained stimulus motion within the upper or lower visual field. Pursuit was significantly faster (higher acceleration, peak velocity, and gain) and smoother (fewer and later catch-up saccades) in response to downward versus upward motion in both the upper and the lower visual fields. Pursuit was also more accurate and smoother in response to horizontal versus vertical motion. Conclusions: Our study is the first to report a consistent up-down asymmetry in human adults, regardless of visual field. Our findings suggest that pursuit asymmetries are adaptive responses to the requirements of the visual context: preferred motion directions (horizontal and downward) are more critical to our survival than nonpreferred ones.
Driver fatigue detection based on eye state.
Lin, Lizong; Huang, Chao; Ni, Xiaopeng; Wang, Jiawen; Zhang, Hao; Li, Xiao; Qian, Zhiqin
2015-01-01
Nowadays, more and more traffic accidents occur because of driver fatigue. In order to reduce and prevent them, in this study a detection method based on machine vision and the PERCLOS (percentage of eye closure time) parameter was developed. It determines whether a driver's eyes are in a fatigue state according to the PERCLOS value. The overall workflow includes face detection and tracking, detection and location of the human eye, human eye tracking, eye state recognition, and driver fatigue testing. The key aspects of the detection system are the detection and location of human eyes and driver fatigue testing. The simplified method of measuring the driver's PERCLOS value is to calculate the ratio of closed-eye frames to the total number of frames in a given period. If the proportion of closed-eye frames exceeds the set threshold, the system alerts the driver. Many experiments showed that, in addition to its simple detection algorithm, rapid computing speed, and high detection and recognition accuracy, the system meets the real-time requirements of a driver fatigue detection system.
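A minimal sketch of the simplified PERCLOS computation described above: the fraction of closed-eye frames within a sliding window, raising an alert when it exceeds a threshold. The window length, threshold, and frame labels are illustrative assumptions.

```python
from collections import deque

def perclos_monitor(eye_states, window=90, threshold=0.4):
    """Yield (frame_index, perclos, alert) for a stream of 0/1 eye states."""
    win = deque(maxlen=window)              # e.g., 90 frames = 3 s at 30 FPS
    for i, closed in enumerate(eye_states):
        win.append(closed)
        perclos = sum(win) / len(win)       # fraction of closed-eye frames
        yield i, perclos, perclos > threshold

# 1 = eyes closed in this frame, 0 = open (output of the eye-state recognizer).
stream = [0, 0, 1, 1, 1, 0, 1, 1, 1, 1]
for i, p, alert in perclos_monitor(stream, window=5):
    print(f"frame {i}: PERCLOS = {p:.2f}{'  ALERT' if alert else ''}")
```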
Ho-Phuoc, Tien; Guyader, Nathalie; Landragin, Frédéric; Guérin-Dugué, Anne
2012-02-03
Since Treisman's theory, it has been generally accepted that color is an elementary feature that guides eye movements when looking at natural scenes. Hence, most computational models of visual attention predict eye movements using color as an important visual feature. In this paper, using experimental data, we show that color does not affect where observers look when viewing natural scene images. Neither colors nor abnormal colors modify observers' fixation locations when compared to the same scenes in grayscale. In the same way, we did not find any significant difference between the scanpaths under grayscale, color, or abnormal color viewing conditions. However, we observed a decrease in fixation duration for color and abnormal color, and this was particularly true at the beginning of scene exploration. Finally, we found that abnormal color modifies saccade amplitude distribution.
An Examination of Cognitive Processing of Multimedia Information Based on Viewers' Eye Movements
ERIC Educational Resources Information Center
Liu, Han-Chin; Chuang, Hsueh-Hua
2011-01-01
This study utilized qualitative and quantitative designs and eye-tracking technology to understand how viewers process multimedia information. Eye movement data were collected from eight college students (non-science majors) while they were viewing web pages containing different types of text and illustrations depicting the mechanism of…
Predictors of Verb-Mediated Anticipatory Eye Movements in the Visual World
ERIC Educational Resources Information Center
Hintz, Florian; Meyer, Antje S.; Huettig, Falk
2017-01-01
Many studies have demonstrated that listeners use information extracted from verbs to guide anticipatory eye movements to objects in the visual context that satisfy the selection restrictions of the verb. An important question is what underlies such verb-mediated anticipatory eye gaze. Based on empirical and theoretical suggestions, we…
Understanding eye movements in face recognition using hidden Markov models.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2014-09-16
We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone. © 2014 ARVO.
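A minimal sketch of fitting a Gaussian HMM to one participant's fixation sequences, in the spirit of the approach above, using the hmmlearn package. The fixation coordinates are random placeholders for (x, y) locations on a face, and the state count is an illustrative assumption.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
# Three 10-fixation trials, concatenated, with per-sequence lengths.
fixations = rng.normal(loc=0.0, scale=30.0, size=(30, 2))
lengths = [10, 10, 10]

model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=100)
model.fit(fixations, lengths)

print("ROI centers (state means):\n", model.means_.round(1))
print("transition probabilities:\n", model.transmat_.round(2))
print("decoded states for trial 1:", model.predict(fixations[:10]))
```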
Matsumoto, Akihiro; Tachibana, Masao
2017-01-01
Even when the body is stationary, the whole retinal image is kept in constant motion by fixational eye movements and by saccades that move the eye between fixation points. Accumulating evidence indicates that the brain is equipped with specific mechanisms for compensating for the global motion induced by these eye movements. However, it is not yet fully understood how the retina processes global motion images during eye movements. Here we show that global motion images evoke novel coordinated firing in retinal ganglion cells (GCs). We simultaneously recorded the firing of GCs in the isolated goldfish retina using a multi-electrode array, and classified each GC based on the temporal profile of its receptive field (RF). A moving target that accompanied the global motion (simulating a saccade following a period of fixational eye movements) modulated the RF properties and evoked synchronized and correlated firing among local clusters of specific GCs. Our findings provide a novel concept for retinal information processing during eye movements.
Barnes, G; Goodbody, S; Collins, S
1995-01-01
Ocular pursuit responses have been examined in humans in three experiments in which the pursuit target image has been fully or partially stabilised on the fovea by feeding a recorded eye movement signal back to drive the target motion. The objective was to establish whether subjects could volitionally control smooth eye movement to reproduce trajectories of target motion in the absence of a concurrent target motion stimulus. In experiment 1 subjects were presented with a target moving with a triangular waveform in the horizontal axis with a frequency of 0.325 Hz and velocities of +/- 10-50 degrees/s. The target was illuminated twice per cycle for pulse durations (PD) of 160-640 ms as it passed through the centre position; otherwise subjects were in darkness. Subjects initially tracked the target motion in a conventional closed-loop mode for four cycles. Prior to the next target presentation the target image was stabilised on the fovea, so that any target motion generated resulted solely from volitional eye movement. Subjects continued to make anticipatory smooth eye movements both to the left and the right with a velocity trajectory similar to that observed in the closed-loop phase. Peak velocity in the stabilised-image mode was highly correlated with that in the prior closed-loop phase, but was slightly less (84% on average). In experiment 2 subjects were presented with a continuously illuminated target that was oscillated sinusoidally at frequencies of 0.2-1.34 Hz and amplitudes of +/- 5-20 degrees. After four cycles of closed-loop stimulation the image was stabilised on the fovea at the time of peak target displacement. Subjects continued to generate an oscillatory smooth eye velocity pattern that mimicked the sinusoidal motion of the previous closed-loop phase for at least three further cycles. The peak eye velocity generated ranged from 57-95% of that in the closed-loop phase at frequencies up to 0.8 Hz but decreased significantly at 1.34 Hz. In experiment 3 subjects were presented with a stabilised display throughout and generated smooth eye movements with peak velocity up to 84 degrees/s in the complete absence of any prior external target motion stimulus, by transferring their attention alternately to left and right of the centre of the display. Eye velocity was found to be dependent on the eccentricity of the centre of attention and the frequency of alternation. When the target was partially stabilised on the retina by feeding back only a proportion (Kf = 0.6-0.9) of the eye movement signal to drive the target, subjects were still able to generate smooth movements at will, even though the display did not move as far or as fast as the eye. Peak eye velocity decreased as Kf decreased, suggesting that there was a continuous competitive interaction between the volitional drive and the visual feedback provided by the relative motion of the display with respect to the retina. These results support the evidence for two separate mechanisms of smooth eye movement control in ocular pursuit: reflex control from retinal velocity error feedback and volitional control from an internal source. Arguments are presented to indicate how smooth pursuit may be controlled by matching a voluntarily initiated estimate of the required smooth movement, normally derived from storage of past re-afferent information, against current visual feedback information. Such a mechanism allows preemptive smooth eye movements to be made that can overcome the inherent delays in the visual feedback pathway.
Disk space and load time requirements for eye movement biometric databases
NASA Astrophysics Data System (ADS)
Kasprowski, Pawel; Harezlak, Katarzyna
2016-06-01
Biometric identification is a very popular area of interest nowadays. Problems with the so-called physiological methods, such as fingerprint or iris recognition, have resulted in increased attention paid to methods measuring behavioral patterns. Eye movement based biometric (EMB) identification is one such behavioral method, and owing to the intensive development of eye tracking devices it has become possible to define new methods for eye movement signal processing. Such methods should be supported by efficient storage used to collect eye movement data and provide it for further analysis. The aim of this research was to evaluate various storage setups for this purpose. Several aspects were taken into consideration, such as disk space usage and the time required to load and save the whole data set or selected parts of it.
Spering, Miriam; Carrasco, Marisa
2012-05-30
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth-pursuit eye movements in response to moving dichoptic plaids--stimuli composed of two orthogonally drifting gratings, presented separately to each eye--in human observers. Monocular adaptation to one grating before the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating's motion direction or to both (neutral condition). We show that observers were better at detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: While perception followed one grating's motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted toward the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it.
Three-dimensional microscopic tomographic imagings of the cataract in a human lens in vivo
NASA Astrophysics Data System (ADS)
Masters, Barry R.
1998-10-01
The problem of three-dimensional visualization of a human lens in vivo has been solved by a technique of volume rendering a transformed series of 60 rotated Scheimpflug (a dual slit reflected light microscope) digital images. The data set was obtained by rotating the Scheimpflug camera about the optic axis of the lens in 3-degree increments. The transformed set of optical sections was first aligned to correct for small eye movements, and then rendered into a volume reconstruction with volume rendering computer graphics techniques. To help visualize the distribution of lens opacities (cataracts) in the living human lens, the intensity of light scattering was pseudocolor coded and the cataract opacities were displayed as a movie.
Sinskey, Robert M; Eshete, Almaz
2002-01-01
To evaluate the visual results and the restoration of normal appearance after maximal excision of the horizontal rectus muscles in nystagmus patients. Menelik II Hospital, Addis Ababa, Ethiopia and the Sinskey Eye Institute, Santa Monica, California. The medial and lateral rectus muscles were extirpated as far back as possible with an enucleation snare in four patients with horizontal nystagmus. A complete eye examination was performed pre- and postoperatively. Using a camcorder, ocular movements were recorded before surgery and at postoperative days 1 and 40 and months 1, 3, and 10. All four patients had a marked reduction in both abnormal and normal horizontal eye movement, and improvement in objective visual acuity. Postoperative residual intermittent fine horizontal movement was recorded in the left eye of a 6-year-old and in both eyes of a 41-year-old patient. A residual rotary component was recorded in a 15-year-old patient. The 6- and 9-year-old patients each developed a moderate exotropia. The 15- and 41-year-old patients maintained binocular fusion with some residual ability to converge. Vision increased subjectively in all cases. Subtotal myectomy of the horizontal muscles in horizontal nystagmus with no null point was very effective in improving and/or eliminating horizontal eye movement. Restoration of normal or near-normal appearance and improvement in visual acuity occurred in all cases. None of the patients complained of their loss of horizontal gaze and eye movement. More complete myectomy of the muscles should produce total elimination of both normal and abnormal horizontal eye movement, including nystagmus.
Spatial updating in human parietal cortex
NASA Technical Reports Server (NTRS)
Merriam, Elisha P.; Genovese, Christopher R.; Colby, Carol L.
2003-01-01
Single neurons in monkey parietal cortex update visual information in conjunction with eye movements. This remapping of stimulus representations is thought to contribute to spatial constancy. We hypothesized that a similar process occurs in human parietal cortex and that we could visualize it with functional MRI. We scanned subjects during a task that involved remapping of visual signals across hemifields. We observed an initial response in the hemisphere contralateral to the visual stimulus, followed by a remapped response in the hemisphere ipsilateral to the stimulus. We ruled out the possibility that this remapped response resulted from either eye movements or visual stimuli alone. Our results demonstrate that updating of visual information occurs in human parietal cortex.
NASA Astrophysics Data System (ADS)
Liu, Yanju; Shi, Liang; Liu, Liwu; Zhang, Zhen; Leng, Jinsong
2008-03-01
Biomimetic actuators are inspired by human or animal organs and aim to replicate the actions exerted by the main organic muscles. We present here an inflated dielectric electroactive polymer actuator based on acrylic elastomer, aimed at mimicking the ocular muscles of the human eye. Two sheets of polyacrylic elastomer coated with conductive carbon grease are attached to a rotatable backbone, functioning in an agonist-antagonist configuration. When the two elastomer sheets are stimulated separately, the rotatable mid-arc of the actuator is capable of rotating from -50° to 50°. Experiments show that the inflated actuator, compared with the uninflated one, achieves a much larger rotation angle and greater strength. Connected with the actuator via an elastic tensile line, the eyeball rotates around its symmetrical axes. The realization of more accurate movements and emotional expressions of the native eye system is the next step of our research and is still under study. This inflated dielectric elastomer actuator also shows great potential for application in robotic fish and adaptive structures.
A relationship between eye movement patterns and performance in a precognitive tracking task
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Hartzell, E. J.
1977-01-01
Eye movements made by various subjects in the performance of a precognitive tracking task are studied. The tracking task presented by an antiaircraft artillery (AAA) simulator has an input forcing function represented by a deterministic aircraft fly-by. The performance of subjects is ranked by two metrics. Good, mediocre, and poor trackers are selected for analysis based on performance during the difficult segment of the tracking task and over replications. Using phase planes to characterize both the eye movement patterns and the displayed error signal, a simple metric is developed to study these patterns. Two characterizations of eye movement strategies are defined and quantified. Using these two types of eye strategies, two conclusions are obtained about good, mediocre, and poor trackers. First, trackers who used a fixed eye movement strategy consistently performed better. Second, the best fixed strategy is that of the "Crosshair Fixator."
Is there a common motor dysregulation in sleepwalking and REM sleep behaviour disorder?
Haridi, Mehdi; Weyn Banningh, Sebastian; Clé, Marion; Leu-Semenescu, Smaranda; Vidailhet, Marie; Arnulf, Isabelle
2017-10-01
This study sought to determine if there is any overlap between the two major non-rapid eye movement and rapid eye movement parasomnias, i.e. sleepwalking/sleep terrors and rapid eye movement sleep behaviour disorder. We assessed adult patients with sleepwalking/sleep terrors using rapid eye movement sleep behaviour disorder screening questionnaires and determined if they had enhanced muscle tone during rapid eye movement sleep. Conversely, we assessed rapid eye movement sleep behaviour disorder patients using the Paris Arousal Disorders Severity Scale and determined if they had more N3 awakenings. The 251 participants included 64 patients with rapid eye movement sleep behaviour disorder (29 with idiopathic rapid eye movement sleep behaviour disorder and 35 with rapid eye movement sleep behaviour disorder associated with Parkinson's disease), 62 patients with sleepwalking/sleep terrors, 66 old healthy controls (age-matched with the rapid eye movement sleep behaviour disorder group) and 59 young healthy controls (age-matched with the sleepwalking/sleep terrors group). They completed the rapid eye movement sleep behaviour disorder screening questionnaire, rapid eye movement sleep behaviour disorder single question and Paris Arousal Disorders Severity Scale. In addition, all the participants underwent a video-polysomnography. The sleepwalking/sleep terrors patients scored positive on rapid eye movement sleep behaviour disorder scales and had a higher percentage of 'any' phasic rapid eye movement sleep without atonia when compared with controls; however, these patients did not have higher tonic rapid eye movement sleep without atonia or complex behaviours during rapid eye movement sleep. Patients with rapid eye movement sleep behaviour disorder had moderately elevated scores on the Paris Arousal Disorders Severity Scale but did not exhibit more N3 arousals (suggestive of non-rapid eye movement parasomnia) than the control group. These results indicate that dream-enacting behaviours (assessed by rapid eye movement sleep behaviour disorder screening questionnaires) are commonly reported by sleepwalking/sleep terrors patients, thus decreasing the questionnaire's specificity. Furthermore, sleepwalking/sleep terrors patients have excessive twitching during rapid eye movement sleep, which may result either from a higher dreaming activity in rapid eye movement sleep or from a more generalised non-rapid eye movement/rapid eye movement motor dyscontrol during sleep. © 2017 European Sleep Research Society.
Nonhuman Primate Studies to Advance Vision Science and Prevent Blindness.
Mustari, Michael J
2017-12-01
Most primate behavior is dependent on high acuity vision. Optimal visual performance in primates depends heavily upon frontally placed eyes, retinal specializations, and binocular vision. To see an object clearly its image must be placed on or near the fovea of each eye. The oculomotor system is responsible for maintaining precise eye alignment during fixation and generating eye movements to track moving targets. The visual system of nonhuman primates has a similar anatomical organization and functional capability to that of humans. This allows results obtained in nonhuman primates to be applied to humans. The visual and oculomotor systems of primates are immature at birth and sensitive to the quality of binocular visual and eye movement experience during the first months of life. Disruption of postnatal experience can lead to problems in eye alignment (strabismus), amblyopia, unsteady gaze (nystagmus), and defective eye movements. Recent studies in nonhuman primates have begun to discover the neural mechanisms associated with these conditions. In addition, genetic defects that target the retina can lead to blindness. A variety of approaches including gene therapy, stem cell treatment, neuroprosthetics, and optogenetics are currently being used to restore function associated with retinal diseases. Nonhuman primates often provide the best animal model for advancing fundamental knowledge and developing new treatments and cures for blinding diseases. © The Author(s) 2017. Published by Oxford University Press on behalf of the National Academy of Sciences. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Nonexplicit change detection in complex dynamic settings: what eye movements reveal.
Vachon, François; Vallières, Benoît R; Jones, Dylan M; Tremblay, Sébastien
2012-12-01
We employed a computer-controlled command-and-control (C2) simulation and recorded eye movements to examine the extent and nature of the inability to detect critical changes in dynamic displays when change detection is implicit (i.e., requires no explicit report) to the operator's task. Change blindness-the failure to notice significant changes to a visual scene-may have dire consequences on performance in C2 and surveillance operations. Participants performed a radar-based risk-assessment task involving multiple subtasks. Although participants were not required to explicitly report critical changes to the operational display, change detection was critical in informing decision making. Participants' eye movements were used as an index of visual attention across the display. Nonfixated (i.e., unattended) changes were more likely to be missed than were fixated (i.e., attended) changes, supporting the idea that focused attention is necessary for conscious change detection. The finding of significant pupil dilation for changes undetected but fixated suggests that attended changes can nonetheless be missed because of a failure of attentional processes. Change blindness in complex dynamic displays takes the form of failures in establishing task-appropriate patterns of attentional allocation. These findings have implications in the design of change-detection support tools for dynamic displays and work procedure in C2 and surveillance.
Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall
Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat
2014-01-01
Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, wherein pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture was presented for five seconds with an adjoining three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing, and participants were not informed about its purpose. Using pattern recognition techniques, participants’ EOG signals in response to repeated and non-repeated pictures were classified for the with-sound and without-sound stages. The method was validated with eight different participants. The recognition rate in the “with sound” stage was significantly reduced compared with the “without sound” stage. The results demonstrated that familiarity with visual-auditory stimuli can be detected from EOG signals and that auditory input potentially improves the visual recall process. PMID:25436085
Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark
2007-12-01
To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain-computer interface based on human natural movement, which might reduce the requirement of long-term training. Effective combinations of computational methods can classify human movement intention from single trial EEG with reasonable accuracy.
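The better-performing combination reported above (ICA spatial filtering, PSD temporal features, SVM classification) can be sketched compactly. A minimal illustration, not the authors' code: synthetic arrays stand in for the 128-channel single-trial EEG, and the sampling rate, component count and beta-band limits are our assumptions.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
fs = 256                                   # assumed sampling rate (Hz)
X = rng.standard_normal((120, 128, 512))   # trials x channels x samples (synthetic)
y = rng.integers(0, 2, size=120)           # 0 = left hand, 1 = right hand

# Spatial filtering: fit ICA on the concatenated trials, keep 16 components.
ica = FastICA(n_components=16, random_state=0)
ica.fit(np.concatenate(X, axis=1).T)       # (trials*samples, channels)

def beta_power(trial):
    """Temporal filtering: Welch PSD per component, averaged over 13-30 Hz."""
    sources = ica.transform(trial.T)       # (samples, components)
    f, pxx = welch(sources, fs=fs, nperseg=256, axis=0)
    band = (f >= 13) & (f <= 30)
    return pxx[band].mean(axis=0)          # one beta-power feature per component

features = np.array([beta_power(t) for t in X])
print(cross_val_score(SVC(kernel="linear"), features, y, cv=5).mean())
```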
Król, Magdalena Ewa; Król, Michał
2018-02-20
The aim of the study was not only to demonstrate whether eye-movement-based task decoding was possible but also to investigate whether eye-movement patterns can be used to identify the cognitive processes behind the tasks. We compared eye-movement patterns elicited under different task conditions, with tasks differing systematically with regard to the types of cognitive processes involved in solving them. We used four tasks, differing along two dimensions: spatial (global vs. local) processing (Navon, Cognit Psychol, 9(3):353-383, 1977) and semantic (deep vs. shallow) processing (Craik and Lockhart, J Verbal Learn Verbal Behav, 11(6):671-684, 1972). We used eye-movement patterns obtained from two time periods: the fixation cross preceding the target stimulus and the target stimulus itself. We found significant effects of both spatial and semantic processing, but in the case of the latter, the effect might be an artefact of insufficient task control. We found above-chance task classification accuracy for both time periods: 51.4% for the period of stimulus presentation and 34.8% for the period of fixation cross presentation. Therefore, we show that the task can be decoded, to some extent, from preparatory eye-movements before the stimulus is displayed. This suggests that anticipatory eye-movements reflect the visual scanning strategy employed for the task at hand. Finally, this study also demonstrates that decoding is possible even from very scant eye-movement data, similar to Coco and Keller (J Vis, 14(3):11-11, 2014). This means that task decoding is not limited to tasks that naturally take longer to perform and yield multi-second eye-movement recordings.
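Claims of above-chance decoding accuracy, such as the 51.4% and 34.8% figures above, are commonly validated against an empirical null distribution. A hedged sketch of one standard approach, assuming generic per-trial eye-movement feature vectors and four task labels; none of these names or values come from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 12))   # hypothetical eye-movement features per trial
y = rng.integers(0, 4, size=200)     # four tasks; chance level = 25%

# Compare observed accuracy against accuracies obtained with shuffled labels.
score, perm_scores, pvalue = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y,
    cv=5, n_permutations=500, random_state=1)
print(f"accuracy = {score:.3f}, chance ~ {perm_scores.mean():.3f}, p = {pvalue:.3f}")
```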
Maximum entropy perception-action space: a Bayesian model of eye movement selection
NASA Astrophysics Data System (ADS)
Colas, Francis; Bessière, Pierre; Girard, Benoît
2011-03-01
In this article, we investigate the issue of the selection of eye movements in a free-eye Multiple Object Tracking task. We propose a Bayesian model of retinotopic maps with a complex logarithmic mapping. This model is structured in two parts: a representation of the visual scene, and a decision model based on the representation. We compare different decision models based on different features of the representation and we show that taking uncertainty into account helps predict the eye movements of subjects recorded in a psychophysics experiment. Finally, based on experimental data, we postulate that the complex logarithmic mapping has a functional relevance, as the density of objects in this space is more uniform than expected. This may indicate that the representation space and control strategies are such that the object density is of maximum entropy.
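The complex logarithmic mapping used for the retinotopic representation can be stated directly. A minimal sketch assuming the common one-parameter form w = log(z + a); the foveal parameter a and the sample eccentricities are placeholders, not the model's fitted values.

```python
import numpy as np

a = 0.6  # foveal parameter (degrees); an assumed value, not the model's

def cortical(z):
    """Map retinal position z = x + iy (degrees) to cortical space w = log(z + a)."""
    return np.log(z + a)

ecc = np.linspace(1.0, 20.0, 5)      # equally spaced eccentricities, horizontal meridian
w = cortical(ecc + 0j)
print(np.round(w.real, 2))           # cortical positions
print(np.round(np.diff(w.real), 2))  # equal retinal steps shrink in cortical space
```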
Can Changes in Eye Movement Scanning Alter the Age-Related Deficit in Recognition Memory?
Chan, Jessica P. K.; Kamino, Daphne; Binns, Malcolm A.; Ryan, Jennifer D.
2011-01-01
Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults’ recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces, under free viewing conditions (bases), through a gaze-contingent moving window (own), or a moving window which replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities. PMID:21687460
Visuomotor cerebellum in human and nonhuman primates.
Voogd, Jan; Schraa-Tam, Caroline K L; van der Geest, Jos N; De Zeeuw, Chris I
2012-06-01
In this paper, we will review the anatomical components of the visuomotor cerebellum in human and, where possible, in non-human primates and discuss their function in relation to those of extracerebellar visuomotor regions with which they are connected. The floccular lobe, the dorsal paraflocculus, the oculomotor vermis, the uvula-nodulus, and the ansiform lobule are more or less independent components of the visuomotor cerebellum that are involved in different corticocerebellar and/or brain stem olivocerebellar loops. The floccular lobe and the oculomotor vermis share different mossy fiber inputs from the brain stem; the dorsal paraflocculus and the ansiform lobule receive corticopontine mossy fibers from postrolandic visual areas and the frontal eye fields, respectively. Of the visuomotor functions of the cerebellum, the vestibulo-ocular reflex is controlled by the floccular lobe; saccadic eye movements are controlled by the oculomotor vermis and ansiform lobule, while control of smooth pursuit involves all these cerebellar visuomotor regions. Functional imaging studies in humans further emphasize cerebellar involvement in visual reflexive eye movements and are discussed.
Dynamic visual attention: motion direction versus motion magnitude
NASA Astrophysics Data System (ADS)
Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.
2008-02-01
Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion in dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspectives), showing the advantages and drawbacks of each method as well as its preferred domain of application.
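Comparing a model's output with human saliency maps, as in the quantitative evaluation above, reduces to a similarity score between two arrays. A sketch using the Pearson correlation, one common choice that is not necessarily the paper's metric; the maps below are synthetic stand-ins.

```python
import numpy as np

def saliency_correlation(model_map, human_map):
    """Pearson correlation between a model saliency map and a human saliency map."""
    m = (model_map - model_map.mean()) / model_map.std()
    h = (human_map - human_map.mean()) / human_map.std()
    return (m * h).mean()

rng = np.random.default_rng(2)
model_map = rng.random((60, 80))                          # stand-in for a model map
human_map = 0.5 * model_map + 0.5 * rng.random((60, 80))  # stand-in for a human map
print(round(saliency_correlation(model_map, human_map), 3))
```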
A novel role for visual perspective cues in the neural computation of depth
Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.
2014-01-01
As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667
Development and learning of saccadic eye movements in 7- to 42-month-old children.
Alahyane, Nadia; Lemoine-Lardennois, Christelle; Tailhefer, Coline; Collins, Thérèse; Fagard, Jacqueline; Doré-Mazars, Karine
2016-01-01
From birth, infants move their eyes to explore their environment, interact with it, and progressively develop a multitude of motor and cognitive abilities. The characteristics and development of oculomotor control in early childhood remain poorly understood today. Here, we examined reaction time and amplitude of saccadic eye movements in 93 7- to 42-month-old children while they oriented toward visual animated cartoon characters appearing at unpredictable locations on a computer screen over 140 trials. Results revealed that saccade performance is immature in children compared to a group of adults: Saccade reaction times were longer, and saccade amplitude relative to target location (10° eccentricity) was shorter. Results also indicated that performance is flexible in children. Although saccade reaction time decreased as age increased, suggesting developmental improvements in saccade control, saccade amplitude gradually improved over trials. Moreover, similar to adults, children were able to modify saccade amplitude based on the visual error made in the previous trial. This second set of results suggests that short visual experience and/or rapid sensorimotor learning are functional in children and can also affect saccade performance.
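The trial-by-trial amplitude adjustment described above, in which saccade amplitude is modified based on the previous trial's visual error, is often modeled as an error-driven gain update. A sketch under that assumption; the initial gain, learning rate and noise level are illustrative, not fitted to the children's data.

```python
import numpy as np

rng = np.random.default_rng(3)
target = 10.0       # target eccentricity (degrees), as in the study
gain, k = 0.8, 0.1  # assumed initial hypometric gain and learning rate

for trial in range(140):                             # 140 trials, as in the study
    saccade = gain * target + rng.normal(0, 0.3)     # motor noise
    error = target - saccade                         # post-saccadic visual error
    gain += k * error / target                       # error-driven gain update
print(round(gain, 3))                                # gain drifts toward 1.0
```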
Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.
2007-01-01
What happens in vision related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas including frontal eye-fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is only due to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation of all CB subjects, similar but stronger than in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets. PMID:17397882
Ipsiversive ictal eye deviation in inferioposterior temporal lobe epilepsy-Two SEEG cases report.
Zhang, Wei; Liu, Xingzhou; Zuo, Lijun; Guo, Qiang; Chen, Qi; Wang, Yongjun
2017-02-21
Versive seizures, characterized by conjugate eye movement during an epileptic seizure, have commonly been considered one of the most valuable semiological signs for epilepsy localization, especially for frontal lobe epilepsy. However, the lateralizing and localizing significance of ictal eye deviation has been questioned by clinical observations in a series of focal epilepsy studies, including frontal, central, temporal, parietal and occipital epilepsy. Two epileptic cases characterized by ipsiversive eye deviation as the initial clinical sign during the habitual epileptic seizures are presented in this paper. In both cases, the epileptogenic zone was confirmed to lie in the inferioposterior temporal region by the findings of ictal stereoelectroencephalography (SEEG) and a good outcome after epilepsy surgery. Detailed analysis of the exact position of the key contacts of the SEEG electrodes identified an overlap between the location of the epileptogenic zone and the human MT/MST complex, which plays a crucial role in the control of smooth pursuit eye movements. Ipsiversive eye deviation can thus be the initial clinical sign of inferioposterior temporal lobe epilepsy, attributable to involvement of the human MT/MST complex, especially human MST, which is located on the anterior/dorsal bank of the anterior occipital sulcus (AOS).
Ma, Yingliang; Paterson, Helena M; Pollick, Frank E
2006-02-01
We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB (The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement.
Gender Classification Based on Eye Movements: A Processing Effect During Passive Face Viewing
Sammaknejad, Negar; Pouretemad, Hamidreza; Eslahchi, Changiz; Salahirad, Alireza; Alinejad, Ashkan
2017-01-01
Studies have revealed superior face recognition skills in females, partially due to their different eye movement strategies when encoding faces. In the current study, we utilized these slight but important differences and proposed a model that estimates the gender of viewers and classifies them into two subgroups, males and females. An eye tracker recorded participants’ eye movements while they viewed images of faces. Regions of interest (ROIs) were defined for each face. Results showed that the gender dissimilarity in eye movements was not due to differences in the frequency of fixations in the ROIs per se. Instead, it was caused by dissimilarity in saccade paths between the ROIs. The difference was enhanced when saccades were directed towards the eyes. Females showed a significant increase in transitions from other ROIs to the eyes. Consequently, the extraction of temporal transient information from saccade paths through a transition probability matrix, similar to a first-order Markov chain model, significantly improved the accuracy of the gender classification results. PMID:29071007
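The transition-probability step can be made concrete. A minimal sketch assuming fixation sequences already coded as ROI labels (0 = left eye, 1 = right eye, 2 = nose, 3 = mouth, an invented coding); the row-normalized count matrix is the first-order Markov estimate described above, and its flattened entries can serve as per-viewer classifier features.

```python
import numpy as np

def transition_matrix(roi_sequence, n_rois=4):
    """Row-normalized first-order transition probabilities between ROIs."""
    counts = np.zeros((n_rois, n_rois))
    for src, dst in zip(roi_sequence[:-1], roi_sequence[1:]):
        counts[src, dst] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

seq = [2, 0, 1, 0, 3, 0, 1, 2, 0, 1]   # hypothetical fixation sequence for one viewer
P = transition_matrix(seq)
features = P.flatten()                  # per-viewer feature vector for classification
print(np.round(P, 2))
```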
Do individuals with autism process words in context? Evidence from language-mediated eye-movements.
Brock, Jon; Norbury, Courtenay; Einav, Shiri; Nation, Kate
2008-09-01
It is widely argued that people with autism have difficulty processing ambiguous linguistic information in context. To investigate this claim, we recorded the eye-movements of 24 adolescents with autism spectrum disorder and 24 language-matched peers as they monitored spoken sentences for words corresponding to objects on a computer display. Following a target word, participants looked more at a competitor object sharing the same onset than at phonologically unrelated objects. This effect was, however, mediated by the sentence context such that participants looked less at the phonological competitor if it was semantically incongruous with the preceding verb. Contrary to predictions, the two groups evidenced similar effects of context on eye-movements. Instead, across both groups, the effect of sentence context was reduced in individuals with relatively poor language skills. Implications for the weak central coherence account of autism are discussed.
Improvement of design of a surgical interface using an eye tracking device.
Erol Barkana, Duygun; Açık, Alper; Duru, Dilek Goksel; Duru, Adil Deniz
2014-05-07
Surgical interfaces are used to help surgeons interpret and quantify patient information, and to present an integrated workflow in which all available data are combined to enable optimal treatment. Human factors research provides a systematic approach to designing user interfaces with safety, accuracy, satisfaction and comfort. One human factors method, the user-centered design approach, was used to develop a surgical interface for kidney tumor cryoablation, and an eye tracking device was used to obtain the best configuration of the developed surgical interface. The surgical interface for kidney tumor cryoablation was developed following the four phases of the user-centered design approach: analysis, design, implementation and deployment. Possible configurations of the surgical interface, comprising various combinations of menu-based command controls, visual displays of multi-modal medical images, 2D and 3D models of the surgical environment, graphical or tabulated information, visual alerts, etc., were developed. Experiments on a simulated kidney tumor cryoablation task were performed with surgeons to evaluate the proposed surgical interface. Fixation durations and the number of fixations at informative regions of the surgical interface were analyzed, and these data were used to modify the surgical interface. Eye movement data showed that participants concentrated their attention on informative regions more when the number of displayed Computed Tomography (CT) images was reduced. Additionally, the time participants required to complete the kidney tumor cryoablation task decreased with the reduced number of CT images. Furthermore, the fixation durations obtained after the revision of the surgical interface are very close to what is observed in visual search and natural scene perception studies, suggesting more efficient and comfortable interaction with the surgical interface. The National Aeronautics and Space Administration Task Load Index (NASA-TLX) and Short Post-Assessment Situational Awareness (SPASA) questionnaire results showed that the overall mental workload of surgeons related to the surgical interface was low, as intended, and that overall situational awareness scores of surgeons were considerably high. This preliminary study highlights the improvement of a developed surgical interface using eye tracking technology to obtain the best surgical interface (SI) configuration. The results presented here reveal that visual surgical interface design prepared according to eye movement characteristics may lead to improved usability.
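The fixation measures used to revise the interface, counts and durations of fixations inside informative regions, amount to simple bookkeeping. A sketch assuming fixations arrive as (x, y, duration) tuples and ROIs as axis-aligned rectangles; the region names and numbers are invented.

```python
def roi_dwell(fixations, rois):
    """Count fixations and total dwell time (ms) inside each rectangular ROI."""
    stats = {name: {"n": 0, "dwell_ms": 0.0} for name in rois}
    for x, y, dur in fixations:
        for name, (x0, y0, x1, y1) in rois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                stats[name]["n"] += 1
                stats[name]["dwell_ms"] += dur
    return stats

rois = {"ct_panel": (0, 0, 400, 600), "alerts": (400, 0, 800, 100)}
fixations = [(120, 300, 240.0), (610, 40, 180.0), (90, 510, 310.0)]
print(roi_dwell(fixations, rois))
```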
fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.
Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W
2008-01-01
Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.
Irsch, Kristina; Gramatikov, Boris; Wu, Yi-Kai; Guyton, David
2011-01-01
Utilizing the measured corneal birefringence from a data set of 150 eyes of 75 human subjects, an algorithm and related computer program, based on Müller-Stokes matrix calculus, were developed in MATLAB for assessing the influence of corneal birefringence on retinal birefringence scanning (RBS) and for converging upon an optical/mechanical design using wave plates (“wave-plate-enhanced RBS”) that allows foveal fixation detection essentially independently of corneal birefringence. The RBS computer model, and in particular the optimization algorithm, were verified with experimental human data using an available monocular RBS-based eye fixation monitor. Fixation detection using wave-plate-enhanced RBS is adaptable to less cooperative subjects, including young children at risk for developing amblyopia. PMID:21750772
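The Müller-Stokes calculus underlying the model multiplies 4x4 Mueller matrices onto Stokes vectors. A minimal sketch of that machinery using the standard Mueller matrix of a linear retarder (both the cornea and the compensating wave plates are of this form); the retardance and axis values are placeholders, not the paper's measured corneal data.

```python
import numpy as np

def rot(t):
    """Rotation of the Stokes reference frame by angle t (radians)."""
    c, s = np.cos(2 * t), np.sin(2 * t)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1]])

def retarder(delta, theta):
    """Mueller matrix of a linear retarder: retardance delta, fast axis theta."""
    m = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, np.cos(delta), np.sin(delta)],
                  [0, 0, -np.sin(delta), np.cos(delta)]])
    return rot(-theta) @ m @ rot(theta)

s_in = np.array([1.0, 1.0, 0.0, 0.0])              # horizontally polarized light
cornea = retarder(np.deg2rad(40), np.deg2rad(25))  # placeholder corneal values
print(np.round(cornea @ s_in, 3))                  # Stokes vector after the cornea
```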
Eye movements during listening reveal spontaneous grammatical processing.
Huette, Stephanie; Winter, Bodo; Matlock, Teenie; Ardell, David H; Spivey, Michael
2014-01-01
Recent research using eye-tracking typically relies on constrained visual contexts and goal-oriented tasks: viewing a small array of objects on a computer screen and performing some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes criticized as ecologically invalid because pictures and explicit tasks are not always present during language comprehension. This study compared the comprehension of sentences with two different grammatical forms: the past progressive (e.g., was walking), which emphasizes the ongoing nature of actions, and the simple past (e.g., walked), which emphasizes the end-state of an action. The results showed that the distribution and timing of eye movements mirror the underlying conceptual structure of this linguistic difference in the absence of any visual stimuli or task constraint: fixations were shorter and saccades were more dispersed across the screen, as if participants were thinking about more dynamic events, when listening to the past progressive stories. Thus, eye movement data suggest that visual inputs or an explicit task are unnecessary to elicit analog representations of features such as movement, which may be a key perceptual component of grammatical comprehension.
Tonic and phasic phenomena underlying eye movements during sleep in the cat
Márquez-Ruiz, Javier; Escudero, Miguel
2008-01-01
Mammalian sleep is not a homogenous state, and different variables have traditionally been used to distinguish different periods during sleep. Of these variables, eye movement is one of the most paradigmatic, and has been used to differentiate between the so-called rapid eye movement (REM) and non-REM (NREM) sleep periods. Despite this, eye movements during sleep are poorly understood, and the behaviour of the oculomotor system remains almost unknown. In the present work, we recorded binocular eye movements during the sleep–wake cycle of adult cats by the scleral search-coil technique. During alertness, eye movements consisted of conjugated saccades and eye fixations. During NREM sleep, eye movements were slow and mostly unconjugated. The two eyes moved upwardly and in the abducting direction, producing a tonic divergence and elevation of the visual axis. During the transition period between NREM and REM sleep, rapid monocular eye movements of low amplitude in the abducting direction occurred in coincidence with ponto-geniculo-occipital waves. Along REM sleep, the eyes tended to maintain a tonic convergence and depression, broken by high-frequency bursts of complex rapid eye movements. In the horizontal plane, each eye movement in the burst comprised two consecutive movements in opposite directions, which were more evident in the eye that performed the abducting movements. In the vertical plane, rapid eye movements were always upward. Comparisons of the characteristics of eye movements during the sleep–wake cycle reveal the uniqueness of eye movements during sleep, and the noteworthy existence of tonic and phasic phenomena in the oculomotor system, not observed until now. PMID:18499729
Eye Movements in Reading as Rational Behavior
ERIC Educational Resources Information Center
Bicknell, Klinton
2011-01-01
Moving one's eyes while reading is one of the most complex everyday tasks humans face. To perform efficiently, readers must make decisions about when and where to move their eyes every 200-300ms. Over the past decades, it has been demonstrated that these fine-grained decisions are influenced by a range of linguistic properties of the text, and…
Extralenticular and lenticular aspects of accommodation and presbyopia in human versus monkey eyes.
Croft, Mary Ann; McDonald, Jared P; Katz, Alexander; Lin, Ting-Li; Lütjen-Drecoll, Elke; Kaufman, Paul L
2013-07-26
To determine if the accommodative forward movements of the vitreous zonule and lens equator occur in the human eye, as they do in the rhesus monkey eye; to investigate the connection between the vitreous zonule posterior insertion zone and the posterior lens equator; and to determine which components (muscle apex width, lens thickness, lens equator position, vitreous zonule, circumlental space, and/or other intraocular dimensions, including those stated in the objectives above) are most important in predicting accommodative amplitude and presbyopia. Accommodation was induced pharmacologically in 12 visually normal human subjects (ages 19-65 years) and by midbrain electrical stimulation in 11 rhesus monkeys (ages 6-27 years). Ultrasound biomicroscopy imaged the entire ciliary body, anterior and posterior lens surfaces, and the zonule. Relevant distances were measured in the resting and accommodated eyes. Stepwise regression analysis determined which variables were the most important predictors. The human vitreous zonule and lens equator move forward (anteriorly) during accommodation, and their movements decline with age, as in the monkey. Over all ages studied, age could explain accommodative amplitude, but not as well as accommodative lens thickening and resting muscle apex thickness did together. Accommodative changes in the distances between the vitreous zonule insertion zone and the posterior lens equator or muscle apex were important for predicting accommodative lens thickening. Our findings quantify the movements of the zonule and ciliary muscle during accommodation, and identify their age-related changes that could impact the optical change that occurs during accommodation and IOL function.
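The stepwise regression mentioned above can be read as forward selection over the candidate ocular measures. A sketch of forward selection scored by cross-validated R²; the predictor names, synthetic data and stopping rule are ours, not the paper's.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
names = ["age", "lens_thickening", "muscle_apex_width", "vitreous_zonule_shift"]
X = rng.standard_normal((40, len(names)))
y = 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 40)  # synthetic amplitude

selected, remaining, best = [], list(range(len(names))), -np.inf
while remaining:
    # Score each candidate added to the current set; keep the best improvement.
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                 cv=5, scoring="r2").mean() for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best:
        break
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
print([names[j] for j in selected], round(best, 3))
```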
High-Speed Noninvasive Eye-Tracking System
NASA Technical Reports Server (NTRS)
Talukder, Ashit; LaBaw, Clayton; Michael-Morookian, John; Monacos, Steve; Serviss, Orin
2007-01-01
The figure schematically depicts a system of electronic hardware and software that noninvasively tracks the direction of a person's gaze in real time. Like prior commercial noninvasive eye-tracking systems, this system is based on (1) illumination of an eye by a low-power infrared light-emitting diode (LED); (2) acquisition of video images of the pupil, iris, and cornea in the reflected infrared light; (3) digitization of the images; and (4) processing the digital image data to determine the direction of gaze from the centroids of the pupil and cornea in the images. Relative to the prior commercial systems, the present system operates at much higher speed and thereby offers enhanced capability for applications that involve human-computer interactions, including typing and computer command and control by handicapped individuals, and eye-based diagnosis of physiological disorders that affect gaze responses.
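Step (4), recovering gaze direction from pupil and corneal-reflection centroids, can be sketched with plain array operations. Assumptions: a thresholded infrared frame in which the pupil is dark and the corneal glint bright, and a linear pupil-glint calibration; the thresholds and calibration gain are placeholders, not this system's values.

```python
import numpy as np

def centroid(mask):
    """Centroid (x, y) of the True pixels in a binary image mask."""
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()

rng = np.random.default_rng(4)
frame = rng.random((240, 320))       # stand-in for a digitized infrared frame
pupil_mask = frame < 0.02            # dark pupil (placeholder threshold)
glint_mask = frame > 0.995           # bright corneal reflection (placeholder)

px, py = centroid(pupil_mask)
gx, gy = centroid(glint_mask)
dx, dy = px - gx, py - gy            # pupil-glint difference vector
gaze = (0.05 * dx, 0.05 * dy)        # placeholder linear calibration (deg/pixel)
print(gaze)
```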
Contextual effects on smooth-pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-02-01
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted into the same or opposite direction as that of the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context into the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, for the control of smooth-pursuit eye movements.
Video attention deviation estimation using inter-frame visual saliency map analysis
NASA Astrophysics Data System (ADS)
Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng
2012-01-01
A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem: a busy video is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation on, and hard for a content provider to insert additional overlays like advertisements into, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of video, formally called video attention deviation (VAD): analyze the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the computed steady-state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
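Both computations the method rests on, the steady-state saccade probability of a two-state gaze model and the Kullback-Leibler divergence between consecutive saliency maps, fit in a few lines. A sketch with placeholder numbers; in the actual method the transition probabilities would be derived from the inter-frame saliency maps as described.

```python
import numpy as np

# Two-state gaze model: states = (fixate, saccade); rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])           # placeholder transition probabilities

# Steady state: left eigenvector of P with eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
pi /= pi.sum()
print("VAD estimate (saccade steady state):", round(pi[1], 3))

def kl(p, q, eps=1e-12):
    """KL divergence between two saliency maps normalized to distributions."""
    p = p.ravel() / p.sum()
    q = q.ravel() / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(5)
s1, s2 = rng.random((36, 64)), rng.random((36, 64))   # stand-in saliency maps
print("inter-frame KL:", round(kl(s1, s2), 3))
```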
Miyata, Hiromitsu; Minagawa-Kawai, Yasuyo; Watanabe, Shigeru; Sasaki, Toyofumi; Ueda, Kazuhiro
2012-01-01
Background: A growing body of evidence suggests that meditative training enhances perception and cognition. In Japan, the Park-Sasaki method of speed-reading involves organized visual training while forming both a relaxed and concentrated state of mind, as in meditation. The present study examined relationships between reading speed, sentence comprehension, and eye movements while reading short Japanese novels. In addition to normal untrained readers, three middle-level trainees and one high-level expert on this method were included for the two case studies. Methodology/Principal Findings: In Study 1, three of 17 participants were middle-level trainees in the speed-reading method. Immediately after reading each story once on a computer monitor, participants answered true or false questions regarding the content of the novel. Eye movements while reading were recorded using an eye-tracking system. Results revealed higher reading speed and lower comprehension scores in the trainees than in the untrained participants. Furthermore, eye-tracking data from untrained participants revealed multiple correlations between reading speed, accuracy and eye-movement measures, with faster readers showing shorter fixation durations and larger horizontal saccades than slower readers. In Study 2, participants included a high-level expert and 14 untrained students. The expert showed higher reading speed and statistically comparable, although numerically lower, comprehension scores compared with the untrained participants. During test sessions this expert moved her eyes along a nearly straight horizontal line as a first pass, without moving her eyes over the whole sentence display as the untrained students did. Conclusions/Significance: In addition to revealing correlations between speed, comprehension and eye movements in the reading of Japanese contemporary novels by untrained readers, we describe cases of speed-reading trainees regarding relationships between these variables. The trainees overall tended to show poor performance influenced by the speed-accuracy trade-off, although this trade-off may be reduced in the case of at least one high-level expert. PMID:22590519
Maier, Felix M; Schaeffel, Frank
2013-07-24
To find out whether adaptation to a vertical prism involves more than fusional vertical eye movements. Adaptation to a vertical base-up 3 prism diopter prism was measured in a custom-programmed Maddox test in nine visually normal emmetropic subjects (mean age 27.0 ± 2.8 years). Vertical eye movements were binocularly measured in six of the subjects with a custom-programmed binocular video eye tracker. In the Maddox test, some subjects adjusted the perceived height as expected from the power of the prism while others appeared to ignore the prism. After 15 minutes of adaptation, the interocular difference in perceived height was reduced by on average 51% (from 0.86° to 0.44°). The larger the initially perceived difference in height in a subject, the larger the amplitude of adaptation was. Eye tracking showed that the prism generated divergent vertical eye movements of 1.2° on average, which was less than expected from its power. Differences in eye elevation were maintained as long as the prism was in place. Small angles of lateral head tilt generated large interocular differences in eye elevation, much larger than the effects introduced by the prism. Vertical differences in retinal image height were compensated by vertical fusional eye movements, but some subjects responded poorly to a vertical prism in both experiments; fusional eye movements were generally too small to realign both foveae with the fixation target; and the prism adaptation in the Maddox test was fully explained by the changes in vertical eye position, suggesting that no further adaptational mechanism may be involved.
The role of eye movements in depth from motion parallax during infancy
Nawrot, Elizabeth; Nawrot, Mark
2013-01-01
Motion parallax is a motion-based, monocular depth cue that uses an object's relative motion and velocity as a cue to relative depth. In adults, and in monkeys, a smooth pursuit eye movement signal is used to disambiguate the depth-sign provided by these relative motion cues. The current study investigates infants' perception of depth from motion parallax and the development of two oculomotor functions, smooth pursuit and the ocular following response (OFR) eye movements. Infants 8 to 20 weeks of age were presented with three tasks in a single session: depth from motion parallax, smooth pursuit tracking, and OFR to translation. The development of smooth pursuit was significantly related to age, as was sensitivity to motion parallax. OFR eye movements also corresponded to both age and smooth pursuit gain, with groups of infants demonstrating asymmetric function in both types of eye movements. These results suggest that the development of the eye movement system may play a crucial role in the sensitivity to depth from motion parallax in infancy. Moreover, describing the development of these oculomotor functions in relation to depth perception may aid in the understanding of certain visual dysfunctions. PMID:24353309
Luque, M A; Pérez-Pérez, M P; Herrero, L; Waitzman, D M; Torres, B
2006-02-01
Anatomical studies in goldfish show that the tectofugal axons provide a large number of boutons within the mesencephalic reticular formation. Electrical stimulation, reversible inactivation and cell recording in the primate central mesencephalic reticular formation have suggested that it participates in the control of rapid eye movements (saccades). Moreover, the role of this tecto-recipient area in the generation of saccadic eye movements in fish is unknown. In this study we show that the electrical microstimulation of the mesencephalic reticular formation of goldfish evoked short latency saccadic eye movements in any direction (contraversive or ipsiversive, upward or downward). Movements of the eyes were usually disjunctive. Based on the location of the sites from which eye movements were evoked and the preferred saccade direction, eye movements were divided into different groups: pure vertical saccades were mainly elicited from the rostral mesencephalic reticular formation, while oblique and pure horizontal were largely evoked from middle and caudal mesencephalic reticular formation zones. The direction and amplitude of pure vertical and horizontal saccades were unaffected by initial eye position. However the amplitude, but not the direction of most oblique saccades was systematically modified by initial eye position. At the same time, the amplitude of elicited saccades did not vary in any consistent manner along either the anteroposterior, dorsoventral or mediolateral axes (i.e. there was no topographic organization of the mesencephalic reticular formation with respect to amplitude). In addition to these groups of movements, we found convergent and goal-directed saccades evoked primarily from the anterior and posterior mesencephalic reticular formation, respectively. Finally, the metric and kinetic characteristics of saccades could be manipulated by changes in the stimulation parameters. We conclude that the mesencephalic reticular formation in goldfish shares physiological functions that correspond closely with those found in mammals.
Desseilles, Martin; Vu, Thanh Dang; Laureys, Steven; Peigneux, Philippe; Degueldre, Christian; Phillips, Christophe; Maquet, Pierre
2006-09-01
Rapid eye movement sleep (REMS) is associated with intense neuronal activity, rapid eye movements, muscular atonia and dreaming. Another important feature of REMS is instability in autonomic, especially cardiovascular, regulation. The neural mechanisms underpinning the variability in heart rate (VHR) during REMS are not known in detail, especially in humans. During wakefulness, the right insula has frequently been reported as involved in cardiovascular regulation, but this might not be the case during REMS. We aimed at characterizing the neural correlates of VHR during REMS as compared to wakefulness and to slow wave sleep (SWS), the other main component of human sleep, in normal young adults, based on the statistical analysis of a set of H(2)(15)O positron emission tomography (PET) sleep data acquired during SWS, REMS and wakefulness. The results showed that VHR correlated more tightly during REMS than during wakefulness with the rCBF in the right amygdaloid complex. Moreover, we assessed whether functional relationships between the amygdala and any brain area changed depending on the state of vigilance. Only activity within the insula was found to covary with the amygdala, significantly more tightly during wakefulness than during REMS in relation to the VHR. The functional connectivity between the amygdala and the insular cortex, two brain areas involved in cardiovascular regulation, differs significantly in REMS as compared to wakefulness. This suggests a functional reorganization of central cardiovascular regulation during REMS.
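The connectivity contrast reported above, tighter amygdala-insula coupling during wakefulness than during REMS, rests on comparing two correlation coefficients. A sketch of the standard Fisher r-to-z comparison; the r values and scan counts are invented solely to show the arithmetic.

```python
import numpy as np
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-sided p-value for H0: equal correlations, via Fisher r-to-z."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    return 2 * norm.sf(abs(z))

# Invented values: wake r = 0.65 (n = 30 scans), REMS r = 0.25 (n = 30 scans).
print(round(compare_correlations(0.65, 30, 0.25, 30), 4))
```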
Hutson, John P; Smith, Tim J; Magliano, Joseph P; Loschky, Lester C
2017-01-01
Film is ubiquitous, but the processes that guide viewers' attention while viewing film narratives are poorly understood. In fact, many film theorists and practitioners disagree on whether the film stimulus (bottom-up) or the viewer (top-down) is more important in determining how we watch movies. Reading research has shown a strong connection between eye movements and comprehension, and scene perception studies have shown strong effects of viewing tasks on eye movements, but such idiosyncratic top-down control of gaze in film would be anathema to the universal control mainstream filmmakers typically aim for. Thus, in two experiments we tested whether the eye movements and comprehension relationship similarly held in a classic film example, the famous opening scene of Orson Welles' Touch of Evil (Welles & Zugsmith, Touch of Evil, 1958). Comprehension differences were compared with more volitionally controlled task-based effects on eye movements. To investigate the effects of comprehension on eye movements during film viewing, we manipulated viewers' comprehension by starting participants at different points in a film, and then tracked their eyes. Overall, the manipulation created large differences in comprehension, but only produced modest differences in eye movements. To amplify top-down effects on eye movements, a task manipulation was designed to prioritize peripheral scene features: a map task. This task manipulation created large differences in eye movements when compared to participants freely viewing the clip for comprehension. Thus, to allow for strong, volitional top-down control of eye movements in film, task manipulations need to make features that are important to narrative comprehension irrelevant to the viewing task. The evidence provided by this experimental case study suggests that filmmakers' belief in their ability to create systematic gaze behavior across viewers is confirmed, but that this does not indicate universally similar comprehension of the film narrative.
Ayasse, Nicole D.; Lash, Amanda; Wingfield, Arthur
2017-01-01
In spite of the rapidity of everyday speech, older adults tend to keep up relatively well in day-to-day listening. In laboratory settings older adults do not respond as quickly as younger adults in off-line tests of sentence comprehension, but the question is whether comprehension itself is actually slower. Two unique features of the human eye were used to address this question. First, we tracked eye-movements as 20 young adults and 20 healthy older adults listened to sentences that referred to one of four objects pictured on a computer screen. Although the older adults took longer to indicate the referenced object with a cursor-pointing response, their gaze moved to the correct object as rapidly as that of the younger adults. Second, we concurrently measured dilation of the pupil of the eye as a physiological index of effort. This measure revealed that although poorer hearing acuity did not slow processing, success came at the cost of greater processing effort. PMID:28119598
Steering a Tractor by Means of an EMG-Based Human-Machine Interface
Gomez-Gil, Jaime; San-Jose-Gonzalez, Israel; Nicolas-Alonso, Luis Fernando; Alonso-Garcia, Sergio
2011-01-01
An electromyographic (EMG)-based human-machine interface (HMI) is a communication pathway between a human and a machine that operates by means of the acquisition and processing of EMG signals. This article explores the use of EMG-based HMIs in the steering of farm tractors. An EPOC, a low-cost human-computer interface (HCI) from the Emotiv Company, was employed. This device, by means of 14 saline sensors, measures and processes EMG and electroencephalographic (EEG) signals from the scalp of the driver. In our tests, the HMI took into account only the detection of four trained muscular events on the driver’s scalp: eyes looking to the right and jaw opened, eyes looking to the right and jaw closed, eyes looking to the left and jaw opened, and eyes looking to the left and jaw closed. The EMG-based HMI guidance was compared with manual guidance and with autonomous GPS guidance. A driver tested these three guidance systems along three different trajectories: a straight line, a step, and a circumference. The accuracy of the EMG-based HMI guidance was lower than the accuracy obtained by manual guidance, which was lower in turn than the accuracy obtained by the autonomous GPS guidance; the computed standard deviations of error to the desired trajectory in the straight line were 16 cm, 9 cm, and 4 cm, respectively. Since the standard deviation between the manual guidance and the EMG-based HMI guidance differed by only 7 cm, and this difference is not relevant in agricultural steering, it can be concluded that it is possible to steer a tractor by an EMG-based HMI with almost the same accuracy as with manual steering. PMID:22164006
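The reported accuracies (16 cm, 9 cm and 4 cm) are standard deviations of lateral error from the desired trajectory. For the straight-line case this is a short computation once positions are projected onto the path normal. A sketch with invented positions in metres; the path direction and noise level are placeholders.

```python
import numpy as np

def crosstrack_sd(positions, d):
    """SD (m) of signed lateral distance from a straight path through the origin."""
    d = d / np.linalg.norm(d)
    normal = np.array([-d[1], d[0]])              # unit normal to the path
    return (positions @ normal).std()

rng = np.random.default_rng(6)
d = np.array([1.0, 0.0])                          # desired path: along the x-axis
track = np.column_stack([np.linspace(0, 100, 400),
                         rng.normal(0, 0.16, 400)])   # ~16 cm lateral error
print(round(crosstrack_sd(track, d), 3))          # ~0.16 m
```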
Like a rolling stone: naturalistic visual kinematics facilitate tracking eye movements.
Souto, David; Kerzel, Dirk
2013-02-06
Newtonian physics constrains object kinematics in the real world. We asked whether eye movements towards tracked objects depend on their compliance with those constraints. In particular, the force of gravity constrains round objects to roll on the ground with a particular rotational and translational motion. We measured tracking eye movements towards rolling objects. We found that objects with rotational and translational motion that was congruent with an object rolling on the ground elicited faster tracking eye movements during pursuit initiation than incongruent stimuli. Relative to a condition without a rotational component, we essentially obtained benefits from congruence and, to a lesser extent, costs from incongruence. Anticipatory pursuit responses showed no congruence effect, suggesting that the effect is based on visually-driven predictions, not on velocity storage. We suggest that the eye movement system incorporates information about object kinematics acquired by a lifetime of experience with visual stimuli obeying the laws of Newtonian physics.
Eye-movements and ongoing task processing.
Burke, David T; Meleger, Alec; Schneider, Jeffrey C; Snyder, Jim; Dorvlo, Atsu S S; Al-Adawi, Samir
2003-06-01
This study tests the relation between eye-movements and thought processing. Subjects were given specific modality tasks (visual, gustatory, kinesthetic) and assessed on whether they responded with distinct eye-movements. Some subjects' eye-movements reflected ongoing thought processing. Instead of a universal pattern, as suggested by the neurolinguistic programming hypothesis, this study yielded subject-specific idiosyncratic eye-movements across all modalities. Included is a discussion of the neurolinguistic programming hypothesis regarding eye-movements and its implications for the eye-movement desensitization and reprocessing theory.
Auditory Inhibition of Rapid Eye Movements and Dream Recall from REM Sleep
Stuart, Katrina; Conduit, Russell
2009-01-01
Study Objectives: There is debate in dream research as to whether ponto-geniculo-occipital (PGO) waves or cortical arousal during sleep underlie the biological mechanisms of dreaming. This study comprised 2 experiments. As eye movements (EMs) are currently considered the best noninvasive indicator of PGO burst activity in humans, the aim of the first experiment was to investigate the effect of low-intensity repeated auditory stimulation on EMs (and inferred PGO burst activity) during REM sleep. It was predicted that such auditory stimuli during REM sleep would have a suppressive effect on EMs. The aim of the second experiment was to examine the effects of this auditory stimulation on subsequent dream reporting on awakening. Design: Repeated measures design with counterbalanced order of experimental and control conditions across participants. Setting: Sleep laboratory-based polysomnography (PSG). Participants: Experiment 1: 5 males and 10 females aged 18-35 years (M = 20.8, SD = 5.4). Experiment 2: 7 males and 13 females aged 18-35 years (M = 23.3, SD = 5.5). Interventions: Below-waking-threshold tone presentations during REM sleep compared to control REM sleep conditions without tone presentations. Measurements and Results: PSG records were manually scored for sleep stages, EEG arousals, and EMs. Auditory stimulation during REM sleep was related to: (a) an increase in EEG arousal, (b) a decrease in the amplitude and frequency of EMs, and (c) a decrease in the frequency of visual imagery reports on awakening. Conclusions: The results of this study provide phenomenological support for PGO-based theories of dream reporting on awakening from sleep in humans. Citation: Stuart K; Conduit R. Auditory inhibition of rapid eye movements and dream recall from REM sleep. SLEEP 2009;32(3):399–408. PMID:19294960
The enchanted loom. [Book on evolution of intelligence]
NASA Technical Reports Server (NTRS)
Jastrow, R.
1981-01-01
The evolution of intelligence began with the movement of Crossopterygian fish onto land. The eventual appearance of large dinosaurs eliminated all but the smallest of mammalian creatures, with the survivors forced to move only nocturnally, when enhanced olfactory and aural faculties were favored and involved a larger grey matter/body mass ratio than possessed by the dinosaurs. Additionally, the mammals made comparisons between the inputs of various senses, implying the presence of significant memory capacity and an ability to abstract survival information. More complex behavior occurred with the advent of tree dwellers (forward-looking eyes), hands, color vision, and the ability to grip and manipulate objects. An extra pound of brain evolved in the human skull in less than a million years. The neural processes that can lead to an action by a creature with a brain are mimicked by the basic AND and OR gates in computers, which are rapidly approaching the circuit density of the human brain. It is suggested that humans will eventually produce computers of higher intelligence than people possess, and computer spacecraft, alive in an electronic sense, will travel outward to explore the universe.
Brain processing of visual information during fast eye movements maintains motor performance.
Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis
2013-01-01
Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.
Otero-Millan, Jorge; Roberts, Dale C.; Lasker, Adrian; Zee, David S.; Kheradmand, Amir
2015-01-01
Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699
2015 Summer Series - Lee Stone - Brain Function Through the Eyes of the Beholder
2015-06-09
The Visuomotor Control Laboratory (VCL) at NASA Ames conducts neuroscience research on the link between eye movements and brain function to provide an efficient and quantitative means of monitoring human perceptual performance. The VCL aims to make dramatic improvements in mission success through analysis, experimentation, and modeling of human performance and human-automation interaction. Dr. Lee Stone elaborates on how this research is conducted and how it contributes to NASA's mission and advances human-centered design and operations of complex aerospace systems.
NASA Technical Reports Server (NTRS)
Leigh, R. J.; Thurston, S. E.; Sharpe, J. A.; Ranalli, P. J.; Hamid, M. A.
1987-01-01
The effects of deficient labyrinthine function on smooth visual tracking with the eyes and head were investigated, using ten patients with bilateral peripheral vestibular disease and ten normal controls. Active, combined eye-head tracking (EHT) was significantly better in patients than smooth pursuit with the eyes alone, whereas normal subjects pursued equally well in both cases. Compensatory eye movements during active head rotation in darkness were always less in patients than in normal subjects. These data were used to examine current hypotheses that postulate central cancellation of the vestibulo-ocular reflex (VOR) during EHT. A model that proposes summation of an integral smooth pursuit command and VOR/compensatory eye movements is consistent with the findings. Observation of passive EHT (visual fixation of a head-fixed target during en bloc rotation) appears to indicate that in this mode parametric gain changes contribute to modulation of the VOR.
Shichinohe, Natsuko; Akao, Teppei; Kurkin, Sergei; Fukushima, Junko; Kaneko, Chris R S; Fukushima, Kikuro
2009-06-11
Cortical motor areas are thought to contribute "higher-order processing," but what that processing might include is unknown. Previous studies of the smooth pursuit-related discharge of supplementary eye field (SEF) neurons have not distinguished activity associated with the preparation for pursuit from discharge related to processing or memory of the target motion signals. Using a memory-based task designed to separate these components, we show that the SEF contains signals coding retinal image-slip-velocity, memory, and assessment of visual motion direction, the decision of whether to pursue, and the preparation for pursuit eye movements. Bilateral muscimol injection into SEF resulted in directional errors in smooth pursuit, errors of whether to pursue, and impairment of initial correct eye movements. These results suggest an important role for the SEF in memory and assessment of visual motion direction and the programming of appropriate pursuit eye movements.
Fast hierarchical knowledge-based approach for human face detection in color images
NASA Astrophysics Data System (ADS)
Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan
2001-09-01
This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the color attributes hue and saturation in HSV color space, as well as the color attributes red and green in normalized color space. In level 2, a new eye model is devised to select human face candidates in the segmented skin-like regions. An important feature of the eye model is that it is independent of the scale of the human face, so human faces at different scales can be found while scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which is consistent with the physical structure features of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is highly robust and fast, with wide application prospects in human-computer interaction, visual telephony, and related areas.
Kato, Takafumi; Toyota, Risa; Haraki, Shingo; Yano, Hiroyuki; Higashiyama, Makoto; Ueno, Yoshio; Yano, Hiroshi; Sato, Fumihiko; Yatani, Hirofumi; Yoshida, Atsushi
2017-09-27
Rhythmic masticatory muscle activity can be a normal variant of oromotor activity, which can be exaggerated in patients with sleep bruxism. However, few studies have used naturally sleeping animals to study the neurophysiological mechanisms of rhythmic masticatory muscle activity. This study aimed to investigate the similarity of cortical, cardiac and electromyographic manifestations of rhythmic masticatory muscle activity occurring during non-rapid eye movement sleep between guinea pigs and human subjects. Polysomnographic recordings were made in 30 freely moving guinea pigs and in eight healthy human subjects. Burst cycle length, duration and activity of rhythmic masticatory muscle activity were compared with those for chewing. The time between R-waves in the electrocardiogram (RR interval) and electroencephalogram power spectrum were calculated to assess time-course changes in cardiac and cortical activities in relation to rhythmic masticatory muscle activity. In animals, in comparison with chewing, rhythmic masticatory muscle activity had a lower burst activity, longer burst duration and longer cycle length (P < 0.05), and greater variabilities were observed (P < 0.05). Rhythmic masticatory muscle activity occurring during non-rapid eye movement sleep [median (interquartile range): 5.2 (2.6-8.9) times per h] was preceded by a transient decrease in RR intervals, and was accompanied by a transient decrease in delta electroencephalogram power. In humans, masseter bursts of rhythmic masticatory muscle activity were characterized by a lower activity, longer duration and longer cycle length than those of chewing (P < 0.05). Rhythmic masticatory muscle activity during non-rapid eye movement sleep [1.4 (1.18-2.11) times per h] was preceded by a transient decrease in RR intervals and an increase in cortical activity. Rhythmic masticatory muscle activity in animals thus shared with that in human subjects common physiological components representing transient arousal-related rhythmic jaw motor activation. © 2017 European Sleep Research Society.
Visual Contrast Sensitivity in Early-Stage Parkinson's Disease.
Ming, Wendy; Palidis, Dimitrios J; Spering, Miriam; McKeown, Martin J
2016-10-01
Visual impairments are frequent in Parkinson's disease (PD) and impact normal functioning in daily activities. Visual contrast sensitivity is a powerful nonmotor sign for discriminating PD patients from controls. However, it is usually assessed with static visual stimuli. Here we examined the interaction between perception and eye movements in static and dynamic contrast sensitivity tasks in a cohort of mildly impaired, early-stage PD patients. Patients (n = 13) and healthy age-matched controls (n = 12) viewed stimuli of various spatial frequencies (0-8 cyc/deg) and speeds (0°/s, 10°/s, 30°/s) on a computer monitor. Detection thresholds were determined by asking participants to adjust luminance contrast until they could just barely see the stimulus. Eye position was recorded with a video-based eye tracker. Patients' static contrast sensitivity was impaired in the intermediate spatial-frequency range and this impairment correlated with fixational instability. However, dynamic contrast sensitivity and patients' smooth pursuit were relatively normal. An independent component analysis revealed contrast sensitivity profiles differentiating patients and controls. Our study simultaneously assesses perceptual contrast sensitivity and eye movements in PD, revealing a possible link between fixational instability and perceptual deficits. Spatiotemporal contrast sensitivity profiles may represent an easily measurable metric as a component of a broader combined biometric for nonmotor features observed in PD.
Parietal stimulation destabilizes spatial updating across saccadic eye movements.
Morris, Adam P; Chambers, Christopher D; Mattingley, Jason B
2007-05-22
Saccadic eye movements cause sudden and global shifts in the retinal image. Rather than causing confusion, however, eye movements expand our sense of space and detail. In macaques, a stable representation of space is embodied by neural populations in intraparietal cortex that redistribute activity with each saccade to compensate for eye displacement, but little is known about equivalent updating mechanisms in humans. We combined noninvasive cortical stimulation with a double-step saccade task to examine the contribution of two human intraparietal areas to transsaccadic spatial updating. Right hemisphere stimulation over the posterior termination of the intraparietal sulcus (IPSp) broadened and shifted the distribution of second-saccade endpoints, but only when the first-saccade was directed into the contralateral hemifield. By interleaving trials with and without cortical stimulation, we show that the shift in endpoints was caused by an enduring effect of stimulation on neural functioning (e.g., modulation of neuronal gain). By varying the onset time of stimulation, we show that the representation of space in IPSp is updated immediately after the first-saccade. In contrast, stimulation of an adjacent IPS site had no such effects on second-saccades. These experiments suggest that stimulation of IPSp distorts an eye position or displacement signal that updates the representation of space at the completion of a saccade. Such sensory-motor integration in IPSp is crucial for the ongoing control of action, and may contribute to visual stability across saccades.
Lossless compression of otoneurological eye movement signals.
Tossavainen, Timo; Juhola, Martti
2002-12-01
We studied the performance of several lossless compression algorithms on eye movement signals recorded in otoneurological balance and other physiological laboratories. Despite the wide use of these signals, their compression had not been studied prior to our research. The compression methods were based on the common model of using a predictor to decorrelate the input and an entropy coder to encode the residual. We found that these eye movement signals, recorded at 400 Hz and with 13-bit amplitude resolution, could be losslessly compressed with a compression ratio of about 2.7.
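The predict-then-encode model described here can be sketched in a few lines. The second-order predictor and the synthetic signal below are illustrative assumptions, and instead of a full entropy coder only the ideal (entropy-bound) code length of the residual is estimated:

```python
# Minimal sketch of predictive decorrelation followed by entropy coding.
# A second-order linear predictor removes signal redundancy; the entropy
# of the residual approximates the achievable bits per sample.
import numpy as np

def residuals(signal: np.ndarray) -> np.ndarray:
    """Second-order predictor: predict x[n] as 2*x[n-1] - x[n-2]."""
    pred = np.zeros_like(signal)
    pred[1] = signal[0]
    pred[2:] = 2 * signal[1:-1] - signal[:-2]
    return signal - pred

def ideal_bits(values: np.ndarray) -> float:
    """Shannon entropy of the values, in bits per sample."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic trace standing in for a sampled eye movement signal.
x = np.cumsum(np.random.default_rng(0).integers(-3, 4, 4000))
print(ideal_bits(residuals(x)), "bits/sample vs raw:", ideal_bits(x - x.min()))
```

The ratio of the raw amplitude resolution to the residual entropy gives a rough upper bound on the compression ratio achievable by this scheme.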
Kukona, Anuenue; Braze, David; Johns, Clinton L; Mencl, W Einar; Van Dyke, Julie A; Magnuson, James S; Pugh, Kenneth R; Shankweiler, Donald P; Tabor, Whitney
2016-11-01
Recent studies have found considerable individual variation in language comprehenders' predictive behaviors, as revealed by their anticipatory eye movements during language comprehension. The current study investigated the relationship between these predictive behaviors and the language and literacy skills of a diverse, community-based sample of young adults. We found that rapid automatized naming (RAN) was a key determinant of comprehenders' prediction ability (e.g., as reflected in predictive eye movements to a white cake on hearing "The boy will eat the white…"). Simultaneously, comprehension-based measures predicted participants' ability to inhibit eye movements to objects that shared features with predictable referents but were implausible completions (e.g., as reflected in eye movements to a white but inedible white car). These findings suggest that the excitatory and inhibitory mechanisms that support prediction during language processing are closely linked with specific cognitive abilities that support literacy. We show that a self-organizing cognitive architecture captures this pattern of results. Copyright © 2016 Elsevier B.V. All rights reserved.
(C)overt attention and visual speller design in an ERP-based brain-computer interface.
Treder, Matthias S; Blankertz, Benjamin
2010-05-28
In a visual oddball paradigm, attention to an event usually modulates the event-related potential (ERP). An ERP-based brain-computer interface (BCI) exploits this neural mechanism for communication. Hitherto, it was unclear to what extent the accuracy of such a BCI requires eye movements (overt attention) or whether it is also feasible for targets in the visual periphery (covert attention). Also unclear was how the visual design of the BCI can be improved to meet peculiarities of peripheral vision such as low spatial acuity and crowding. Healthy participants (N = 13) performed a copy-spelling task wherein they had to count target intensifications. EEG and eye movements were recorded concurrently. First, (c)overt attention was investigated by way of a target fixation condition and a central fixation condition. In the latter, participants had to fixate a dot in the center of the screen and allocate their attention to a target in the visual periphery. Second, the effect of visual speller layout was investigated by comparing the symbol Matrix to an ERP-based Hex-o-Spell, a two-level speller consisting of six discs arranged on an invisible hexagon. We assessed counting errors, ERP amplitudes, and offline classification performance. There is an advantage (i.e., fewer errors, larger ERP amplitude modulation, better classification) of overt attention over covert attention, and there is also an advantage of the Hex-o-Spell over the Matrix. Using overt attention, P1, N1, P2, N2, and P3 components are enhanced by attention. Using covert attention, only N2 and P3 are enhanced for both spellers, and N1 and P2 are modulated when using the Hex-o-Spell but not when using the Matrix. Consequently, classifiers rely mainly on early evoked potentials in overt attention and on later cognitive components in covert attention. Both overt and covert attention can be used to drive an ERP-based BCI, but performance is markedly lower for covert attention. The Hex-o-Spell outperforms the Matrix, especially when eye movements are not permitted, illustrating that performance can be increased if one accounts for peculiarities of peripheral vision.
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is calculating intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from an eye's geometry, including axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical optical properties, such as the wavefront aberration, are simulated with real ray-tracing using Snell's law. Optical components can be calculated using computer scientific optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL is, the lower the residual wavefront error is. Spherical IOLs are only able to correct for the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray-tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated in some device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications like IOL calculation, with the potential to overcome the limitations of those current calculation methods that are based on paraxial optics, as exemplified here by calculating customized aspheric IOLs.
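The real ray-tracing step rests on the vector form of Snell's law. A minimal sketch follows; the surface normal, incidence angle, and corneal refractive index are illustrative assumptions, not the model's actual parameters:

```python
# Vector form of Snell's law as used in real (non-paraxial) ray tracing.
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (n1 -> n2).

    Returns the refracted unit direction, or None on total internal
    reflection.
    """
    cos_i = -np.dot(n, d)                      # cosine of incidence angle
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None                            # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([0.0, np.sin(np.radians(20)), -np.cos(np.radians(20))])  # ray
n = np.array([0.0, 0.0, 1.0])                                         # normal
print(refract(d, n, 1.0, 1.376))  # air into cornea (assumed index 1.376)
```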
Real-Time Detection and Measurement of Eye Features from Color Images
Borza, Diana; Darabant, Adrian Sergiu; Danescu, Radu
2016-01-01
The accurate extraction and measurement of eye features is crucial to a variety of domains, including human-computer interaction, biometry, and medical research. This paper presents a fast and accurate method for extracting multiple features around the eyes: the center of the pupil, the iris radius, and the external shape of the eye. These features are extracted using a multistage algorithm. In the first stage, the pupil center is localized using a fast circular symmetry detector and the iris radius is computed using radial gradient projections; in the second stage, the external shape of the eye (the eyelids) is determined through a Monte Carlo sampling framework based on both color and shape information. Extensive experiments performed on a different dataset demonstrate the effectiveness of our approach. In addition, this work provides eye annotation data for a publicly-available database. PMID:27438838
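The paper's first stage uses a fast circular symmetry detector; as a rough stand-in for readers, the sketch below localizes a pupil candidate with OpenCV's circular Hough transform. The file name and parameter values are assumptions for illustration, not the authors' method:

```python
# Rough stand-in for circular-symmetry pupil localization, using
# OpenCV's Hough circle transform on a grayscale eye image.
import cv2
import numpy as np

img = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical eye image
assert img is not None, "provide an eye image"
img = cv2.medianBlur(img, 5)                       # suppress glints/noise
circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
    param1=100, param2=30, minRadius=10, maxRadius=60)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"pupil center ({x}, {y}), iris-candidate radius {r}px")
```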
Dijk, D J; Shanahan, T L; Duffy, J F; Ronda, J M; Czeisler, C A
1997-01-01
1. The circadian pacemaker regulates the timing, structure and consolidation of human sleep. The extent to which this pacemaker affects electroencephalographic (EEG) activity during sleep remains unclear. 2. To investigate this, a total of 1.22 million power spectra were computed from EEGs recorded in seven men (total, 146 sleep episodes; 9 h 20 min each) who participated in a one-month-long protocol in which the sleep-wake cycle was desynchronized from the rhythm of plasma melatonin, which is driven by the circadian pacemaker. 3. In rapid eye movement (REM) sleep a small circadian variation in EEG activity was observed. The nadir of the circadian rhythm of alpha activity (8.25-10.5 Hz) coincided with the end of the interval during which plasma melatonin values were high, i.e. close to the crest of the REM sleep rhythm. 4. In non-REM sleep, variation in EEG activity between 0.25 and 11.5 Hz was primarily dependent on prior sleep time and only slightly affected by circadian phase, such that the lowest values coincided with the phase of melatonin secretion. 5. In the frequency range of sleep spindles, high-amplitude circadian rhythms with opposite phase positions relative to the melatonin rhythm were observed. Low-frequency sleep spindle activity (12.25-13.0 Hz) reached its crest and high-frequency sleep spindle activity (14.25-15.5 Hz) reached its nadir when sleep coincided with the phase of melatonin secretion. 6. These data indicate that the circadian pacemaker induces changes in EEG activity during REM and non-REM sleep. The changes in non-REM sleep EEG spectra are dissimilar from the spectral changes induced by sleep deprivation and exhibit a close temporal association with the melatonin rhythm and the endogenous circadian phase of sleep consolidation. PMID:9457658
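For orientation, per-epoch sleep EEG power spectra of the kind tallied above can be computed with a Welch periodogram. The sampling rate, epoch length, and synthetic signal below are assumptions, not the study's recording parameters:

```python
# Sketch of a per-epoch EEG power spectrum with 0.25 Hz resolution,
# from which band power (e.g., the low spindle band) can be read off.
import numpy as np
from scipy.signal import welch

fs = 128                                                  # Hz (assumed)
eeg = np.random.default_rng(1).standard_normal(fs * 30)   # one 30-s epoch
f, pxx = welch(eeg, fs=fs, nperseg=fs * 4)                # 0.25 Hz bins
sigma = pxx[(f >= 12.25) & (f <= 13.0)].mean()            # low spindle band
print(f"low-frequency spindle power: {sigma:.4f} (arbitrary units)")
```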
Modelling eye movements in a categorical search task
Zelinsky, Gregory J.; Adeli, Hossein; Peng, Yifan; Samaras, Dimitris
2013-01-01
We introduce a model of eye movements during categorical search, the task of finding and recognizing categorically defined targets. It extends a previous model of eye movements during search (target acquisition model, TAM) by using distances from a support vector machine classification boundary to create probability maps indicating pixel-by-pixel evidence for the target category in search images. Other additions include functionality enabling target-absent searches, and a fixation-based blurring of the search images now based on a mapping between visual and collicular space. We tested this model on images from a previously conducted variable set-size (6/13/20) present/absent search experiment where participants searched for categorically defined teddy bear targets among random category distractors. The model not only captured target-present/absent set-size effects, but also accurately predicted for all conditions the numbers of fixations made prior to search judgements. It also predicted the percentages of first eye movements during search landing on targets, a conservative measure of search guidance. Effects of set size on false negative and false positive errors were also captured, but error rates in general were overestimated. We conclude that visual features discriminating a target category from non-targets can be learned and used to guide eye movements during categorical search. PMID:24018720
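The model's central step, turning signed distances from an SVM classification boundary into a pixel-wise target-evidence map, can be sketched as follows. The features, grid size, and logistic squashing are illustrative assumptions, not the published model:

```python
# Sketch: signed distances from an SVM decision boundary become a
# target-evidence map over a grid of image patches.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
X_train = rng.standard_normal((200, 16))        # patch features (assumed)
y_train = rng.integers(0, 2, 200)               # 1 = target category
clf = LinearSVC(dual=False).fit(X_train, y_train)

patches = rng.standard_normal((12 * 12, 16))    # features for a 12x12 grid
dist = clf.decision_function(patches)           # signed boundary distances
prob_map = (1.0 / (1.0 + np.exp(-dist))).reshape(12, 12)  # evidence map
print(prob_map.argmax())  # flat index of the most target-like location
```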
Diagnostic accuracy of eye movements in assessing pedophilia.
Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo
2012-07-01
Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
Suzuki, D A; Yamada, T; Hoedema, R; Yee, R D
1999-09-01
Anatomic and neuronal recordings suggest that the nucleus reticularis tegmenti pontis (NRTP) of macaques may be a major pontine component of a cortico-ponto-cerebellar pathway that subserves the control of smooth-pursuit eye movements. The existence of such a pathway was implicated by the lack of permanent pursuit impairment after bilateral lesions in the dorsolateral pontine nucleus. To provide more direct evidence that NRTP is involved with regulating smooth-pursuit eye movements, chemical lesions were made in macaque NRTP by injecting either lidocaine or ibotenic acid. Injection sites first were identified by the recording of smooth-pursuit-related modulations in neuronal activity. The resulting lesions caused significant deficits in both the maintenance and the initiation of smooth-pursuit eye movements. After lesion formation, the gain of constant-velocity, maintained smooth-pursuit eye movements decreased, on the average, by 44%. Recovery of the ability to maintain smooth-pursuit eye movements occurred over approximately 3 days when maintained pursuit gains attained normal values. The step-ramp, "Rashbass" task was used to investigate the effects of the lesions on the initiation of smooth-pursuit eye movements. Eye accelerations averaged over the initial 80 ms of pursuit initiation were determined and found to be decremented, on the average, by 48% after the administration of ibotenic acid. Impairments in the initiation and maintenance of smooth-pursuit eye movements were directional in nature. Upward pursuit seemed to be the most vulnerable and was impaired in all cases independent of lesioning agent and type of pursuit investigated. Downward smooth pursuit seemed more resistant to the effects of chemical lesions in NRTP. Impairments in horizontal tracking were observed with examples of deficits in ipsilaterally and contralaterally directed pursuit. The results provide behavioral support for the physiologically and anatomic-based conclusion that NRTP is a component of a cortico-ponto-cerebellar circuit that presumably involves the pursuit area of the frontal eye field (FEF) and projects to ocular motor-related areas of the cerebellum. This FEF-NRTP-cerebellum path would parallel a middle and medial superior temporal cerebral cortical area-dorsolateral pontine nucleus-cerebellum pathway also known to be involved with regulating smooth-pursuit eye movements.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
2017-12-01
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
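A minimal sketch of the HMM-based approach, assuming the hmmlearn package and synthetic fixation data: fixation locations are modeled as Gaussian emissions of hidden region-of-interest states, and scanpaths can then be classified (e.g., holistic vs analytic) by comparing log-likelihoods under group models:

```python
# Sketch: fit a Gaussian-emission HMM to fixation sequences; the
# per-datum log-likelihood supports comparing scanpaths across models.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
# Fixation (x, y) coordinates from several trials, concatenated.
fixations = rng.standard_normal((120, 2)) * 30 + 100
lengths = [40, 40, 40]                      # fixations per trial (assumed)

model = GaussianHMM(n_components=3, covariance_type="full", random_state=0)
model.fit(fixations, lengths)
print(model.score(fixations) / len(fixations))  # mean log-likelihood
```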
Neurophysiology and Neuroanatomy of Smooth Pursuit in Humans
ERIC Educational Resources Information Center
Lencer, Rebekka; Trillenberg, Peter
2008-01-01
Smooth pursuit eye movements enable us to focus our eyes on moving objects by utilizing well-established mechanisms of visual motion processing, sensorimotor transformation and cognition. Novel smooth pursuit tasks and quantitative measurement techniques can help unravel the different smooth pursuit components and complex neural systems involved…
Eye movement training is most effective when it involves a task-relevant sensorimotor decision.
Fooken, Jolande; Lalonde, Kathryn M; Mann, Gurkiran K; Spering, Miriam
2018-04-01
Eye and hand movements are closely linked when performing everyday actions. We conducted a perceptual-motor training study to investigate mutually beneficial effects of eye and hand movements, asking whether training in one modality benefits performance in the other. Observers had to predict the future trajectory of a briefly presented moving object, and intercept it at its assumed location as accurately as possible with their finger. Eye and hand movements were recorded simultaneously. Different training protocols either included eye movements or a combination of eye and hand movements with or without external performance feedback. Eye movement training did not transfer across modalities: Irrespective of feedback, finger interception accuracy and precision improved after training that involved the hand, but not after isolated eye movement training. Conversely, eye movements benefited from hand movement training or when external performance feedback was given, thus improving only when an active interceptive task component was involved. These findings indicate only limited transfer across modalities. However, they reveal the importance of creating a training task with an active sensorimotor decision to improve the accuracy and precision of eye and hand movements.
McMullen, David P.; Hotson, Guy; Katyal, Kapil D.; Wester, Brock A.; Fifer, Matthew S.; McGee, Timothy G.; Harris, Andrew; Johannes, Matthew S.; Vogelstein, R. Jacob; Ravitz, Alan D.; Anderson, William S.; Thakor, Nitish V.; Crone, Nathan E.
2014-01-01
To increase the ability of brain-machine interfaces (BMIs) to control advanced prostheses such as the modular prosthetic limb (MPL), we are developing a novel system: the Hybrid Augmented Reality Multimodal Operation Neural Integration Environment (HARMONIE). This system utilizes hybrid input, supervisory control, and intelligent robotics to allow users to identify an object (via eye tracking and computer vision) and initiate (via brain-control) a semi-autonomous reach-grasp-and-drop of the object by the MPL. Sequential iterations of HARMONIE were tested in two pilot subjects implanted with electrocorticographic (ECoG) and depth electrodes within motor areas. The subjects performed the complex task in 71.4% (20/28) and 67.7% (21/31) of trials after minimal training. Balanced accuracy for detecting movements was 91.1% and 92.9%, significantly greater than chance accuracies (p < 0.05). After BMI-based initiation, the MPL completed the entire task 100% (one object) and 70% (three objects) of the time. The MPL took approximately 12.2 seconds for task completion after system improvements implemented for the second subject. Our hybrid-BMI design prevented all but one baseline false positive from initiating the system. The novel approach demonstrated in this proof-of-principle study, using hybrid input, supervisory control, and intelligent robotics, addresses limitations of current BMIs. PMID:24760914
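The balanced accuracy reported above weights both trial types equally, averaging sensitivity and specificity. A minimal worked example with hypothetical counts (not the study's raw data):

```python
# Balanced accuracy = mean of sensitivity and specificity, so detection
# and rejection performance count equally even with unequal trial counts.
def balanced_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn)   # movement trials correctly detected
    specificity = tn / (tn + fp)   # baseline trials correctly rejected
    return (sensitivity + specificity) / 2.0

print(balanced_accuracy(tp=26, fn=2, tn=27, fp=3))  # ~0.91, hypothetical
```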
System identification of perilymphatic fistula in an animal model
NASA Technical Reports Server (NTRS)
Wall, C. 3rd; Casselbrant, M. L.
1992-01-01
An acute animal model has been developed in the chinchilla for the study of perilymphatic fistulas. Micropunctures were made in three sites to simulate bony, round window, and oval window fistulas. The eye movements in response to pressure applied to the external auditory canal were recorded after micropuncture induction and in preoperative controls. The main pressure stimulus was a pseudorandom binary sequence (PRBS) that rapidly changed between plus and minus 200 mm of water. The PRBS stimulus, with its wide frequency bandwidth, produced responses clearly above the preoperative baseline in 78 percent of the runs. The response was better between 0.5 and 3.3 Hz than it was below 0.5 Hz. The direction of horizontal eye movement was toward the side of the fistula with positive pressure applied in 92 percent of the runs. Vertical eye movements were also observed. The ratio of vertical eye displacement to horizontal eye displacement depended upon the site of the micropuncture induction. Thus, such a ratio measurement may be clinically useful in the noninvasive localization of perilymphatic fistulas in humans.
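A PRBS like the pressure stimulus described above can be generated with a linear-feedback shift register. The register length and tap positions below are standard maximal-length choices, not necessarily those used in the study:

```python
# Maximal-length pseudorandom binary sequence via a Fibonacci LFSR
# (polynomial x^7 + x^6 + 1, period 127).
def prbs(taps=(7, 6), nbits=7, length=127):
    """Return +1/-1 values of a maximal-length binary sequence."""
    state = (1 << nbits) - 1                  # any nonzero seed
    out = []
    for _ in range(length):
        out.append(1 if state & 1 else -1)    # output the LSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1      # XOR the tap bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = prbs()
print(seq[:16])  # scale by 200 mm H2O to mimic a +/-200 pressure stimulus
```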
Extralenticular and Lenticular Aspects of Accommodation and Presbyopia in Human Versus Monkey Eyes
Croft, Mary Ann; McDonald, Jared P.; Katz, Alexander; Lin, Ting-Li; Lütjen-Drecoll, Elke; Kaufman, Paul L.
2013-01-01
Purpose. To determine if the accommodative forward movements of the vitreous zonule and lens equator occur in the human eye, as they do in the rhesus monkey eye; to investigate the connection between the vitreous zonule posterior insertion zone and the posterior lens equator; and to determine which components—muscle apex width, lens thickness, lens equator position, vitreous zonule, circumlental space, and/or other intraocular dimensions, including those stated in the objectives above—are most important in predicting accommodative amplitude and presbyopia. Methods. Accommodation was induced pharmacologically in 12 visually normal human subjects (ages 19–65 years) and by midbrain electrical stimulation in 11 rhesus monkeys (ages 6–27 years). Ultrasound biomicroscopy imaged the entire ciliary body, anterior and posterior lens surfaces, and the zonule. Relevant distances were measured in the resting and accommodated eyes. Stepwise regression analysis determined which variables were the most important predictors. Results. The human vitreous zonule and lens equator move forward (anteriorly) during accommodation, and their movements decline with age, as in the monkey. Over all ages studied, age could explain accommodative amplitude, but not as well as accommodative lens thickening and resting muscle apex thickness did together. Accommodative change in distances between the vitreous zonule insertion zone and the posterior lens equator or muscle apex were important for predicting accommodative lens thickening. Conclusions. Our findings quantify the movements of the zonule and ciliary muscle during accommodation, and identify their age-related changes that could impact the optical change that occurs during accommodation and IOL function. PMID:23745002
Spering, Miriam; Carrasco, Marisa
2012-01-01
Feature-based attention enhances visual processing and improves perception, even for visual features that we are not aware of. Does feature-based attention also modulate motor behavior in response to visual information that does or does not reach awareness? Here we compare the effect of feature-based attention on motion perception and smooth pursuit eye movements in response to moving dichoptic plaids (stimuli composed of two orthogonally-drifting gratings, presented separately to each eye) in human observers. Monocular adaptation to one grating prior to the presentation of both gratings renders the adapted grating perceptually weaker than the unadapted grating and decreases the level of awareness. Feature-based attention was directed to either the adapted or the unadapted grating’s motion direction or to both (neutral condition). We show that observers were better in detecting a speed change in the attended than the unattended motion direction, indicating that they had successfully attended to one grating. Speed change detection was also better when the change occurred in the unadapted than the adapted grating, indicating that the adapted grating was perceptually weaker. In neutral conditions, perception and pursuit in response to plaid motion were dissociated: while perception followed one grating’s motion direction almost exclusively (component motion), the eyes tracked the average of both gratings (pattern motion). In attention conditions, perception and pursuit were shifted towards the attended component. These results suggest that attention affects perception and pursuit similarly even though only the former reflects awareness. The eyes can track an attended feature even if observers do not perceive it. PMID:22649238
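The "average of both gratings" tracked by the eyes corresponds to the vector average of the two component motions. A worked example with illustrative component directions and speeds:

```python
# Vector-average (pattern-motion) prediction for a two-component plaid.
import numpy as np

def vector_average(dir1_deg, dir2_deg, speed=5.0):
    """Return direction (deg) and speed of the mean of two motion vectors."""
    v1 = speed * np.array([np.cos(np.radians(dir1_deg)),
                           np.sin(np.radians(dir1_deg))])
    v2 = speed * np.array([np.cos(np.radians(dir2_deg)),
                           np.sin(np.radians(dir2_deg))])
    va = (v1 + v2) / 2.0
    return np.degrees(np.arctan2(va[1], va[0])), np.linalg.norm(va)

direction, speed = vector_average(45.0, 135.0)  # orthogonal gratings
print(direction, speed)  # 90.0 deg, ~3.54: between the two components
```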
Eye movements during visual search in patients with glaucoma
2012-01-01
Background Glaucoma has been shown to lead to disability in many daily tasks including visual search. This study aims to determine whether the saccadic eye movements of people with glaucoma differ from those of people with normal vision, and to investigate the association between eye movements and impaired visual search. Methods Forty patients (mean age: 67 [SD: 9] years) with a range of glaucomatous visual field (VF) defects in both eyes (mean best eye mean deviation [MD]: –5.9 (SD: 5.4) dB) and 40 age-matched people with normal vision (mean age: 66 [SD: 10] years) were timed as they searched for a series of target objects in computer displayed photographs of real world scenes. Eye movements were simultaneously recorded using an eye tracker. Average number of saccades per second, average saccade amplitude and average search duration across trials were recorded. These response variables were compared with measurements of VF and contrast sensitivity. Results The average rate of saccades made by the patient group during the visual search task was significantly lower than that of controls (P = 0.02; mean reduction of 5.6%; 95% CI: 0.1 to 10.4%). There was no difference in average saccade amplitude between the patients and the controls (P = 0.09). Average number of saccades was weakly correlated with aspects of visual function, with patients with worse contrast sensitivity (PR logCS; Spearman’s rho: 0.42; P = 0.006) and more severe VF defects (best eye MD; Spearman’s rho: 0.34; P = 0.037) tending to make fewer eye movements during the task. Average detection time in the search task was associated with the average rate of saccades in the patient group (Spearman’s rho = −0.65; P < 0.001) but this was not apparent in the controls. Conclusions The average rate of saccades made during visual search by this group of patients was lower than that of people with normal vision of a similar average age. There was wide variability in saccade rate in the patients, but an increase in this measure was associated with better performance in the search task. Assessment of eye movements in individuals with glaucoma might provide insight into the functional deficits of the disease. PMID:22937814
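The reported associations are Spearman rank correlations; a sketch of that analysis on synthetic values (not the study's measurements):

```python
# Sketch: correlate per-patient saccade rate with search time using
# Spearman's rho, as in the association analysis described above.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
saccade_rate = rng.uniform(1.5, 3.5, 40)                   # saccades/s
search_time = 20.0 - 4.0 * saccade_rate + rng.normal(0, 1.5, 40)  # s
rho, p = spearmanr(saccade_rate, search_time)
print(f"rho = {rho:.2f}, p = {p:.3g}")                     # expect rho < 0
```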
Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul
2015-04-15
Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
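In the spirit of the algorithm described above, a disconjugacy metric can be sketched by comparing the left and right pupil trajectories sample by sample over the 200-second viewing. The sampling rate, synthetic trajectories, and variance-based metric below are illustrative assumptions, not the authors' algorithm:

```python
# Sketch: variance of the left-right pupil position difference as a
# simple horizontal disconjugacy metric over a 200-s recording.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 200.0, 200 * 60)              # 200 s at 60 Hz (assumed)
left_x = np.sin(2 * np.pi * t / 40.0)              # pursuit of the aperture
right_x = left_x + rng.normal(0.0, 0.05, t.size)   # small binocular mismatch

horizontal_disconjugacy = np.var(left_x - right_x)
print(f"horizontal disconjugacy: {horizontal_disconjugacy:.4f}")
```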
Naicker, Preshanta; Anoopkumar-Dukie, Shailendra; Grant, Gary D; Modenese, Luca; Kavanagh, Justin J
2017-02-01
Anticholinergic medications largely exert their effects through actions on the muscarinic receptor, which mediates the functions of acetylcholine in the peripheral and central nervous systems. In the central nervous system, acetylcholine plays an important role in the modulation of movement. This study investigated the effects of over-the-counter medications with varying degrees of central anticholinergic properties on fixation stability, saccadic response time and the dynamics associated with this eye movement during a temporally-cued visual reaction time task, in order to establish the significance of central cholinergic pathways in influencing eye movements during reaction time tasks. Twenty-two participants were recruited into the placebo-controlled, double-blind, four-way crossover investigation. Eye tracking technology recorded eye movements while participants reacted to visual stimuli following temporally informative and uninformative cues. The task was performed pre-ingestion as well as 0.5 and 2 h post-ingestion of promethazine hydrochloride (strong centrally acting anticholinergic), hyoscine hydrobromide (moderate centrally acting anticholinergic), hyoscine butylbromide (anticholinergic devoid of central properties) and a placebo. Promethazine decreased fixation stability during the reaction time task. In addition, promethazine was the only drug to increase saccadic response time during temporally informative and uninformative cued trials, with effects on response time more pronounced following temporally informative cues. Promethazine also decreased saccadic amplitude and increased saccadic duration during the temporally-cued reaction time task. Collectively, the results of the study highlight the significant role that central cholinergic pathways play in the control of eye movements during tasks that involve stimulus identification and motor responses following temporal cues.
Yu, Chen; Smith, Linda B.
2013-01-01
The coordination of visual attention among social partners is central to many components of human behavior and human development. Previous research has focused on one pathway to the coordination of looking behavior by social partners, gaze following. The extant evidence shows that even very young infants follow the direction of another's gaze but they do so only in highly constrained spatial contexts because gaze direction is not a spatially precise cue as to the visual target and not easily used in spatially complex social interactions. Our findings, derived from the moment-to-moment tracking of eye gaze of one-year-olds and their parents as they actively played with toys, provide evidence for an alternative pathway, through the coordination of hands and eyes in goal-directed action. In goal-directed actions, the hands and eyes of the actor are tightly coordinated both temporally and spatially, and thus, in contexts including manual engagement with objects, hand movements and eye movements provide redundant information about where the eyes are looking. Our findings show that one-year-olds rarely look to the parent's face and eyes in these contexts but rather infants and parents coordinate looking behavior without gaze following by attending to objects held by the self or the social partner. This pathway, through eye-hand coupling, leads to coordinated joint switches in visual attention and to an overall high rate of looking at the same object at the same time, and may be the dominant pathway through which physically active toddlers align their looking behavior with a social partner. PMID:24236151
OCT angiography by absolute intensity difference applied to normal and diseased human retinas
Ruminski, Daniel; Sikorski, Bartosz L.; Bukowska, Danuta; Szkulmowski, Maciej; Krawiec, Krzysztof; Malukiewicz, Grazyna; Bieganowski, Lech; Wojtkowski, Maciej
2015-01-01
We compare four optical coherence tomography techniques for noninvasive visualization of the microcapillary network in the human retina and murine cortex. We perform phantom studies to investigate the contrast-to-noise ratio for angiographic images obtained with each of the algorithms. We show that the computationally simplest absolute intensity difference angiographic OCT algorithm, which is based only on two cross-sectional intensity images, may be successfully used in clinical studies of healthy eyes and eyes with diabetic maculopathy and branch retinal vein occlusion. PMID:26309740
Using Eye Movement Desensitization and Reprocessing To Enhance Treatment of Couples.
ERIC Educational Resources Information Center
Protinsky, Howard; Sparks, Jennifer; Flemke, Kimberly
2001-01-01
Eye Movement Desensitization and Reprocessing (EMDR) as a clinical technique may enhance treatment effectiveness when applied in couple therapy that is emotionally and experientially oriented. Clinical experience indicates EMDR-based interventions are useful for accessing and reprocessing intense emotions in couple interactions. EMDR can amplify…
CROFT, MARY ANN; HEATLEY, GREGG; MCDONALD, JARED P.; KATZ, ALEXANDER; KAUFMAN, PAUL L.
2016-01-01
Purpose: To elucidate the dynamic accommodative movements of the lens capsule, posterior lens and the strand that attaches to the posterior vitreous zonule insertion zone and posterior lens equator (PVZ INS-LE), and their age-related changes. Methods: Twelve human subjects (ages 19–65 years) and twelve rhesus monkeys (ages 6–27 years) were studied. Accommodation was induced pharmacologically (humans) or by central electrical stimulation (monkeys). Ultrasound biomicroscopy was used to image intraocular structures in both species. Surgical procedures and contrast agents were utilized in the monkey eyes to elucidate function and allow visualization of the intraocular accommodative structures. Results: Human: The posterior pole of the lens moves posteriorly during accommodation in proportion to accommodative amplitude and ciliary muscle movement. Monkey: Similar accommodative movements of the posterior lens pole were seen in the monkey eyes. Following extracapsular lens extraction (ECLE), the central capsule bows backward during accommodation in proportion to accommodative amplitude and ciliary muscle movement, while the peripheral capsule moves forward. During accommodation the ciliary muscle moved forward by ~1.0 mm, pulling forward the vitreous zonule and the PVZ INS-LE structure. During the accommodative response the PVZ INS-LE structure moved forward when the lens was intact and when the lens substance and capsule were removed. In both the monkey and the human eyes these movements declined with age. Conclusions: The accommodative shape change of the central capsule may be due to the elastic properties of the capsule itself. For these capsule/lens accommodative posterior movements to occur, the vitreous face must either allow for it or facilitate it. The PVZ INS-LE structure may act as a “strut” to the posterior lens equator (pushing the lens equator forward) and thereby facilitate accommodative forward lens equator movement and lens thickening. The age-related posterior restriction of the ciliary muscle, vitreous zonule and the PVZ INS-LE structure dampens the accommodative lens shape change. Future descriptions of the accommodative mechanism, and approaches to presbyopia therapy, may need to incorporate these findings. PMID:26769326
Development and experimentation of an eye/brain/task testbed
NASA Technical Reports Server (NTRS)
Harrington, Nora; Villarreal, James
1987-01-01
The principal objective is to develop a laboratory testbed that will provide a unique capability to elicit, control, record, and analyze the relationship of operator task loading, operator eye movement, and operator brain wave data in a computer system environment. The ramifications of an integrated eye/brain monitor for the man-machine interface are staggering. The success of such a system would benefit users in space and defense, paraplegics, and operators who must monitor monotonous displays (nuclear power plants, air defense, etc.).
NASA Astrophysics Data System (ADS)
Felton, E. A.; Radwin, R. G.; Wilson, J. A.; Williams, J. C.
2009-10-01
A brain-computer interface (BCI) is a communication system that takes recorded brain signals and translates them into real-time actions, in this case movement of a cursor on a computer screen. This work applied Fitts' law to the evaluation of performance on a target acquisition task during sensorimotor rhythm-based BCI training. Fitts' law, which has been used as a predictor of movement time in studies of human movement, was used here to determine the information transfer rate, which was based on target acquisition time and target difficulty. The information transfer rate was used to make comparisons between control modalities and subject groups on the same task. Data were analyzed from eight able-bodied and five motor disabled participants who wore an electrode cap that recorded and translated their electroencephalogram (EEG) signals into computer cursor movements. Direct comparisons were made between able-bodied and disabled subjects, and between EEG and joystick cursor control in able-bodied subjects. Fitts' law aptly described the relationship between movement time and index of difficulty for each task movement direction when evaluated separately and averaged together. This study showed that Fitts' law can be successfully applied to computer cursor movement controlled by neural signals.
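Since the throughput computation above follows directly from Fitts' law, a small worked example may help. This is a minimal sketch using the Shannon formulation of the index of difficulty; the distance, width, and timing values are invented for illustration and are not data from the study.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def information_transfer_rate(distance, width, movement_time_s):
    """Throughput in bits/s: target difficulty over acquisition time."""
    return index_of_difficulty(distance, width) / movement_time_s

# Hypothetical trial: the cursor travels 300 px to a 40 px target in 1.8 s.
print(index_of_difficulty(300, 40))             # ~3.09 bits
print(information_transfer_rate(300, 40, 1.8))  # ~1.71 bits/s
```

Averaging this per-trial rate within a condition is what allows the direct comparisons between control modalities (EEG versus joystick) that the abstract describes.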
Hand Grasping Synergies As Biometrics.
Patel, Vrajeshri; Thukral, Poojita; Burns, Martin K; Florescu, Ionut; Chandramouli, Rajarathnam; Vinjamuri, Ramana
2017-01-01
Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. These include iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement, which integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies: postural synergies. In this proof-of-concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security.
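The synergy-extraction step lends itself to a compact sketch. Below is a minimal, hypothetical illustration (not the authors' code): principal component analysis over grasp trials, each flattened from a 10-joint angular-velocity profile into one row, with the per-trial component scores serving as the candidate biometric feature. The trial counts, sampling, and random data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 120 grasp trials, each a 10-joint angular-velocity
# profile sampled at 100 time points and flattened to one row.
rng = np.random.default_rng(0)
trials = rng.standard_normal((120, 100 * 10))

# Leading principal components = candidate spatiotemporal synergies.
pca = PCA(n_components=5)
scores = pca.fit_transform(trials)                # per-trial synergy weights
synergies = pca.components_.reshape(5, 100, 10)   # time-by-joint profiles

# A verification system would compare scores[i] against an enrolled
# template (e.g., by distance thresholding to trade off FAR and FRR,
# which is where an equal error rate like the reported 8.19% comes from).
print(pca.explained_variance_ratio_)
```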
Recognition method of construction conflict based on driver's eye movement.
Xu, Yi; Li, Shiwu; Gao, Song; Tan, Derong; Guo, Dong; Wang, Yuqiong
2018-04-01
Drivers' eye movement data in simulated construction conflicts at different speeds were collected and analyzed to find the relationship between the drivers' eye movements and the construction conflict. On the basis of this relationship, the peak point of the wavelet-processed pupil diameter, the first point on the left side of the peak point, and the first blink point after the peak point are selected as key points for locating construction conflict periods. On the basis of these key points and the GSA, a construction conflict recognition method, termed the CCFRM, is proposed, and its recognition speed and location accuracy are verified. The good performance of the CCFRM confirms the feasibility of the proposed key points for construction conflict recognition. Copyright © 2018 Elsevier Ltd. All rights reserved.
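As a rough illustration of how such key points might be located on a pupil-diameter trace, here is a hedged sketch in Python. A simple moving average stands in for the paper's wavelet processing, a SciPy peak detector finds the peak point, and a rise-onset heuristic plus a caller-supplied blink mask stand in for the other two key points; all names, thresholds, and the onset heuristic are assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import find_peaks

def conflict_key_points(pupil, blink_mask):
    """Return (onset, peak, blink) sample indices on a pupil trace."""
    # Moving average stands in for the paper's wavelet processing.
    smooth = np.convolve(pupil, np.ones(15) / 15, mode="same")
    peaks, _ = find_peaks(smooth, prominence=0.2)
    if len(peaks) == 0:
        return None
    peak = peaks[np.argmax(smooth[peaks])]        # largest pupil-diameter peak
    d = np.diff(smooth[:peak + 1])
    flat = np.where(d <= 0)[0]                    # last non-rising sample
    onset = flat[-1] + 1 if len(flat) else 0      # "first point left of the peak"
    after = np.where(blink_mask[peak:])[0]        # first blink after the peak
    blink = peak + after[0] if len(after) else None
    return onset, peak, blink
```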
Guiding the mind's eye: improving communication and vision by external control of the scanpath
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Dorr, Michael; Böhme, Martin; Gegenfurtner, Karl; Martinetz, Thomas
2006-02-01
Larry Stark has emphasised that what we visually perceive is very much determined by the scanpath, i.e. the pattern of eye movements. Inspired by his view, we have studied the implications of the scanpath for visual communication and came up with the idea to not only sense and analyse eye movements, but also guide them by using a special kind of gaze-contingent information display. Our goal is to integrate gaze into visual communication systems by measuring and guiding eye movements. For guidance, we first predict a set of about 10 salient locations. We then change the probability for one of these candidates to be attended: for one candidate the probability is increased, for the others it is decreased. To increase saliency, for example, we add red dots that are displayed very briefly such that they are hardly perceived consciously. To decrease the probability, for example, we locally reduce the temporal frequency content. Again, if performed in a gaze-contingent fashion with low latencies, these manipulations remain unnoticed. Overall, the goal is to find the real-time video transformation minimising the difference between the actual and the desired scanpath without being obtrusive. Applications are in the area of vision-based communication (better control of what information is conveyed) and augmented vision and learning (guide a person's gaze by the gaze of an expert or a computer-vision system). We believe that our research is very much in the spirit of Larry Stark's views on visual perception and the close link between vision research and engineering.
Assisting Movement Training and Execution With Visual and Haptic Feedback.
Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme
2018-01-01
In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
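A minimal sketch of the core idea, regulating feedback intensity by the likelihood of the observed skill under a model learned from demonstrations, is given below. It uses an independent per-time-step Gaussian in place of the paper's probabilistic trajectory model, and all data, names, and gains are invented for illustration.

```python
import numpy as np

# Hypothetical demonstrations: 20 expert trajectories, 50 time steps each.
rng = np.random.default_rng(1)
demos = np.sin(np.linspace(0, np.pi, 50)) + 0.05 * rng.standard_normal((20, 50))
mu, sigma = demos.mean(axis=0), demos.std(axis=0) + 1e-6

def corrective_feedback(observed, gain=1.0):
    """Feedback toward the expert mean, scaled up the more unlikely the
    observed sample is under the per-step Gaussian fit to the demos."""
    z = (observed - mu) / sigma               # standardized deviation
    weight = 1.0 - np.exp(-0.5 * z ** 2)      # ~0 when likely, ->1 when unlikely
    return gain * weight * (mu - observed)    # push back toward the mean

observed = mu + 0.3                           # a consistently offset attempt
print(corrective_feedback(observed)[:5])
```

In a haptic setting, the returned vector would be rendered as a guiding force, so well-executed portions of the movement receive almost no correction.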
Hlavnička, Jan; Čmejla, Roman; Tykalová, Tereza; Šonka, Karel; Růžička, Evžen; Rusz, Jan
2017-02-02
For generations, the evaluation of speech abnormalities in neurodegenerative disorders such as Parkinson's disease (PD) has been limited to perceptual tests or user-controlled laboratory analysis based upon rather small samples of human vocalizations. Our study introduces a fully automated method that yields significant features related to respiratory deficits, dysphonia, imprecise articulation and dysrhythmia from acoustic microphone data of natural connected speech for predicting early and distinctive patterns of neurodegeneration. We compared speech recordings of 50 subjects with rapid eye movement sleep behaviour disorder (RBD), 30 newly diagnosed, untreated PD patients and 50 healthy controls, and showed that subliminal parkinsonian speech deficits can be reliably captured even in RBD patients, which are at high risk of developing PD or other synucleinopathies. Thus, automated vocal analysis should soon be able to contribute to screening and diagnostic procedures for prodromal parkinsonian neurodegeneration in natural environments.
Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Shelhamer, M.
2001-01-01
It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
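The analysis has two separable stages: fast-phase rejection using an approximate derivative, and estimation of amplitude and phase at the stimulus frequencies. The sketch below illustrates both, but substitutes an ordinary least-squares fit of sine and cosine regressors for the paper's genetic-algorithm step, and a single intercept stands in for the per-segment bias removal that the genetic algorithm performs iteratively; the threshold value and all names are assumptions.

```python
import numpy as np

def fit_slow_phase(t, eye_vel, freqs, accel_thresh=200.0):
    """Drop samples whose velocity derivative exceeds accel_thresh
    (fast phases), then least-squares fit sine/cosine regressors at
    the known stimulus frequencies to the remaining smooth tracking."""
    keep = np.abs(np.gradient(eye_vel, t)) < accel_thresh
    ts, ys = t[keep], eye_vel[keep]
    cols = [np.ones_like(ts)]                 # intercept (bias) term
    for f in freqs:
        cols += [np.sin(2 * np.pi * f * ts), np.cos(2 * np.pi * f * ts)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), ys, rcond=None)
    return {f: (np.hypot(coef[1 + 2 * i], coef[2 + 2 * i]),     # amplitude
                np.arctan2(coef[2 + 2 * i], coef[1 + 2 * i]))   # phase
            for i, f in enumerate(freqs)}
```

For an SOS run, `freqs` would list the component frequencies of the head or target motion, yielding one amplitude/phase pair per component.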
Brain mechanisms controlling decision making and motor planning.
Ramakrishnan, Arjun; Murthy, Aditya
2013-01-01
Accumulator models of decision making provide a unified framework to understand decision making and motor planning. In these models, the evolution of a decision is reflected in the accumulation of sensory information into a motor plan that reaches a threshold, leading to choice behavior. While these models provide an elegant framework to understand performance and reaction times, their ability to explain complex behaviors such as decision making and motor control of sequential movements in dynamic environments is unclear. To examine and probe the limits of online modification of decision making and motor planning, an oculomotor "redirect" task was used. Here, subjects were expected to change their eye movement plan when a new saccade target appeared. Based on task performance, saccade reaction time distributions, computational models of behavior, and intracortical microstimulation of monkey frontal eye fields, we show how accumulator models can be tested and extended to study dynamic aspects of decision making and motor control. Copyright © 2013 Elsevier B.V. All rights reserved.
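As a toy illustration of the accumulator framework described above (not the authors' model of the redirect task), the following simulates noisy evidence accumulating to a threshold and returns the distribution of first-crossing times; all parameter values are arbitrary.

```python
import numpy as np

def accumulator_rts(drift=100.0, noise=30.0, threshold=30.0, dt=0.001,
                    n_trials=1000, max_t=2.0):
    """Simulate noisy accumulation to a threshold; the first-crossing
    time on each trial is the model's reaction time (NaN if no cross)."""
    rng = np.random.default_rng(2)
    steps = int(max_t / dt)
    incr = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, steps))
    paths = np.cumsum(incr, axis=1)
    crossed = paths >= threshold
    first = crossed.argmax(axis=1)
    return np.where(crossed.any(axis=1), first * dt, np.nan)

rts = accumulator_rts()
print(np.nanmean(rts), np.nanstd(rts))   # skewed, RT-like distribution
```

A redirect trial could then be modeled by launching a second accumulator for the new target partway through, with the earlier threshold crossing determining which saccade is executed.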
Bergelt, Julia; Hamker, Fred H
2016-02-01
Understanding the subjective experience of a visually stable world during eye movements has been an important research topic for many years. Various studies were conducted to reveal fundamental mechanisms of this phenomenon. For example, in the paradigm saccadic suppression of displacement (SSD), it has been observed that a small displacement of a saccade target could not easily be reported if this displacement took place during a saccade. New results from Zimmermann et al. (J Neurophysiol 112(12):3066-3076, 2014) show that the effect of being oblivious to small displacements occurs not only during saccades, but also if a mask is introduced while the target is displaced. We address the question of how neurons in the parietal cortex may be connected to each other to account for the SSD effect in experiments involving a saccade and equally well in the absence of an eye movement while perception is disrupted by a mask.
The effect of beat frequency on eye movements during free viewing.
Maróti, Emese; Knakker, Balázs; Vidnyánszky, Zoltán; Weiss, Béla
2017-02-01
External periodic stimuli entrain brain oscillations and affect perception and attention. It has been shown that background music can change oculomotor behavior and facilitate detection of visual objects occurring on the musical beat. However, whether musical beats in different tempi modulate information sampling differently during natural viewing remains to be explored. Here we addressed this question by investigating how listening to naturalistic drum grooves in two different tempi affects eye movements of participants viewing natural scenes on a computer screen. We found that the beat frequency of the drum grooves modulated the rate of eye movements: fixation durations were increased at the lower beat frequency (1.7 Hz) as compared to the higher beat frequency (2.4 Hz) and no music conditions. Correspondingly, estimated visual sampling frequency decreased as fixation durations increased with lower beat frequency. These results imply that slow musical beats can retard sampling of visual information during natural viewing by increasing fixation durations. Copyright © 2016 Elsevier Ltd. All rights reserved.
An extensive dataset of eye movements during viewing of complex images.
Wilming, Niklas; Onat, Selim; Ossandón, José P; Açık, Alper; Kietzmann, Tim C; Kaspar, Kai; Gameiro, Ricardo R; Vormberg, Alexandra; König, Peter
2017-01-31
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7-80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
Henderson, John M; Chanceaux, Myriam; Smith, Tim J
2009-01-23
We investigated the relationship between visual clutter and visual search in real-world scenes. Specifically, we investigated whether visual clutter, indexed by feature congestion, sub-band entropy, and edge density, correlates with search performance as assessed both by traditional behavioral measures (response time and error rate) and by eye movements. Our results demonstrate that clutter is related to search performance. These results hold for both traditional search measures and for eye movements. The results suggest that clutter may serve as an image-based proxy for search set size in real-world scenes.
Binocular summation for reflexive eye movements
Quaia, Christian; Optican, Lance M.; Cumming, Bruce G.
2018-01-01
Psychophysical studies and our own subjective experience suggest that, in natural viewing conditions (i.e., at medium to high contrasts), monocularly and binocularly viewed scenes appear very similar, with the exception of the improved depth perception provided by stereopsis. This phenomenon is usually described as a lack of binocular summation. We show here that there is an exception to this rule: Ocular following eye movements induced by the sudden motion of a large stimulus, which we recorded from three human subjects, are much larger when both eyes see the moving stimulus, than when only one eye does. We further discovered that this binocular advantage is a function of the interocular correlation between the two monocular images: It is maximal when they are identical, and reduced when the two eyes are presented with different images. This is possible only if the neurons that underlie ocular following are sensitive to binocular disparity. PMID:29621384
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-02-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.
Leadership in moving human groups.
Boos, Margarete; Pritz, Johannes; Lange, Simon; Belz, Michael
2014-04-01
How is movement of individuals coordinated as a group? This is a fundamental question of social behaviour, encompassing phenomena such as bird flocking, fish schooling, and the innumerable activities in human groups that require people to synchronise their actions. We have developed an experimental paradigm, the HoneyComb computer-based multi-client game, to empirically investigate human movement coordination and leadership. Using economic games as a model, we set monetary incentives to motivate players on a virtual playfield to reach goals via players' movements. We asked (I) whether humans coordinate their movements when information is limited to an individual group member's observation of adjacent group member motion, (II) whether an informed group minority can lead an uninformed group majority to the minority's goal, and if so, (III) how this minority exerts its influence. We showed that in a human group--on the basis of movement alone--a minority can successfully lead a majority. Minorities lead successfully when (a) their members choose similar initial steps towards their goal field and (b) they are among the first in the whole group to make a move. Using our approach, we empirically demonstrate that the rules of swarming behaviour apply to humans. Even complex human behaviour, such as leadership and directed group movement, follows simple rules that are based on visual perception of local movement.
Theory and simulation design of a type of auto-self-protecting optical switch
NASA Astrophysics Data System (ADS)
Li, Binhong; Peng, Songcun
1990-06-01
As the use of lasers in the military and in the civilian economy increases with each passing day, it is often necessary for the human eye or for sensitive instruments to observe weak laser signals, such as the return waves of laser radar and laser communications signals; at the same time, the eye must be protected against damage from the strong lasers of enemy laser weapons. For this reason, an automatic optical self-protecting switch is needed. Based upon a study of the transmitting and scattering characteristics of multilayer dielectric optical waveguides, a practical computer program is set up for designing a type of auto-self-protecting optical switch, modeled by exploiting the nonlinear properties of the dielectric layers and the plasma behavior of metal substrates. This technique can be used to protect the human eye and sensitive detectors from damage caused by strong laser beams.
Migliaccio, Americo A; Cremer, Phillip D; Aw, Swee T; Halmagyi, G Michael; Curthoys, Ian S; Minor, Lloyd B; Todd, Michael J
2003-07-01
The aim of this study was to determine whether vergence-mediated changes in the axis of eye rotation in the human vestibulo-ocular reflex (VOR) would obey Listing's Law (normally associated with saccadic eye movements) independent of the initial eye position. We devised a paradigm for disassociating the saccadic velocity axis from eye position by presenting near and far targets that were centered with respect to one eye. We measured binocular 3-dimensional eye movements using search coils in ten normal subjects and 3-dimensional linear head acceleration using Optotrak in seven normal subjects. The stimuli consisted of passive, unpredictable, pitch head rotations with peak acceleration of approximately 2000 degrees/s^2 and amplitude of approximately 20 degrees. During the pitch head rotation, each subject fixated straight ahead with one eye, whereas the other eye was adducted 4 degrees during far viewing (94 cm) and 25 degrees during near viewing (15 cm). Our data showed expected compensatory pitch rotations in both eyes, and a vergence-mediated horizontal rotation only in the adducting eye. In addition, during near viewing we observed torsional eye rotations not only in the adducting eye but also in the eye looking straight ahead. In the straight-ahead eye, the change in torsional eye velocity between near and far viewing, which began approximately 40 ms after the start of head rotation, was 10 +/- 6 degrees/s (mean +/- SD). This change in torsional eye velocity resulted in a 2.4 +/- 1.5 degrees axis tilt toward Listing's plane in that eye. In the adducting eye, the change in torsional eye velocity between near and far viewing was 16 +/- 6 degrees/s (mean +/- SD) and resulted in a 4.1 +/- 1.4 degrees axis tilt. The torsional eye velocities were conjugate and both eyes partially obeyed Listing's Law. The axis of eye rotation tilted in the direction of the line of sight by approximately one-third of the angle between the line of sight and a line orthogonal to Listing's plane. This tilt was higher than predicted by the one-quarter rule. The translational acceleration component of the pitch head rotation measured 0.5 g and may have contributed to the increased torsional component observed during near viewing. Our data show that vergence-mediated eye movements obey a VOR/Listing's Law compromise strategy independent of the initial eye position.
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this aim, I have implemented a simple inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one play-back frame per second and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
The perception of heading during eye movements
NASA Technical Reports Server (NTRS)
Royden, Constance S.; Banks, Martin S.; Crowell, James A.
1992-01-01
Warren and Hannon (1988, 1990), while studying the perception of heading during eye movements, concluded that people do not require extraretinal information to judge heading with eye/head movements present. Here, heading judgments are examined at higher, more typical eye movement velocities than the extremely slow tracking eye movements used by Warren and Hannon. It is found that people require extraretinal information about eye position to perceive heading accurately under many viewing conditions.
Validation of Eye Movements Model of NLP through Stressed Recalls.
ERIC Educational Resources Information Center
Sandhu, Daya S.
Neurolinguistic Progamming (NLP) has emerged as a new approach to counseling and psychotherapy. Though not to be confused with computer programming, NLP does claim to program, deprogram, and reprogram clients' behaviors with the precision and expedition akin to computer processes. It is as a tool for therapeutic communication that NLP has rapidly…
A Novel Computer-Based Set-Up to Study Movement Coordination in Human Ensembles
Alderisio, Francesco; Lombardi, Maria; Fiore, Gianfranco; di Bernardo, Mario
2017-01-01
Existing experimental works on movement coordination in human ensembles mostly investigate situations where each subject is connected to all the others through direct visual and auditory coupling, so that unavoidable social interaction affects their coordination level. Here, we present a novel computer-based set-up to study movement coordination in human groups so as to minimize the influence of social interaction among participants and implement different visual pairings between them. In so doing, players can only take into consideration the motion of a designated subset of the others. This allows the evaluation of the exclusive effects on coordination of the structure of interconnections among the players in the group and their own dynamics. In addition, our set-up enables the deployment of virtual computer players to investigate dyadic interaction between a human and a virtual agent, as well as group synchronization in mixed teams of human and virtual agents. We show how this novel set-up can be employed to study coordination both in dyads and in groups over different structures of interconnections, in the presence as well as in the absence of virtual agents acting as followers or leaders. Finally, in order to illustrate the capabilities of the architecture, we describe some preliminary results. The platform is available to any researcher who wishes to unfold the mechanisms underlying group synchronization in human ensembles and shed light on its socio-psychological aspects. PMID:28649217
Krieber, Magdalena; Bartl-Pokorny, Katrin D.; Pokorny, Florian B.; Zhang, Dajie; Landerl, Karin; Körner, Christof; Pernkopf, Franz; Pock, Thomas; Einspieler, Christa; Marschik, Peter B.
2017-01-01
The present study aimed to define differences between silent and oral reading with respect to spatial and temporal eye movement parameters. Eye movements of 22 German-speaking adolescents (14 females; mean age = 13;6 years;months) were recorded while reading an age-appropriate text silently and orally. Preschool cognitive abilities were assessed at the participants’ age of 5;7 (years;months) using the Kaufman Assessment Battery for Children. The participants’ reading speed and reading comprehension at the age of 13;6 (years;months) were determined using a standardized inventory to evaluate silent reading skills in German readers (Lesegeschwindigkeits- und -verständnistest für Klassen 6–12). The results show that (i) reading mode significantly influenced both spatial and temporal characteristics of eye movement patterns; (ii) articulation decreased the consistency of intraindividual reading performances with regard to a significant number of eye movement parameters; (iii) reading skills predicted the majority of eye movement parameters during silent reading, but influenced only a restricted number of eye movement parameters when reading orally; (iv) differences with respect to a subset of eye movement parameters increased with reading skills; (v) an overall preschool cognitive performance score predicted reading skills at the age of 13;6 (years;months), but not eye movement patterns during either silent or oral reading. However, we found a few significant correlations between preschool performances on subscales of sequential and simultaneous processing and eye movement parameters for both reading modes. Overall, the findings suggest that eye movement patterns depend on the reading mode. Preschool cognitive abilities were more closely related to eye movement patterns of oral than silent reading, while reading skills predicted eye movement patterns during silent reading, but less so during oral reading. PMID:28151950
NASA Astrophysics Data System (ADS)
Tera, Akemi; Shirai, Kiyoaki; Yuizono, Takaya; Sugiyama, Kozo
In order to investigate reading processes of Japanese language learners, we have conducted an experiment to record eye movements during Japanese text reading using an eye-tracking system. We showed that Japanese native speakers use “forward and backward jumping eye movements” frequently[13],[14]. In this paper, we analyzed further the same eye tracking data. Our goal is to examine whether Japanese learners fix their eye movements at boundaries of linguistic units such as words, phrases or clauses when they start or end “backward jumping”. We consider conventional linguistic boundaries as well as boundaries empirically defined based on the entropy of the N-gram model. Another goal is to examine the relation between the entropy of the N-gram model and the depth of syntactic structures of sentences. Our analysis shows that (1) Japanese learners often fix their eyes at linguistic boundaries, (2) the average of the entropy is the greatest at the fifth depth of syntactic structures.
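The entropy-based boundary criterion can be made concrete with a small example. The sketch below computes, from character bigram counts, the entropy of the next-character distribution after each character; in the study's spirit, positions of high entropy are candidate unit boundaries. A character bigram model over an English toy string stands in here for the paper's N-gram model over Japanese text, and is an illustrative assumption only.

```python
import math
from collections import Counter, defaultdict

def next_char_entropy(text):
    """Entropy (bits) of the next-character distribution after each
    character, estimated from bigram counts; high-entropy contexts
    suggest unit boundaries."""
    following = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        following[a][b] += 1
    entropy = {}
    for a, dist in following.items():
        total = sum(dist.values())
        entropy[a] = -sum(c / total * math.log2(c / total)
                          for c in dist.values())
    return entropy

H = next_char_entropy("the cat sat on the mat the cat ran")
print(sorted(H.items(), key=lambda kv: -kv[1])[:3])  # most uncertain contexts
```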
Estimation of melanin content in iris of human eye: prognosis for glaucoma diagnostics
NASA Astrophysics Data System (ADS)
Bashkatov, Alexey N.; Koblova, Ekaterina V.; Genina, Elina A.; Kamenskikh, Tatyana G.; Dolotov, Leonid E.; Sinichkin, Yury P.; Tuchin, Valery V.
2007-02-01
Based on experimental data obtained in vivo from digital analysis of color images of human irises, the mean melanin content in human eye irises has been estimated. The color images were recorded with an Olympus C-5060 digital camera, from irises of healthy volunteers as well as from irises of patients with open-angle glaucoma. A computer program has been developed for digital analysis of the images. The results are useful for developing novel methods of non-invasive glaucoma diagnostics and for optimizing existing ones.
Eye Movement Correlates of Acquired Central Dyslexia
ERIC Educational Resources Information Center
Schattka, Kerstin I.; Radach, Ralph; Huber, Walter
2010-01-01
Based on recent progress in theory and measurement techniques, the analysis of eye movements has become one of the major methodological tools in experimental reading research. Our work uses this approach to advance the understanding of impaired information processing in acquired central dyslexia of stroke patients with aphasia. Up to now there has…
Effects of diphenhydramine on human eye movements.
Hopfenbeck, J R; Cowley, D S; Radant, A; Greenblatt, D J; Roy-Byrne, P P
1995-04-01
Peak saccadic eye movement velocity (SEV) and average smooth pursuit gain (SP) are reduced in a dose-dependent manner by diazepam and provide reliable, quantitative measures of benzodiazepine agonist effects. To evaluate the specificity of these eye movement effects for agents acting at the central GABA-benzodiazepine receptor complex and the role of sedation in benzodiazepine effects, we studied eye movement effects of diphenhydramine, a sedating drug which does not act at the GABA-benzodiazepine receptor complex. Ten healthy males, aged 19-28 years, with no history of axis I psychiatric disorders or substance abuse, received 50 mg/70 kg intravenous diphenhydramine or a similar volume of saline on separate days 1 week apart. SEV, saccade latency and accuracy, SP, self-rated sedation, and short-term memory were assessed at baseline and at 5, 15, 30, 45, 60, 90 and 120 min after drug administration. Compared with placebo, diphenhydramine produced significant SEV slowing, and increases in saccade latency and self-rated sedation. There was no significant effect of diphenhydramine on smooth pursuit gain, saccade accuracy, or short-term memory. These results suggest that, like diazepam, diphenhydramine causes sedation, SEV slowing, and an increase in saccade latency. Since the degree of diphenhydramine-induced sedation was not correlated with changes in SEV or saccade latency, slowing of saccadic eye movements is unlikely to be attributable to sedation alone. Unlike diazepam, diphenhydramine does not impair smooth pursuit gain, saccadic accuracy, or memory. Different neurotransmitter systems may influence the neural pathways involved in SEV and smooth pursuit gain.
Rewards modulate saccade latency but not exogenous spatial attention.
Dunne, Stephen; Ellison, Amanda; Smith, Daniel T
2015-01-01
The eye movement system is sensitive to reward. However, whilst the eye movement system is extremely flexible, the extent to which changes to oculomotor behavior induced by reward paradigms persist beyond the training period or transfer to other oculomotor tasks is unclear. To address these issues we examined the effects of presenting feedback that represented small monetary rewards to spatial locations on the latency of saccadic eye movements, the time-course of learning and extinction of the effects of rewarding saccades on exogenous spatial attention and oculomotor inhibition of return. Reward feedback produced a relative facilitation of saccadic latency in a stimulus driven saccade task which persisted for three blocks of extinction trials. However, this hemifield-specific effect failed to transfer to peripheral cueing tasks. We conclude that rewarding specific spatial locations is unlikely to induce long-term, systemic changes to the human oculomotor or attention systems.
The Brainstem Switch for Gaze Shifts in Humans
2001-10-25
Kumar, A. N.; Leigh, R. J.; Ramat, S. (Department of Biomedical Engineering, Case...)
...omnipause neurons during gaze shifts. Using the scleral search coil technique, eye movements were measured in seven normal subjects, as they made voluntary, disjunctive gaze shifts comprising saccades and vergence movements. Conjugate oscillations of small amplitude and high frequency were identified
The Dorsal Visual System Predicts Future and Remembers Past Eye Position
Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart
2016-01-01
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior. PMID:26941617
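The abstract's hypothetical pooling neuron, a weighted sum over population activity that reads out eye position at a chosen time lag, can be sketched as a regression problem. Everything below (the synthetic rates, the ridge penalty, the lag convention) is an illustrative assumption, not the authors' analysis.

```python
import numpy as np

# Hypothetical population: 100 neurons whose rates linearly carry eye
# position plus noise; learn pooling weights that read out eye position
# at a chosen lag (positive = predictive, negative = postdictive).
rng = np.random.default_rng(5)
T, N = 2000, 100
eye = np.cumsum(rng.standard_normal(T)) * 0.1
rates = np.outer(eye, rng.standard_normal(N)) + 0.5 * rng.standard_normal((T, N))

def fit_readout(rates, eye, lag=0, ridge=1.0):
    """Ridge-regression pooling weights for a time-lagged eye-position signal."""
    X, y = ((rates[:-lag], eye[lag:]) if lag > 0 else
            (rates[-lag:], eye[:lag]) if lag < 0 else
            (rates, eye))
    return np.linalg.solve(X.T @ X + ridge * np.eye(rates.shape[1]), X.T @ y)

w = fit_readout(rates, eye, lag=25)                # a predictive readout
print(np.corrcoef(rates[:-25] @ w, eye[25:])[0, 1])
```

Changing `lag` while keeping the same population illustrates the paper's point that one set of neurons can simultaneously support past, current, and future eye-position estimates under different pooling weights.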
... t work properly. There are many kinds of eye movement disorders. Two common ones are Strabismus - a disorder ... of the eyes, sometimes called "dancing eyes." Some eye movement disorders are present at birth. Others develop over ...
Computer modeling and simulation of human movement. Applications in sport and rehabilitation.
Neptune, R R
2000-05-01
Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.
Semantic guidance of eye movements in real-world scenes
Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc
2011-01-01
The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying Latent Semantic Analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects’ gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects’ eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. PMID:21426914
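To make the semantic-saliency construction concrete, here is a hedged sketch: scenes are treated as documents of object labels, an LSA space is built with a truncated SVD of the term-document matrix, and the saliency values of candidate objects are their cosine similarities to the currently fixated label. The toy scenes and dimensionality are invented; the actual study used LabelMe annotations rather than these strings.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: each "document" is the list of object labels in one scene.
scenes = ["car road tree sign", "desk lamp chair computer",
          "car truck road cone", "lamp sofa chair television"]

vect = CountVectorizer().fit(scenes)
term_doc = vect.transform(scenes).T.astype(float)   # terms x scenes
lsa = TruncatedSVD(n_components=3, random_state=0).fit_transform(term_doc)
index = {w: i for i, w in enumerate(vect.get_feature_names_out())}

def semantic_saliency(fixated, candidates):
    """Cosine similarity of each candidate label to the fixated label."""
    sims = cosine_similarity(lsa[[index[fixated]]],
                             lsa[[index[c] for c in candidates]])[0]
    return dict(zip(candidates, sims))

print(semantic_saliency("car", ["road", "chair", "truck"]))
```

Assigning each scene object its similarity score yields the kind of semantic saliency map that the ROC analysis in the study evaluates against observed gaze transitions.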
An auditory brain-computer interface evoked by natural speech
NASA Astrophysics Data System (ADS)
Lopez-Gordo, M. A.; Fernandez, E.; Romero, S.; Pelayo, F.; Prieto, Alberto
2012-06-01
Brain-computer interfaces (BCIs) are mainly intended for people unable to perform any muscular movement, such as patients in a complete locked-in state. The majority of BCIs interact visually with the user, either in the form of stimulation or biofeedback. However, the visual channel limits their ultimate use, because visual BCIs require subjects to gaze, explore and shift eye-gaze using their muscles, thus excluding patients in a complete locked-in state or with unresponsive wakefulness syndrome. In this study, we present a novel fully auditory EEG-BCI based on a dichotic listening paradigm using human voice for stimulation. This interface has been evaluated with healthy volunteers, achieving an average information transmission rate of 1.5 bits/min in full-length trials and 2.7 bits/min using the optimal trial length, recorded with only one channel and without formal training. This novel technique opens the door to more natural communication with users unable to use visual BCIs, with promising results in terms of performance, usability, training and cognitive effort.
Pioneers of eye movement research
Wade, Nicholas J
2010-01-01
Recent advances in the technology affording eye movement recordings carry the risk of neglecting past achievements. Without the assistance of this modern armoury, great strides were made in describing the ways the eyes move. For Aristotle the fundamental features of eye movements were binocular, and he described the combined functions of the eyes. This was later given support using simple procedures like placing a finger over the eyelid of the closed eye and culminated in Hering's law of equal innervation. However, the overriding concern in the 19th century was with eye position rather than eye movements. Appreciating discontinuities of eye movements arose from studies of vertigo. The characteristics of nystagmus were recorded before those of saccades and fixations. Eye movements during reading were described by Hering and by Lamare in 1879; both used similar techniques of listening to sounds made during contractions of the extraocular muscles. Photographic records of eye movements during reading were made by Dodge early in the 20th century, and this stimulated research using a wider array of patterns. In the mid-20th century attention shifted to the stability of the eyes during fixation, with the emphasis on involuntary movements. The contributions of pioneers from Aristotle to Yarbus are outlined. PMID:23396982
NASA Technical Reports Server (NTRS)
Das, V. E.; Thomas, C. W.; Zivotofsky, A. Z.; Leigh, R. J.
1996-01-01
Video-based eye-tracking systems are especially suited to studying eye movements during naturally occurring activities such as locomotion, but eye velocity records suffer from broad band noise that is not amenable to conventional filtering methods. We evaluated the effectiveness of combined median and moving-average filters by comparing prefiltered and postfiltered records made synchronously with a video eye-tracker and the magnetic search coil technique, which is relatively noise free. Root-mean-square noise was reduced by half, without distorting the eye velocity signal. To illustrate the practical use of this technique, we studied normal subjects and patients with deficient labyrinthine function and compared their ability to hold gaze on a visual target that moved with their heads (cancellation of the vestibulo-ocular reflex). Patients and normal subjects performed similarly during active head rotation but, during locomotion, patients held their eyes more steadily on the visual target than did subjects.
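The combined filter evaluated in the study is straightforward to prototype. Below is a sketch using SciPy's median and moving-average (uniform) filters on a synthetic pursuit-like velocity trace; the window sizes, sampling rate, and noise level are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter1d

def denoise_eye_velocity(vel, median_width=5, box_width=5):
    """Median stage suppresses impulsive tracker noise; the moving
    average then smooths the residual broadband noise."""
    return uniform_filter1d(median_filter(vel, size=median_width), box_width)

# Synthetic check: 2 Hz pursuit-like velocity sampled at 250 Hz plus
# broadband noise; compare RMS error before and after filtering.
t = np.arange(0, 2, 0.004)
clean = 20 * np.sin(2 * np.pi * 2 * t)
noisy = clean + 8 * np.random.default_rng(3).standard_normal(t.size)
rms = lambda x: np.sqrt(np.mean(x ** 2))
print(rms(noisy - clean), rms(denoise_eye_velocity(noisy) - clean))
```

Keeping both windows short relative to the slowest eye-velocity components is what lets this kind of filter halve the noise without distorting the signal, as the abstract reports.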
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Jet Propulsion Laboratory's collaborated with LC Technologies, Inc., to improve LCT's Eyegaze Communication System, an eye tracker that enables people with severe cerebral palsy, muscular dystrophy, multiple sclerosis, strokes, brain injuries, spinal cord injuries, and ALS (amyotrophic lateral sclerosis) to communicate and control their environment using their eye movements. To operate the system, the user sits in front of the computer monitor while the camera focuses on one eye. By looking at control keys on the monitor for a fraction of a second, the user can 'talk' with speech synthesis, type, operate a telephone, access the Internet and e-mail, and run computer software. Nothing is attached to the user's head or body, and the improved size and portability allow the system to be mounted on a wheelchair. LCT and JPL are working on several other areas of improvement that have commercial add-on potential.
Countermanding eye-head gaze shifts in humans: marching orders are delivered to the head first.
Corneil, Brian D; Elsley, James K
2005-07-01
The countermanding task requires subjects to cancel a planned movement on appearance of a stop signal, providing insights into response generation and suppression. Here, we studied human eye-head gaze shifts in a countermanding task with targets located beyond the horizontal oculomotor range. Consistent with head-restrained saccadic countermanding studies, the proportion of gaze shifts on stop trials increased the longer the stop signal was delayed after target presentation, and gaze shift stop-signal reaction times (SSRTs: a derived statistic measuring how long it takes to cancel a movement) averaged approximately 120 ms across seven subjects. We also observed a marked proportion of trials (13% of all stop trials) during which gaze remained stable but the head moved toward the target. Such head movements were more common at intermediate stop signal delays. We never observed the converse sequence wherein gaze moved while the head remained stable. SSRTs for head movements averaged approximately 190 ms or approximately 70-75 ms longer than gaze SSRTs. Although our findings are inconsistent with a single race to threshold as proposed for controlling saccadic eye movements, movement parameters on stop trials attested to interactions consistent with a race model architecture. To explain our data, we tested two extensions to the saccadic race model. The first assumed that gaze shifts and head movements are controlled by parallel but independent races. The second model assumed that gaze shifts and head movements are controlled by a single race, preceded by terminal ballistic intervals not under inhibitory control, and that the head-movement branch is activated at a lower threshold. Although simulations of both models produced acceptable fits to the empirical data, we favor the second alternative as it is more parsimonious with recent findings in the oculomotor system. Using the second model, estimates for gaze and head ballistic intervals were approximately 25 and 90 ms, respectively, consistent with the known physiology of the final motor paths. Further, the threshold of the head movement branch was estimated to be 85% of that required to activate gaze shifts. From these results, we conclude that a commitment to a head movement is made in advance of gaze shifts and that the comparative SSRT differences result primarily from biomechanical differences inherent to eye and head motion.
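A compact way to see how the second model behaves is to simulate the race with a ballistic interval. In the sketch below, a movement escapes inhibition when the GO process reaches its point of no return (finish time minus the ballistic interval) before the stop process (stop-signal delay plus SSRT) completes; all parameter values are illustrative, not the paper's fits.

```python
import numpy as np

def p_respond(ssd_ms, go_mu=250.0, go_sd=50.0, ssrt=120.0,
              ballistic=25.0, n=100_000):
    """Independent-race sketch: probability of responding on a stop
    trial as a function of stop-signal delay (SSD), all in ms."""
    rng = np.random.default_rng(4)
    commit = rng.normal(go_mu, go_sd, n) - ballistic  # point of no return
    return float(np.mean(commit < ssd_ms + ssrt))

for ssd in (50, 100, 150, 200):
    print(ssd, round(p_respond(ssd), 3))   # rises with stop-signal delay
```

In this parameterization, the paper's lower threshold for the head branch corresponds to an earlier point of no return for the head than for gaze, which reproduces stop trials in which the head moves toward the target while gaze holds.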
Evidence for object permanence in the smooth-pursuit eye movements of monkeys.
Churchland, Mark M; Chou, I-Han; Lisberger, Stephen G
2003-10-01
We recorded the smooth-pursuit eye movements of monkeys in response to targets that were extinguished (blinked) for 200 ms in mid-trajectory. Eye velocity declined considerably during the target blinks, even when the blinks were completely predictable in time and space. Eye velocity declined whether blinks were presented during steady-state pursuit of a constant-velocity target, during initiation of pursuit before target velocity was reached, or during eye accelerations induced by a change in target velocity. When a physical occluder covered the trajectory of the target during blinks, creating the impression that the target moved behind it, the decline in eye velocity was reduced or abolished. If the target was occluded once the eye had reached target velocity, pursuit was only slightly poorer than normal, uninterrupted pursuit. In contrast, if the target was occluded during the initiation of pursuit, while the eye was accelerating toward target velocity, pursuit during occlusion was very different from normal pursuit. Eye velocity remained relatively stable during target occlusion, showing much less acceleration than normal pursuit and much less of a decline than was produced by a target blink. Anticipatory or predictive eye acceleration was typically observed just prior to the reappearance of the target. Computer simulations show that these results are best understood by assuming that a mechanism of eye-velocity memory remains engaged during target occlusion but is disengaged during target blinks.
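A toy simulation makes the proposed memory mechanism concrete. The sketch below assumes a simple hold-versus-leaky-decay rule with an arbitrary 80 ms decay constant; it is not the authors' simulation, only an illustration of engaged versus disengaged eye-velocity memory.

```python
import numpy as np

def gap_velocity(v0=10.0, dt=0.001, dur=0.2, memory_engaged=True, tau=0.08):
    """Eye velocity across a 200 ms gap in target visibility.

    Engaged memory (occlusion) holds velocity; disengaged memory (blink)
    lets it decay toward zero with an assumed 80 ms time constant."""
    v = [v0]
    for _ in range(int(dur / dt) - 1):
        v.append(v[-1] if memory_engaged else v[-1] * (1 - dt / tau))
    return np.array(v)

print("end of occlusion:", gap_velocity(memory_engaged=True)[-1], "deg/s")
print("end of blink:    ", round(gap_velocity(memory_engaged=False)[-1], 2), "deg/s")
```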
A 2D eye gaze estimation system with low-resolution webcam images
NASA Astrophysics Data System (ADS)
Ince, Ibrahim Furkan; Kim, Jin Woo
2011-12-01
In this article, a low-cost system for 2D eye gaze estimation with low-resolution webcam images is presented. Two algorithms are proposed for this purpose: one for eyeball detection with a stable approximate pupil center, and the other for detecting the direction of eye movements. The eyeball is detected using the deformable angular integral search by minimum intensity (DAISMI) algorithm. The deformable template-based 2D gaze estimation (DTBGE) algorithm is employed as a noise filter for deciding on stable movement decisions. While DTBGE employs binary images, DAISMI employs gray-scale images. Right- and left-eye estimates are evaluated separately. DAISMI finds the stable approximate pupil-center location by calculating the mass center of the eyeball border vertices, which is used for the initial deformable template alignment. DTBGE starts from this initial alignment and updates the template alignment with the resulting eye movements and eyeball size frame by frame. The horizontal and vertical deviation of eye movements relative to eyeball size is treated as directly proportional to the deviation of cursor movements for a given screen size and resolution. The core advantage of the system is that it does not employ the real pupil center as a reference point for gaze estimation, which makes it more robust to corneal reflections. Visual angle accuracy is used for the evaluation and benchmarking of the system. The effectiveness of the proposed system is presented and experimental results are shown.
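The mass-center step of DAISMI can be illustrated simply: given detected border vertices, the approximate pupil center is their centroid. The snippet below sketches that one step only (the deformable angular integral search itself is not reproduced), and the synthetic circle data are assumed.

```python
import numpy as np

def approximate_pupil_center(border_vertices):
    """Approximate pupil center as the mass center (centroid) of detected
    eyeball-border vertices, following the idea described for DAISMI.

    border_vertices: (N, 2) array of (x, y) border points in the eye image.
    """
    pts = np.asarray(border_vertices, dtype=float)
    return pts.mean(axis=0)

# Example: noisy points on a circle of radius 20 centered at (64, 48).
rng = np.random.default_rng(2)
angles = rng.uniform(0, 2 * np.pi, 40)
pts = np.c_[64 + 20 * np.cos(angles), 48 + 20 * np.sin(angles)]
pts += rng.normal(0, 1.0, pts.shape)   # detection noise
print(approximate_pupil_center(pts))   # approximately (64, 48)
```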
Initial eye movements during face identification are optimal and similar across cultures
Or, Charles C.-F.; Peterson, Matthew F.; Eckstein, Miguel P.
2015-01-01
Culture influences not only human high-level cognitive processes but also low-level perceptual operations. Some perceptual operations, such as initial eye movements to faces, are critical for extraction of information supporting evolutionarily important tasks such as face identification. The extent of cultural effects on these crucial perceptual processes is unknown. Here, we report that the first gaze location for face identification was similar across East Asian and Western Caucasian cultural groups: Both fixated a featureless point between the eyes and the nose, with smaller between-group than within-group differences and with a small horizontal difference across cultures (8% of the interocular distance). We also show that individuals of both cultural groups initially fixated at a slightly higher point on Asian faces than on Caucasian faces. The initial fixations were found to be both fundamental in acquiring the majority of information for face identification and optimal, as accuracy deteriorated when observers held their gaze away from their preferred fixations. An ideal observer that integrated facial information with the human visual system's varying spatial resolution across the visual field showed a similar information distribution across faces of both races and predicted initial human fixations. The model consistently replicated the small vertical difference between human fixations to Asian and Caucasian faces but did not predict the small horizontal leftward bias of Caucasian observers. Together, the results suggest that initial eye movements during face identification may be driven by brain mechanisms aimed at maximizing accuracy, and less influenced by culture. The findings increase our understanding of the interplay between the brain's aims to optimally accomplish basic perceptual functions and to respond to sociocultural influences. PMID:26382003
Eye movement perimetry in glaucoma.
Trope, G E; Eizenman, M; Coyle, E
1989-08-01
Present-day computerized perimetry is often inaccurate and unreliable owing to the need to maintain central fixation over long periods while repressing the normal response to presentation of peripheral stimuli. We tested a new method of perimetry that does not require prolonged central fixation. During this test eye movements were encouraged on presentation of a peripheral target. Twenty-three eyes were studied with an Octopus perimeter, with a technician monitoring eye movements. The sensitivity was 100% and the specificity 23%. The low specificity was due to the technician's inability to accurately monitor small eye movements in the central 6 degrees field. If small eye movements are monitored accurately with an eye tracker, eye movement perimetry could become an alternative method to standard perimetry.
Influence of social presence on eye movements in visual search tasks.
Liu, Na; Yu, Ruifeng
2017-12-01
This study employed an eye-tracking technique to investigate the influence of social presence on eye movements in visual search tasks. A total of 20 male subjects performed visual search tasks in a 2 (target presence: present vs. absent) × 2 (task complexity: complex vs. simple) × 2 (social presence: alone vs. a human audience) within-subject experiment. Results indicated that the presence of an audience could evoke a social facilitation effect on response time in visual search tasks. Compared with working alone, the participants made fewer and shorter fixations, larger saccades and shorter scan path in simple search tasks and more and longer fixations, smaller saccades and longer scan path in complex search tasks when working with an audience. The saccade velocity and pupil diameter in the audience-present condition were larger than those in the working-alone condition. No significant change in target fixation number was observed between two social presence conditions. Practitioner Summary: This study employed an eye-tracking technique to examine the influence of social presence on eye movements in visual search tasks. Results clarified the variation mechanism and characteristics of oculomotor scanning induced by social presence in visual search.
Gandhi, Neeraj J; Barton, Ellen J; Sparks, David L
2008-07-01
Constant frequency microstimulation of the paramedian pontine reticular formation (PPRF) in head-restrained monkeys evokes a constant velocity eye movement. Since the PPRF receives significant projections from structures that control coordinated eye-head movements, we asked whether stimulation of the pontine reticular formation in the head-unrestrained animal generates a combined eye-head movement or only an eye movement. Microstimulation of most sites yielded a constant-velocity gaze shift executed as a coordinated eye-head movement, although eye-only movements were evoked from some sites. The eye and head contributions to the stimulation-evoked movements varied across stimulation sites and were drastically different from the lawful relationship observed for visually-guided gaze shifts. These results indicate that the microstimulation activated elements that issued movement commands to the extraocular and, for most sites, neck motoneurons. In addition, the stimulation-evoked changes in gaze were similar in the head-restrained and head-unrestrained conditions despite the assortment of eye and head contributions, suggesting that the vestibulo-ocular reflex (VOR) gain must be near unity during the coordinated eye-head movements evoked by stimulation of the PPRF. These findings contrast with the attenuation of VOR gain associated with visually-guided gaze shifts and suggest that the vestibulo-ocular pathway processes volitional and PPRF stimulation-evoked gaze shifts differently.
Marti, Sarah; Straumann, Dominik; Glasauer, Stefan
2005-04-01
Various hypotheses on the origin of cerebellar downbeat nystagmus (DBN) have been presented; the exact pathomechanism, however, is still not known. Based on previous anatomical and electrophysiological studies, we propose that an asymmetry in the distribution of on-directions of vertical gaze-velocity Purkinje cells leads to spontaneous upward ocular drift in cerebellar disease, and therefore, to DBN. Our hypothesis is supported by a computational model for vertical eye movements.
An SSVEP-actuated brain computer interface using phase-tagged flickering sequences: a cursor system.
Lee, Po-Lei; Sie, Jyun-Jie; Liu, Yu-Ju; Wu, Chi-Hsun; Lee, Ming-Huan; Shu, Chih-Hung; Li, Po-Hung; Sun, Chia-Wei; Shyu, Kuo-Kai
2010-07-01
This study presents a new steady-state visual evoked potential (SSVEP)-based brain computer interface (BCI). SSVEPs, induced by phase-tagged flashes in eight light emitting diodes (LEDs), were used to control four cursor movements (up, right, down, and left) and four button functions (on, off, right-, and left-clicks) on a screen menu. EEG signals were measured by one EEG electrode placed at Oz position, referring to the international EEG 10-20 system. Since SSVEPs are time-locked and phase-locked to the onsets of SSVEP flashes, EEG signals were bandpass-filtered and segmented into epochs, and then averaged across a number of epochs to sharpen the recorded SSVEPs. Phase lags between the measured SSVEPs and a reference SSVEP were measured, and targets were recognized based on these phase lags. The current design used eight LEDs to flicker at 31.25 Hz with 45 degrees phase margin between any two adjacent SSVEP flickers. The SSVEP responses were filtered within 29.25-33.25 Hz and then averaged over 60 epochs. Owing to the utilization of high-frequency flickers, the induced SSVEPs were away from low-frequency noises, 60 Hz electricity noise, and eye movement artifacts. As a consequence, we achieved a simple architecture that did not require eye movement monitoring or other artifact detection and removal. The high-frequency design also achieved a flicker fusion effect for better visualization. Seven subjects were recruited in this study to sequentially input a command sequence, consisting of a sequence of eight cursor functions, repeated three times. The accuracy and information transfer rate (mean +/- SD) over the seven subjects were 93.14 +/- 5.73% and 28.29 +/- 12.19 bits/min, respectively. The proposed system can provide a reliable channel for severely disabled patients to communicate with external environments.
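The phase-tagged recognition step can be sketched as a single-bin DFT at the common 31.25 Hz flicker frequency, followed by nearest-phase-bin assignment over the eight 45 degree targets. The sampling rate and synthetic signals below are assumptions, and this is not the authors' code.

```python
import numpy as np

FS = 250.0          # sampling rate in Hz (assumed)
F_TAG = 31.25       # common flicker frequency of all targets (Hz)

def ssvep_phase(epoch_avg):
    """Phase of the averaged SSVEP at the tag frequency via a single-bin DFT."""
    t = np.arange(len(epoch_avg)) / FS
    return np.angle(np.sum(epoch_avg * np.exp(-2j * np.pi * F_TAG * t)))

def classify_target(epoch_avg, reference_phase, n_targets=8):
    """Map the phase lag (delay relative to the reference SSVEP) to the
    nearest of n_targets equally spaced 45 degree phase bins."""
    lag = (reference_phase - ssvep_phase(epoch_avg)) % (2 * np.pi)
    step = 2 * np.pi / n_targets
    return int(round(lag / step)) % n_targets

# Example: a synthetic SSVEP delayed by 90 degrees maps to target 2.
t = np.arange(int(FS)) / FS
ref = np.sin(2 * np.pi * F_TAG * t)
meas = np.sin(2 * np.pi * F_TAG * t - np.pi / 2)
print(classify_target(meas, ssvep_phase(ref)))   # -> 2
```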
High-speed adaptive optics line scan confocal retinal imaging for human eye.
Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated from the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.
The Effectiveness of Gaze-Contingent Control in Computer Games.
Orlov, Paul A; Apraksin, Nikolay
2015-01-01
Eye-tracking technology and gaze-contingent control in human-computer interaction have become an objective reality. This article reports on a series of eye-tracking experiments in which we concentrated on one aspect of gaze-contingent interaction: its effectiveness compared with mouse-based control in a computer strategy game. We propose a measure for evaluating the effectiveness of interaction based on "the time of recognition" of a game unit. In this article, we use this measure to compare gaze- and mouse-contingent systems, and we present an analysis of the differences as a function of the number of game units. Our results indicate that the performance of gaze-contingent interaction is typically higher than that of mouse manipulation in a visual search task. When tested on 60 subjects, the results showed that the effectiveness of gaze-contingent systems was over 1.5 times higher. In addition, we found that eye behavior remains quite stable with or without mouse interaction. © The Author(s) 2015.
Target Selection by the Frontal Cortex during Coordinated Saccadic and Smooth Pursuit Eye Movements
ERIC Educational Resources Information Center
Srihasam, Krishna; Bullock, Daniel; Grossberg, Stephen
2009-01-01
Oculomotor tracking of moving objects is an important component of visually based cognition and planning. Such tracking is achieved by a combination of saccades and smooth-pursuit eye movements. In particular, the saccadic and smooth-pursuit systems interact to often choose the same target, and to maximize its visibility through time. How do…
NASA Technical Reports Server (NTRS)
Badler, N. I.
1985-01-01
Human motion analysis is the task of converting actual human movements into computer-readable data. Such movement information may be obtained through active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing decouples the position-measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.
Visidep (TM): A Three-Dimensional Imaging System For The Unaided Eye
NASA Astrophysics Data System (ADS)
McLaurin, A. Porter; Jones, Edwin R.; Cathey, LeConte
1984-05-01
The VISIDEP process for creating images in three dimensions on flat screens is suitable for photographic, electrographic and computer generated imaging systems. Procedures for generating these images vary from medium to medium due to the specific requirements of each technology. Imaging requirements for photographic and electrographic media are more directly tied to the hardware than are computer based systems. Applications of these technologies are not limited to entertainment, but have implications for training, interactive computer/video systems, medical imaging, and inspection equipment. Through minor modification the system can provide three-dimensional images with accurately measurable relationships for robotics, adding this capability for future developments in artificial intelligence. In almost any area requiring image analysis or critical review, VISIDEP provides the added advantage of three-dimensionality. All of this is readily accomplished without aids to the human eye. The system can be viewed in full color, false-color infra-red, and monochromatic modalities from any angle and is also viewable with a single eye. Thus, the potential of application for this developing system is extensive and covers the broad spectrum of human endeavor from entertainment to scientific study.
Multiple levels of representation of reaching in the parieto-frontal network.
Battaglia-Mayer, Alexandra; Caminiti, Roberto; Lacquaniti, Francesco; Zago, Myrka
2003-10-01
In daily life, hand and eye movements occur in different contexts. Hand movements can be made to a visual target shortly after its presentation, or after a longer delay; alternatively, they can be made to a memorized target location. In both instances, the hand can move in a visually structured scene under normal illumination, which allows visual monitoring of its trajectory, or in darkness. Across these conditions, movement can be directed to points in space already foveated, or to extrafoveal ones, thus requiring different forms of eye-hand coordination. The ability to adapt to these different contexts by providing successful answers to their demands probably resides in the high degree of flexibility of the operations that govern cognitive visuomotor behavior. The neurophysiological substrates of these processes include, among others, the context-dependent nature of neural activity, and a transitory, or task-dependent, affiliation of neurons to the assemblies underlying different forms of sensorimotor behavior. Moreover, the ability to make independent or combined eye and hand movements in the appropriate order and time sequence must reside in a process that encodes retinal-, eye- and hand-related inputs in a spatially congruent fashion. This process, in fact, requires exact knowledge of where the eye and the hand are at any given time, although we have little or no conscious experience of where they are at any instant. How this information is reflected in the activity of cortical neurons remains a central question for understanding the mechanisms underlying the planning of eye-hand movements in the cerebral cortex. In the last 10 years, psychophysical analyses in humans, as well as neurophysiological studies in monkeys, have provided new insights into the mechanisms of different forms of oculo-manual actions. These studies have also offered preliminary hints as to the cortical substrates of eye-hand coordination. In this review, we will highlight some of the results obtained as well as some of the questions raised, focusing on the role of eye- and hand-tuning signals in cortical neural activity. This choice rests on the crucial role this information exerts in the specification of movement, and in coordinate transformation.
Yamada, T; Suzuki, D A; Yee, R D
1996-11-01
1. Smooth pursuit-like eye movements were evoked with low-current microstimulation delivered to rostral portions of the nucleus reticularis tegmenti pontis (rNRTP) in alert macaques. Microstimulation sites were selected by the observation of modulations in single-cell firing rates that were correlated with periodic smooth-pursuit eye movements. Current intensities ranged from 10 to 120 microA and were routinely < 40 microA. Microstimulation was delivered either in the dark with no fixation, 100 ms after a fixation target was extinguished, or during maintained fixation of a stationary or moving target. Evoked eye movements also were studied under open-loop conditions with the target image stabilized on the retina. 2. Eye movements evoked in the absence of a target rapidly accelerated to a constant velocity that was maintained for the duration of the microstimulation. Evoked eye speeds ranged from 3.7 to 23 deg/s and averaged 11 deg/s. Evoked eye speed appeared to be linearly related to initial eye position, with a sensitivity to initial eye position that averaged 0.23 deg.s-1.deg-1. While some horizontal and oblique smooth eye movements were elicited, microstimulation resulted in upward eye movements at 89% of the sites. 3. Evoked eye speed was found to depend on microstimulation pulse frequency and current intensity. Within limits, evoked eye speed increased with increases in stimulation frequency or current intensity. For stimulation frequencies < 300-400 Hz, only smooth pursuit-like eye movements were evoked. At higher stimulation frequencies, accompanying saccades consistently were elicited. 4. Feedback of retinal image motion interacted with the evoked eye movements to decrease eye speed if the visual motion was in the direction opposite to the evoked, pursuit-like eye movements. 5. The results implicate rNRTP as part of the neuronal substrate that controls smooth-pursuit eye movements. NRTP appears to be divided functionally into a rostral, pursuit-related portion and a caudal, saccade-related area. rNRTP is a component of a corticopontocerebellar circuit that presumably involves the pursuit area of the frontal eye field and that parallels the middle and medial superior temporal cerebral cortical/dorsolateral pontine nucleus (MT/MST-DLPN-cerebellum) pathway also known to be involved in regulating smooth-pursuit eye movements.
ERIC Educational Resources Information Center
Tseng, Min-chen
2014-01-01
This study investigated the online reading performances and the level of visual fatigue from the perspective of non-native speaking students (NNSs). Reading on a computer screen is visually more demanding than reading printed text. Online reading requires frequent saccadic eye movements and imposes continuous focusing and alignment demand.…
The MicronEye Motion Monitor: A New Tool for Class and Laboratory Demonstrations.
ERIC Educational Resources Information Center
Nissan, M.; And Others
1988-01-01
Describes a special camera that can be directly linked to a computer that has been adapted for studying movement. Discusses capture, processing, and analysis of two-dimensional data with either IBM PC or Apple II computers. Gives examples of a variety of mechanical tests including pendulum motion, air track, and air table. (CW)
Knight, T A
2012-12-06
The frontal eye field (FEF) has a strong influence on saccadic eye movements with the head restrained. With the head unrestrained, eye saccades combine with head movements to produce large gaze shifts, and microstimulation of the FEF evokes both eye and head movements. To test whether the dorsomedial FEF provides commands for the entire gaze shift or its separate eye and head components, we recorded extracellular single-unit activity in monkeys trained to make large head-unrestrained gaze shifts. We recorded 80 units active during gaze shifts, and closely examined 26 of these that discharged a burst of action potentials that preceded horizontal gaze movements. These units were movement or visuomovement related and most exhibited open movement fields with respect to amplitude. To reveal the relations of burst parameters to gaze, eye, and/or head movement metrics, we used behavioral dissociations of gaze, eye, and head movements and linear regression analyses. The burst number of spikes (NOS) was strongly correlated with movement amplitude and burst temporal parameters were strongly correlated with movement temporal metrics for eight gaze-related burst neurons and five saccade-related burst neurons. For the remaining 13 neurons, the NOS was strongly correlated with the head movement amplitude, but burst temporal parameters were most strongly correlated with eye movement temporal metrics (head-eye-related burst neurons, HEBNs). These results suggest that FEF units do not encode a command for the unified gaze shift only; instead, different units may carry signals related to the overall gaze shift or its eye and/or head components. Moreover, the HEBNs exhibit bursts whose magnitude and timing may encode a head displacement signal and a signal that influences the timing of the eye saccade, thereby serving as a mechanism for coordinating the eye and head movements of a gaze shift. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.
A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids
NASA Astrophysics Data System (ADS)
Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias
2015-10-01
A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze-independent BCI paradigm that can potentially be used for such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, who selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be reliably modulated by attention to a target stimulus in an eyes-closed and gaze-independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze-independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.
Parker, Andrew; Relph, Sarah; Dagnall, Neil
2008-01-01
Two experiments are reported that investigate the effects of saccadic bilateral eye movements on the retrieval of item, associative, and contextual information. Experiment 1 compared the effects of bilateral versus vertical versus no eye movements on tests of item recognition, followed by remember-know responses and associative recognition. Supporting previous research, bilateral eye movements enhanced item recognition by increasing the hit rate and decreasing the false alarm rate. Analysis of remember-know responses indicated that eye movement effects were accompanied by increases in remember responses. The test of associative recognition found that bilateral eye movements increased correct responses to intact pairs and decreased false alarms to rearranged pairs. Experiment 2 assessed the effects of eye movements on the recall of intrinsic (color) and extrinsic (spatial location) context. Bilateral eye movements increased correct recall for both types of context. The results are discussed within the framework of dual-process models of memory and the possible neural underpinnings of these effects are considered.
Eye movements reveal epistemic curiosity in human observers.
Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2015-12-01
Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity, the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity, that is, interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database, allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Application of the System Identification Technique to Goal-Directed Saccades.
1985-07-01
Saccadic eye movements are among the fastest voluntary muscle movements the human body is capable of producing and are characterized by a rapid shift of gaze ... moving the target the same distance the eyeball moves. Collewijn and Van der Mark (9), in their study of the slow phase of optokinetic nystagmus, used
ERIC Educational Resources Information Center
Nesbit, Larry L.
A research study was designed to test the relationship between the number of eye fixations and amount of learning as determined by a criterion referenced posttest. The study sought to answer the following questions: (1) Are differences in eye movement indices related to the posttest score? (2) Do differences in eye movement indices of subjects…
Zangemeister, W H; Nagel, M
2001-01-01
We investigated coordinated saccadic eye and head movements following predictive horizontal visual targets at +/- 30 degrees by applying transcranial magnetic stimulation (TMS) over the cerebellum before the start of the gaze movement in 10 young subjects. We found three effects of TMS on eye-head movements: 1. Saccadic latency effect. When stimulation took place shortly before movements commenced (75-25 ms before), significantly shorter latencies were found between predictive target presentation and initiation of saccades. Eye latencies were significantly decreased by 45 ms on average, but head latencies were not. 2. Gaze amplitude effect. Without TMS, for the 60 degree target amplitudes, head movements usually preceded eye movements, as expected (predictive gaze type 3). With TMS 5-75 ms before the gaze movement, the number of eye movements preceding head movements by 20-50 ms was significantly increased (p < 0.001) and the delay between eye and head movements was reversed (p < 0.001), i.e. we found eye-predictive gaze type 1. 3. Saccadic peak velocity effect. For TMS 5-25 ms before the start of head movement, mean peak velocity of synkinetic eye saccades increased by 20-30% up to 600 degrees/s, compared to 350-400 degrees/s without TMS. We conclude that transient functional cerebellar deficits exerted by means of TMS can change the central synkinesis of eye-head coordination, including the preprogramming of the saccadic pulse and step of a coordinated gaze movement.
Distinct eye movement patterns enhance dynamic visual acuity.
Palidis, Dimitrios J; Wyder-Hodge, Pearson A; Fooken, Jolande; Spering, Miriam
2017-01-01
Dynamic visual acuity (DVA) is the ability to resolve fine spatial detail in dynamic objects during head fixation, or in static objects during head or body rotation. This ability is important for many activities such as ball sports, and a close relation has been shown between DVA and sports expertise. DVA tasks involve eye movements; yet it is unclear which aspects of eye movements contribute to successful performance. Here we examined the relation between DVA and the kinematics of smooth pursuit and saccadic eye movements in a cohort of 23 varsity baseball players. In a computerized dynamic-object DVA test, observers reported the location of the gap in a small Landolt-C ring moving at various speeds while eye movements were recorded. Smooth pursuit kinematics (eye latency, acceleration, velocity gain, position error) and the direction and amplitude of saccadic eye movements were linked to perceptual performance. Results reveal that distinct eye movement patterns (minimizing eye position error, tracking smoothly, and inhibiting reverse saccades) were related to dynamic visual acuity. The close link between eye movement quality and DVA performance has important implications for the development of perceptual training programs to improve DVA.
Eye Movements during Art Appreciation by Students Taking a Photo Creation Course.
Ishiguro, Chiaki; Yokosawa, Kazuhiko; Okada, Takeshi
2016-01-01
Previous studies have focused on the differences in the art appreciation process between individuals, and indicated that novice viewers of artworks, in comparison to experts, rarely consider the creation process of the artwork or how this may relate to style. However, behavioral changes in individuals after educational interventions have not been examined. Art education researchers claim that technical knowledge and creation experiences help novice viewers to pay attention to technical features of artwork. Therefore, an artistic photo creation course was designed and conducted to help students acquire techniques and procedural knowledge of photo creation. The present study verified whether students' viewing strategies during appreciation of photographs changed after the course. Twenty-one students participated in two sessions, viewing the same 12 photographs before and after the course. Based on the analysis of recorded eye movements, the results indicated that the students' perceptual exploration became more active with photographs containing recognizable subjects (i.e., humans and objects), and their global saccades increased when they viewed classic photography, one of the categories of photography covered in the course. Interview data after the course indicated that students became aware of the technical effects in photographs. These results suggest that students' viewing strategies may change following a course, as assessed by behavioral measures of eye movements. Further examination is needed to validate this approach to educational effect measurement.
A PC-based software test for measuring alcohol and drug effects in human subjects.
Mills, K C; Parkman, K M; Spruill, S E
1996-12-01
A new software-based visual search and divided-attention test of cognitive performance was developed and evaluated in an alcohol dose-response study with 24 human subjects aged 21-62 years. The test used language-free, color, graphic displays to represent the visuospatial demands of driving. Cognitive demands were increased over previous hardware-based tests, and the motor skills required for the test involved minimal eye movements and eye-hand coordination. Repeated performance on the test was evaluated with a Latin-square design using a placebo and two alcohol doses, low (0.48 g/kg/LBM) and moderate (0.72 g/kg/LBM). The data on 7 females and 17 males yielded significant falling and rising impairment effects coincident with moderate rising and falling breath alcohol levels (mean peak BrALs = 0.045 g/dl and 0.079 g/dl). None of the subjects reported eyestrain or psychomotor fatigue, in contrast with previous tests. The test's high sensitivity-to-variance ratio relative to its use in basic and applied research and in worksite fitness-for-duty testing was discussed. The most distinct advantage of a software-based test that operates on readily available PCs is that it can be widely distributed to researchers with a common reference for comparing a variety of alcohol and drug effects.
Looking at Op Art from a computational viewpoint.
Zanker, Johannes M
2004-01-01
Art history tells an exciting story about repeated attempts to represent features that are crucial for understanding our environment and which, at the same time, go beyond the inherently two-dimensional nature of a flat painting surface: depth and motion. In the twentieth century, Op artists such as Bridget Riley began to experiment with simple black and white patterns that do not represent motion in an artistic way but actually create vivid dynamic illusions in static pictures. The cause of motion illusions in such paintings is still a matter of debate. The role of involuntary eye movements in this phenomenon is studied here with a computational approach. The possible consequences of shifting the retinal image of synthetic wave gratings, dubbed 'riloids', were analysed by a two-dimensional array of motion detectors (2DMD model), which generates response maps representing the spatial distribution of motion signals generated by such a stimulus. For a two-frame sequence reflecting a saccadic displacement, these motion signal maps contain extended patches in which local directions change only slightly. These directions, however, do not usually correspond precisely to the direction of pattern displacement that would be expected from the geometry of the curved gratings, an instance of the so-called 'aperture problem'. The patchy structure of the simulated motion detector response to the displacement of riloids resembles the motion illusion, which is not perceived as a coherent shift of the whole pattern but as a wobbling and jazzing of ill-defined regions. Although other explanations are not excluded, this might support the view that the puzzle of Op Art motion illusions could have an almost trivial solution in terms of small involuntary eye movements leading to image shifts that are picked up by well-known motion detectors in the early visual system. This view has further consequences for our understanding of how the human visual system usually compensates for eye movements in order to let us perceive a stable world despite continuous image shifts generated by gaze instability.
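A minimal correlation-type detector array conveys the idea behind the 2DMD analysis. The sketch below implements a two-frame opponent (Reichardt-style) horizontal detector and applies it to a wavy grating shifted between frames; the grating formula and shift size are assumptions, and the full 2DMD model is considerably richer than this.

```python
import numpy as np

def reichardt_map(frame1, frame2, shift=1):
    """Opponent two-frame correlation (Reichardt-style) detector array for
    horizontal motion: positive output signals rightward pattern motion."""
    return (frame1[:, :-shift] * frame2[:, shift:]
            - frame2[:, :-shift] * frame1[:, shift:])

# Wavy 'riloid'-like grating displaced rightward between the two frames.
x, y = np.meshgrid(np.arange(256), np.arange(256))
phase = 0.3 * x + 5 * np.sin(2 * np.pi * y / 64)   # assumed grating form
f1 = np.sin(phase)
f2 = np.sin(phase - 0.3)                           # one-pixel rightward shift
signal_map = reichardt_map(f1, f2)
print("mean motion signal (positive = rightward):", signal_map.mean())
```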
Initial component control in disparity vergence: a model-based study.
Horng, J L; Semmlow, J L; Hung, G K; Ciuffreda, K J
1998-02-01
The dual-mode theory for the control of disparity-vergence eye movements states that two components control the response to a step change in disparity. The initial component uses a motor preprogram to drive the eyes to an approximate final position. This initial component is followed by activation of a late component operating under visual feedback control that reduces residual disparity to within fusional limits. A quantitative model based on a pulse-step controller, similar to that postulated for saccadic eye movements, has been developed to represent the initial component. This model, an adaptation of one developed by Zee et al. [1], provides accurate simulations of isolated initial component movements and is compatible with the known underlying neurophysiology and existing neurophysiological data. The model has been employed to investigate the difference in dynamics between convergent and divergent movements. Results indicate that the pulse-control component active in convergence is reduced or absent from the control signals of divergence movements. This suggests somewhat different control structures of convergence versus divergence, and is consistent with other directional asymmetries seen in horizontal vergence.
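The pulse-step idea can be sketched with a first-order plant: a brief velocity-driving pulse superimposed on a position-holding step, with the pulse removed to mimic the reduced pulse reported for divergence. All time constants and drive values below are illustrative assumptions, not the fitted parameters of the Zee-style model.

```python
import numpy as np

def vergence_response(pulse_height, step_height, pulse_dur=0.06,
                      tau_plant=0.15, dt=0.001, dur=0.6):
    """Open-loop pulse-step controller driving a first-order oculomotor
    plant (all values assumed). Setting pulse_height to 0 mimics the
    reduced/absent pulse component described for divergence."""
    n = int(dur / dt)
    angle = np.zeros(n)
    for i in range(1, n):
        t = i * dt
        drive = step_height + (pulse_height if t < pulse_dur else 0.0)
        angle[i] = angle[i - 1] + dt * (drive - angle[i - 1]) / tau_plant
    return angle

conv = vergence_response(pulse_height=8.0, step_height=2.0)   # convergence-like
div = vergence_response(pulse_height=0.0, step_height=2.0)    # divergence-like
print("peak velocity (deg/s):",
      round(np.max(np.diff(conv)) / 0.001, 1), "vs",
      round(np.max(np.diff(div)) / 0.001, 1))
```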
Multi-step EMG Classification Algorithm for Human-Computer Interaction
NASA Astrophysics Data System (ADS)
Ren, Peng; Barreto, Armando; Adjouadi, Malek
A three-electrode human-computer interaction system, based on digital processing of the Electromyogram (EMG) signal, is presented. This system can effectively help disabled individuals paralyzed from the neck down to interact with computers or communicate with people through computers using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles in the head, respectively. The signal processing algorithm used translates the EMG signals during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, simultaneous left & right jaw clenching) into five corresponding types of cursor movements (left, right, up, down and left-click), to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during one specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles during different movements are different; the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets, using Matlab simulations. The results show that this method provides improvements and is more robust than other previous approaches.
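The first of the three classification principles (channel-energy dominance) can be sketched directly; the spectral and cross-channel correlation stages that disambiguate the remaining movements are omitted here. The thresholds and synthetic windows below are assumptions, not the authors' parameters.

```python
import numpy as np

def classify_emg(frontalis, left_temporalis, right_temporalis):
    """Toy first stage of a multi-channel EMG classifier based on the
    energy-dominance principle described above.

    Returns the channel whose window carries the largest signal energy.
    (Distinguishing eyebrows up/down and combined left+right clenching
    would require the spectral and correlation stages, omitted here.)
    """
    energies = {
        "eyebrows (frontalis)": np.sum(np.square(frontalis)),
        "left jaw (left temporalis)": np.sum(np.square(left_temporalis)),
        "right jaw (right temporalis)": np.sum(np.square(right_temporalis)),
    }
    return max(energies, key=energies.get)

rng = np.random.default_rng(3)
quiet = rng.normal(0, 0.05, 1000)   # resting channel
burst = rng.normal(0, 0.5, 1000)    # contracting muscle
print(classify_emg(quiet, burst, quiet))   # -> left jaw (left temporalis)
```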
Eye movements and attention in reading, scene perception, and visual search.
Rayner, Keith
2009-08-01
Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.
NASA Astrophysics Data System (ADS)
Abbott, W. W.; Faisal, A. A.
2012-08-01
Eye movements are highly correlated with motor intentions and are often retained by patients with serious motor deficiencies. Despite this, eye tracking is not widely used as a control interface for movement in impaired patients, owing to poor signal interpretation and lack of control flexibility. We propose that tracking the gaze position in 3D rather than 2D provides a considerably richer signal for human-machine interfaces by allowing direct interaction with the environment rather than via computer displays. We demonstrate here that by using mass-produced video-game hardware, it is possible to produce an ultra-low-cost binocular eye-tracker with performance comparable to commercial systems, yet 800 times cheaper. Our head-mounted system has material costs of 30 USD and operates at over 120 Hz sampling rate with a 0.5-1 degree of visual angle resolution. We perform 2D and 3D gaze estimation, controlling a real-time volumetric cursor essential for driving complex user interfaces. Our approach yields an information throughput of 43 bits/s, more than ten times that of invasive and semi-invasive brain-machine interfaces (BMIs) that are vastly more expensive. Unlike many BMIs our system yields effective real-time closed loop control of devices (10 ms latency), after just ten minutes of training, which we demonstrate through a novel BMI benchmark: the control of the video arcade game 'Pong'.
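The 3D (volumetric) gaze estimate at the core of such a system can be sketched as triangulation of the two eyes' gaze rays: the point of regard is taken as the midpoint of their closest approach. The geometry below (6 cm interocular distance, 40 cm viewing distance) is illustrative only, and real trackers need per-user calibration of the ray directions.

```python
import numpy as np

def gaze_point_3d(o_left, d_left, o_right, d_right):
    """3D point of regard as the midpoint of closest approach between the
    two eyes' gaze rays (origins o_*, directions d_*), in meters.

    Assumes non-parallel (converging) rays; a robust system would also
    handle the near-parallel case."""
    o_l, o_r = np.asarray(o_left, float), np.asarray(o_right, float)
    d_l = np.asarray(d_left, float) / np.linalg.norm(d_left)
    d_r = np.asarray(d_right, float) / np.linalg.norm(d_right)
    # Solve for ray parameters s, t minimizing |(o_l + s d_l) - (o_r + t d_r)|.
    b = d_l @ d_r
    w = o_l - o_r
    s = (b * (d_r @ w) - (d_l @ w)) / (1 - b * b)
    t = ((d_r @ w) - b * (d_l @ w)) / (1 - b * b)
    return ((o_l + s * d_l) + (o_r + t * d_r)) / 2

# Eyes 6 cm apart fixating a point 40 cm ahead, slightly off-axis.
target = np.array([0.05, 0.0, 0.40])
o_l, o_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
print(gaze_point_3d(o_l, target - o_l, o_r, target - o_r))  # ~[0.05, 0, 0.40]
```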
Real-time recording and classification of eye movements in an immersive virtual environment.
Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary
2013-10-10
Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the Quicktime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
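Two of the algorithm types the primer provides, gaze-to-object angular distance and velocity-threshold event labeling, can be sketched as follows. The 60 Hz sampling rate and the 70 deg/s saccade threshold are assumed illustrative values, not the paper's settings, and pursuit detection would need an additional intermediate velocity band.

```python
import numpy as np

def angular_distance(gaze_dir, object_dir):
    """Angle (degrees) between the gaze direction and the direction from
    the eye to a virtual object, both given as 3D vectors."""
    g, o = np.asarray(gaze_dir, float), np.asarray(object_dir, float)
    cosang = np.clip(g @ o / (np.linalg.norm(g) * np.linalg.norm(o)), -1, 1)
    return np.degrees(np.arccos(cosang))

def classify_samples(gaze_dirs, fs=60, saccade_thresh=70.0):
    """Velocity-threshold (I-VT style) labeling of gaze samples as
    'fixation' or 'saccade'; threshold in deg/s is an assumed value."""
    labels = ["fixation"]
    for a, b in zip(gaze_dirs[:-1], gaze_dirs[1:]):
        vel = angular_distance(a, b) * fs   # angular velocity between samples
        labels.append("saccade" if vel > saccade_thresh else "fixation")
    return labels

# Example: steady gaze, then a rapid 10-degree shift within one sample.
steady = [[0, 0, 1]] * 5
shifted = [[np.sin(np.radians(10)), 0, np.cos(np.radians(10))]] * 5
print(classify_samples(steady + shifted))
```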
NASA Astrophysics Data System (ADS)
Altıparmak, Hamit; Al Shahadat, Mohamad; Kiani, Ehsan; Dimililer, Kamil
2018-04-01
Robotic agriculture requires smart, workable techniques that substitute machine intelligence for human intelligence. Strawberry is an important Mediterranean product, and enhancing its productivity requires modern, machine-based methods. Whereas a human identifies disease-infected leaves by eye, the machine should likewise be capable of vision-based disease identification. The objective of this paper is to verify in practice the applicability of a new computer-vision method for discriminating between healthy and disease-infected strawberry leaves that requires neither neural networks nor time-consuming training. The proposed method was tested under outdoor lighting conditions using a regular DSLR camera without any particular lens. Since the type and degree of infection are approximated as a human brain would approximate them, a fuzzy decision maker classifies the leaves from images captured on-site with the same properties as human vision. Optimizing the fuzzy parameters for a typical strawberry production area at a summer midday in Cyprus produced 96% accuracy for segmented iron deficiency and 93% accuracy for segmentation using a typical human instant-classification approximation as the benchmark, a higher accuracy than a human-eye identifier. The fuzzy-based classifier provides an approximate result for deciding whether a leaf is healthy or not.
Influence of eye micromotions on spatially resolved refractometry
NASA Astrophysics Data System (ADS)
Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.
2001-01-01
The influence of eye micromotions on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations were examined: defocusing, primary astigmatism, spherical aberration of the 3rd and 5th orders, as well as their combinations. It was determined that the standard deviation of the estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of the aberrations present and produce spurious Zernike coefficients for aberrations absent from the eye. When only defocusing is present, the biggest errors caused by eye micromotions are obtained for aberrations such as coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and 5th orders evokes the greatest increase in the standard deviation of the other Zernike coefficients.
Hand Grasping Synergies As Biometrics
Patel, Vrajeshri; Thukral, Poojita; Burns, Martin K.; Florescu, Ionut; Chandramouli, Rajarathnam; Vinjamuri, Ramana
2017-01-01
Recently, the need for more secure identity verification systems has driven researchers to explore other sources of biometrics. This includes iris patterns, palm print, hand geometry, facial recognition, and movement patterns (hand motion, gait, and eye movements). Identity verification systems may benefit from the complexity of human movement that integrates multiple levels of control (neural, muscular, and kinematic). Using principal component analysis, we extracted spatiotemporal hand synergies (movement synergies) from an object grasping dataset to explore their use as a potential biometric. These movement synergies are in the form of joint angular velocity profiles of 10 joints. We explored the effect of joint type, digit, number of objects, and grasp type. In its best configuration, movement synergies achieved an equal error rate of 8.19%. While movement synergies can be integrated into an identity verification system with motion capture ability, we also explored a camera-ready version of hand synergies—postural synergies. In this proof of concept system, postural synergies performed well, but only when specific postures were chosen. Based on these results, hand synergies show promise as a potential biometric that can be combined with other hand-based biometrics for improved security. PMID:28512630
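The synergy-extraction step can be sketched with a plain SVD-based PCA over concatenated joint angular-velocity profiles; the resulting per-trial weights are what a verification system would then score. The matrix shapes below (10 joints, 100 time samples) mirror the description, but the random data and component count are assumptions.

```python
import numpy as np

def extract_synergies(trials, n_synergies=3):
    """Spatiotemporal hand synergies via PCA, as a minimal sketch.

    trials: (n_trials, n_joints * n_timepoints) matrix, each row the
    concatenated joint angular-velocity profiles of one grasp.
    Returns the top principal components (synergies) and per-trial weights.
    """
    X = trials - trials.mean(axis=0)            # center across trials
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    synergies = vt[:n_synergies]                # rows: synergy profiles
    weights = X @ synergies.T                   # per-trial synergy weights
    return synergies, weights

# Example with synthetic data: 50 grasps, 10 joints x 100 time samples.
rng = np.random.default_rng(4)
trials = rng.normal(size=(50, 10 * 100))
syn, w = extract_synergies(trials)
print(syn.shape, w.shape)   # (3, 1000) (50, 3)
```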
21 CFR 886.1510 - Eye movement monitor.
Code of Federal Regulations, 2011 CFR
2011-04-01
Section 886.1510 (Medical Devices; Ophthalmic Devices; Diagnostic Devices): Eye movement monitor. (a) Identification. An eye movement monitor is an AC-powered device with an electrode intended to measure and record...
21 CFR 886.1510 - Eye movement monitor.
Code of Federal Regulations, 2010 CFR
2010-04-01
Section 886.1510 (Medical Devices; Ophthalmic Devices; Diagnostic Devices): Eye movement monitor. (a) Identification. An eye movement monitor is an AC-powered device with an electrode intended to measure and record...
A Review on Eye Movement Studies in Childhood and Adolescent Psychiatry
ERIC Educational Resources Information Center
Rommelse, Nanda N. J.; Van der Stigchel, Stefan; Sergeant, Joseph A.
2008-01-01
The neural substrates of eye movement measures are largely known. Therefore, measurement of eye movements in psychiatric disorders may provide insight into the underlying neuropathology of these disorders. Visually guided saccades, antisaccades, memory guided saccades, and smooth pursuit eye movements will be reviewed in various childhood…
NASA Astrophysics Data System (ADS)
Nguyen, T. A. K.; DiGiovanna, J.; Cavuscens, S.; Ranieri, M.; Guinand, N.; van de Berg, R.; Carpaneto, J.; Kingma, H.; Guyot, J.-P.; Micera, S.; Perez Fornos, A.
2016-08-01
Objective. The vestibular system provides essential information about balance and spatial orientation via the brain to other sensory and motor systems. Bilateral vestibular loss significantly reduces quality of life, but vestibular implants (VIs) have demonstrated potential to restore lost function. However, optimal electrical stimulation strategies have not yet been identified in patients. In this study, we compared the two most common strategies, pulse amplitude modulation (PAM) and pulse rate modulation (PRM), in patients. Approach. Four subjects with a modified cochlear implant including electrodes targeting the peripheral vestibular nerve branches were tested. Charge-equivalent PAM and PRM were applied after adaptation to baseline stimulation. Vestibulo-ocular reflex eye movement responses were recorded to evaluate stimulation efficacy during acute clinical testing sessions. Main results. PAM evoked larger amplitude eye movement responses than PRM. Eye movement response axes for lateral canal stimulation were marginally better aligned with PRM than with PAM. A neural network model was developed for the tested stimulation strategies to provide insights on possible neural mechanisms. This model suggested that PAM would consistently cause a larger ensemble firing rate of neurons and thus larger responses than PRM. Significance. Due to the larger magnitude of eye movement responses, our findings strongly suggest PAM as the preferred strategy for initial VI modulation.
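Charge equivalence between the two strategies can be illustrated numerically: raising amplitude at a fixed rate (PAM) and raising rate at a fixed amplitude (PRM) can inject the same total charge. All stimulation values below (baseline 100 microamps at 200 pulses/s, 200 microsecond pulse width) are assumed for illustration, not the clinical settings.

```python
import numpy as np

def pulse_train(amplitude_ua, rate_hz, dur=0.1, fs=100_000, width_s=200e-6):
    """Rectangular pulse train (magnitude only; the biphasic pulse shape
    used clinically is omitted). Amplitude in uA, rate in pulses/s."""
    train = np.zeros(int(dur * fs))
    period, width = int(fs / rate_hz), int(width_s * fs)
    for start in range(0, train.size - width, period):
        train[start:start + width] = amplitude_ua
    return train

baseline_amp, baseline_rate = 100, 200          # assumed baseline (uA, pps)
pam = pulse_train(150, baseline_rate)           # amplitude up, rate fixed
prm = pulse_train(baseline_amp, 300)            # rate up, amplitude fixed
for name, tr in (("PAM", pam), ("PRM", prm)):
    print(f"{name}: charge in 100 ms = {tr.sum() / 100_000:.2f} uC")
# Both deliver 0.60 uC: the two modulations are charge-equivalent.
```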
Pion-Massicotte, Joëlle; Godbout, Roger; Savard, Pierre; Roy, Jean-François
2018-02-23
Portable polysomnography is often too complex and encumbering for recording sleep at home. We instead recorded sleep using a biometric shirt (electrocardiogram sensors, respiratory inductance plethysmography bands and an accelerometer) in 21 healthy young adults over two consecutive nights in a sleep laboratory, together with standard polysomnography. Polysomnographic recordings were scored using standard methods. An algorithm was developed to classify the biometric shirt recordings into rapid eye movement sleep, non-rapid eye movement sleep and wake. The algorithm was based on breathing rate, heart rate variability and body movement, and included a correction for sleep onset and offset. The overall mean percentage of agreement between the two sets of recordings was 77.4%; when non-rapid eye movement and rapid eye movement sleep epochs were grouped together, it increased to 90.8%. The overall kappa coefficient was 0.53. Five of the seven sleep variables were significantly correlated. The findings of this pilot study indicate that this simple portable system could be used to estimate the general sleep pattern of young healthy adults. © 2018 European Sleep Research Society.
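A minimal classifier in the spirit of the shirt algorithm might look like the sketch below; the feature names and thresholds are invented for illustration, not the published decision rules.

```python
# Toy epoch classifier over shirt-derived features (assumed, not the paper's rules).
def classify_epoch(breath_var, hr_var, movement):
    """Label a 30-s epoch as 'wake', 'REM' or 'NREM' from shirt features."""
    if movement > 0.5:                        # high accelerometer activity -> wake
        return "wake"
    if breath_var > 0.2 and hr_var > 0.15:    # irregular breathing and HR -> REM
        return "REM"
    return "NREM"

epochs = [(0.1, 0.05, 0.0), (0.3, 0.2, 0.1), (0.05, 0.02, 0.9)]
print([classify_epoch(*e) for e in epochs])   # ['NREM', 'REM', 'wake']
```

Epoch-level agreement with polysomnographic scoring could then be summarized with sklearn.metrics.cohen_kappa_score, the kappa statistic reported above.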
Enhancing Assessments of Mental Health Programs and Program Planning
2012-06-01
[Fragmented slide excerpt: lists trauma treatments assessed, including Eye Movement Desensitization and Reprocessing (EMDR) and Cognitive Processing Therapy (CPT), with study counts and effect sizes attributed to Roberts and Schnurr (2012).]
Wibirama, Sunu; Nugroho, Hanung A
2017-07-01
Mobile device addiction has become an important research topic in cognitive science, mental health, and human-machine interaction. Previous work observed mobile device addiction by logging device activity. Although immersion has been linked as a significant predictor of video game addiction, addiction factors of mobile devices had not previously been investigated with behavioral measurement. In this research, we demonstrated the use of eye tracking to observe the effect of screen size on the experience of immersion. We compared subjective judgment with eye movement analysis. Non-parametric analysis of immersion scores shows that screen size affects the experience of immersion (p < 0.05). Furthermore, our experimental results suggest that fixational eye movements may be used as an indicator for future investigation of mobile device addiction. Our experimental results are also useful for developing guidelines as well as intervention strategies to deal with smartphone addiction.
Factors influencing young chimpanzees' (Pan troglodytes) recognition of attention.
Povinelli, D J; Eddy, T J
1996-12-01
By 2 1/2 years of age, human infants appear to understand how others are connected to the external world through the mental state of attention and also appear to understand the specific role that the eyes play in deploying this attention. Previous research with chimpanzees suggests that, although they track the gaze of others, they may simultaneously be unaware of the underlying state of attention behind gaze. In a series of 3 experiments, the investigators systematically explored how the presence of eyes, direct eye contact, and head orientation and movement affected young chimpanzees' choice between 2 experimenters from whom to request food. The results indicate that young chimpanzees may be selectively attracted to other organisms making direct eye contact with them or engaged in postures or movements that indicate attention, even though they may not appreciate the underlying mentalistic significance of these behaviors.
Determination of loss tangent of human tear film at 9.8 GHz
NASA Astrophysics Data System (ADS)
Bansal, Namita; Dhaliwal, A. S.; Mann, K. S.
2015-08-01
Basal (non-stimulated) tears, produced by accessory lacrimal glands located in the conjunctiva of the human eye, form the tear film, which in turn keeps the eye moist and lubricated; nourishes the eye; protects the eye from dust, bacterial infection and the shear forces generated during eye movements and blinking; and provides a refractive surface on the corneal epithelium. The film is known to contain water, mucin, lipids, lysozyme, glucose, urea, sodium, etc. In the present communication, the loss tangent of the human tear film has been determined at 9.8 GHz by employing the cavity perturbation technique at a temperature of 37°C. Basal tears from a small population comprising six subjects were collected, and the average value of the loss tangent is reported. Slater's technique was used to reduce the error caused in measuring the volume of the sample. The determined values are useful for studying the biological effects of microwaves on the tear film as well as other parts of the human eye, such as the eye lens and lens epithelial cells. To the best of the authors' knowledge, no such study is available in the literature at any radio or microwave frequency; the present determinations are therefore the first of their kind.
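For readers unfamiliar with the cavity perturbation technique, the sketch below implements one textbook form of the perturbation relations: the resonant-frequency shift gives the real permittivity and the change in quality factor gives the imaginary part. The formula variant and the example numbers are our assumptions, not values from the paper.

```python
# Standard small-sample cavity perturbation relations (one common textbook form).
def loss_tangent(f_empty, f_sample, Q_empty, Q_sample, V_cavity, V_sample):
    """Loss tangent from resonant frequency and Q-factor shifts."""
    eps_real = 1.0 + (f_empty - f_sample) / (2.0 * f_sample) * (V_cavity / V_sample)
    eps_imag = (V_cavity / (4.0 * V_sample)) * (1.0 / Q_sample - 1.0 / Q_empty)
    return eps_imag / eps_real

# Example with made-up numbers: a 9.8 GHz cavity and a tiny liquid sample.
print(loss_tangent(f_empty=9.800e9, f_sample=9.795e9,
                   Q_empty=5000.0, Q_sample=3000.0,
                   V_cavity=2.0e-5, V_sample=1.0e-9))
```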
Momeni Safarabad, Nahid; Asgharnejad Farid, Ali-Asghar; Gharraee, Banafsheh; Habibi, Mojtaba
2018-01-01
Objective: This study aimed at reporting the effect of the 3-phase model of eye movement desensitization and reprocessing in the treatment of a patient with borderline personality disorder. Method: A 33-year-old female, who met the DSM-IV-TR criteria for borderline personality disorder, received a 20-session therapy based on the 3-phase model of eye movement desensitization and reprocessing. The Borderline Personality Disorder Checklist (BPD-Checklist), Dissociative Experiences Scale (DES-II), Beck Depression Inventory-second edition (BDI-II), and Beck Anxiety Inventory (BAI) were filled out by the patient at all treatment phases and at the 3-month follow-up. Results: The patient's pretest scores were 161, 44, 37, and 38 for the BPD-Checklist, DES-II, BDI-II, and BAI, respectively. After treatment, these scores decreased markedly (to 69, 14, 6, and 10, respectively). Thus, the patient exhibited improvement in borderline personality disorder, dissociative, depression, and anxiety symptoms, which was maintained at the 3-month follow-up. Conclusion: The results support a positive effect of the 3-phase model of eye movement desensitization and reprocessing on borderline personality disorder. PMID:29892320
Evidence-based ergonomics. A comparison of Japanese and American office layouts.
Noro, Kageyu; Fujimaki, Goroh; Kishi, Shinsuke
2003-01-01
There is a variety of alternatives in office layouts. Yet the theoretical basis and criteria for predicting how well these layouts accommodate employees are poorly understood. The objective of this study was to evaluate criteria for selecting office layouts. Intensive computer workers worked in simulated office layouts in a controlled experimental laboratory. Eye movement measures indicate that knowledge work requires both concentration and interaction. Findings pointed to one layout as providing optimum balance between these 2 requirements. Recommendations for establishing a theoretical basis and design criteria for selecting office layouts based on work style are suggested.
MR-eyetracker: a new method for eye movement recording in functional magnetic resonance imaging.
Kimmig, H; Greenlee, M W; Huethe, F; Mergner, T
1999-06-01
We present a method for recording saccadic and pursuit eye movements in the magnetic resonance tomograph designed for visual functional magnetic resonance imaging (fMRI) experiments. To reliably classify brain areas as pursuit or saccade related it is important to carefully measure the actual eye movements. For this purpose, infrared light, created outside the scanner by light-emitting diodes (LEDs), is guided via optic fibers into the head coil and onto the eye of the subject. Two additional fiber optical cables pick up the light reflected by the iris. The illuminating and detecting cables are mounted in a plastic eyepiece that is manually lowered to the level of the eye. By means of differential amplification, we obtain a signal that covaries with the horizontal position of the eye. Calibration of eye position within the scanner yields an estimate of eye position with a resolution of 0.2 degrees at a sampling rate of 1000 Hz. Experiments are presented that employ echoplanar imaging with 12 image planes through visual, parietal and frontal cortex while subjects performed saccadic and pursuit eye movements. The distribution of BOLD (blood oxygen level dependent) responses is shown to depend on the type of eye movement performed. Our method yields high temporal and spatial resolution of the horizontal component of eye movements during fMRI scanning. Since the signal is purely optical, there is no interaction between the eye movement signals and the echoplanar images. This reasonably priced eye tracker can be used to control eye position and monitor eye movements during fMRI.
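The core signal processing here is simple enough to sketch: two photodetector channels are combined differentially, and a calibration maps the differential signal to degrees. The normalization and two-point calibration below are our assumptions; the paper specifies only differential amplification.

```python
# Toy model of the differential fiber-optic signal: two detectors pick up iris
# reflectance, and their normalized difference covaries with horizontal eye
# position. Gain and offset come from a two-point calibration (assumed scheme).
def eye_position_deg(left_signal, right_signal, gain, offset):
    diff = (left_signal - right_signal) / (left_signal + right_signal)
    return gain * diff + offset

# Calibrate with fixations at known targets, e.g. -10 and +10 degrees.
d1, d2 = -0.08, 0.08          # normalized differential signals at the targets
gain = (10.0 - (-10.0)) / (d2 - d1)
offset = 10.0 - gain * d2

print(eye_position_deg(0.52, 0.48, gain, offset))  # ~ +5 deg
```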
Burioka, Naoto; Cornélissen, Germaine; Halberg, Franz; Kaplan, Daniel T; Suyama, Hisashi; Sako, Takanori; Shimizu, Eiji
2003-01-01
The breath-to-breath variability of respiratory parameters changes with sleep stage. This study investigates alterations in the approximate entropy (ApEn) of respiratory movement, a gauge of the complexity of respiration, by stage of consciousness, in the light of putative brain-lung interactions. Eight healthy men between the ages of 23 and 29 years were investigated. The signals of chest wall movement and EEG were recorded from 10:30 PM to 6:00 AM. After analog-to-digital conversion, the ApEn of respiratory movement (3 min) and of the EEG (20 s) were computed. Surrogate data were used to test the original time series for nonlinearity. The most impressive reduction in the ApEn of respiratory movement was associated with stage IV sleep, when the ApEn of the EEG was also statistically significantly decreased. A statistically significant linear relation was found between the ApEn of the two variables. Surrogate data indicated that respiratory movement had nonlinear properties during all stages of consciousness investigated. Respiratory movement and EEG signals are more regular during stage IV sleep than during other stages of consciousness. The change in complexity described by the ApEn of respiration depends in part on the ApEn of the EEG, suggesting the involvement of nonlinear dynamic processes in the coordination between brain and lungs.
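Approximate entropy has a compact standard definition (Pincus, 1991), sketched below. The tolerance default of 20% of the series standard deviation is the usual convention and an assumption here; the paper's exact m and r settings are not restated in the abstract.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D series: lower = more regular."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()              # common default tolerance
    N = len(x)

    def phi(m):
        # All overlapping m-length templates.
        emb = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates (self-match included).
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        C = (dist <= r).mean(axis=1)
        return np.log(C).mean()

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(1)
print(approximate_entropy(np.sin(np.linspace(0, 20, 300))))   # low: regular signal
print(approximate_entropy(rng.standard_normal(300)))          # higher: irregular
```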
High-speed adaptive optics line scan confocal retinal imaging for human eye
Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Purpose: Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high-resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods: A high-speed line camera was employed to acquire the retinal image, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results: The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions: We demonstrated the feasibility of acquiring high-resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458
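As a rough plausibility check on those numbers: if intra-frame distortion scaled simply with how far the retinal image drifts during one frame acquisition, going from 30 to 200 FPS would cut it by 1 - 30/200 = 85%, an upper bound bracketing the measured 50.9-79.7%. The constant-drift-speed assumption is ours; real fixational eye motion is not constant.

```python
# Naive expectation under a constant-drift assumption (ours, for intuition only).
def expected_reduction(fps_slow, fps_fast):
    return 1.0 - fps_slow / fps_fast

print(f"{expected_reduction(30, 200):.0%}")   # 85%
```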
Fukushima, Kikuro; Fukushima, Junko; Warabi, Tateo
2011-01-01
Smooth-pursuit eye movements are voluntary responses to small, slowly moving objects in the fronto-parallel plane. They evolved in primates, who possess high-acuity foveae, to ensure clear vision of the moving target. The primate frontal cortex contains two smooth-pursuit-related areas: the caudal part of the frontal eye fields (FEF) and the supplementary eye fields (SEF). Both areas receive vestibular inputs. We review functional differences between the two areas in smooth-pursuit. Most FEF pursuit neurons signal pursuit parameters such as eye velocity and gaze velocity, and are involved in canceling the vestibulo-ocular reflex by linear addition of vestibular and smooth-pursuit responses. In contrast, gaze-velocity signals are rarely represented in the SEF. Most FEF pursuit neurons receive neck velocity inputs, and their discharge modulation during pursuit and trunk-on-head rotation adds linearly. Linear addition also occurs between neck velocity responses and vestibular responses during head-on-trunk rotation in a task-dependent manner. During cross-axis pursuit-vestibular interactions, vestibular signals effectively initiate predictive pursuit eye movements. Most FEF pursuit neurons discharge during the interaction training after the onset of pursuit eye velocity, making their involvement in the initial stages of generating predictive pursuit unlikely. Comparison of representative signals in the two areas and the results of chemical inactivation during a memory-based smooth-pursuit task indicate that they have different roles: the SEF plans smooth-pursuit, including working memory of motion direction, whereas the caudal FEF generates motor commands for pursuit eye movements. Patients with idiopathic Parkinson's disease were asked to perform this task, since impaired smooth-pursuit and visual working memory deficits during cognitive tasks have been reported in most patients. Preliminary results suggested specific roles of the basal ganglia in memory-based smooth-pursuit. PMID:22174706
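The linear-addition account of VOR cancellation can be written in one line: a pursuit command equal and opposite to the reflexive counter-rotation sums with it, so the eyes can stay on a target that moves with the head. The sketch below is a minimal caricature of that summation, not a model of FEF firing.

```python
# Minimal sketch of VOR cancellation by linear summation (all units deg/s).
def eye_in_head_velocity(head_velocity, pursuit_command):
    vor = -head_velocity                 # reflexive counter-rotation of the eyes
    return vor + pursuit_command         # linear addition cancels the reflex

# Tracking a head-fixed target during 30 deg/s head rotation:
print(eye_in_head_velocity(30.0, 30.0))  # 0 deg/s in the orbit; gaze moves with head
```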
Lateral eye-movement responses to visual stimuli.
Wilbur, M P; Roberts-Wilbur, J
1985-08-01
The association of left lateral eye-movement with emotionality or arousal of affect, and of right lateral eye-movement with cognitive/interpretive operations and functions, was investigated. Participants were junior and senior students enrolled in an undergraduate course in developmental psychology: 37 women and 13 men, ranging from 19 to 45 yr. of age. Using videotaped lateral eye-movements of the 50 participants' responses to 15 visually presented stimuli (precategorized as neutral, emotional, or intellectual), content and statistical analyses supported the association between left lateral eye-movement and emotional arousal and between right lateral eye-movement and cognitive functions. Precategorized visual stimuli included items such as a ball (neutral), gun (emotional), and calculator (intellectual). The findings are congruent with the existing lateral eye-movement literature and extend it by using visual stimuli that do not require the explicit responses or implicit processing involved in verbal questioning.
Effects of background motion on eye-movement information.
Nakamura, S
1997-02-01
The effect of a background stimulus on eye-movement information was investigated by analyzing the underestimation of target velocity during pursuit eye movement (the Aubert-Fleischl paradox). In the experiment, a striped pattern with various brightness contrasts and spatial frequencies was used as a background stimulus and was moved at various velocities. Analysis showed that the perceived velocity of the pursuit target, which indexes the magnitude of eye-movement information, decreased when the background stripes moved in the same direction as the eye movement at higher velocities and increased when the background moved in the opposite direction. The results suggest that eye-movement information varies as a linear function of the velocity of the motion of the background retinal image (optic flow). In addition, the effectiveness of optic flow on eye-movement information was determined by attributes of the background stimulus such as the brightness contrast and the spatial frequency of the striped pattern.
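One way to read the linearity result is as a weighted blend of an extraretinal eye-velocity estimate and the eye velocity implied by background optic flow. The sketch below is our toy formulation under that reading; the blending weight is an invented placeholder, not a fitted value.

```python
# Toy linear model of the reported effect (our formulation, weight invented).
def perceived_target_velocity(eye_v, background_v, w=0.3):
    extraretinal = eye_v                 # efference-copy estimate of eye velocity
    flow_implied = eye_v - background_v  # eye velocity implied by background flow,
                                         # assuming the background were stationary
    return (1 - w) * extraretinal + w * flow_implied

print(perceived_target_velocity(10.0, 5.0))    # background same direction: 8.5 (decreased)
print(perceived_target_velocity(10.0, -5.0))   # background opposite direction: 11.5 (increased)
```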
Bicknell, Klinton; Levy, Roger
2012-01-01
Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362
A review on eye movement studies in childhood and adolescent psychiatry.
Rommelse, Nanda N J; Van der Stigchel, Stefan; Sergeant, Joseph A
2008-12-01
The neural substrates of eye movement measures are largely known. Therefore, measurement of eye movements in psychiatric disorders may provide insight into the underlying neuropathology of these disorders. Visually guided saccades, antisaccades, memory guided saccades, and smooth pursuit eye movements will be reviewed in various childhood psychiatric disorders. The four aims of this review are (1) to give a thorough overview of eye movement studies in a wide array of psychiatric disorders occurring during childhood and adolescence (attention-deficit/hyperactivity disorder, oppositional defiant disorder and conduct disorder, autism spectrum disorders, reading disorder, childhood-onset schizophrenia, Tourette's syndrome, obsessive compulsive disorder, and anxiety and depression), (2) to discuss the specificity and overlap of eye movement findings across disorders and paradigms, (3) to discuss the developmental aspects of eye movement abnormalities in childhood and adolescence psychiatric disorders, and (4) to present suggestions for future research. In order to make this review of interest to a broad audience, attention will be given to the clinical manifestation of the disorders and the theoretical background of the eye movement paradigms.
Of bits and wows: A Bayesian theory of surprise with applications to attention.
Baldi, Pierre; Itti, Laurent
2010-06-01
The amount of information contained in a piece of data can be measured by the effect this data has on its observer. Fundamentally, this effect is to transform the observer's prior beliefs into posterior beliefs, according to Bayes theorem. Thus the amount of information can be measured in a natural way by the distance (relative entropy) between the prior and posterior distributions of the observer over the available space of hypotheses. This facet of information, termed "surprise", is important in dynamic situations where beliefs change, in particular during learning and adaptation. Surprise can often be computed analytically, for instance in the case of distributions from the exponential family, or it can be numerically approximated. During sequential Bayesian learning, surprise decreases as the inverse of the number of training examples. Theoretical properties of surprise are discussed, in particular how it differs and complements Shannon's definition of information. A computer vision neural network architecture is then presented capable of computing surprise over images and video stimuli. Hypothesizing that surprising data ought to attract natural or artificial attention systems, the output of this architecture is used in a psychophysical experiment to analyze human eye movements in the presence of natural video stimuli. Surprise is found to yield robust performance at predicting human gaze (ROC-like ordinal dominance score approximately 0.7 compared to approximately 0.8 for human inter-observer repeatability, approximately 0.6 for simpler intensity contrast-based predictor, and 0.5 for chance). The resulting theory of surprise is applicable across different spatio-temporal scales, modalities, and levels of abstraction. Copyright 2010 Elsevier Ltd. All rights reserved.
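As a concrete instance of that definition, the sketch below computes surprise as the KL divergence between posterior and prior for a Beta-Bernoulli observer, using the closed form for the KL between Beta distributions. The conjugate-family choice is ours for illustration; the paper's definition is simply the relative entropy between posterior and prior.

```python
import numpy as np
from scipy.special import betaln, digamma

def kl_beta(a2, b2, a1, b1):
    """KL( Beta(a2,b2) || Beta(a1,b1) ) in nats, closed form."""
    return (betaln(a1, b1) - betaln(a2, b2)
            + (a2 - a1) * digamma(a2)
            + (b2 - b1) * digamma(b2)
            + (a1 + b1 - a2 - b2) * digamma(a2 + b2))

def surprise(a, b, x):
    """Surprise (nats) of Bernoulli outcome x under a Beta(a, b) prior."""
    return kl_beta(a + x, b + (1 - x), a, b)

print(surprise(1, 1, 1))     # first observation carries much surprise
print(surprise(50, 50, 1))   # after many samples, surprise decays roughly as 1/N
```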
Individual predictions of eye-movements with dynamic scenes
NASA Astrophysics Data System (ADS)
Barth, Erhardt; Drewes, Jan; Martinetz, Thomas
2003-06-01
We present a model that predicts saccadic eye-movements and can be tuned to a particular human observer who is viewing a dynamic sequence of images. Our work is motivated by applications that involve gaze-contingent interactive displays on which information is displayed as a function of gaze direction. The approach therefore differs from standard approaches in two ways: (1) we deal with dynamic scenes, and (2) we provide means of adapting the model to a particular observer. As an indicator for the degree of saliency we evaluate the intrinsic dimension of the image sequence within a geometric approach implemented by using the structure tensor. Out of these candidate saliency-based locations, the currently attended location is selected according to a strategy found by supervised learning. The data are obtained with an eye-tracker and subjects who view video sequences. The selection algorithm receives candidate locations of current and past frames and a limited history of locations attended in the past. We use a linear mapping that is obtained by minimizing the quadratic difference between the predicted and the actually attended location by gradient descent. Being linear, the learned mapping can be quickly adapted to the individual observer.
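The observer-adaptation step described above reduces, in its simplest form, to fitting a linear map by gradient descent on the squared prediction error. The sketch below uses synthetic features standing in for the candidate-location and history inputs; feature construction is an assumption of the sketch.

```python
# Learn a linear map W from per-frame features to the attended (x, y) location
# by gradient descent on the mean squared error, as described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 12))        # synthetic per-frame feature vectors
W_true = rng.standard_normal((12, 2))
Y = X @ W_true                            # attended locations to be predicted

W = np.zeros((12, 2))
lr = 0.01
for _ in range(2000):
    err = X @ W - Y                       # predicted minus attended location
    W -= lr * X.T @ err / len(X)          # gradient step on the squared error

print(np.abs(W - W_true).max())           # -> near 0: mapping recovered
```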
A computer vision-based system for monitoring Vojta therapy.
Khan, Muhammad Hassan; Helsper, Julien; Farid, Muhammad Shahid; Grzegorzek, Marcin
2018-05-01
A neurological illness is a disorder of the human nervous system that can result in various diseases, including motor disabilities. Neurological disorders may affect the motor neurons, which are associated with skeletal muscles and control body movement. Consequently, they give rise to diseases such as cerebral palsy, spinal scoliosis, peripheral paralysis of the arms/legs, hip joint dysplasia and various myopathies. Vojta therapy is considered a useful technique to treat motor disabilities. In Vojta therapy, a specific stimulation is given to the patient's body to elicit certain reflexive pattern movements which the patient is unable to perform in a normal manner. The repetition of stimulation ultimately brings forth the previously blocked connections between the spinal cord and the brain. After a few therapy sessions, the patient can perform these movements without external stimulation. In this paper, we propose a computer vision-based system to monitor the correct movements of the patient during therapy treatment using RGB-D data. The proposed framework works in three steps. In the first step, the patient's body is automatically detected and segmented, and two novel techniques are proposed for this purpose. In the second step, a multi-dimensional feature vector is computed to describe the various movements of the patient's body during therapy. In the final step, a multi-class support vector machine is used to classify these movements. The experimental evaluation carried out on a large captured dataset shows that the proposed system is highly useful in monitoring the patient's body movements during Vojta therapy. Copyright © 2018 Elsevier B.V. All rights reserved.
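The classification stage maps cleanly onto a standard multi-class SVM, as in the minimal stand-in below. The features and class labels are synthetic placeholders; in the paper they are computed from segmented RGB-D recordings of the patient.

```python
# Stand-in for the final classification step: multi-class SVM over per-segment
# movement feature vectors (synthetic data; labels are hypothetical).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 16
labels = ["correct_reflex", "incomplete", "no_movement"]
X = np.vstack([rng.normal(loc=i, size=(n_per_class, n_features))
               for i in range(len(labels))])
y = np.repeat(labels, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # held-out classification accuracy
```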
Eye Movement Patterns of the Elderly during Stair Descent: Effect of Illumination
NASA Astrophysics Data System (ADS)
Kasahara, Satoko; Okabe, Sonoko; Nakazato, Naoko; Ohno, Yuko
The relationship between eye movement patterns during stair descent and illumination was studied in 4 elderly people in comparison with 5 young people. The illumination condition was light (85.0±30.9 lx) or dark (0.7±0.3 lx), and eye movement data were obtained using an eye mark recorder. A flight of 15 steps was used for the experiment, and data on the 3 steps in the middle, on which the descent movements had stabilized, were analyzed. The elderly subjects directed their gaze mostly straight ahead, in line with the facial direction, regardless of the illumination condition, but the young subjects tended to look down under the light condition. The young subjects are considered to have confirmed the safety of the space ahead by peripheral vision, checked the stepping surface by central vision, and still maintained an upright position without leaning forward during stair descent. The elderly subjects, in contrast, always looked at the visual target with central vision, even under the light condition, and leaned forward. The range of eye movements was larger vertically than horizontally in both groups, and a characteristic eye movement pattern of repeating a vertical shuttle movement synchronous with the descent of each step was observed. Under the dark condition, the young subjects widened the range of vertical eye movements and reduced the duration of fixation. The elderly subjects showed no change in the range of eye movements but increased the duration of fixation during stair descent. These differences in eye movements are considered to be compensatory reactions to narrowing of the vertical visual field, reduced dark adaptation, and reduced dynamic visual acuity due to aging. These characteristics of the eye movements of the elderly lead to an anteriorly leaning posture and a lack of attention to the space ahead during stair descent.
Wu, Howard G.
2013-01-01
The planning of goal-directed movements is highly adaptable; however, the basic mechanisms underlying this adaptability are not well understood. Even the features of movement that drive adaptation are hotly debated, with some studies suggesting remapping of goal locations and others suggesting remapping of the movement vectors leading to goal locations. However, several previous motor learning studies and the multiplicity of the neural coding underlying visually guided reaching movements stand in contrast to this either/or debate on the modes of motor planning and adaptation. Here we hypothesize that, during visuomotor learning, the target location and movement vector of trained movements are separately remapped, and we propose a novel computational model for how motor plans based on these remappings are combined during the control of visually guided reaching in humans. To test this hypothesis, we designed a set of experimental manipulations that effectively dissociated the effects of remapping goal location and movement vector by examining the transfer of visuomotor adaptation to untrained movements and movement sequences throughout the workspace. The results reveal that (1) motor adaptation differentially remaps goal locations and movement vectors, and (2) separate motor plans based on these features are effectively averaged during motor execution. We then show that, without any free parameters, the computational model we developed for combining movement-vector-based and goal-location-based planning predicts nearly 90% of the variance in novel movement sequences, even when multiple attributes are simultaneously adapted, demonstrating for the first time the ability to predict how motor adaptation affects movement sequence planning. PMID:23804099
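A minimal way to express the proposed combination is to form one endpoint from the remapped goal location, another from the remapped movement vector, and average them at execution. The equal weighting below is our simplification for the sketch; the authors' model combines the plans without free parameters but need not weight them equally.

```python
# Sketch of averaging a goal-location-based and a movement-vector-based plan.
import numpy as np

def planned_endpoint(start, goal, goal_remap, vector_remap, w=0.5):
    plan_goal = goal + goal_remap                        # goal-location plan
    plan_vector = start + (goal - start) + vector_remap  # movement-vector plan
    return w * plan_goal + (1 - w) * plan_vector         # averaged at execution

start = np.array([0.0, 0.0])
goal = np.array([10.0, 0.0])
# Opposite remappings of the two representations partially cancel when averaged:
print(planned_endpoint(start, goal,
                       goal_remap=np.array([0.0, 2.0]),
                       vector_remap=np.array([0.0, -2.0])))  # -> [10., 0.]
```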
Evidence that non-dreamers do dream: a REM sleep behaviour disorder model.
Herlin, Bastien; Leu-Semenescu, Smaranda; Chaumereuil, Charlotte; Arnulf, Isabelle
2015-12-01
To determine whether non-dreamers do not produce dreams or simply do not recall them, we identified subjects who reported no dream recall yet displayed dreamlike behaviours during rapid eye movement sleep behaviour disorder, which is typically characterised by dream-enacting behaviours congruent with sleep mentation. All consecutive patients with idiopathic rapid eye movement sleep behaviour disorder, or rapid eye movement sleep behaviour disorder associated with Parkinson's disease, who underwent video-polysomnography were interviewed regarding the presence or absence of dream recall, retrospectively or upon spontaneous arousals. The patients with no dream recall for at least 10 years, and the never-ever recallers, were compared with dream recallers with rapid eye movement sleep behaviour disorder regarding their clinical, cognitive and sleep features. Of the 289 patients with rapid eye movement sleep behaviour disorder, eight (2.8%) had no dream recall, including four (1.4%) who had never ever recalled dreams and four who had had no dream recall for 10-56 years. All non-recallers exhibited, daily or almost nightly, several complex, scenic and dreamlike behaviours and speeches, which were also observed during rapid eye movement sleep on video-polysomnography (arguing, fighting and speaking). They did not recall a dream following sudden awakenings from rapid eye movement sleep. These eight non-recallers did not differ in terms of cognition, clinical, treatment or sleep measures from the 17 dreamers with rapid eye movement sleep behaviour disorder matched for age, sex and disease. The scenic dreamlike behaviours reported and observed during rapid eye movement sleep in the rare non-recallers (even in the never-ever recallers) provide strong evidence that non-recallers produce dreams but do not recall them. Rapid eye movement sleep behaviour disorder thus provides a new model for evaluating cognitive processing during dreaming and subsequent recall. © 2015 European Sleep Research Society.
Wilming, Niklas; Kietzmann, Tim C.; Jutras, Megan; Xue, Cheng; Treue, Stefan; Buffalo, Elizabeth A.; König, Peter
2017-01-01
Oculomotor selection exerts a fundamental impact on our experience of the environment. To better understand the underlying principles, researchers typically rely on behavioral data from humans, and electrophysiological recordings in macaque monkeys. This approach rests on the assumption that the same selection processes are at play in both species. To test this assumption, we compared the viewing behavior of 106 humans and 11 macaques in an unconstrained free-viewing task. Our data-driven clustering analyses revealed distinct human and macaque clusters, indicating species-specific selection strategies. Yet, cross-species predictions were found to be above chance, indicating some level of shared behavior. Analyses relying on computational models of visual saliency indicate that such cross-species commonalities in free viewing are largely due to similar low-level selection mechanisms, with only a small contribution by shared higher level selection mechanisms and with consistent viewing behavior of monkeys being a subset of the consistent viewing behavior of humans. PMID:28077512
Eye gaze tracking based on the shape of pupil image
NASA Astrophysics Data System (ADS)
Wang, Rui; Qiu, Jian; Luo, Kaiqing; Peng, Li; Han, Peng
2018-01-01
An eye tracker is an important instrument for research in psychology, widely used in studies of attention, visual perception, reading and other fields. Because of its potential in human-computer interaction, eye gaze tracking has been a topic of research in many fields over the last decades. Nowadays, with the development of technology, non-intrusive methods are increasingly welcomed. In this paper, we present a method based on the shape of the pupil image to estimate the gaze point of the human eyes without any intrusive devices such as a hat or a pair of glasses. After applying an ellipse-fitting algorithm to the captured pupil image, we can determine the direction of fixation from the shape of the pupil. The innovative aspect of this method is to exploit the shape of the pupil itself, which avoids much more complicated algorithms. The proposed approach is helpful for the study of eye gaze tracking: it needs only one camera, without infrared illumination, tracking changes in the shape of the pupil to determine the direction of gaze; no additional equipment is required.
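The core geometric step can be sketched in a few lines: when the eye rotates away from the camera, the roughly circular pupil projects to an ellipse whose minor-to-major axis ratio is approximately the cosine of the rotation angle. The synthetic contour and the simple axis-ratio readout below are our assumptions; the paper's full pipeline is not specified in the abstract.

```python
# Ellipse fit on a synthetic pupil boundary, then gaze angle from the axis ratio.
import numpy as np
import cv2

theta_true = np.deg2rad(30)                     # eye rotated 30 deg horizontally
t = np.linspace(0, 2 * np.pi, 100)
pts = np.stack([100 + 20 * np.cos(theta_true) * np.cos(t),   # foreshortened x
                100 + 20 * np.sin(t)], axis=1).astype(np.float32)

(cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)  # center, axis diameters, tilt
gaze_angle = np.degrees(np.arccos(min(d1, d2) / max(d1, d2)))
print(f"recovered rotation ~ {gaze_angle:.1f} deg")   # ~30
```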
Micro-patterned graphene-based sensing skins for human physiological monitoring
NASA Astrophysics Data System (ADS)
Wang, Long; Loh, Kenneth J.; Chiang, Wei-Hung; Manna, Kausik
2018-03-01
Ultrathin, flexible, conformal, and skin-like electronic transducers are emerging as promising candidates for noninvasive and nonintrusive human health monitoring. In this work, a wearable sensing membrane is developed by patterning a graphene-based solution onto ultrathin medical tape, which can then be attached to the skin for monitoring human physiological parameters and physical activity. Here, the sensor is validated for monitoring finger bending/movements and for recognizing hand motion patterns, thereby demonstrating its future potential for evaluating athletic performance, physical therapy, and designing next-generation human-machine interfaces. Furthermore, this study also quantifies the sensor’s ability to monitor eye blinking and radial pulse in real-time, which can find broader applications for the healthcare sector. Overall, the printed graphene-based sensing skin is highly conformable, flexible, lightweight, nonintrusive, mechanically robust, and is characterized by high strain sensitivity.
Facial expressions of emotion are not culturally universal
Jack, Rachael E.; Garrod, Oliver G. B.; Yu, Hui; Caldara, Roberto; Schyns, Philippe G.
2012-01-01
Since Darwin’s seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843–850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind’s eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature–nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars. PMID:22509011
NASA Astrophysics Data System (ADS)
Mirkia, Hasti; Sangari, Arash; Nelson, Mark; Assadi, Amir H.
2013-03-01
Architecture brings together diverse elements to enhance the observer's sense of esthetics and the convenience of functionality. Architects often conceptualize the synthesis of design elements to invoke the observer's sense of harmony and positive affect. How does an observer's brain respond to harmony of design in interior spaces? One implicit consideration by architects is the role of guided visual attention as observers navigate indoors. Prior visual experience of natural scenes provides the perceptual basis for the Gestalt of design elements. In contrast, the Gestalt of organization in design varies according to the architect's decisions. We outline a quantitative theory to measure success in utilizing the observer's psychological factors to achieve the desired positive affect, and a unified framework for the perception of geometry and motion in interior spaces, which integrates affective and cognitive aspects of human vision in the context of anthropocentric interior design. The affective criteria are derived from contemporary theories of interior design. Our contribution is to demonstrate that the neural computations underlying an observer's eye movements could be used to elucidate harmony in the perception of form, space and motion, and thus provide a measure of the goodness of an interior design. Through mathematical modeling, we argue the plausibility of the relevant hypotheses.
Gain Modulation as a Mechanism for Coding Depth from Motion Parallax in Macaque Area MT
Kim, HyungGoo R.; Angelaki, Dora E.
2017-01-01
Observer translation produces differential image motion between objects that are located at different distances from the observer's point of fixation [motion parallax (MP)]. However, MP can be ambiguous with respect to depth sign (near vs far), and this ambiguity can be resolved by combining retinal image motion with signals regarding eye movement relative to the scene. We have previously demonstrated that both extra-retinal and visual signals related to smooth eye movements can modulate the responses of neurons in area MT of macaque monkeys, and that these modulations generate neural selectivity for depth sign. However, the neural mechanisms that govern this selectivity have remained unclear. In this study, we analyze responses of MT neurons as a function of both retinal velocity and direction of eye movement, and we show that smooth eye movements modulate MT responses in a systematic, temporally precise, and directionally specific manner to generate depth-sign selectivity. We demonstrate that depth-sign selectivity is primarily generated by multiplicative modulations of the response gain of MT neurons. Through simulations, we further demonstrate that depth can be estimated reasonably well by a linear decoding of a population of MT neurons with response gains that depend on eye velocity. Together, our findings provide the first mechanistic description of how visual cortical neurons signal depth from MP. SIGNIFICANCE STATEMENT Motion parallax is a monocular cue to depth that commonly arises during observer translation. To compute from motion parallax whether an object appears nearer or farther than the point of fixation requires combining retinal image motion with signals related to eye rotation, but the neurobiological mechanisms have remained unclear. This study provides the first mechanistic account of how this interaction takes place in the responses of cortical neurons. Specifically, we show that smooth eye movements modulate the gain of responses of neurons in area MT in a directionally specific manner to generate selectivity for depth sign from motion parallax. We also show, through simulations, that depth could be estimated from a population of such gain-modulated neurons. PMID:28739582
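The proposed mechanism can be caricatured in a few lines: velocity-tuned responses are multiplied by an eye-movement-dependent gain, and a fixed linear readout of the population then carries depth sign. The tuning curves, gain rule, decoder weights, and the sign convention for "near" below are all invented for illustration; they are not the paper's fitted model.

```python
# Caricature: gain-modulated MT-like units plus a linear depth-sign decoder.
# Convention (arbitrary): depth sign = + when retinal slip and eye movement
# share direction. All parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 200
pref = rng.uniform(-10, 10, n)          # preferred retinal velocities
gdir = rng.choice([-1.0, 1.0], n)       # per-unit direction of gain modulation

def population(retinal_vel, eye_vel):
    tuning = np.exp(-0.5 * ((retinal_vel - pref) / 3.0) ** 2)
    gain = 1.0 + 0.5 * gdir * np.sign(eye_vel)   # multiplicative eye-velocity gain
    return gain * tuning

w = gdir * np.sign(pref)                 # fixed linear decoder of depth sign

for slip, eye in [(+4, +5), (+4, -5), (-4, +5), (-4, -5)]:
    print(slip, eye, np.sign(population(slip, eye) @ w))  # +1, -1, -1, +1
```

Note that without the gain term the readout could not tell (slip +4, eye +5) from (slip +4, eye -5); the multiplicative modulation is what lets a linear decoder recover depth sign.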
The Face Perception System becomes Species-Specific at 3 Months: An Eye-Tracking Study
ERIC Educational Resources Information Center
Di Giorgio, Elisa; Meary, David; Pascalis, Olivier; Simion, Francesca
2013-01-01
The current study aimed at investigating own- vs. other-species preferences in 3-month-old infants. The infants' eye movements were recorded during a visual preference paradigm to assess whether they show a preference for own-species faces when contrasted with other-species faces. Human and monkey faces, equated for all low-level perceptual…
Uguccioni, Ginevra; Golmard, Jean-Louis; de Fontréaux, Alix Noël; Leu-Semenescu, Smaranda; Brion, Agnès; Arnulf, Isabelle
2013-05-01
Dreams enacted during sleepwalking or sleep terrors (SW/ST) may differ from those enacted during rapid eye movement sleep behavior disorder (RBD). Subjects completed aggression, depression, and anxiety questionnaires. The mentations associated with SW/ST and RBD behaviors were collected over the subjects' lifetimes and on the morning after video polysomnography (PSG). The reports were analyzed for complexity, length, content, setting, bizarreness, and threat. Ninety-one percent of 32 subjects with SW/ST and 87.5% of 24 subjects with RBD remembered an enacted dream (121 dreams recalled over the lifetime and 41 dreams recalled on the morning after PSG). These dreams were more complex and less bizarre, with a higher level of aggression, in the RBD than in the SW/ST subjects. In contrast, we found low aggression, anxiety, and depression scores during the daytime in both groups. As many as 70% of enacted dreams in SW/ST and 60% in RBD involved a threat, but there were more misfortunes and disasters in the SW/ST dreams and more human and animal aggression in the RBD dreams. The response to these threats differed: the sleepwalkers mostly fled from a disaster (and 25% fought back when attacked), while 75% of RBD subjects counterattacked when assaulted. The dream setting included the subject's bedroom in 42% of SW/ST dreams, whereas this was exceptional in the RBD dreams. Different threat simulations and modes of defense seem to play a role during dream-enacted behaviors (e.g., fleeing a disaster during SW/ST, counterattacking a human or animal assault during RBD), paralleling and exacerbating the differences observed between normal dreaming in non-rapid eye movement (NREM) vs rapid eye movement (REM) sleep. Copyright © 2013 Elsevier B.V. All rights reserved.
Time course of EEG background activity level before spontaneous awakening in infants.
Zampi, Chiara; Fagioli, Igino; Salzarulo, Piero
2002-12-01
This research aimed to investigate the time course of the cortical activity level preceding spontaneous awakening as a function of age and state. Two groups of infants (1-4 and 9-14 weeks of age) were continuously monitored by polygraphic recording and behavioural observation during the night. The electroencephalographic (EEG) activity recorded from the C3-O1 lead was analysed with an automatic analysis method which provides, for each 30-s epoch, a single time-domain measure of EEG synchronization. The EEG parameter values were computed over the 6 min preceding each awakening out of non-rapid eye movement (NREM) sleep and out of rapid eye movement (REM) sleep. The EEG background activity level did not change in the minutes preceding awakening out of REM sleep. Awakening out of NREM sleep was preceded by a change in EEG activity level in the direction of higher activation, with a time course that differed according to age. Both the REM and NREM sleep results suggest that a high level of EEG activity is a prerequisite for the occurrence of a spontaneous awakening.
Role of the human supplementary eye field in the control of saccadic eye movements
Parton, Andrew; Nachev, Parashkev; Hodgson, Timothy L.; Mort, Dominic; Thomas, David; Ordidge, Roger; Morgan, Paul S.; Jackson, Stephen; Rees, Geraint; Husain, Masud
2007-01-01
The precise function of the supplementary eye field (SEF) is poorly understood. Although electrophysiological and functional imaging studies are important for demonstrating when SEF neurones are active, lesion studies are critical to establish the functions for which the SEF is essential. Here we report a series of investigations performed on an extremely rare individual with a highly focal lesion of the medial frontal cortex. High-resolution structural imaging demonstrated that his lesion was confined to the region of the left paracentral sulcus, the anatomical locus of the SEF. Behavioural testing revealed that the patient was significantly impaired when required to switch between anti- and pro-saccades, when there were conflicting rules governing stimulus–response mappings for saccades. Similarly, the results of an arbitrary stimulus–response associative learning task demonstrated that he was impaired when required to select the appropriate saccade from conflicting eye movement responses, but not for limb movements on an analogous manual task. When making memory-guided saccadic sequences, the patient demonstrated hypometria, like patients with Parkinson's disease, but had no significant difficulties in reproducing the order of saccades correctly on a task that emphasized accuracy with a wide temporal segregation between responses. These findings are consistent with the hypothesis that the SEF plays a key role in implementing control when there is conflict between several, ongoing competing saccadic responses, but not when eye movements need to be made accurately in sequence. PMID:17069864
A holographic waveguide based eye tracker
NASA Astrophysics Data System (ADS)
Liu, Changgeng; Pazzucconi, Beatrice; Liu, Juan; Liu, Lei; Yao, Xincheng
2018-02-01
We demonstrated the feasibility of using a holographic waveguide for eye tracking. A custom-built holographic waveguide, a 20 mm x 60 mm x 3 mm flat glass substrate with integrated in- and out-couplers, was used for the prototype development. The in- and out-couplers, photopolymer films with holographic fringes, induced total internal reflection in the glass substrate. Diffractive optical elements were integrated into the in-coupler to serve as an optical collimator. The waveguide captured images of the anterior segment of the eye right in front of it and guided the images to a processing unit distant from the eye. The vector connecting the pupil center (PC) and the corneal reflex (CR) of the eye was used to compute eye position in the socket. An eye model, made of a high-quality prosthetic eye, was used for prototype validation. The benchtop prototype demonstrated a linear relationship between the angular eye position and the PC/CR vector over a range of 60 horizontal degrees and 30 vertical degrees at a resolution of 0.64-0.69 degrees/pixel by simple pixel count. The uncertainties of the measurements at different angular positions were within 1.2 pixels, indicating that the prototype exhibited a high level of repeatability. These results confirm that holographic waveguide technology could be a feasible platform for developing a wearable eye tracker. Further development could lead to a compact, see-through eye tracker allowing continuous monitoring of eye movement during real-life tasks, thus benefiting the diagnosis of oculomotor disorders.
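Given the reported linearity, the PC/CR readout reduces to a per-axis scale factor applied to the pixel vector, as in the sketch below. The 0.64-0.69 degrees/pixel figures come from the abstract; the example coordinates and zero offsets are our assumptions.

```python
# PC/CR gaze estimate under the reported linear pixel-to-degree relationship.
def gaze_angle_deg(pupil_px, reflex_px, deg_per_px=(0.67, 0.67)):
    dx = pupil_px[0] - reflex_px[0]
    dy = pupil_px[1] - reflex_px[1]
    return dx * deg_per_px[0], dy * deg_per_px[1]

h, v = gaze_angle_deg(pupil_px=(250, 118), reflex_px=(235, 110))
print(f"horizontal {h:.1f} deg, vertical {v:.1f} deg")  # ~10.1, ~5.4
```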
NASA Technical Reports Server (NTRS)
Astafiev, Serguei V.; Shulman, Gordon L.; Stanley, Christine M.; Snyder, Abraham Z.; Van Essen, David C.; Corbetta, Maurizio
2003-01-01
We studied the functional organization of human posterior parietal and frontal cortex using functional magnetic resonance imaging (fMRI) to map preparatory signals for attending, looking, and pointing to a peripheral visual location. The human frontal eye field and two separate regions in the intraparietal sulcus were similarly recruited in all conditions, suggesting an attentional role that generalizes across response effectors. However, the preparation of a pointing movement selectively activated a different group of regions, suggesting a stronger role in motor planning. These regions were lateralized to the left hemisphere, activated by preparation of movements of either hand, and included the inferior and superior parietal lobule, precuneus, and posterior superior temporal sulcus, plus the dorsal premotor and anterior cingulate cortex anteriorly. Surface-based registration of macaque cortical areas onto the map of fMRI responses suggests a relatively good spatial correspondence between human and macaque parietal areas. In contrast, large interspecies differences were noted in the topography of frontal areas.
Quality of life in patients with an idiopathic rapid eye movement sleep behaviour disorder in Korea.
Kim, Keun Tae; Motamedi, Gholam K; Cho, Yong Won
2017-08-01
There have been few quality of life studies in patients with idiopathic rapid eye movement sleep behaviour disorder. We compared quality of life in idiopathic rapid eye movement sleep behaviour disorder patients to healthy controls, patients with hypertension, patients with type 2 diabetes mellitus without complications, and patients with idiopathic restless legs syndrome. Sixty patients with idiopathic rapid eye movement sleep behaviour disorder (24 female; mean age: 61.43 ± 8.99) were enrolled retrospectively. The diagnosis was established based on sleep history, overnight polysomnography, neurological examination and the Mini-Mental State Examination to exclude secondary rapid eye movement sleep behaviour disorder. All subjects completed questionnaires, including the Short Form 36-item Health Survey for quality of life. The total quality of life score in idiopathic rapid eye movement sleep behaviour disorder (70.63 ± 20.83) was lower than in the healthy control group (83.38 ± 7.96) but higher than in the hypertension (60.55 ± 24.82), diabetes mellitus (62.42 ± 19.37) and restless legs syndrome (61.77 ± 19.25) groups. The total score of idiopathic rapid eye movement sleep behaviour disorder patients had a negative correlation with the Pittsburgh Sleep Quality Index (r = -0.498, P < 0.001), Insomnia Severity Index (r = -0.645, P < 0.001) and the Beck Depression Inventory-2 (r = -0.694, P < 0.001). Multiple regression showed a negative correlation between the Short Form 36-item Health Survey score and the Insomnia Severity Index (β = -1.100, P = 0.001) and Beck Depression Inventory-2 (β = -1.038, P < 0.001). Idiopathic rapid eye movement sleep behaviour disorder had a significant negative impact on quality of life, although this effect was smaller than that of the other chronic disorders. The negative effect might be related to a depressive mood associated with the disease. © 2016 European Sleep Research Society.
A Computer Graphics Human Figure Application Of Biostereometrics
NASA Astrophysics Data System (ADS)
Fetter, William A.
1980-07-01
A study of improved computer graphic representation of the human figure is being conducted under a National Science Foundation grant. Special emphasis is given to biostereometrics as a primary data base from which applications requiring a variety of levels of detail may be prepared. For example, a human figure represented by a single point can be very useful in overview plots of a population. A crude ten-point figure can be adequate for queuing theory studies and simulated movement of groups. A one-hundred-point figure can usefully be animated to achieve different overall body activities, including male and female figures. A one-thousand-point figure, similarly animated, begins to be useful in anthropometrics and kinesiology for gross body movements. Extrapolations of this order-of-magnitude approach ultimately should achieve very complex data bases and a program which automatically selects the correct level of detail for the task at hand. See Summary Figure 1.
ERIC Educational Resources Information Center
Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio
2009-01-01
How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified…
Advances in graphonomics: studies on fine motor control, its development and disorders.
Van Gemmert, Arend W A; Teulings, Hans-Leo
2006-10-01
During the past 20 years graphonomic research has become a major contributor to the understanding of human movement science. Graphonomic research investigates the relationship between the planning and generation of fine motor tasks, in particular, handwriting and drawing. Scientists in this field are at the forefront of using new paradigms to investigate human movement. The 16 articles in this special issue of Human Movement Science show that the field of graphonomics makes an important contribution to the understanding of fine motor control, motor development, and movement disorders. Topics discussed include writer's cramp, multiple sclerosis, Parkinson's disease, schizophrenia, drug-induced parkinsonism, dopamine depletion, dysgraphia, motor development, developmental coordination disorder, caffeine, alertness, arousal, sleep deprivation, visual feedback transformation and suppression, eye-hand coordination, pen grip, pen pressure, movement fluency, bimanual interference, dominant versus non-dominant hand, tracing, freehand drawing, spiral drawing, reading, typewriting, and automatic segmentation.
Sofroniew, Nicholas J; Svoboda, Karel
2015-02-16
Eyes may be 'the window to the soul' in humans, but whiskers provide a better path to the inner lives of rodents. The brain has remarkable abilities to focus its limited resources on information that matters, while ignoring a cacophony of distractions. While inspecting a visual scene, primates foveate to multiple salient locations, for example mouths and eyes in images of people, and ignore the rest. Similar processes have now been observed and studied in rodents in the context of whisker-based tactile sensation. Rodents use their mechanosensitive whiskers for a diverse range of tactile behaviors such as navigation, object recognition and social interactions. These animals move their whiskers in a purposive manner to locations of interest. The shapes of whiskers, as well as their movements, are exquisitely adapted for tactile exploration in the dark tight burrows where many rodents live. By studying whisker movements during tactile behaviors, we can learn about the tactile information available to rodents through their whiskers and how rodents direct their attention. In this primer, we focus on how the whisker movements of rats and mice are providing clues about the logic of active sensation and the underlying neural mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Correia, Manning J.; Luke, Brian L.; McGrath, Braden J.; Clark, John B.; Rupert, Angus H.
1996-01-01
While considerable attention has been given to visual-vestibular interaction (VVI) during angular motion of the head, as might occur during an aircraft spin, much less attention has been given to VVI during linear motion of the head. Such interaction might occur, for example, while viewing a stationary or moving display during vertical take-off and landing operations. Research into linear VVI, particularly during prolonged periods of linear acceleration, has been hampered by the unavailability of a programmable translator capable of large excursions. We collaborated with Otis Elevator Co. and used their research tower and elevator, whose motion could be digitally programmed, to vertically translate human subjects over a distance of 92.3 meters with a peak linear acceleration of 2 meters/sec(exp 2). During pulsatile or sinusoidal translation, the subjects viewed moving stripes (an optokinetic stimulus) or a fixed point source (a light-emitting diode (LED) display), respectively. It was generally found that: (1) the direction of linear acceleration relative to the cardinal head axes and the direction of the slow component of optokinetic nystagmus (OKN) determined the extent of VVI during concomitant stripe motion and linear acceleration; (2) acceleration along the z head axis (A(sub z)) produced the largest VVI, particularly when the slow component of OKN was in the same direction as eye movements produced by the linear acceleration; and (3) eye movements produced by linear acceleration are suppressed by viewing a fixed target at frequencies below 10 Hz, but above this frequency the suppression produced by VVI is removed. Finally, as demonstrated in non-human primates, vergence of the eyes appears to modulate the vertical eye movement response to linear acceleration in humans.
ECEM (eye closure eye movements): integrating aspects of EMDR with hypnosis for treatment of trauma.
Hollander, H E; Bender, S S
2001-01-01
The paper addresses distinctions between hypnotic interventions and Eye Movement Desensitization and Reprocessing (EMDR) and discusses their effects on persons who have symptoms of Posttraumatic Stress Disorder (PTSD). Eye movements in hypnosis and EMDR are considered in terms of the different ways they may affect responses in treatment. A treatment intervention within hypnosis called ECEM (Eye Closure, Eye Movements) is described. ECEM can be used for patients with histories of trauma who did not benefit adequately from either hypnotic interventions or the EMDR treatment protocol used separately. In ECEM, the eye movement variable of EMDR is integrated within a hypnosis protocol to enhance the benefits of hypnosis and reduce certain risks of EMDR.
Visual Discomfort From Flash Afterimages of Riloid Patterns.
O'Hare, Louise
2017-06-01
Op-art-based stimuli have been shown to be uncomfortable, possibly due to a combination of fixational eye movements (microsaccades) and excessive cortical responses. Efforts have been made to measure illusory phenomena arising from these stimuli in the absence of microsaccades, but there has been no attempt thus far to decouple the effects of the cortical response from the effect of fixational eye movements. This study uses flash afterimages to stabilise the image on the retina and thus reduce the systematic effect of eye movements, in order to investigate the role of the brain in discomfort from op-art-based stimuli. There was a relationship between spatial frequency and the magnitude of the P300 response, showing a similar pattern to that of discomfort judgements, which suggests that there might be a role of discomfort and excessive neural responses independently from the effects of microsaccades.
A stochastic model for eye movements during fixation on a stationary target.
NASA Technical Reports Server (NTRS)
Vasudevan, R.; Phatak, A. V.; Smith, J. D.
1971-01-01
A stochastic model describing small eye movements occurring during steady fixation on a stationary target is presented. Based on eye movement data for steady gaze, the model has a hierarchical structure; the principal level represents the random motion of the image point within a local area of fixation, while the higher level mimics the jump processes involved in transitions from one local area to another. Target image motion within a local area is described by a Langevin-like stochastic differential equation that takes into account both the microsaccadic jumps, pictured as arising from point processes, and the high-frequency muscle tremor, represented as white noise. The transform of the probability density function for local-area motion is obtained, leading to explicit expressions for its means and moments. Moments evaluated from the model compare well with experimental results.
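The model described above combines a restoring Langevin drift, white-noise tremor, and point-process microsaccadic jumps. A minimal Python simulation of that kind of two-process dynamic follows; the one-dimensional simplification and all parameter values are illustrative assumptions, not values fitted in the paper.

```python
import numpy as np

def simulate_fixational_eye_position(t_max=2.0, dt=1e-3, k=50.0, sigma=0.05,
                                     saccade_rate=2.0, jump_sd=0.2, seed=0):
    """Simulate 1-D gaze position during fixation: a Langevin-like
    restoring drift toward the fixation point, white-noise tremor,
    and Poisson-timed microsaccadic jumps. Parameters are illustrative."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    x = np.zeros(n)  # gaze position relative to the target (deg)
    for i in range(1, n):
        drift = -k * x[i - 1] * dt                       # restoring drift
        tremor = sigma * np.sqrt(dt) * rng.standard_normal()  # white noise
        x[i] = x[i - 1] + drift + tremor
        if rng.random() < saccade_rate * dt:             # Poisson point process
            x[i] += jump_sd * rng.standard_normal()      # microsaccadic jump
    return x

trace = simulate_fixational_eye_position()
print(f"std of simulated gaze position: {trace.std():.4f} deg")
```

With a positive restoring coefficient the process stays bounded around the fixation point, while the Poisson term injects occasional discrete jumps, mirroring the hierarchical structure the abstract describes.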
Protective ocular mechanisms in woodpeckers.
Wygnanski-Jaffe, T; Murphy, C J; Smith, C; Kubai, M; Christopherson, P; Ethier, C R; Levin, A V
2007-01-01
Woodpeckers possess mechanisms protecting the eye from shaking and impact; mechanisms available to woodpeckers but not to humans may help explain some eye injuries in Shaken Baby syndrome (SBS). We performed gross dissection and histologic examination of the eyes and orbits of seven woodpeckers. All birds showed restricted axial globe movement due to the tight fit within the orbit and fascial connections between the orbital rim and sclera. The sclera was reinforced with cartilage and bone, the optic nerve lacked redundancy, and the vitreous lacked attachments to the posterior pole retina. Woodpecker eyes differ from those of human infants in the inability of the globe to move axially in the orbit, of the sclera to deform, and of the vitreous to shear the retina. These findings support current hypotheses that abusive acceleration-deceleration-induced ocular injury in human infants may be related to translation of the vitreous within the globe and of the globe within the orbit. The woodpecker presents a natural model resistant to mechanical forces that have some similarity to SBS.
Prevalence and phenomenology of eye tics in Gilles de la Tourette syndrome.
Martino, Davide; Cavanna, Andrea E; Robertson, Mary M; Orth, Michael
2012-10-01
Eye tics seem to be common in Gilles de la Tourette syndrome (GTS). We analyzed the frequency and clinical characteristics of eye tics in 212 GTS patients. Of the 212 patients, 201 (94.8%) reported eye tics in their lifetime; 166 (78.3%) reported eye movement tics (rolling the eyes up or down, looking sideways, staring), and 194 (91.5%) eyelid/eyebrow movement tics (frowning, raising the eyebrows, blinking or winking). Patients with eye movement tics were younger at age of GTS onset (7.1 ± 4 years) than those without (8.9 ± 6.8 years; p = 0.024). Tic severity correlated positively with a lifetime history of eye and/or eyelid/eyebrow movement tics. Our data confirm that eye and eyelid/eyebrow movement tics are very common in GTS, and most patients have several types of eye tics over time. Eye tic phenomenology was similar in patients with or without co-morbidity. Eye tics are therefore likely to be a core feature of GTS and should be routinely evaluated in order to strengthen the clinician's confidence in diagnosing GTS.
Variability of eye movements when viewing dynamic natural scenes.
Dorr, Michael; Martinetz, Thomas; Gegenfurtner, Karl R; Barth, Erhardt
2010-08-26
How similar are the eye movement patterns of different subjects when freely viewing dynamic natural scenes? We collected a large database of eye movements from 54 subjects on 18 high-resolution videos of outdoor scenes and measured their variability using the Normalized Scanpath Saliency, which we extended to the temporal domain. Even though up to about 80% of subjects looked at the same image region in some video parts, variability was usually much greater. Eye movements on natural movies were then compared with eye movements in several control conditions. "Stop-motion" movies had almost identical semantic content to the original videos but lacked continuous motion. Hollywood action movie trailers were used to probe the upper limit of eye movement coherence that can be achieved by deliberate camera work, scene cuts, etc. In a "repetitive" condition, subjects viewed the same movies ten times each over the course of 2 days. Results show several systematic differences between conditions, both for general eye movement parameters such as saccade amplitude and fixation duration and for eye movement variability. Most importantly, eye movements on static images are initially driven by stimulus onset effects and later, more so than on continuous videos, by subject-specific idiosyncrasies; eye movements on Hollywood movies are significantly more coherent than those on natural movies. We conclude that the stimulus types often used in laboratory experiments, static images and professionally cut material, are not very representative of natural viewing behavior. All stimuli and gaze data are publicly available at http://www.inb.uni-luebeck.de/tools-demos/gaze.
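The Normalized Scanpath Saliency (NSS) used above scores how well one subject's gaze falls on regions that other subjects fixated. A minimal leave-one-subject-out sketch in Python follows; the frame-wise data layout, the Gaussian density construction, and the parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nss_per_frame(fixations_by_frame, height, width, sigma=20.0):
    """For each video frame, build a fixation-density map from all-but-one
    subject's gaze samples, z-score it, and read it out at the held-out
    subject's gaze position (leave-one-out NSS). Illustrative sketch only."""
    scores = []
    for gaze_points in fixations_by_frame:   # list of (row, col) per subject
        for held_out in range(len(gaze_points)):
            fmap = np.zeros((height, width))
            for j, (r, c) in enumerate(gaze_points):
                if j != held_out:
                    fmap[int(r), int(c)] += 1.0           # accumulate fixations
            fmap = gaussian_filter(fmap, sigma)           # smooth into a density
            fmap = (fmap - fmap.mean()) / (fmap.std() + 1e-9)  # z-score the map
            r, c = gaze_points[held_out]
            scores.append(fmap[int(r), int(c)])           # read out at held-out gaze
    return float(np.mean(scores))
```

A high mean score means subjects cluster on the same regions; extending NSS to the temporal domain, as the study does, amounts to evaluating such maps frame by frame rather than on a single static image.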
Eye Movements and Visual Search: A Bibliography,
1983-01-01
Stacey, S.R.; Snyder, H.L. Air-to-air target acquisition: Factors and means of improvement. School of Aerospace Medicine, Brooks AFB, TX, Final Report no. ... Annual Conference on Engineering in Medicine and Biology, Washington, 16-19 November 1970, p. 136. New York: IEEE, 1970. EYM, MOD, PSY, CTL 222 ... HDM, ABN, EYM 242. Freska, C.; Ellis, S.; Stark, L. Simplified measurement of eye fixation. Computers in Biology and Medicine, 1980. INlS, FIX 243
Biomechanical behavior of muscle-tendon complex during dynamic human movements.
Fukashiro, Senshi; Hay, Dean C; Nagano, Akinori
2006-05-01
This paper reviews research findings on the force and length changes of the muscle-tendon complex during dynamic human movements, especially those obtained using ultrasonography and computer simulation. Ultrasonography demonstrated that the tendinous structures of the muscle-tendon complex are compliant enough to substantially influence the biomechanical behavior (length change, shortening velocity, and so on) of the fascicles. It was argued that the fascicles are a force generator rather than a work generator; that the tendinous structures function not only as an energy re-distributor but also as a power amplifier; and that the interaction between fascicles and tendinous structures is essential for generating higher joint power outputs during the late push-off phase in human vertical jumping. This phenomenon can be explained by the force-length and force-velocity relationships of each element (contractile and series elastic) of the muscle-tendon complex during movement. Through computer simulation using a Hill-type muscle-tendon complex model, the benefit of making a countermovement was examined in relation to the compliance of the muscle-tendon complex and the length ratio between the contractile and series elastic elements. The integral roles of the series elastic element were also simulated in a cyclic human heel-raise exercise. It was suggested that the storage and reutilization of elastic energy by the tendinous structures play an important role in enhancing work output and movement efficiency in many kinds of human movement.
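A Hill-type muscle-tendon complex model of the kind mentioned above multiplies activation by force-length and force-velocity factors in the contractile element and places it in series with an elastic tendon. The Python sketch below uses generic textbook forms; every constant and functional shape is an illustrative assumption, not a value from the reviewed studies.

```python
import numpy as np

def contractile_element_force(l_ce, v_ce, activation, f_max=1500.0,
                              l_opt=0.055, v_max=0.6, a_rel=0.25):
    """Generic Hill-type contractile element (CE):
    F = a * F_max * f_L(l_ce) * f_V(v_ce).
    l_ce and l_opt in metres; v_ce is shortening speed in m/s (>= 0)."""
    f_length = np.exp(-((l_ce - l_opt) / (0.45 * l_opt)) ** 2)  # bell-shaped f-L curve
    v = min(max(v_ce, 0.0), v_max)
    f_velocity = (v_max - v) / (v_max + v / a_rel)              # Hill hyperbola
    return activation * f_max * f_length * f_velocity

def series_elastic_force(l_see, l_slack=0.25, k=2.0e7):
    """Series elastic element (tendon): quadratic toe-region spring
    that is slack below its resting length."""
    strain = max(l_see - l_slack, 0.0)
    return k * strain ** 2

# Force equilibrium F_CE == F_SEE partitions a given muscle-tendon-complex
# length between fascicles and tendon: a compliant tendon lets the fascicles
# shorten more slowly and thus, via the force-velocity relationship, sustain
# higher force, which underlies the "power amplifier" role described above.
print(contractile_element_force(l_ce=0.055, v_ce=0.1, activation=1.0))
print(series_elastic_force(l_see=0.26))
```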
Özdem, Ceylan; Wiese, Eva; Wykowska, Agnieszka; Müller, Hermann; Brass, Marcel; Van Overwalle, Frank
2017-10-01
Attributing mind to interaction partners has been shown to increase the social relevance we ascribe to others' actions and to modulate the amount of attention dedicated to them. However, it remains unclear how the relationship between higher-order mind attribution and lower-level attention processes is established in the brain. In this neuroimaging study, participants saw images of an anthropomorphic robot that moved its eyes left- or rightwards to signal the appearance of an upcoming stimulus in the same (valid cue) or the opposite location (invalid cue). Independently, participants' beliefs about the intentionality underlying the observed eye movements were manipulated by describing the eye movements as either under human control or preprogrammed. As expected, we observed a validity effect both behaviorally and neurally (increased response times and activation in the invalid versus valid condition). More importantly, this effect was more pronounced when the robot's behavior was believed to be controlled by a human rather than preprogrammed. This interaction between cue validity and belief was, however, found only at the neural level, manifested as a significant increase of activation in the bilateral anterior temporoparietal junction.